2 HMM: Particle filters Lirong Xia

3 Recap: Reasoning over Time
Markov models: p(X1), p(Xt|Xt-1)
Hidden Markov models: also an emission model p(Et|Xt), e.g.:
X     E            p
rain  umbrella     0.9
rain  no umbrella  0.1
sun   umbrella     0.2
sun   no umbrella  0.8
Filtering: given time t and evidence e1,…,et, compute p(Xt|e1:t)

4 Filtering algorithm
Notation:
B(Xt-1) = p(Xt-1|e1:t-1)
B’(Xt) = p(Xt|e1:t-1)
Each time step, we start with B(Xt-1) = p(Xt-1 | previous evidence), then:
Elapse of time: B’(Xt) = Σxt-1 p(Xt|xt-1) B(xt-1)
Observe: B(Xt) ∝ p(et|Xt) B’(Xt)
Renormalize B(Xt) so it sums to 1
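As a concrete aside, here is a minimal sketch of one such exact update, assuming the belief and the models are stored as plain Python dictionaries (the names filter_step, transition, and emission are illustrative, not from the slides):

```python
def filter_step(belief, evidence, transition, emission):
    """One exact filtering update B(X_{t-1}) -> B(X_t).

    belief[x]         = B(X_{t-1} = x) = p(X_{t-1} = x | e_{1:t-1})
    transition[xp][x] = p(X_t = x | X_{t-1} = xp)
    emission[x][e]    = p(E_t = e | X_t = x)
    """
    states = list(belief)
    # Elapse of time: B'(X_t) = sum over x_{t-1} of p(X_t | x_{t-1}) B(x_{t-1})
    prior = {x: sum(transition[xp][x] * belief[xp] for xp in states) for x in states}
    # Observe: B(X_t) is proportional to p(e_t | X_t) B'(X_t)
    unnormalized = {x: emission[x][evidence] * prior[x] for x in states}
    # Renormalize so that B(X_t) sums to 1
    z = sum(unnormalized.values())
    return {x: w / z for x, w in unnormalized.items()}
```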

5 Today Particle filtering Viterbi algorithm

6 Particle Filtering Sometimes |X| is too big to use exact inference
|X| may be too big even to store B(X)
E.g., X is continuous
Solution: approximate inference
Track samples of X, not all values
Samples are called particles
Time per step is linear in the number of samples
But: the number of samples needed may be large
In memory: a list of particles
This is how robot localization works in practice

7 Representation: Particles
p(X) is now represented by a list of N particles (samples)
Generally, N << |X|
Storing a map from X to counts would defeat the point
p(x) is approximated by the fraction of particles with value x
Many x will have p(x) = 0
More particles, more accuracy
For now, all particles have weight 1
Particles: (3,3) (2,3) (3,2) (2,1)
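A small sketch of this representation, assuming the particles are kept in a plain Python list of states (the grid positions are the slide's example; the Counter-based estimate is illustrative):

```python
from collections import Counter

# The particle list itself is the belief representation.
particles = [(3, 3), (2, 3), (3, 2), (2, 1)]

# p(x) is approximated by the fraction of particles whose value is x;
# states with no particles get probability 0.
counts = Counter(particles)
def approx_p(x):
    return counts[x] / len(particles)

print(approx_p((3, 3)))   # 0.25 here; more particles would give more accuracy
print(approx_p((1, 1)))   # 0.0: no particle has this value
```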

8 Particle Filtering: Elapse Time
Each particle is moved by sampling its next position from the transition model: x’ = sample(p(X’|x))
The samples’ frequencies reflect the transition probabilities
This captures the passage of time
With enough samples, the result is close to the exact values before and after (consistent)
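A minimal sketch of this step, assuming the same dictionary-based transition model as above (sample_transition and elapse_time are illustrative names):

```python
import random

def sample_transition(x, transition):
    # Draw x' from p(X' | x); transition[x] maps each next state to its probability.
    next_states = list(transition[x])
    weights = [transition[x][xp] for xp in next_states]
    return random.choices(next_states, weights=weights, k=1)[0]

def elapse_time(particles, transition):
    # Move every particle by sampling its next state from the transition model.
    return [sample_transition(x, transition) for x in particles]
```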

9 Particle Filtering: Observe
Slightly trickier: as in likelihood weighting, we downweight each sample by the evidence: w(x) = p(e|x)
Note that, as before, the weights don’t sum to one, since most samples have been downweighted
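A sketch of the observation step under the same assumptions, weighting each particle by the likelihood of the evidence in its state:

```python
def observe(particles, evidence, emission):
    # Downweight each particle by the evidence likelihood: w(x) = p(e_t | x).
    # The weights need not sum to one.
    return [(x, emission[x][evidence]) for x in particles]
```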

10 Particle Filtering: Resample
Old particles: (3,3) w=0.1, (2,1) w=0.9, (3,1) w=0.4, (3,2) w=0.3, (2,2) w=0.4, (1,1) w=0.4
New particles: (2,1) w=1, (3,2) w=1, (2,2) w=1, (1,1) w=1, (3,1) w=1
Rather than tracking weighted samples, we resample: N times, we draw from the weighted sample distribution (i.e., draw with replacement)
This is analogous to renormalizing the distribution
Now the update is complete for this time step; continue with the next one
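A sketch of resampling with replacement from the weighted particles (again just illustrative code, not the slides'):

```python
import random

def resample(weighted_particles, n):
    # Draw n new particles with replacement, with probability proportional to weight.
    # This plays the role of renormalization; the new particles all carry weight 1.
    states = [x for x, w in weighted_particles]
    weights = [w for x, w in weighted_particles]
    return random.choices(states, weights=weights, k=n)
```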

11 Forward algorithm vs. particle filtering
Forward algorithm                                    Particle filtering
Elapse of time: B’(Xt) = Σxt-1 p(Xt|xt-1) B(xt-1)    Elapse of time: x ---> x’
Observe: B(Xt) ∝ p(et|Xt) B’(Xt)                     Observe: w(x’) = p(et|x’)
Renormalize: B(Xt) sums up to 1                      Resample: N particles
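Putting the three sketched helpers together gives one full particle-filtering update, mirroring the exact update in the left column (this reuses elapse_time, observe, and resample from the sketches above):

```python
def particle_filter_step(particles, evidence, transition, emission):
    # Elapse of time: x ---> x'
    particles = elapse_time(particles, transition)
    # Observe: w(x') = p(e_t | x')
    weighted = observe(particles, evidence, emission)
    # Resample: draw N particles from the weighted distribution
    return resample(weighted, len(particles))
```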

12 Robot Localization In robot localization:
We know the map, but not the robot’s position
Observations may be vectors of range-finder readings
The state space and readings are typically continuous (works basically like a very fine grid), so we cannot store B(X)
Particle filtering is a main technique

13 SLAM SLAM = Simultaneous Localization And Mapping
We do not know the map or our location
Our belief state is over maps and positions!
Main techniques: Kalman filtering (Gaussian HMMs) and particle methods

14 HMMs: MLE Queries
HMMs are defined by:
States X
Observations E
Initial distribution: p(X1)
Transitions: p(Xt|Xt-1)
Emissions: p(Et|Xt)
Query: most likely explanation: arg max over x1:t of p(x1:t|e1:t)

15 State Path
Graph of states and transitions over time (a trellis with columns X1, X2, …, XN)
Each arc represents a transition xt-1 → xt
Each arc has weight p(xt|xt-1) p(et|xt)
Each path is a sequence of states
The product of the weights on a path is that sequence’s probability
Forward algorithm: computes the sum over all paths
Viterbi algorithm: computes the best path

16 Viterbi Algorithm
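The slide body is not in the transcript; the standard Viterbi recurrence is mt(xt) = p(et|xt) · max over xt-1 of [ p(xt|xt-1) mt-1(xt-1) ], where mt(xt) is the probability of the best path ending in xt. A minimal sketch, assuming the same dictionary-based models as in the earlier inserts:

```python
def viterbi(evidence_seq, states, initial, transition, emission):
    # m[x] = probability of the best state sequence ending in x given the evidence so far.
    m = {x: initial[x] * emission[x][evidence_seq[0]] for x in states}
    back = []   # back[t][x] = best predecessor of state x at step t
    for e in evidence_seq[1:]:
        prev_m, m, pointers = m, {}, {}
        for x in states:
            best_prev = max(states, key=lambda xp: prev_m[xp] * transition[xp][x])
            m[x] = prev_m[best_prev] * transition[best_prev][x] * emission[x][e]
            pointers[x] = best_prev
        back.append(pointers)
    # Recover the best path by following the back pointers from the best final state.
    path = [max(states, key=lambda x: m[x])]
    for pointers in reversed(back):
        path.append(pointers[path[-1]])
    return list(reversed(path))
```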

17 Example
X    E    p
+r   +u   0.9
+r   -u   0.1
-r   +u   0.2
-r   -u   0.8
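A usage example calling the viterbi sketch above on this slide's emission table; the initial distribution and transition probabilities below are assumed for illustration only, since the transcript shows just the emission table:

```python
states = ['+r', '-r']
initial = {'+r': 0.5, '-r': 0.5}                 # assumed, not from the slide
transition = {'+r': {'+r': 0.7, '-r': 0.3},      # assumed, not from the slide
              '-r': {'+r': 0.3, '-r': 0.7}}
emission = {'+r': {'+u': 0.9, '-u': 0.1},        # from the slide's table
            '-r': {'+u': 0.2, '-u': 0.8}}

# Most likely weather sequence for the observations umbrella, umbrella, no umbrella.
print(viterbi(['+u', '+u', '-u'], states, initial, transition, emission))
```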

