Presentation transcript: Probabilistic reasoning over time

Slide 1: Probabilistic reasoning over time
So far, we've mostly dealt with episodic environments
– One exception: games with multiple moves
In particular, the Bayesian networks we've seen so far describe static situations
– Each random variable gets a single fixed value in a single problem instance
Now we consider the problem of describing probabilistic environments that evolve over time
– Examples: robot localization, tracking, speech, …

Slide 2: Hidden Markov Models
At each time slice t, the state of the world is described by an unobservable variable X_t and an observable evidence variable E_t
Transition model: distribution over the current state given the whole past history: P(X_t | X_0, …, X_{t-1}) = P(X_t | X_{0:t-1})
Observation model: P(E_t | X_{0:t}, E_{1:t-1})
[Figure: chain-structured graphical model X_0 → X_1 → X_2 → … → X_{t-1} → X_t, with each state X_i emitting an evidence variable E_i]

Slide 3: Hidden Markov Models
Markov assumption (first order)
– The current state is conditionally independent of all earlier states given the state in the previous time step
– What does P(X_t | X_{0:t-1}) simplify to? P(X_t | X_{0:t-1}) = P(X_t | X_{t-1})
Markov assumption for observations
– The evidence at time t depends only on the state at time t
– What does P(E_t | X_{0:t}, E_{1:t-1}) simplify to? P(E_t | X_{0:t}, E_{1:t-1}) = P(E_t | X_t)
[Figure: the same chain-structured graphical model as on slide 2]

Slide 4: Example
[Figure: an example HMM, with the state and evidence variables labeled]

Slide 5: Example
[Figure: the same example, annotated with its transition model and observation model]

Slide 6: An alternative visualization
Hidden state R_t, evidence U_t.

Transition model P(R_t | R_{t-1}):
              R_t = T   R_t = F
R_{t-1} = T     0.7       0.3
R_{t-1} = F     0.3       0.7

Observation model P(U_t | R_t):
              U_t = T   U_t = F
R_t = T         0.9       0.1
R_t = F         0.2       0.8
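Below is a minimal Python/NumPy sketch (not part of the original slides) showing how the two tables above could be encoded; the choice of index 0 for True and 1 for False is an assumption made here for illustration. The R/U variables are evidently the classic rain/umbrella example.

```python
import numpy as np

# Hidden state R_t and evidence U_t from slide 6.
# Encoding choice (not from the slides): index 0 = True, index 1 = False.

# Transition model P(R_t | R_{t-1}): rows = R_{t-1}, columns = R_t
T = np.array([[0.7, 0.3],
              [0.3, 0.7]])

# Observation model P(U_t | R_t): rows = R_t, columns = U_t
O = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Example lookup: P(R_t = T | R_{t-1} = F)
print(T[1, 0])  # 0.3
```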

Slide 7: Another example
States: X = {home, office, cafe}
Observations: E = {sms, facebook, email}
Slide credit: Andy White
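A minimal sketch (not from the slides) of how this three-state model could be represented in Python; the slide only names the state and observation sets, so every probability value below is a hypothetical placeholder:

```python
# States and observations as named on slide 7.
states = ["home", "office", "cafe"]
observations = ["sms", "facebook", "email"]

# P(X_t | X_{t-1}) as transition[prev][curr]; values are hypothetical placeholders.
transition = {
    "home":   {"home": 0.6, "office": 0.3, "cafe": 0.1},
    "office": {"home": 0.2, "office": 0.6, "cafe": 0.2},
    "cafe":   {"home": 0.3, "office": 0.3, "cafe": 0.4},
}

# P(E_t | X_t) as emission[state][obs]; values are hypothetical placeholders.
emission = {
    "home":   {"sms": 0.2, "facebook": 0.5, "email": 0.3},
    "office": {"sms": 0.1, "facebook": 0.1, "email": 0.8},
    "cafe":   {"sms": 0.5, "facebook": 0.3, "email": 0.2},
}
```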

Slide 8: The joint distribution
Transition model: P(X_t | X_{0:t-1}) = P(X_t | X_{t-1})
Observation model: P(E_t | X_{0:t}, E_{1:t-1}) = P(E_t | X_t)
How do we compute the full joint P(X_{0:t}, E_{1:t})?
[Figure: the same chain-structured graphical model as on slide 2]
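The standard answer, which follows from the chain rule plus the two Markov assumptions on slide 3, is P(X_{0:t}, E_{1:t}) = P(X_0) ∏_{i=1}^{t} P(X_i | X_{i-1}) P(E_i | X_i). Here is a minimal Python sketch of that product for the model from slide 6; the uniform prior over X_0 is an assumption, since the slides do not give one.

```python
import numpy as np

# Joint probability of one complete trajectory, using the factorization
# P(X_{0:t}, E_{1:t}) = P(X_0) * prod_i P(X_i | X_{i-1}) * P(E_i | X_i).
# T and O are the slide-6 tables; the uniform prior over X_0 is an assumption.

T = np.array([[0.7, 0.3], [0.3, 0.7]])   # P(X_t | X_{t-1})
O = np.array([[0.9, 0.1], [0.2, 0.8]])   # P(E_t | X_t)
prior = np.array([0.5, 0.5])             # assumed P(X_0)

def joint_prob(states, evidence):
    """states = [x_0, ..., x_t], evidence = [e_1, ..., e_t], 0/1-encoded as in the slide-6 sketch."""
    p = prior[states[0]]
    for i in range(1, len(states)):
        p *= T[states[i - 1], states[i]] * O[states[i], evidence[i - 1]]
    return p

# P(R_0 = T, R_1 = T, R_2 = F, U_1 = T, U_2 = F)
print(joint_prob([0, 0, 1], [0, 1]))  # 0.5 * (0.7*0.9) * (0.3*0.8) = 0.0756
```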

Slide 9: HMM learning and inference
Inference tasks
– Filtering: what is the distribution over the current state X_t given all the evidence so far, e_{1:t}? (A sketch follows this slide.)
– Smoothing: what is the distribution over some earlier state X_k given the entire observation sequence e_{1:t}?
– Evaluation: compute the probability of a given observation sequence e_{1:t}
– Decoding: what is the most likely state sequence X_{0:t} given the observation sequence e_{1:t}?
Learning
– Given a training sample of sequences, learn the model parameters (transition and emission probabilities)
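A minimal filtering sketch (the forward recursion) for the slide-6 model, not code from the slides; the uniform initial belief over X_0 is again an assumption:

```python
import numpy as np

# Forward (filtering) update: predict through the transition model, weight by the
# evidence likelihood, then normalize. Model tables are from slide 6; the uniform
# initial belief is an assumption.

T = np.array([[0.7, 0.3], [0.3, 0.7]])   # P(X_t | X_{t-1})
O = np.array([[0.9, 0.1], [0.2, 0.8]])   # P(E_t | X_t)
belief = np.array([0.5, 0.5])            # assumed P(X_0)

def filter_step(belief, e):
    """Return P(X_t | e_{1:t}) from P(X_{t-1} | e_{1:t-1}) and the new evidence e."""
    predicted = belief @ T                # P(X_t | e_{1:t-1})
    updated = predicted * O[:, e]         # multiply by P(e_t | X_t)
    return updated / updated.sum()        # normalize

for e in [0, 0, 1]:                       # evidence sequence, 0/1-encoded as above
    belief = filter_step(belief, e)
    print(belief)
```

Smoothing combines this forward pass with an analogous backward pass over the evidence, and decoding (the Viterbi algorithm) replaces the sum inside the prediction step with a max while keeping back-pointers to the best predecessor.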

