1 Dynamic Bayesian Networks and Particle Filtering COMPSCI 276 (chapter 15, Russell and Norvig) 2007

7 Dynamic Belief Networks (DBNs). A Bayesian network at time t, a Bayesian network at time t+1, and transition arcs X_t → X_{t+1}, with evidence variables Y_t and Y_{t+1}. Unrolled DBN for t=0 to t=10: X_0, X_1, X_2, …, X_10 and Y_0, Y_1, Y_2, …, Y_10.
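As a concrete illustration of unrolling, the two-slice model above can be forward-sampled slice by slice. A minimal sketch in Python, assuming binary variables and illustrative CPT numbers (none of these values come from the slides):

```python
import random

# Hypothetical CPTs for a two-node DBN slice: hidden X, evidence Y.
P_X0 = {True: 0.5, False: 0.5}       # P(X_0)
P_X_next = {True: 0.7, False: 0.3}   # P(X_{t+1} = True | X_t)
P_Y = {True: 0.9, False: 0.2}        # P(Y_t = True | X_t)

def sample_unrolled(T, rng=random.Random(0)):
    """Forward-sample one trajectory from the DBN unrolled for t = 0..T."""
    xs, ys = [], []
    x = rng.random() < P_X0[True]
    for t in range(T + 1):
        if t > 0:
            # transition arc: X_t depends only on X_{t-1}
            x = rng.random() < P_X_next[x]
        # observation arc: Y_t depends only on X_t
        y = rng.random() < P_Y[x]
        xs.append(x)
        ys.append(y)
    return xs, ys

xs, ys = sample_unrolled(10)
print(len(xs), len(ys))  # 11 values each, t = 0..10
```

Each slice reuses the same two CPTs, which is exactly what the 2-time-slice representation encodes.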

9 Dynamic Belief Networks (DBNs): the two-stage influence diagram and its interaction graph.

10 Notation. X_t – value of X at time t. X_{0:t} = {X_0, X_1, …, X_t} – vector of values of X. Y_t – evidence at time t. Y_{0:t} = {Y_0, Y_1, …, Y_t}. The DBN unrolled for t=0,1,2 (X_0, X_1, X_2 with Y_0, Y_1, Y_2) versus its 2-time-slice representation (X_t, X_{t+1} with Y_t, Y_{t+1}).

11 Inference is hard, so we need approximation. Mini-buckets? Sampling?

14 Same Queries. Compute P(X_{0:t} | Y_{0:t}) or P(X_t | Y_{0:t}). Example: P(X_{0:10} | Y_{0:10}) or P(X_10 | Y_{0:10}). Filtering, prediction, smoothing, MPE. Hard over a long time period – approximate! Sample!
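For a small discrete model, the filtering query P(X_t | Y_{0:t}) can be computed exactly with the forward recursion; this is the baseline that the sampling methods below approximate. A sketch for a hypothetical 2-state HMM (the transition/observation numbers are made-up assumptions):

```python
# Illustrative 2-state HMM.
T_mat = [[0.7, 0.3],   # P(X_{t+1} | X_t = 0)
         [0.4, 0.6]]   # P(X_{t+1} | X_t = 1)
O = [[0.9, 0.1],       # P(Y_t | X_t = 0)
     [0.2, 0.8]]       # P(Y_t | X_t = 1)
prior = [0.5, 0.5]     # P(X_0)

def filter_hmm(ys):
    """Exact filtering: return P(X_t | y_{0:t}) after seeing ys."""
    # incorporate the first observation into the prior
    belief = [prior[j] * O[j][ys[0]] for j in range(2)]
    z = sum(belief)
    belief = [b / z for b in belief]
    for y in ys[1:]:
        # predict: push the belief through the transition model
        pred = [sum(belief[i] * T_mat[i][j] for i in range(2)) for j in range(2)]
        # update: weight by the observation likelihood and renormalize
        upd = [pred[j] * O[j][y] for j in range(2)]
        z = sum(upd)
        belief = [u / z for u in upd]
    return belief

print(filter_hmm([0, 0, 1]))
```

The cost per step is constant here, but it grows exponentially with the number of state variables in a general DBN, which is why sampling is needed.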

15 Particle Filtering (PF) = “condensation” = “sequential Monte Carlo” = “survival of the fittest”. PF can handle any type of probability distribution, non-linearity, and non-stationarity; PFs are powerful sampling-based inference/learning algorithms for DBNs.

17 Particle Filtering

22 Example. Particle^(t) = {a_t, b_t, c_t}.

23 PF Sampling. Particle^(t) = {a_t, b_t, c_t}. Compute particle^(t+1): sample b_{t+1} from P(b | a_t, c_t); sample a_{t+1} from P(a | b_{t+1}, c_t); sample c_{t+1} from P(c | b_{t+1}, a_{t+1}). Weight the particle by w_{t+1}. If the weight is too small, discard the particle; otherwise, multiply it.
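The same propagate–weight–resample loop can be written generically. A sketch of a bootstrap particle filter on a toy 1-D Gaussian random-walk model (the model and noise levels are illustrative assumptions, not the a/b/c network above):

```python
import math
import random

rng = random.Random(0)

def pf_step(particles, y, sigma_trans=1.0, sigma_obs=0.5):
    """One particle-filter step: propagate each particle through the
    transition model, weight it by the evidence likelihood P(y | x),
    then resample so low-weight particles are discarded and
    high-weight ones are multiplied ('survival of the fittest')."""
    # propagate: sample x_{t+1} ~ P(x | x_t), here a Gaussian random walk
    moved = [x + rng.gauss(0.0, sigma_trans) for x in particles]
    # weight: Gaussian observation likelihood P(y | x)
    w = [math.exp(-(y - x) ** 2 / (2 * sigma_obs ** 2)) for x in moved]
    total = sum(w)
    w = [wi / total for wi in w]
    # resample with replacement according to the normalized weights
    return rng.choices(moved, weights=w, k=len(moved))

particles = [rng.gauss(0.0, 1.0) for _ in range(500)]
for y in [0.5, 1.0, 1.5]:
    particles = pf_step(particles, y)
est = sum(particles) / len(particles)
print(round(est, 2))  # posterior-mean estimate, tracking the observations
```

Resampling after every step is the simplest scheme; in practice one often resamples only when the effective sample size drops too low.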

24 Drawback of PF: inefficient in high-dimensional spaces (the variance becomes very large). Solution: Rao-Blackwellisation, that is, sample a subset of the variables and integrate the remainder out exactly. By the Rao-Blackwell theorem, the resulting estimates have lower variance.

25 Problem Formulation. Model: a general state-space model/DBN with hidden variables z_t and observed variables y_t. Objective: estimate the posterior p(z_{0:t} | y_{1:t}), or the filtering density p(z_t | y_{1:t}). To solve this problem one needs approximation schemes because of intractable integrals.

26 Rao-Blackwellised PF. Divide the hidden variables into two groups: r_t and x_t. Assume the conditional posterior p(x_{0:t} | y_{1:t}, r_{0:t}) is analytically tractable. Then we only need to estimate p(r_{0:t} | y_{1:t}), which lies in a space of reduced dimension.
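The variance reduction can be seen in a toy example: to estimate E[x] in a model with a hidden pair (r, x), sampling only r and plugging in the exact conditional mean E[x | r] gives lower-variance samples than sampling x as well. A sketch under purely illustrative model assumptions:

```python
import random

rng = random.Random(0)

# Illustrative model: r ~ Bernoulli(0.5), then x | r ~ N(mu_r, 1).
# To estimate E[x] we can (a) sample both r and x, or (b) sample only r
# and integrate x out exactly via E[x | r] = mu_r. Estimator (b) has
# lower variance (Rao-Blackwell theorem).
mu = {0: 0.0, 1: 4.0}
N = 5000

naive, rb = [], []
for _ in range(N):
    r = 1 if rng.random() < 0.5 else 0
    naive.append(rng.gauss(mu[r], 1.0))  # sample x as well
    rb.append(mu[r])                     # integrate x out analytically

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

print(var(naive) > var(rb))  # True: Rao-Blackwellised samples vary less
```

Here the x-component contributes an extra unit of variance to the naive samples that the Rao-Blackwellised estimator avoids entirely.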

27 Example: sample only B_t.

28 Importance Sampling and Rao-Blackwellisation. Monte Carlo integration.
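Monte Carlo integration with importance sampling is the building block behind the particle weights: draw from a proposal q and reweight by p/q. A sketch with a toy Gaussian target where E_p[x^2] = 1 (all distributions here are illustrative choices):

```python
import math
import random

rng = random.Random(1)

def pdf_normal(x, mu, s):
    """Density of N(mu, s^2) at x."""
    return math.exp(-(x - mu) ** 2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

# Target p = N(0, 1); proposal q = N(0, 2); integrand f(x) = x^2.
# E_p[f] ≈ (1/N) Σ f(x_i) w(x_i), x_i ~ q, with weights w = p/q.
N = 100_000
samples = [rng.gauss(0.0, 2.0) for _ in range(N)]
weights = [pdf_normal(x, 0.0, 1.0) / pdf_normal(x, 0.0, 2.0) for x in samples]
est = sum(w * x * x for w, x in zip(weights, samples)) / N
print(round(est, 2))  # close to the true value E_p[x^2] = 1.0
```

A wider proposal than the target keeps the weights bounded; a proposal with thinner tails than p would blow up the variance of the estimator.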
