
1 Rao-Blackwellised Particle Filtering for Dynamic Bayesian Networks (Arnaud Doucet, Nando de Freitas et al., UAI 2000)

2 Introduction; Problem Formulation; Importance Sampling and Rao-Blackwellisation; Rao-Blackwellised Particle Filter; Example; Conclusion

3 Famous state estimation algorithms, the Kalman filter and the HMM filter, are only applicable to linear-Gaussian models and to finite state spaces respectively, and if the state space is large, the computational cost becomes too expensive. Sequential Monte Carlo methods (particle filtering) were introduced (Handschin and Mayne, 1969) to handle large state-space models.

4 Particle Filtering (PF) = "condensation" = "sequential Monte Carlo" = "survival of the fittest"
- PF can handle any type of probability distribution, nonlinearity and non-stationarity.
- PFs are powerful sampling-based inference/learning algorithms for DBNs.

5 Drawback of PF
- Inefficient in high-dimensional spaces (the variance of the estimates becomes very large).
Solution
- Rao-Blackwellisation, that is, sample a subset of the variables, allowing the remainder to be integrated out exactly. The resulting estimates can be shown to have lower variance (Rao-Blackwell theorem).
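In symbols (the variance-decomposition form of the Rao-Blackwell argument, stated here for reference; the slide's own equations were not transcribed): for any statistic f of the full variable set (X, R),

\[ \operatorname{var}\big(\mathbb{E}[f(X,R) \mid R]\big) = \operatorname{var}\big(f(X,R)\big) - \mathbb{E}\big[\operatorname{var}(f(X,R) \mid R)\big] \le \operatorname{var}\big(f(X,R)\big), \]

so an estimator that samples only R and computes E[f | R] exactly never has higher variance than one that samples (X, R) jointly.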

6 Model: a general state-space model/DBN with hidden variables z_t and observed variables y_t.
Objective: the posterior distribution p(z_{0:t} | y_{1:t}) or the filtering density p(z_t | y_{1:t}).
To solve this problem, one needs approximation schemes because of intractable integrals.
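Written out (the symbols z_t for the hidden state and y_t for the observation are my reconstruction; the slide's equations did not survive transcription), the model and the two target densities are

\[ z_t \sim p(z_t \mid z_{t-1}), \qquad y_t \sim p(y_t \mid z_t), \]
\[ p(z_{0:t} \mid y_{1:t}) \propto p(z_0) \prod_{k=1}^{t} p(z_k \mid z_{k-1})\, p(y_k \mid z_k), \qquad p(z_t \mid y_{1:t}) = \int p(z_{0:t} \mid y_{1:t})\, dz_{0:t-1}. \]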

7 Additional assumption in this paper:
- Divide the hidden variables into two groups, r_t and x_t.
- The conditional posterior distribution p(x_{0:t} | r_{0:t}, y_{1:t}) is analytically tractable.
- We only need to focus on estimating p(r_{0:t} | y_{1:t}), which lies in a space of reduced dimension.
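Concretely, with this split the posterior factorizes as

\[ p(r_{0:t}, x_{0:t} \mid y_{1:t}) = p(x_{0:t} \mid r_{0:t}, y_{1:t})\, p(r_{0:t} \mid y_{1:t}), \]

where the first factor is the tractable conditional (computed exactly, e.g. by a Kalman filter) and only the second, lower-dimensional factor needs to be approximated by sampling.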

8 Monte Carlo integration
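The slide's formulas were lost in transcription; the standard statement of Monte Carlo integration is: given N i.i.d. samples r^{(i)} drawn from a density p,

\[ I(f) = \int f(r)\, p(r)\, dr \approx \hat I_N(f) = \frac{1}{N} \sum_{i=1}^{N} f\big(r^{(i)}\big), \]

and \hat I_N(f) converges almost surely to I(f) as N grows.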

9 But it is impossible to sample efficiently from the "target" posterior distribution p(r_{0:t} | y_{1:t}).
Importance Sampling Method (alternative way): draw samples from an importance function q(r_{0:t} | y_{1:t}) and correct with the weight function w(r_{0:t}) = p(r_{0:t} | y_{1:t}) / q(r_{0:t} | y_{1:t}).
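In full (the standard importance sampling identity, written in the notation reconstructed above, with r short for r_{0:t}):

\[ I(f) = \int f(r)\, \frac{p(r \mid y_{1:t})}{q(r \mid y_{1:t})}\, q(r \mid y_{1:t})\, dr = \mathbb{E}_q\big[f(r)\, w(r)\big], \qquad w(r) = \frac{p(r \mid y_{1:t})}{q(r \mid y_{1:t})}. \]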

10 Point-mass approximation of the target, built from the samples and their normalized importance weights.
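The two missing formulas are, in their standard form: drawing r^{(i)}_{0:t} from q and weighting,

\[ \hat p(r_{0:t} \mid y_{1:t}) = \sum_{i=1}^{N} \tilde w^{(i)}\, \delta_{r^{(i)}_{0:t}}(r_{0:t}), \qquad \tilde w^{(i)} = \frac{w^{(i)}}{\sum_{j=1}^{N} w^{(j)}}, \]

where \delta denotes a Dirac point mass and the \tilde w^{(i)} are the normalized importance weights.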

12 In the case where x_{0:t} can be marginalized out analytically, importance sampling is applied to r_{0:t} alone.

14 We can estimate p(r_{0:t} | y_{1:t}) (and expectations under it) with a reduced variance.
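Putting the pieces together (a reconstruction in the notation above), the Rao-Blackwellised estimator replaces f by its exact conditional expectation:

\[ \widehat{I(f)} = \sum_{i=1}^{N} \tilde w^{(i)}\, \mathbb{E}\big[f(r_{0:t}, x_{0:t}) \mid r^{(i)}_{0:t}, y_{1:t}\big], \]

which, by the Rao-Blackwell argument of slide 5, has variance no larger than the estimator that also samples x_{0:t}.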

16 Sequential Importance Sampling
- Restrict the importance function to one that factorizes recursively: q(r_{0:t} | y_{1:t}) = q(r_{0:t-1} | y_{1:t-1}) q(r_t | r_{0:t-1}, y_{1:t}).
- We can then obtain recursive formulas for the weights; the "incremental weight" is given below.
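The missing recursion is the standard sequential importance sampling update (reconstructed):

\[ w_t^{(i)} \propto w_{t-1}^{(i)}\, \frac{p\big(y_t \mid r^{(i)}_{0:t}, y_{1:t-1}\big)\, p\big(r^{(i)}_t \mid r^{(i)}_{0:t-1}\big)}{q\big(r^{(i)}_t \mid r^{(i)}_{0:t-1}, y_{1:t}\big)}, \]

where the fraction is the incremental weight, and the predictive likelihood p(y_t | r_{0:t}, y_{1:t-1}) is available in closed form here because x_{0:t} is integrated out exactly.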

17 Choice of importance distribution
- The simplest choice is to sample from the prior, but it can be inefficient, since it ignores the most recent evidence.
- The "optimal" importance distribution is the one minimizing the variance of the importance weights.
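The optimal choice referred to here is, by the standard variance-minimization result (Doucet 1998):

\[ q^{\mathrm{opt}}\big(r_t \mid r_{0:t-1}, y_{1:t}\big) = p\big(r_t \mid r_{0:t-1}, y_{1:t}\big), \]

for which the incremental weight reduces to p(y_t | r_{0:t-1}, y_{1:t-1}) and no longer depends on the newly sampled r_t.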

18 But it is often too expensive. Several deterministic approximations to the optimal distribution have been proposed; see for example (de Freitas 1999, Doucet 1998).
Selection step
- Resampling: eliminate samples with low importance weights and multiply samples with high importance weights (e.g. residual sampling, stratified sampling, multinomial sampling); a sketch of one such scheme follows below.
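As an illustration of the selection step, here is a minimal sketch of multinomial resampling (my own illustrative Python/NumPy code, not the paper's; the function and argument names are hypothetical):

import numpy as np

def multinomial_resample(particles, weights, rng=None):
    # Selection step: eliminate low-weight samples and multiply
    # high-weight ones by drawing N indices in proportion to the
    # normalized importance weights. `particles` is an array of
    # shape (N, ...) and `weights` a length-N array.
    if rng is None:
        rng = np.random.default_rng()
    n = len(weights)
    idx = rng.choice(n, size=n, p=weights / weights.sum())
    # After resampling, all weights are reset to uniform 1/N.
    return particles[idx], np.full(n, 1.0 / n)

Residual and stratified sampling differ only in how the indices idx are drawn; they reduce the extra variance that this plain multinomial draw introduces.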

19 Goal: it is possible to simulate the number of basis functions and to compute the coefficients analytically using Kalman filters. This is because the output of the neural network is linear in the coefficients once the number of basis functions is fixed.
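In symbols (a hedged reconstruction; the example's equations are not in the transcript, and the input symbol u_t and noise symbol v_t are my choice), a radial basis function network with output

\[ y_t = \sum_{j=1}^{k_t} \beta_{j,t}\, \phi_j(u_t) + v_t \]

is linear in the coefficients \beta_t once the number of basis functions k_t (and hence the basis functions \phi_j) is fixed, so conditional on a sampled k_t the posterior over \beta_t is Gaussian and a Kalman filter computes it exactly.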

21 Successful applications
- Conditionally linear-Gaussian state-space models (a sketch follows below)
- Conditionally finite state-space HMMs
Possible extensions
- Dynamic models for counting observations
- Dynamic models with a time-varying unknown covariance matrix for the dynamic noise
- Classes of exponential family state-space models, etc.
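To make the first application concrete, below is a minimal sketch of one Rao-Blackwellised particle filter step for a conditionally linear-Gaussian (switching) state-space model. This is my own illustrative Python/NumPy code under assumed model matrices, not the paper's implementation: the discrete regime r_t is sampled from its transition prior, the linear-Gaussian state x_t is marginalized exactly by a per-particle Kalman filter, and the Kalman innovation likelihood supplies the incremental weight.

import numpy as np

def rbpf_step(particles, y, trans, A, Q, C, Robs, rng):
    # One RBPF step for the assumed model:
    #   r_t ~ trans[r_{t-1}]                      (discrete regime, sampled)
    #   x_t = A[r_t] x_{t-1} + N(0, Q[r_t])       (marginalized by Kalman)
    #   y_t = C x_t + N(0, Robs)
    # Each particle is a tuple (r, mean, cov, weight).
    new = []
    for r, m, P, w in particles:
        # Sample the regime from its prior (the simplest importance function).
        r_new = rng.choice(len(trans), p=trans[r])
        # Kalman prediction under the sampled regime.
        m_pred = A[r_new] @ m
        P_pred = A[r_new] @ P @ A[r_new].T + Q[r_new]
        # Kalman update; the innovation statistics give the weight update.
        S = C @ P_pred @ C.T + Robs
        K = P_pred @ C.T @ np.linalg.inv(S)
        innov = y - C @ m_pred
        m_new = m_pred + K @ innov
        P_new = (np.eye(len(m)) - K @ C) @ P_pred
        # Incremental weight = predictive likelihood p(y_t | r_{0:t}, y_{1:t-1}),
        # a Gaussian with mean C m_pred and covariance S.
        loglik = -0.5 * (innov @ np.linalg.solve(S, innov)
                         + np.log(np.linalg.det(2.0 * np.pi * S)))
        new.append((r_new, m_new, P_new, w * np.exp(loglik)))
    # Normalize the weights; in a full filter, the resampling step from
    # slide 18 would follow here whenever the weights degenerate.
    total = sum(p[3] for p in new)
    return [(r, m, P, w / total) for r, m, P, w in new]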

