1 Particle Filtering

2 Sensors and Uncertainty
- Real-world sensors are noisy and suffer from missing data (e.g., occlusions, GPS blackouts)
- Use dynamics models and sensor models to estimate ground truth and unobserved variables, make forecasts, and fuse multiple sensors

3 Hidden Markov Model
- Use the observations leading up to time t to estimate where the robot is at time t
- x_i is the robot state at time i (unknown); z_i is the observation at time i (z_1,…,z_t are known)
- [Figure: HMM graphical model — hidden state variables x_0, x_1, x_2, x_3; observed variables z_1, z_2, z_3]
- Predict – observe – predict – observe…
- Problem-specific probability distributions link the states and observations

4 Last Class
- Kalman filtering and its extensions
- Exact Bayesian inference for Gaussian state distributions, process noise, and observation noise
- What about more general distributions?
- Key representational issue: how to represent and perform calculations on probability distributions?

5 State estimation: block form
- [Block diagram] The state estimator takes the observations z_1,…,z_t as data input, makes callbacks to the dynamics model and the sensor model, and outputs the estimated state x_t (or its distribution P(x_t | z_1,…,z_t))

6 Recursive state estimation: block form
- [Block diagram] The recursive state estimator takes only the latest observation z_t as data input, makes callbacks to the dynamics model and the sensor model, and outputs the estimated state distribution P(x_t | z_1,…,z_t)
- Use the output at time t to calculate the estimate for time t+1

7 Particle Filtering (aka Sequential Monte Carlo)
- Represent distributions as a set of particles
- Applicable to non-Gaussian, high-dimensional distributions
- Convenient implementations
- Widely used in vision and robotics

8 Simultaneous Localization and Mapping (SLAM)
- Mobile robots
- Odometry: locally accurate, but drifts significantly over time
- Vision/ladar/sonar: inaccurate locally, but tied to a global reference frame
- Combine the two
- State: (robot pose, map); observations: (sensor input)

9 General problem
- x_t ~ Bel(x_t) (arbitrary p.d.f.)
- x_{t+1} = f(x_t, u, ε_p) [dynamics model], with process noise ε_p ~ arbitrary p.d.f.
- z_{t+1} = g(x_{t+1}, ε_o) [sensor model], with observation noise ε_o ~ arbitrary p.d.f.
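
To make this concrete, here is a minimal Python/NumPy sketch of the model, with a hypothetical linear dynamics f and direct-observation sensor g; the Gaussian draws are placeholders standing in for the arbitrary noise p.d.f.s above:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, u, eps_p):
    """Dynamics model: placeholder additive motion with process noise."""
    return x + u + eps_p

def g(x, eps_o):
    """Sensor model: placeholder direct observation with additive noise."""
    return x + eps_o

# Simulate one step; the "arbitrary p.d.f.s" are Gaussians here for illustration.
x_t = np.array([0.0, 0.0])
u_t = np.array([1.0, 0.5])
x_next = f(x_t, u_t, rng.normal(0.0, 0.1, size=2))   # eps_p ~ N(0, 0.1^2)
z_next = g(x_next, rng.normal(0.0, 0.5, size=2))     # eps_o ~ N(0, 0.5^2)
```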

10 Example: sensor fusion
- Two sensors estimate an underlying process, e.g., pose estimation
- State: position + orientation of a frame
- Sensor A: inertial measurement unit
- Sensor B: visual odometry, feature tracking
- Sensor A is noisier on average but is consistent
- Sensor B is accurate most of the time but has occasional failures
- State x_t: true pose at time t
- Observation z_t: (T^A_t, T^B_t, n^B_t), where n^B_t = # of feature matches in visual odometry between frames t-1 and t

11 Example: sensor fusion (ignore rotation)
- Dynamics model: x_{t+1} = x_t + ε_p
- Random walk model: assume the x-y components of ε_p are distributed according to a normal distribution N(0, σ_horiz), and the vertical component according to N(0, σ_vert)
- Sensor model: assume "good reading" and "bad reading" levels for sensor B
- T^A_t = x_t + ε_oA
- (T^B_t, n^B_t) = (x_t + ε_oB(good), n_B(good)) with probability P(good)
- (T^B_t, n^B_t) = (x_t + ε_oB(bad), n_B(bad)) with probability 1 − P(good)
- [Figure: distributions of T^A_t and T^B_t around x_t with noise spreads ε_oA, ε_oB(good), ε_oB(bad), and feature-match counts n_B(good) vs. n_B(bad)]
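
A sketch of the two-mode model for sensor B; the values for P(good), the noise levels, and the feature-match counts are all made-up assumptions, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

P_GOOD = 0.9                         # assumed probability of a good reading
SIGMA_GOOD, SIGMA_BAD = 0.05, 2.0    # assumed noise levels (good vs. bad)
N_GOOD, N_BAD = 200, 5               # assumed typical feature-match counts

def sample_sensor_B(x_t):
    """Sample (T_B, n_B): good reading with prob P_GOOD, bad otherwise."""
    if rng.random() < P_GOOD:
        return x_t + rng.normal(0.0, SIGMA_GOOD, size=x_t.shape), N_GOOD
    return x_t + rng.normal(0.0, SIGMA_BAD, size=x_t.shape), N_BAD

T_B, n_B = sample_sensor_B(np.array([1.0, 2.0, 0.5]))
```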

12 Particle Representation
- Bel(x_t) = {(w_k, x_k), k=1,…,N}
- w_k are weights, x_k are state hypotheses
- Weights sum to 1
- Approximates the underlying distribution

13 Monte Carlo Integration
- If P(x) ≈ Bel(x) = {(w_k, x_k), k=1,…,N}, then E_P[f(x)] = ∫ f(x) P(x) dx ≈ Σ_k w_k f(x_k)
- What might you want to compute?
- Mean: set f(x) = x
- Variance: set f(x) = x² (recover Var(x) = E[x²] − E[x]²)
- P(y): set f(x) = P(y|x), because P(y) = ∫ P(y|x) P(x) dx
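
A minimal sketch of the mean and variance estimates on a toy particle set; the particle values and weights here are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy particle set approximating some belief Bel(x).
xs = rng.normal(3.0, 1.0, size=1000)   # state hypotheses x_k
ws = np.full(1000, 1.0 / 1000)         # weights w_k, summing to 1

mean = np.sum(ws * xs)                 # f(x) = x
second_moment = np.sum(ws * xs**2)     # f(x) = x^2
var = second_moment - mean**2          # Var(x) = E[x^2] - E[x]^2
```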

14 Filtering Steps
- Predict: compute Bel'(x_{t+1}), the distribution of x_{t+1} using the dynamics model alone
- Update: compute a representation of P(x_{t+1} | z_{t+1}) via likelihood weighting of each particle in Bel'(x_{t+1})
- Resample to produce Bel(x_{t+1}) for the next step

15 Predict Step
- Given input particles Bel(x_t)
- The distribution of x_{t+1} = f(x_t, u_t, ε) is determined by sampling ε from its distribution and then propagating the individual particles
- Gives Bel'(x_{t+1})
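
A sketch of the predict step, assuming an additive random-walk dynamics with Gaussian process noise (both placeholder choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(xs, u, sigma_p=0.1):
    """Push every particle through the dynamics with freshly sampled noise.

    Assumes x_{t+1} = x_t + u + eps_p with eps_p ~ N(0, sigma_p^2);
    weights are unchanged by prediction.
    """
    eps = rng.normal(0.0, sigma_p, size=xs.shape)
    return xs + u + eps

xs = rng.normal(0.0, 1.0, size=500)   # particles for Bel(x_t)
xs_pred = predict(xs, u=1.0)          # particles for Bel'(x_{t+1})
```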

16 Particle Propagation

17 Update Step
- Goal: compute a representation of P(x_{t+1} | z_{t+1}) given Bel'(x_{t+1}) and z_{t+1}
- P(x_{t+1} | z_{t+1}) = η P(z_{t+1} | x_{t+1}) P(x_{t+1}), with η a normalizing constant
- P(x_{t+1}) = Bel'(x_{t+1}) (given)
- Each state hypothesis x_k ∈ Bel'(x_{t+1}) is reweighted by P(z_{t+1} | x_{t+1})
- Likelihood weighting: w_k ← w_k P(z_{t+1} | x_{t+1} = x_k)
- Then renormalize the weights to sum to 1

18 Update Step
- w_k' ← w_k · P(z_{t+1} | x_{t+1} = x_k)
- 1D example: g(x, ε_o) = h(x) + ε_o, with ε_o ~ N(μ, σ)
- P(z_{t+1} | x_{t+1} = x_k) = C exp(−(h(x_k) − z_{t+1})² / (2σ²))
- In general, the distribution can be calibrated using experimental data
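
A sketch of likelihood weighting for this 1D Gaussian sensor model; the observation function h and noise level sigma are placeholders, and the constant C cancels when the weights are renormalized:

```python
import numpy as np

def update(xs, ws, z, h=lambda x: x, sigma=0.5):
    """Likelihood-weight particles under z = h(x) + eps_o, eps_o ~ N(0, sigma^2)."""
    likelihood = np.exp(-(h(xs) - z) ** 2 / (2.0 * sigma ** 2))
    ws = ws * likelihood
    return ws / np.sum(ws)   # renormalize so the weights sum to 1
```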

19 Resampling
- Likelihood-weighted particles may no longer represent the distribution efficiently
- Importance resampling: sample new particles proportionally to weight
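
A minimal sketch of importance resampling using NumPy's weighted sampling:

```python
import numpy as np

rng = np.random.default_rng(0)

def resample(xs, ws):
    """Draw N new particles with probability proportional to weight,
    then reset the weights to uniform."""
    n = len(xs)
    idx = rng.choice(n, size=n, p=ws)
    return xs[idx], np.full(n, 1.0 / n)
```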

20 Sampling Importance Resampling (SIR) variant
- Predict
- Update
- Resample
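
Putting the three steps together, a self-contained sketch of the SIR loop; the random-walk dynamics and direct Gaussian observation model are placeholder choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_step(xs, ws, u, z, sigma_p=0.1, sigma_o=0.5):
    """One SIR iteration: predict, update, resample."""
    # Predict: propagate particles with sampled process noise.
    xs = xs + u + rng.normal(0.0, sigma_p, size=xs.shape)
    # Update: likelihood-weight against the latest observation.
    ws = ws * np.exp(-(xs - z) ** 2 / (2.0 * sigma_o ** 2))
    ws /= np.sum(ws)
    # Resample: draw proportionally to weight, reset to uniform.
    idx = rng.choice(len(xs), size=len(xs), p=ws)
    return xs[idx], np.full(len(xs), 1.0 / len(xs))

# Filter a short (made-up) observation sequence.
xs = rng.normal(0.0, 1.0, size=1000)
ws = np.full(1000, 1.0 / 1000)
for z in [1.1, 2.0, 2.9]:
    xs, ws = sir_step(xs, ws, u=1.0, z=z)
print("estimated state:", np.sum(ws * xs))
```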

21 Particle Filtering Issues
- Variance: the std. dev. of a quantity (e.g., the mean) computed as a function of the particle representation scales as ~ 1/sqrt(N) (see the sketch after this slide)
- Loss of particle diversity: resampling will likely drop particles with low likelihood, which may turn out to be useful hypotheses in the future
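
A quick empirical check of the 1/sqrt(N) scaling, using uniformly weighted particles drawn from a standard normal:

```python
import numpy as np

rng = np.random.default_rng(0)

# Std. dev. of the particle-mean estimate shrinks like 1/sqrt(N).
for n in [100, 400, 1600]:
    means = [rng.normal(0.0, 1.0, size=n).mean() for _ in range(2000)]
    print(n, np.std(means))   # roughly 0.1, 0.05, 0.025
```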

22 Other Resampling Variants
- Selective resampling: keep the weights, and only resample when the # of "effective particles" drops below a threshold (see the sketch after this slide)
- Stratified resampling: reduce variance using quasi-random sampling
- Optimization: explicitly choose particles to minimize deviance from the posterior
- …
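
A sketch of selective resampling using the common effective-sample-size criterion N_eff = 1 / Σ_k w_k²; the 50% threshold is an assumed, typical choice:

```python
import numpy as np

def n_effective(ws):
    """Effective sample size: equals N for uniform weights and
    approaches 1 when a single particle dominates."""
    return 1.0 / np.sum(ws ** 2)

def maybe_resample(xs, ws, threshold_frac=0.5, rng=np.random.default_rng(0)):
    """Selective resampling: keep the weights unless N_eff drops too low."""
    if n_effective(ws) < threshold_frac * len(ws):
        idx = rng.choice(len(xs), size=len(xs), p=ws)
        return xs[idx], np.full(len(xs), 1.0 / len(xs))
    return xs, ws
```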

23 Storing more information with the same # of particles
- Unscented Particle Filter: each particle represents a local Gaussian and maintains a local covariance matrix; a combination of particle filter + Kalman filter
- Rao-Blackwellized Particle Filter: state (x_1, x_2); each particle contains a hypothesis of x_1 and an analytical distribution over x_2; reduces variance

24 Advanced Filtering Topics
- Mixing exact and approximate representations (e.g., mixture models)
- Multiple hypothesis tracking (assignment problem)
- Model calibration
- Scaling up (e.g., 3D SLAM, huge maps)

25 Recap
- Bayesian mechanisms for state estimation are well understood
- The key challenge is how to represent the distribution
- Methods:
  - Kalman filters: highly efficient closed-form solution for Gaussian distributions
  - Particle filters: approximate filtering for high-dimensional, non-Gaussian distributions
- Implementation challenges differ across domains (localization, mapping, SLAM, tracking)

26 Project presentations
- 5 minutes each
- Suggested format:
  - Slide 1: project title, team members
  - Slide 2: motivation, problem overview
  - Slide 3: demonstration scenario walkthrough; include figures
  - Slide 4: breakdown of system components, team member roles; system block diagram
  - Slide 5: integration plan; identify potential issues to monitor
- Integration plan: 3-4 milestones (1-2 weeks each)
- Make sure you still have a viable proof-of-concept demo if you cannot complete the final milestone
- Take feedback into account in the 2-page project proposal doc; due 3/26, but you should start on the work plan ASAP


28 Recovering the Distribution
- Kernel density estimation: P(x) = Σ_k w_k K(x, x_k)
- K(x, x_k) is the kernel function
- The approximation improves as the number of particles and the kernel sharpness increase
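
A sketch of a weighted kernel density estimate with Gaussian kernels; the bandwidth is an assumed tuning parameter:

```python
import numpy as np

def kde(x, xs, ws, bandwidth=0.3):
    """Weighted KDE with Gaussian kernels: P(x) = sum_k w_k K(x, x_k)."""
    kernels = np.exp(-(x - xs) ** 2 / (2.0 * bandwidth ** 2))
    kernels /= np.sqrt(2.0 * np.pi) * bandwidth   # normalize each kernel
    return np.sum(ws * kernels)
```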

