Particle Filters for Localization & Abnormality Detection
Dan Bryce
Markoviana Reading Group
Saturday, May 16, 2015

Problems
– Position Tracking
– Global Localization
– Kidnapped Robot (Failure Recovery)
– Multi-Robot Localization

Solution Technique
MCL – Monte Carlo Localization: Particle Filter + Model of Sensors + Model of Effectors
Need:
– Bel(x_0): prior or initial belief (x is position and heading)
– p(x_t | x_{t-1}, u_{t-1}): effector model, u is the control (e.g., turn, drive)
– p(y_t | x_t): sensor model, y is the sensor's estimate of distance (e.g., laser range finder)
– q: sampling distribution
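To make these pieces concrete, here is a minimal sketch of one MCL update step. The helper functions sample_motion_model and sensor_likelihood are hypothetical placeholders for the effector and sensor models; this is an illustration, not code from the paper.

```python
# Minimal MCL / particle filter step: predict with the effector model,
# weight with the sensor model, then resample. The helpers are hypothetical
# stand-ins for p(x_t | x_{t-1}, u_{t-1}) and p(y_t | x_t).
import numpy as np

def mcl_step(particles, u, y, sample_motion_model, sensor_likelihood):
    # Predict: push each particle through the effector model
    particles = np.array([sample_motion_model(x, u) for x in particles])
    # Correct: weight each particle by the sensor model p(y_t | x_t)
    weights = np.array([sensor_likelihood(y, x) for x in particles])
    weights = weights / weights.sum()
    # Resample particles in proportion to their weights
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]
```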

p(x_t | x_{t-1}, u_{t-1})
The effector model is the convolution of three distributions:
– Robot kinematics model
– Rotation noise
– Translation noise
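As an illustration, sampling from such an effector model can be written as an ideal kinematic step with Gaussian rotation and translation noise added. The noise standard deviations below are made-up values, not those from the chapter.

```python
# Illustrative sample from p(x_t | x_{t-1}, u_{t-1}): ideal kinematics plus
# Gaussian rotation and translation noise. Noise sigmas are made up.
import numpy as np

def sample_motion_model(pose, u, rot_sigma=0.05, trans_sigma=0.1):
    x, y, theta = pose
    turn, drive = u                                            # control: (rotation, forward distance)
    theta = theta + turn + np.random.normal(0.0, rot_sigma)    # rotation noise
    d = drive + np.random.normal(0.0, trans_sigma)             # translation noise
    return np.array([x + d * np.cos(theta), y + d * np.sin(theta), theta])
```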

Digression: the "action" input
Notice that the transition function is now p(x_t | x_{t-1}, u_{t-1}): it depends both on the previous state x_{t-1} and on the action taken, u_{t-1}. The actions are presumably chosen by the robot during a policy computation phase.
– If we are tracking the robot (as opposed to the robot localizing itself), we will not have access to the u_{t-1} input, so our filtering problem is really a "plan monitoring" problem.
This raises interesting questions about how to combine planning with filtering/monitoring. For example, if the robot loses its bearings, it may want to shift from the policy it is executing to a "find-your-bearings" policy (such as heading to the nearest wall).

p(y_t | x_t)
Convolution of two distributions (Fig. 19.2):
– Ideal noise-free model of the laser range finder
– Mixture of noise variables: P(correct reading) convolved with Gaussian noise, P(max reading), and random noise following an exponential distribution
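A hedged sketch of such a range-finder likelihood as a three-component mixture follows. The mixture weights, sigma, and exponential rate are illustrative, and the exponential term is not re-normalized over [0, y_max].

```python
# Illustrative range-finder likelihood p(y_t | x_t): a Gaussian around the
# ideal expected range (correct reading), a point mass at the max reading,
# and an exponential random-noise component. All parameters are made up.
import numpy as np
from scipy.stats import norm, expon

def sensor_likelihood(y, expected_range, y_max=10.0,
                      w_hit=0.8, w_max=0.1, w_rand=0.1,
                      sigma=0.2, rate=1.0):
    p_hit = norm.pdf(y, loc=expected_range, scale=sigma)   # correct reading + Gaussian noise
    p_max = 1.0 if np.isclose(y, y_max) else 0.0           # max-range readings
    p_rand = expon.pdf(y, scale=1.0 / rate) if 0.0 <= y <= y_max else 0.0
    return w_hit * p_hit + w_max * p_max + w_rand * p_rand
```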

p(y_t | x_t) (figure)

Menkes' question
Menkes asked why the distribution of particles in the top figure of 19.4 does not look as if the particles are uniformly distributed. One explanation is that even if the robot starts with a uniform prior, within one iteration (of pushing the particles through the transition function and weighting/resampling with respect to the sensor model) the particles may already start to die out in regions that are completely inconsistent with the sensor readings.

q
q = p(x_t | x_{t-1}, u_{t-1}) Bel(x_{t-1})  (19.2.4)
– Sampling from the effector model and the prior
– Good if sensors are bad, bad if sensors are good
Approximates:
– the true posterior that reflects the sensors
Assigned weights are proportional to p(y_t | x_t)  (19.2.6)
– Adjusts the samples to reflect the sensor model

q'
The previous q assumed the sensors were very noisy and gave them less influence on the samples. (19.3.7) instead samples from p(y_t | x_t):
– Rely more on the sensors and less on the prior belief and the last control
Several variations on setting the weights follow.

Digression: getting particles in the right place
There are two issues with the particles: getting them in the right place (e.g., the right robot pose), and getting their weights. If all the particles are in the wrong parts of the pose space (all inconsistent with the current sensor information), they will all get zero weight and become useless. If at least some particles are in the right part of the pose space, they can get non-zero weight and prosper (as happens in the MCL random-particle idea). The best outcome would be for all particles to lie in the part of the space where the posterior probability is high.
– Since we don't know the posterior distribution, we have to either sample particles from the prior and the transition function,
– OR sample from the sensor model (i.e., find the poses that have a high p(y|x) value for the y the sensors are currently giving us).

Variation 1
For each sample, draw a pair of samples, one from the prior and one from the sensor model (19.3.9).
– The importance factor is proportional to p(x_t^i | u_{t-1}, x_{t-1}^i)
– No need to transform sample sets into densities
– The importance factor may be zero for many poses
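A small sketch of this variation, where sample_from_sensor and motion_density are hypothetical helpers (the latter evaluates p(x_t | u_{t-1}, x_{t-1})):

```python
# Sketch of Variation 1: draw x_{t-1} from the prior particle set and x_t from
# the sensor model, then weight the pair by the effector model
# p(x_t | u_{t-1}, x_{t-1}). Helper names are hypothetical placeholders.
import numpy as np

def variation1_sample(prior_particles, u, y, sample_from_sensor, motion_density):
    x_prev = prior_particles[np.random.randint(len(prior_particles))]  # ~ Bel(x_{t-1})
    x_new = sample_from_sensor(y)                                      # ~ p(y_t | x_t)
    weight = motion_density(x_new, u, x_prev)  # p(x_t | u_{t-1}, x_{t-1}); often near zero
    return x_new, weight
```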

Kd-trees
A kd-tree is a tree with the following properties:
– Each node represents a rectilinear region (faces aligned with the axes)
– Each node is associated with an axis-aligned plane that cuts its region into two, and it has a child for each sub-region
– The directions of the cutting planes alternate with depth: first a cut perpendicular to the x-axis, then perpendicular to y, then z, and then back to x (some variations apparently ignore this rule)
Kd-trees generalize octrees by allowing splitting planes at variable positions.
– Note that cut planes in different sub-trees at the same level do not have to be the same
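A minimal construction sketch (median split on axes that alternate with depth); this is illustrative only, and the density trees built from particle sets would also store a weight or count in each node:

```python
# Minimal kd-tree construction: median split on axes that alternate with depth.
def build_kdtree(points, depth=0):
    if not points:
        return None
    axis = depth % len(points[0])                     # cycle through x, y, z, ...
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],                         # the cutting plane passes through this point
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

# Example: build_kdtree([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
```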

Kd-tree Example

Another kd-tree

Variation 2
Forward sampling + kd-trees (Fig.) to approximate q' = p(x_t | d_{0...t-1}, u_{t-1})
– Sample a set from Bel(x_{t-1}^j), then from p(x_t^j | u_{t-1}, x_{t-1}^j), to get samples of p(x_t^j | d_{0...t-1}, u_{t-1})
– Turn the set into a kd-tree (generalizes the poses)
– The weight is proportional to the probability of x_t^i in the kd-tree, i.e., proportional to p(x_t^i | d_{0...t-1}, u_{t-1})

Variation 3
Best of 1 and 2: large weights, no forward sampling
– Turn Bel(x_{t-1}) into a kd-tree
– Sample x_t^i from p(y_t | x_t)
– Sample x_{t-1}^i from p(x_t^i | u_{t-1}, x_{t-1})
– Set the weight to the probability of x_{t-1}^i in the kd-tree
Weight is proportional to p(x_t^i | u_{t-1}) Bel(x_{t-1}^i)

Mixture proposal
q: fails if sensors are accurate; q': fails when sensors are not accurate.
When in doubt, use a mixture: (1 - φ) q + φ q'
– They used φ = 0.1, and the second method to compute the weight for q' (Fig. 19.11, shown with 1000 particles and with 50 particles)
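A sketch of drawing one particle from the mixture proposal (the helper names are hypothetical placeholders for the two samplers):

```python
# Mixture proposal: with probability phi sample the pose from the sensor
# model (q'), otherwise sample it from the motion model applied to an old
# particle (q). phi = 0.1 follows the slide; helpers are placeholders.
import numpy as np

def sample_mixture(x_prev, u, y, sample_motion_model, sample_from_sensor, phi=0.1):
    if np.random.rand() < phi:
        return sample_from_sensor(y)        # q': pose consistent with the sensor reading
    return sample_motion_model(x_prev, u)   # q : pose from the effector model and prior
```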

Kidnapped Robot
Localization error after kidnapping: MCL > MCL with random noise particles > Mixture MCL (i.e., Mixture MCL recovers best)

Learning and Inferring Transportation Routines
An engineering feat combining many technologies:
– Abstract Hierarchical Markov Model, represented as a DBN; inference with a Rao-Blackwellised particle filter
– EM
– Error/abnormality detection
– Plan recognition

H-HMM

RB-Filter (figure labels: Non-Gaussian, Gaussian, Evidence)

Tracking
Posterior sample:
– Sample the high-level discrete variables to get: (equation omitted)
– Then predict the Gaussian filter over position with: (equation omitted)

Tracking (cont'd)
– Correction: (equation omitted)
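A hedged sketch of one Rao-Blackwellised step, assuming a linear-Gaussian position model per particle: each particle samples its discrete mode and then runs an exact Kalman predict/correct for the position, with the weight taken from the innovation likelihood. The matrices A, C, Q, R and the sample_mode helper are illustrative placeholders, not the model from the paper.

```python
# Hedged Rao-Blackwellised particle filter step. Each particle carries a
# sampled discrete mode plus a Kalman mean/covariance for the position.
import numpy as np

def rbpf_step(particles, y, A, C, Q, R, sample_mode):
    """particles: list of dicts with keys 'mode', 'mu', 'Sigma', 'w'."""
    for p in particles:
        p["mode"] = sample_mode(p["mode"])              # sample the non-Gaussian (discrete) part
        # Kalman predict for the Gaussian part
        mu_pred = A @ p["mu"]
        S_pred = A @ p["Sigma"] @ A.T + Q
        # Kalman correct with the observation y
        innov = y - C @ mu_pred
        S_y = C @ S_pred @ C.T + R
        K = S_pred @ C.T @ np.linalg.inv(S_y)
        p["mu"] = mu_pred + K @ innov
        p["Sigma"] = (np.eye(len(mu_pred)) - K @ C) @ S_pred
        # Re-weight by the innovation likelihood N(innov; 0, S_y)
        p["w"] *= np.exp(-0.5 * innov @ np.linalg.inv(S_y) @ innov) \
                  / np.sqrt(np.linalg.det(2.0 * np.pi * S_y))
    total = sum(p["w"] for p in particles)
    for p in particles:
        p["w"] /= total
    return particles
```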

Goal and segment estimates
Do filtering as previously described (for the nodes below the line in the graph), but allow lower nodes to be conditioned on upper nodes.
For each filtered particle, use exact inference to update g_k^i and t_k^i.

Learning
Learn the structural (flat) model:
– Goals: EM for edge duration, then cluster
– Transfer points: count transitions in the forward/backward pass
Learn the hierarchical transition probabilities:
– between goals, segments given a goal, and streets given a segment

Detecting Errors
We can't add all possible unknown goals and transfers to the model. Instead, use two trackers (combined with Bayes factors):
– The first uses the hierarchical model (fewer, costly particles)
– The second uses the flat model (more, cheap particles)
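A small sketch of the comparison: accumulate each tracker's observation log-likelihood and flag abnormality when the flat model explains the recent data much better than the hierarchical model. The window length and threshold are made-up values, not those from the paper.

```python
# Abnormality detection via a log Bayes factor between the two trackers.
import numpy as np

def abnormal(loglik_flat_history, loglik_hier_history, window=20, log_threshold=3.0):
    recent_flat = np.sum(loglik_flat_history[-window:])
    recent_hier = np.sum(loglik_hier_history[-window:])
    log_bayes_factor = recent_flat - recent_hier      # log Bayes factor, flat vs. hierarchical
    return log_bayes_factor > log_threshold
```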

Comparison to Flat and 2MM

Questions
In what sense is an AHMM an HMM rather than just a general DBN?
The issue of converting the agent's plans into a compact AHMM seems to rest on assumptions about the compactness of the set of behaviors the agent is likely to exhibit.
– It seems as if the plan recognition problem at hand would have been quite easy in deterministic domains (since the agent is involved in only a few plans at most).

Combining plan synthesis and plan monitoring
We discussed the possibility of hooking up the MCL-style plan monitoring engine to a plan synthesis algorithm. Dan/Will said their simulator for Chitta's class last year was a non-deterministic simulator (which doesn't take likelihoods into account).
– It would be nice to extend it.
It would then be nice to interleave monitoring and replanning (when monitoring says that things have gone seriously awry).