Open-Universe State Estimation with DBLOG
Rodrigo de Salvo Braz*, Erik Sudderth, Nimar Arora and Stuart Russell
Research supported by the Defense Advanced Research Projects Agency (DARPA) through the Department of the Interior, NBC, Acquisition Services Division, under Contract No. NBCHD

1. What is DBLOG?
DBLOG stands for Dynamic BLOG (Bayesian LOGic). It is the counterpart of Dynamic Bayesian Networks in the BLOG framework. As such, it is a specialization of BLOG representation and algorithms targeted at temporal processes. A specialization is necessary because temporal structure can be exploited for more efficient inference, and the general BLOG algorithms do not know about it.

2. What changes from BLOG to DBLOG?
BLOG is very expressive and can already represent temporal models easily. Representation-wise, DBLOG only needs to know which of the predicates in a BLOG model are to be taken as temporal indices. This is done by using a special type, Timestep (see the example below). Timestep behaves like the natural numbers (even regular BLOG can process it), but DBLOG exploits its temporal interpretation for efficiency.

3. DBLOG (and also BLOG) model example

    #Aircraft ~ Poisson[3];

    random Real Position(Aircraft a, Timestep t)
      // uniform prior at the initial timestep; Gaussian drift from the previous position afterwards
      if t == @0 then ~ UniformReal[-10, 10]()
      else ~ Gaussian(Position(a, Prev(t)), 2);

    // number of blips from aircraft a is 0 or 1
    #Blip(Source = Aircraft a, Time = Timestep t) ~ TabularCPD[[0.2, 0.8]]();

    // number of false-alarm blips has a Poisson distribution
    #Blip(Time = Timestep t) ~ Poisson[2];

    random Real ApparentPos(Blip b, Timestep t)
      ~ Gaussian(Position(Source(b), Prev(t)), 2);

    // evidence
    obs {ApparentPos(b) for Blip b} = {1.2, 3.1, 4.0};
    obs {ApparentPos(b) for Blip b : Time(Source(b)) ...

4. Particle Filtering
For DBLOG particle filtering (PF), regular BLOG inference is performed in each particle. Within a particle, variables are instantiated lazily, up to parentless nodes.

[Figure: evidence evaluation within one particle. The sampled world has #Aircraft = 2, the aircraft being a1 and a2; blip b1 comes from a1, a2 produces no blips, and b2 and b3 are false-alarm blips. ApparentPos(b1), ApparentPos(b2) and ApparentPos(b3) are instantiated lazily up to parentless nodes, variables from previous timesteps having been sampled previously, and the likelihood of the evidence is computed.]

[Figure: one filtering step. The state samples for timestep t-1 are weighted by the evidence at t (likelihood weighting) and then resampled, yielding the state samples for timestep t.]
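The filtering step sketched in the figure can be written down generically. The following Python sketch shows one likelihood-weighting step of a bootstrap particle filter; it is an illustration of the scheme described above, not DBLOG's actual implementation, and the names particle_filter_step, sample_transition and evidence_likelihood are placeholders for "run regular BLOG inference inside the particle".

    import random

    def particle_filter_step(particles, evidence, sample_transition, evidence_likelihood):
        """One likelihood-weighting step: propagate, weight, resample.

        particles           -- state samples for timestep t-1
        evidence            -- observations arriving at timestep t
        sample_transition   -- draws a state for t given a state for t-1
        evidence_likelihood -- returns P(evidence | state at t)
        (All of these names are illustrative, not part of the DBLOG API.)
        """
        # Propagate each particle through the transition model.
        proposed = [sample_transition(state) for state in particles]

        # Weight each propagated particle by the likelihood of the new evidence.
        weights = [evidence_likelihood(evidence, state) for state in proposed]

        total = sum(weights)
        if total == 0.0:
            # Every particle is inconsistent with the evidence; return them
            # unweighted (a real system would need a recovery strategy here).
            return proposed

        # Resample with replacement in proportion to the weights, giving
        # equally weighted state samples for timestep t.
        return random.choices(proposed, weights=weights, k=len(particles))

In DBLOG, the state carried by a particle is a (partially instantiated) world of the open-universe model, and both the propagation and the likelihood computation are performed by regular BLOG inference, instantiating only the variables the evidence requires.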
DBLOG vs. Data Association methods
Data Association (DA) methods (e.g., Sittler '64) are ad hoc solutions to the data association problem: tracking unobservable sources (in our example, aircraft) from observations (blips).

Instantiation of all hypothesized sources
DA methods typically do not keep instantiated information about previously unobserved sources (in our example, aircraft). Instead, they instantiate a new source only when an observation does not fit the previously observed sources. A direct application of DBLOG, on the other hand, will instantiate all hypothesized sources (for the specific example presented). The key to this disparity is that the ad hoc methods effectively replace the original model by one with a variable for "the number of sources not previously observed". However, this solution is also available to DBLOG users, who can rewrite their model in the same manner. Doing so automatically would amount to performing lifted inference on BLOG models, an important direction for future research.

Backinstantiation
Suppose we use DBLOG in a way that does not require instantiating all hypothesized sources (as in the model rewrite above, or, in some other examples, through lazy instantiation). When a DA method instantiates a new hypothesized source, it does so in constant time. In DBLOG, on the other hand, when a source is finally instantiated, its state depends on its state at the previous timestep. That state has to be instantiated as well, and it in turn depends on its own previous state. We end up instantiating the entire chain all the way back to the first state, and the update time for the particle becomes dependent on the length of the entire chain. The key to this disparity is, again, an ad hoc rewriting of the model: DA methods use knowledge of the specific model to derive a source-state distribution conditioned on the source not having been observed so far. The state of a source being observed for the first time can then be sampled from that distribution, which does not depend on previous timesteps. Again, this solution is available to DBLOG users through the corresponding rewrite of the model.

A hidden assumption
An important point to note is that the distribution conditioned on "no observations so far" may be hard to compute exactly. DA models (like Sittler's) simply assume that the state distribution of a newly observed object remains the same throughout. The state of an unobserved object could depend on the other, previously observed sources, for example. Or, if certain states are more likely to be observed than others, then the very lack of observations is evidence that induces a posterior state distribution. The assumptions and approximations made by DA models have gone largely unacknowledged so far.
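The last point can be made concrete with a small numerical sketch (generic Python with made-up numbers; an illustration of the argument, not anything in DBLOG or in Sittler's method). If the probability of producing a blip depends on the source's state, then a run of timesteps with no detections shifts the posterior over that state:

    # Posterior over a source's (discretized, here static) state after k timesteps
    # with no detections: P(state | no detections) is proportional to
    # P(state) * (1 - p_detect(state))^k.  All numbers are made up for illustration.

    def posterior_given_missed_detections(prior, p_detect, k):
        unnormalized = [p * (1.0 - d) ** k for p, d in zip(prior, p_detect)]
        z = sum(unnormalized)
        return [w / z for w in unnormalized]

    states = ["near sensor", "far from sensor"]
    prior = [0.5, 0.5]        # uniform prior over the two regions
    p_detect = [0.9, 0.2]     # a nearby source is far more likely to produce a blip

    for k in (0, 1, 5):
        posterior = posterior_given_missed_detections(prior, p_detect, k)
        print(f"after {k} missed detections:",
              {s: round(p, 3) for s, p in zip(states, posterior)})

    # After 5 missed detections nearly all the probability mass is on
    # "far from sensor": the absence of blips is itself evidence, contrary to
    # the DA assumption that the unobserved-state distribution stays fixed.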