Computer vision: models, learning and inference. Chapter 19: Temporal models. ©2011 Simon J.D. Prince

Goal
To track the object state from frame to frame in a video.
Difficulties:
- Clutter (data association)
- One image may not be enough to fully define the state
- The relationship between frames may be complicated

Structure
- Temporal models
- Kalman filter
- Extended Kalman filter
- Unscented Kalman filter
- Particle filters
- Applications

Temporal Models
Consider an evolving system represented by an unknown vector w; this is termed the state.
Examples:
- 2D position of a tracked object in the image
- 3D pose of a tracked object in the world
- Joint positions of an articulated model
Our goal: to compute the marginal posterior distribution over w at time t.

Estimating State
Two contributions to estimating the state:
1. A set of measurements x_t, which provide information about the state w_t at time t. This is a generative model: the measurements are derived from the state using a known probability relation Pr(x_t | w_1...w_T).
2. A time series model, which says something about the expected way that the system will evolve, e.g. Pr(w_t | w_1...w_{t-1}, w_{t+1}...w_T).

Temporal Models (figure)

Assumptions
- Only the immediate past matters (Markov): the probability of the state at time t is conditionally independent of the states at times 1...t-2 given the state at time t-1.
- Measurements depend only on the current state: the likelihood of the measurement at time t is conditionally independent of all other measurements and of the states at times 1...t-1, t+1...T given the state at time t.
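Written out, these two assumptions amount to the following conditional independences (standard state-space-model notation):

```latex
% Markov assumption: only the previous state matters
\Pr(\mathbf{w}_t \mid \mathbf{w}_{1\ldots t-1}) = \Pr(\mathbf{w}_t \mid \mathbf{w}_{t-1})

% Measurements are conditionally independent given the current state
\Pr(\mathbf{x}_t \mid \mathbf{w}_{1\ldots T},\, \mathbf{x}_{1\ldots T}\setminus\mathbf{x}_t) = \Pr(\mathbf{x}_t \mid \mathbf{w}_t)
```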

Graphical Model (figure showing the chain of world states and the measurements generated from them)

Recursive Estimation
(figure: the posterior is built up recursively at time 1, time 2, ..., time t, with the prior at each step coming from the temporal model)
At each step, Bayes' rule combines this prior with the likelihood of the new measurement to give the posterior over the state.

Computing the Prior (Time Evolution)
At each time step, the prior is obtained from the Chapman-Kolmogorov equation: the temporal model is combined with the posterior at time t-1 to give the prior at time t.
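In symbols (a standard way of writing this recursion; the underbraces match the labels on the slide):

```latex
% Prediction (Chapman-Kolmogorov): prior at time t from the posterior at time t-1
\Pr(\mathbf{w}_t \mid \mathbf{x}_{1\ldots t-1})
  = \int \underbrace{\Pr(\mathbf{w}_t \mid \mathbf{w}_{t-1})}_{\text{temporal model}}\;
         \underbrace{\Pr(\mathbf{w}_{t-1} \mid \mathbf{x}_{1\ldots t-1})}_{\text{posterior at time } t-1}
    \, d\mathbf{w}_{t-1}

% Measurement update (Bayes' rule)
\Pr(\mathbf{w}_t \mid \mathbf{x}_{1\ldots t})
  \propto \Pr(\mathbf{x}_t \mid \mathbf{w}_t)\, \Pr(\mathbf{w}_t \mid \mathbf{x}_{1\ldots t-1})
```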

Summary
Alternate between:
- Temporal evolution (temporal model)
- Measurement update (measurement model)

Structure (recap). Next: the Kalman filter.

Kalman Filter
The Kalman filter is a special case of this type of recursive estimation procedure. The temporal model and the measurement model are carefully chosen so that if the posterior at time t-1 is Gaussian, then:
- the prior at time t will be Gaussian, and
- the posterior at time t will be Gaussian.
The Kalman filter equations are rules for updating the means and covariances of these Gaussians.

The Kalman Filter (figure labels: previous time step, prediction, measurement likelihood, combination)

Kalman Filter Definition
- Time evolution equation: a state transition matrix maps the previous state to the current one, with additive Gaussian noise.
- Measurement equation: relates the state to the measurement, again with additive Gaussian noise.
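A sketch of the two equations in the book's notation (state transition matrix Ψ, measurement matrix Φ, offsets μ_p and μ_m; treat the exact symbols as an assumption if checking against the slides):

```latex
% Time evolution (temporal model): linear in the previous state, additive Gaussian noise
\mathbf{w}_t = \boldsymbol{\mu}_p + \boldsymbol{\Psi}\mathbf{w}_{t-1} + \boldsymbol{\epsilon}_p,
  \qquad \boldsymbol{\epsilon}_p \sim \mathrm{Norm}\!\left[\mathbf{0}, \boldsymbol{\Sigma}_p\right]

% Measurement equation: linear in the current state, additive Gaussian noise
\mathbf{x}_t = \boldsymbol{\mu}_m + \boldsymbol{\Phi}\mathbf{w}_t + \boldsymbol{\epsilon}_m,
  \qquad \boldsymbol{\epsilon}_m \sim \mathrm{Norm}\!\left[\mathbf{0}, \boldsymbol{\Sigma}_m\right]
```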

Temporal Evolution (prediction equations for the prior mean and covariance; shown in figure)

Measurement Incorporation (update equations; shown in figure)

Kalman Filter
This is not the usual way these equations are presented. Part of the reason is the size of the matrices being inverted: the measurement matrix Φ is usually landscape (fewer measurements than state dimensions), so forming and inverting Φ^T Σ_m^{-1} Φ, which is the size of the state, is inefficient. Instead, define the Kalman gain (equation in figure).

Mean Term: using matrix inversion relations (derivation in figure).

Covariance Term: using matrix inversion relations (derivation in figure).

Final Kalman Filter Equations
- The mean update is driven by the innovation (the difference between the actual and predicted measurements).
- The posterior covariance is the prior variance minus a term due to the information from the measurement.
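For reference, the gain form of these updates, written to match the linear-Gaussian model sketched earlier (the standard equations; the exact symbols on the slide may differ):

```latex
% Prediction of the prior mean and covariance
\boldsymbol{\mu}_+ = \boldsymbol{\mu}_p + \boldsymbol{\Psi}\boldsymbol{\mu}_{t-1}, \qquad
\boldsymbol{\Sigma}_+ = \boldsymbol{\Psi}\boldsymbol{\Sigma}_{t-1}\boldsymbol{\Psi}^{T} + \boldsymbol{\Sigma}_p

% Kalman gain
\mathbf{K} = \boldsymbol{\Sigma}_+\boldsymbol{\Phi}^{T}
             \left(\boldsymbol{\Sigma}_m + \boldsymbol{\Phi}\boldsymbol{\Sigma}_+\boldsymbol{\Phi}^{T}\right)^{-1}

% Measurement update: innovation scaled by the gain; covariance shrinks
\boldsymbol{\mu}_t = \boldsymbol{\mu}_+ + \mathbf{K}\left(\mathbf{x}_t - \boldsymbol{\mu}_m - \boldsymbol{\Phi}\boldsymbol{\mu}_+\right), \qquad
\boldsymbol{\Sigma}_t = \left(\mathbf{I} - \mathbf{K}\boldsymbol{\Phi}\right)\boldsymbol{\Sigma}_+
```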

Kalman Filter Summary
- Time evolution equation
- Measurement equation
- Inference: recursively update the Gaussian posterior
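A minimal numerical sketch of one predict/update cycle under the model above. The matrix names (Psi, Phi, Sigma_p, Sigma_m) mirror the notation assumed here rather than any code from the book:

```python
import numpy as np

def kalman_step(mu, Sigma, x, Psi, Sigma_p, Phi, Sigma_m, mu_p=None, mu_m=None):
    """One Kalman filter cycle: predict with the temporal model,
    then incorporate the measurement x. All densities are Gaussian."""
    d_w, d_x = mu.shape[0], x.shape[0]
    mu_p = np.zeros(d_w) if mu_p is None else mu_p
    mu_m = np.zeros(d_x) if mu_m is None else mu_m

    # Prediction (Chapman-Kolmogorov with a linear-Gaussian temporal model)
    mu_pred = mu_p + Psi @ mu
    Sigma_pred = Psi @ Sigma @ Psi.T + Sigma_p

    # Kalman gain (innovation covariance S is measurement-sized, so cheap to invert)
    S = Sigma_m + Phi @ Sigma_pred @ Phi.T
    K = Sigma_pred @ Phi.T @ np.linalg.inv(S)

    # Measurement update: move the mean by the gain times the innovation
    innovation = x - (mu_m + Phi @ mu_pred)
    mu_new = mu_pred + K @ innovation
    Sigma_new = (np.eye(d_w) - K @ Phi) @ Sigma_pred
    return mu_new, Sigma_new

# Example: track a 1D position + velocity from noisy position measurements
Psi = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity model
Phi = np.array([[1.0, 0.0]])               # observe position only
mu, Sigma = np.zeros(2), np.eye(2)
mu, Sigma = kalman_step(mu, Sigma, np.array([0.9]),
                        Psi, 0.01 * np.eye(2), Phi, 0.1 * np.eye(1))
```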

Kalman Filter Example 1 (figure)

Kalman Filter Example 2: alternates between prediction and measurement update (figure).

Smoothing
Filtered estimates depend only on measurements up to the current point in time. Sometimes we want to estimate the state based on future measurements as well.
- Fixed-lag smoother: an on-line scheme in which the optimal estimate for the state at time t-τ is computed from measurements up to time t, where τ is the time lag, i.e. we wish to calculate Pr(w_{t-τ} | x_1...x_t).
- Fixed-interval smoother: we have a fixed interval of measurements and want the optimal state estimate based on all of them. In other words, instead of calculating Pr(w_t | x_1...x_t) we now estimate Pr(w_t | x_1...x_T), where T is the total length of the interval.

Fixed-Lag Smoother: state evolution equation and measurement equation (shown in figure); the estimate is delayed by τ.
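One common way to realise a fixed-lag smoother, and I believe the construction intended here (treat the details as a sketch rather than the slide's exact formulation), is to augment the state with its τ most recent values so that an ordinary Kalman filter on the augmented state yields the delayed estimate:

```latex
\tilde{\mathbf{w}}_t =
\begin{bmatrix} \mathbf{w}_t \\ \mathbf{w}_{t-1} \\ \vdots \\ \mathbf{w}_{t-\tau} \end{bmatrix},
\qquad
\tilde{\boldsymbol{\Psi}} =
\begin{bmatrix}
\boldsymbol{\Psi} & 0 & \cdots & 0 & 0\\
\mathbf{I} & 0 & \cdots & 0 & 0\\
0 & \mathbf{I} & \cdots & 0 & 0\\
\vdots & & \ddots & & \vdots\\
0 & 0 & \cdots & \mathbf{I} & 0
\end{bmatrix},
\qquad
\tilde{\boldsymbol{\Phi}} =
\begin{bmatrix} \boldsymbol{\Phi} & 0 & \cdots & 0 \end{bmatrix}
```

The last block of the filtered posterior mean over the augmented state is then the smoothed estimate of w_{t-τ}.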

Fixed-Lag Kalman Smoothing (example figure)

Fixed-Interval Smoothing: a backward set of recursions (equations in figure); equivalent to belief propagation / the forward-backward algorithm.
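The backward pass has the standard Rauch-Tung-Striebel form; written in the notation assumed above (a sketch, not necessarily the slide's exact symbols):

```latex
% Backward gain computed from the filtered covariance at time t
\mathbf{C}_t = \boldsymbol{\Sigma}_t \boldsymbol{\Psi}^{T}
               \left(\boldsymbol{\Psi}\boldsymbol{\Sigma}_t\boldsymbol{\Psi}^{T} + \boldsymbol{\Sigma}_p\right)^{-1}

% Smoothed mean and covariance, computed backwards from t = T-1 down to 1
\boldsymbol{\mu}_{t\mid T} = \boldsymbol{\mu}_t
  + \mathbf{C}_t\left(\boldsymbol{\mu}_{t+1\mid T} - \boldsymbol{\mu}_p - \boldsymbol{\Psi}\boldsymbol{\mu}_t\right)

\boldsymbol{\Sigma}_{t\mid T} = \boldsymbol{\Sigma}_t
  + \mathbf{C}_t\left(\boldsymbol{\Sigma}_{t+1\mid T} - \boldsymbol{\Psi}\boldsymbol{\Sigma}_t\boldsymbol{\Psi}^{T} - \boldsymbol{\Sigma}_p\right)\mathbf{C}_t^{T}
```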

Temporal Models (overview figure)

Problems with the Kalman Filter
- It requires linear temporal and measurement equations.
- It represents the result as a normal distribution: what if the posterior is genuinely multi-modal?

Structure (recap). Next: the extended Kalman filter.

Roadmap (figure)

Extended Kalman Filter
Allows non-linear measurement and temporal equations.
Key idea: take a Taylor expansion and treat the functions as locally linear.

Jacobians: the linearisation is based on Jacobian matrices of derivatives (definitions in figure).
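Concretely, if the temporal and measurement relations are w_t = f[w_{t-1}, ε_p] and x_t = g[w_t, ε_m], the Jacobians used for the local linearisation are (symbol names here are my own, chosen to match the linear case above):

```latex
% Jacobian of the temporal function, evaluated at the previous posterior mean
\boldsymbol{\Psi}_{ij} = \left.\frac{\partial f_i}{\partial w_j}\right|_{\mathbf{w}=\boldsymbol{\mu}_{t-1}}
\qquad\qquad
% Jacobian of the measurement function, evaluated at the predicted mean
\boldsymbol{\Phi}_{ij} = \left.\frac{\partial g_i}{\partial w_j}\right|_{\mathbf{w}=\boldsymbol{\mu}_{+}}
```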

Extended Kalman Filter Equations (shown in figure)
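A minimal sketch of one EKF cycle, assuming non-linear functions f (temporal) and g (measurement) with additive Gaussian noise; the Jacobians are taken numerically here for brevity, whereas an analytic Jacobian would normally be supplied:

```python
import numpy as np

def numerical_jacobian(func, w, eps=1e-6):
    """Finite-difference Jacobian of func at w (rows: outputs, cols: states)."""
    f0 = func(w)
    J = np.zeros((f0.shape[0], w.shape[0]))
    for j in range(w.shape[0]):
        dw = np.zeros_like(w)
        dw[j] = eps
        J[:, j] = (func(w + dw) - f0) / eps
    return J

def ekf_step(mu, Sigma, x, f, g, Sigma_p, Sigma_m):
    """One extended Kalman filter cycle: linearise f and g about the mean."""
    # Predict: propagate the mean through f, and the covariance via the Jacobian
    Psi = numerical_jacobian(f, mu)
    mu_pred = f(mu)
    Sigma_pred = Psi @ Sigma @ Psi.T + Sigma_p

    # Update: linearise the measurement function about the predicted mean
    Phi = numerical_jacobian(g, mu_pred)
    S = Sigma_m + Phi @ Sigma_pred @ Phi.T
    K = Sigma_pred @ Phi.T @ np.linalg.inv(S)
    mu_new = mu_pred + K @ (x - g(mu_pred))
    Sigma_new = (np.eye(mu.shape[0]) - K @ Phi) @ Sigma_pred
    return mu_new, Sigma_new

# Toy example: pendulum-like state [angle, angular velocity], measure sin(angle)
f = lambda w: np.array([w[0] + 0.1 * w[1], w[1] - 0.1 * np.sin(w[0])])
g = lambda w: np.array([np.sin(w[0])])
mu, Sigma = np.array([0.3, 0.0]), 0.1 * np.eye(2)
mu, Sigma = ekf_step(mu, Sigma, np.array([0.25]), f, g,
                     1e-3 * np.eye(2), 1e-2 * np.eye(1))
```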

Extended Kalman Filter (example figure)

Problems with the EKF (figure)

Structure (recap). Next: the unscented Kalman filter.

Unscented Kalman Filter
Key ideas:
- Approximate the distribution as a set of weighted particles with the correct mean and covariance.
- Pass the particles through the non-linear function.
- Compute the mean and covariance of the transformed variables.

Unscented Kalman Filter: approximate the distribution with particles chosen so that their mean and covariance match (equations in figure).

One possible scheme for choosing the particles (construction and weights shown in figure).
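A sketch of one standard construction (the symmetric 2D+1 sigma-point scheme; the slide may use a different parameterisation), together with the reconstitution of the mean and covariance after a non-linear function:

```python
import numpy as np

def sigma_points(mu, Sigma, kappa=1.0):
    """Symmetric 2D+1 sigma points whose sample mean/covariance match mu, Sigma."""
    D = mu.shape[0]
    L = np.linalg.cholesky((D + kappa) * Sigma)   # matrix square root
    points = [mu]
    for j in range(D):
        points.append(mu + L[:, j])
        points.append(mu - L[:, j])
    weights = np.full(2 * D + 1, 1.0 / (2 * (D + kappa)))
    weights[0] = kappa / (D + kappa)
    return np.array(points), weights

def unscented_transform(points, weights, func):
    """Pass sigma points through func and recompute the mean and covariance."""
    transformed = np.array([func(p) for p in points])
    mean = weights @ transformed
    diff = transformed - mean
    cov = (weights[:, None] * diff).T @ diff
    return mean, cov

# Example: propagate a Gaussian through a non-linear temporal function
f = lambda w: np.array([w[0] + 0.1 * w[1], w[1] - 0.1 * np.sin(w[0])])
pts, wts = sigma_points(np.array([0.3, 0.0]), 0.1 * np.eye(2), kappa=1.0)
mu_pred, Sigma_pred = unscented_transform(pts, wts, f)
```

In the filter's prediction step, the process noise covariance Σ_p would then be added to the reconstituted covariance.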

Reconstitution: recompute the mean and covariance from the transformed particles (figure).

Unscented Kalman Filter (figure)

Measurement Incorporation
Measurement incorporation works in a similar way:
- Approximate the predicted distribution by a set of particles.
- The particles are chosen so that the mean and covariance are the same.

Measurement Incorporation (continued)
- Pass the particles through the measurement equation and recompute the mean and variance.
- The Kalman gain is now computed from the particles.
- Apply the measurement update equations (shown in figure).

Problems with the UKF (figure)

Structure (recap). Next: particle filters.

Particle Filters
Key idea: represent the probability distribution as a set of weighted particles.
Advantages and disadvantages:
+ Can represent non-Gaussian, multimodal densities
+ No need for data association
- Expensive

Condensation Algorithm, Stage 1: resample from the weighted particles according to their weights to obtain unweighted particles.

Condensation Algorithm, Stage 2: pass the unweighted samples through the temporal model and add noise.

Condensation Algorithm, Stage 3: weight the samples by the measurement density.
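A minimal sketch of one condensation cycle with the three stages above, assuming a temporal function f with additive noise and a measurement likelihood supplied by the caller (the function and parameter names here are illustrative, not from the book):

```python
import numpy as np

def condensation_step(particles, weights, x, f, noise_cov, likelihood, rng):
    """One condensation cycle: resample, predict with noise, reweight."""
    N, D = particles.shape

    # Stage 1: resample according to the weights to get unweighted particles
    idx = rng.choice(N, size=N, p=weights)
    resampled = particles[idx]

    # Stage 2: pass through the temporal model and add noise
    noise = rng.multivariate_normal(np.zeros(D), noise_cov, size=N)
    predicted = np.array([f(w) for w in resampled]) + noise

    # Stage 3: weight each sample by the measurement density Pr(x | w)
    new_weights = np.array([likelihood(x, w) for w in predicted])
    new_weights /= new_weights.sum()
    return predicted, new_weights

# Toy example: 1D state observed through Gaussian noise
rng = np.random.default_rng(0)
f = lambda w: 0.95 * w                                        # simple temporal model
likelihood = lambda x, w: np.exp(-0.5 * np.sum((x - w) ** 2) / 0.1)
particles = rng.normal(0.0, 1.0, size=(200, 1))
weights = np.full(200, 1.0 / 200)
particles, weights = condensation_step(particles, weights, np.array([0.4]),
                                       f, 0.05 * np.eye(1), likelihood, rng)
```

The posterior mean can then be estimated as the weighted average of the particles, weights @ particles.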

Structure (recap). Next: applications.

Tracking Pedestrians (example figure)

Tracking a Contour in Clutter (example figure)

Simultaneous Localization and Mapping (example figure)