Tracking with dynamics


Tracking with dynamics
Use a model of expected motion to predict where objects will occur in the next frame, even before seeing the image.
Intent:
- Do less work looking for the object by restricting the search.
- Get improved estimates, since measurement noise is tempered by smoothness and dynamics priors.
Assumption: continuous motion patterns:
- The camera does not jump instantly to a new viewpoint.
- Objects do not disappear and reappear in different places in the scene.
- The pose between camera and scene changes gradually.

Notation reminder
x ~ N(μ, Σ): a random variable with a Gaussian probability distribution that has mean vector μ and covariance matrix Σ.
x and μ are d-dimensional; Σ is d × d.
If x is 1-d (d = 1), we have just one Σ parameter, the variance σ².

Tracking as inference
The hidden state consists of the true parameters we care about, denoted X.
The measurement is our noisy observation that results from the underlying state, denoted Y.

State vs. observation
Hidden state: the parameters of interest.
Measurement: what we get to observe directly.

Tracking as inference
The hidden state consists of the true parameters we care about, denoted X.
The measurement is our noisy observation that results from the underlying state, denoted Y.
At each time step, the state changes (from X_{t-1} to X_t) and we get a new observation Y_t.
Our goal: recover the most likely state X_t given
- all observations seen so far, and
- knowledge about the dynamics of state transitions.

Tracking as inference: intuition
(figure: at time t, an old belief is combined with a measurement to give a corrected prediction; at time t+1, that belief is propagated into a new prediction, which is corrected by the next measurement)

Standard independence assumptions
Only the immediate past state influences the current state:
P(X_t | X_0, …, X_{t-1}) = P(X_t | X_{t-1})
The measurement at time t depends only on the current state:
P(Y_t | X_0, …, X_t, Y_0, …, Y_{t-1}) = P(Y_t | X_t)

Tracking as inference
Prediction: given the measurements we have seen up to this point, what state should we predict?
P(X_t | y_0, …, y_{t-1})
Correction: now, given the current measurement as well, what state should we estimate?
P(X_t | y_0, …, y_t)

Tracking as inference: recursive process
Base case: we have an initial prior P(X_0) on the state in the absence of any evidence, which we can correct based on the first measurement Y_0 = y_0.
Given the corrected estimate for frame t:
1. Predict for frame t+1.
2. Correct for frame t+1.

Questions
- How do we represent the known dynamics that govern the changes in the states?
- How do we represent the relationship between state and measurements, plus our uncertainty in the measurements?
- How do we compute each cycle of updates?
Representation: we will consider the class of linear dynamic models, with associated Gaussian pdfs.
Updates: via the Kalman filter.

Linear dynamic model
Describes the a priori knowledge about the system.
Dynamics model: represents the evolution of the state over time, with noise:
x_t = D_t x_{t-1} + noise,  noise ~ N(0, Σ_d)
(x_t is n × 1, D_t is n × n)
Measurement model: at every time step we get a noisy measurement of the state:
y_t = M_t x_t + noise,  noise ~ N(0, Σ_m)
(y_t is m × 1, M_t is m × n)

Example: randomly drifting points
Consider a stationary object, with position as its state.
The position is constant; the only motion is due to the random noise term:
x_t = x_{t-1} + noise
State evolution is described by the identity matrix: D = I.
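The drift model above can be sketched in a few lines of NumPy (the 2-d state and the noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

D = np.eye(2)      # identity dynamics: position is nominally constant
sigma_d = 0.1      # std. dev. of the process noise (assumed)

x = np.zeros(2)    # initial position
trajectory = [x.copy()]
for _ in range(100):
    x = D @ x + rng.normal(0.0, sigma_d, size=2)  # pure random drift
    trajectory.append(x.copy())
trajectory = np.array(trajectory)
```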

Example: Constant velocity (1D points)
(figure: 1-d position states and position measurements plotted against time)

Example: Constant velocity (1D points)
State vector: position p and velocity v, i.e. x_t = [p_t, v_t]^T.
Dynamics (Greek letters denote noise terms):
p_t = p_{t-1} + v_{t-1} Δt + ε
v_t = v_{t-1} + ξ
The measurement is position only:
y_t = p_t + δ
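These dynamics and measurement equations correspond to simple model matrices; a sketch with Δt = 1:

```python
import numpy as np

dt = 1.0
D = np.array([[1.0, dt],
              [0.0, 1.0]])   # constant-velocity dynamics
M = np.array([[1.0, 0.0]])   # we observe position only

x = np.array([0.0, 2.0])     # position 0, velocity 2
x_next = D @ x               # noiseless step: position advances by v*dt
```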

Example: Constant acceleration (1D points)

Example: Constant acceleration (1D points)
State vector: position p, velocity v, and acceleration a, i.e. x_t = [p_t, v_t, a_t]^T.
Dynamics (Greek letters denote noise terms):
p_t = p_{t-1} + v_{t-1} Δt + ε
v_t = v_{t-1} + a_{t-1} Δt + ξ
a_t = a_{t-1} + ζ
The measurement is position only:
y_t = p_t + δ
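As with constant velocity, these equations fix the model matrices; a sketch with Δt = 1:

```python
import numpy as np

dt = 1.0
D = np.array([[1.0, dt, 0.0],
              [0.0, 1.0, dt],
              [0.0, 0.0, 1.0]])   # constant-acceleration dynamics
M = np.array([[1.0, 0.0, 0.0]])   # observe position only

x = np.array([0.0, 1.0, 2.0])     # p = 0, v = 1, a = 2
x_next = D @ x                    # noiseless step
```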

Questions
- How do we represent the known dynamics that govern the changes in the states?
- How do we represent the relationship between state and measurements, plus our uncertainty in the measurements?
- How do we compute each cycle of updates?
Representation: we will consider the class of linear dynamic models, with associated Gaussian pdfs.
Updates: via the Kalman filter.

The Kalman filter
- A method for tracking linear dynamic models in Gaussian noise.
- The predicted and corrected state distributions are Gaussian: we only need to maintain the mean and covariance.
- The calculations are easy (all the integrals can be done in closed form).

Kalman filter loop
Time update ("predict"): knowing the corrected state from the previous time step, and all measurements up to the current one, predict the distribution over the next state. Output: mean and std. dev. of the predicted state.
Measurement update ("correct"): receive the measurement; knowing the predicted state and the new measurement, update the distribution over the current state. Output: mean and std. dev. of the corrected state.
Then time advances (t++) and the cycle repeats.

Kalman filter for a 1-d state
We want to represent and update the predicted distribution P(X_t | y_0, …, y_{t-1}) = N(μ_t^-, (σ_t^-)²) and the corrected distribution P(X_t | y_0, …, y_t) = N(μ_t^+, (σ_t^+)²).

1D Kalman filter: prediction
We have a linear dynamic model defining the predicted state evolution, with noise:
x_t = d x_{t-1} + noise,  noise ~ N(0, σ_d²)
We want the predicted distribution for the next state:
P(X_t | y_0, …, y_{t-1}) = N(μ_t^-, (σ_t^-)²)
Update the mean: μ_t^- = d μ_{t-1}^+
Update the variance: (σ_t^-)² = σ_d² + (d σ_{t-1}^+)²
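A minimal sketch of this predict step (the function and variable names are my own):

```python
def kalman_predict_1d(mu_plus, var_plus, d, var_d):
    """1-d Kalman predict: propagate the corrected estimate through the dynamics.

    mu_plus, var_plus: corrected mean and variance from the previous step.
    d: dynamics coefficient; var_d: process-noise variance.
    """
    mu_minus = d * mu_plus                  # mean follows the dynamics
    var_minus = var_d + d**2 * var_plus     # uncertainty grows with the noise
    return mu_minus, var_minus
```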

1D Kalman filter: correction
We have a linear model defining the mapping of state to measurements:
y_t = m x_t + noise,  noise ~ N(0, σ_m²)
We want the corrected distribution given the latest measurement:
P(X_t | y_0, …, y_t) = N(μ_t^+, (σ_t^+)²)
Update the mean: μ_t^+ = (μ_t^- σ_m² + m y_t (σ_t^-)²) / (σ_m² + m² (σ_t^-)²)
Update the variance: (σ_t^+)² = σ_m² (σ_t^-)² / (σ_m² + m² (σ_t^-)²)
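A minimal sketch of this correct step (the function and variable names are my own):

```python
def kalman_correct_1d(mu_minus, var_minus, y, m, var_m):
    """1-d Kalman correct: fold the latest measurement y into the prediction.

    mu_minus, var_minus: predicted mean and variance.
    m: measurement coefficient; var_m: measurement-noise variance.
    """
    denom = var_m + m**2 * var_minus
    mu_plus = (mu_minus * var_m + m * y * var_minus) / denom
    var_plus = (var_m * var_minus) / denom   # variance can only shrink
    return mu_plus, var_plus
```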

Prediction vs. correction
What if there is no prediction uncertainty (σ_t^- = 0)? The measurement is ignored!
What if there is no measurement uncertainty (σ_m = 0)? The prediction is ignored!
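These two limiting cases can be checked numerically against the standard 1-d correction formula (a self-contained sketch; the names are my own):

```python
def correct_mean(mu_minus, var_minus, y, m, var_m):
    # Standard 1-d Kalman correction of the mean.
    denom = var_m + m**2 * var_minus
    return (mu_minus * var_m + m * y * var_minus) / denom

# No prediction uncertainty (var_minus = 0): the measurement is ignored.
assert correct_mean(3.0, 0.0, 10.0, 1.0, 1.0) == 3.0
# No measurement uncertainty (var_m = 0): the prediction is ignored.
assert correct_mean(3.0, 1.0, 10.0, 1.0, 0.0) == 10.0
```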

Recall: constant velocity example
(figure: position states and measurements plotted over time)
State is 2-d: position + velocity.
Measurement is 1-d: position.

Constant velocity model: Kalman filter processing
(figure: position vs. time; o = true state, x = measurement, * = predicted mean estimate, + = corrected mean estimate; bars show the variance estimates before and after each measurement)


Kalman filter: general case (> 1 dim)
What if state vectors have more than one dimension?
Predict:
μ_t^- = D_t μ_{t-1}^+
Σ_t^- = D_t Σ_{t-1}^+ D_t^T + Σ_d
Correct:
K_t = Σ_t^- M_t^T (M_t Σ_t^- M_t^T + Σ_m)^{-1}
μ_t^+ = μ_t^- + K_t (y_t - M_t μ_t^-)
Σ_t^+ = (I - K_t M_t) Σ_t^-
K_t is the Kalman gain: more weight goes to the residual when the measurement error covariance approaches 0, and less weight as the a priori estimate error covariance approaches 0.
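A compact NumPy sketch of one predict/correct cycle, run on the constant-velocity model (the noise covariances and measurement values are illustrative assumptions):

```python
import numpy as np

def kalman_step(mu, Sigma, y, D, M, Sigma_d, Sigma_m):
    """One predict/correct cycle of the multivariate Kalman filter."""
    # Predict.
    mu_pred = D @ mu
    Sigma_pred = D @ Sigma @ D.T + Sigma_d
    # Correct.
    S = M @ Sigma_pred @ M.T + Sigma_m        # innovation covariance
    K = Sigma_pred @ M.T @ np.linalg.inv(S)   # Kalman gain
    mu_new = mu_pred + K @ (y - M @ mu_pred)
    Sigma_new = (np.eye(len(mu)) - K @ M) @ Sigma_pred
    return mu_new, Sigma_new

# Constant-velocity model with position-only measurements.
D = np.array([[1.0, 1.0], [0.0, 1.0]])
M = np.array([[1.0, 0.0]])
Sigma_d = 0.01 * np.eye(2)                    # small process noise (assumed)
Sigma_m = np.array([[0.25]])                  # measurement noise (assumed)

mu, Sigma = np.zeros(2), np.eye(2)
for y in [1.0, 2.1, 2.9, 4.2]:                # positions on a roughly unit-velocity track
    mu, Sigma = kalman_step(mu, Sigma, np.array([y]), D, M, Sigma_d, Sigma_m)
```

After a few measurements, the filter's velocity estimate settles near the true slope of the track even though velocity is never measured directly.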

Alternative: particle filtering for non-Gaussian densities
- Represent the distribution with a set of weighted samples ("particles").
- Allows us to maintain multiple hypotheses.
For details: CONDENSATION -- conditional density propagation for visual tracking, by Michael Isard and Andrew Blake, Int. J. Computer Vision, 29(1):5-28, 1998.
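For contrast with the Kalman filter, here is a minimal bootstrap particle filter for a 1-d drifting state (a sketch; the random-drift dynamics and noise levels are my own assumptions, not the CONDENSATION paper's exact model):

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter_step(particles, weights, y, sigma_d=0.1, sigma_m=0.5):
    """One predict/weight/resample cycle with random-drift dynamics."""
    # Predict: propagate each particle through the noisy dynamics.
    particles = particles + rng.normal(0.0, sigma_d, size=particles.shape)
    # Weight: Gaussian measurement likelihood of observation y.
    weights = weights * np.exp(-0.5 * ((y - particles) / sigma_m) ** 2)
    weights /= weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

n = 500
particles = rng.uniform(-5.0, 5.0, size=n)   # broad initial belief
weights = np.full(n, 1.0 / n)
for y in [1.0, 1.1, 0.9, 1.0]:               # measurements near x = 1
    particles, weights = particle_filter_step(particles, weights, y)
estimate = particles.mean()                  # posterior mean estimate
```

Unlike the Kalman filter, the particle set can represent multimodal beliefs, at the cost of maintaining many samples instead of a single mean and covariance.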