Introduction to Kalman Filters
Michael Williams, 5 June 2003

Overview
- The problem: why do we need Kalman filters?
- What is a Kalman filter?
- Conceptual overview
- The theory of the Kalman filter
- A simple example

The Problem
- The system state cannot be measured directly.
- We need to estimate it "optimally" from the measurements.

(Block diagram: external controls and system error sources drive the system, a black box whose state is desired but not known; measuring devices, subject to measurement error sources, produce the observed measurements, which an estimator turns into an optimal estimate of the system state.)

What is a Kalman Filter?
- A recursive data-processing algorithm.
- Generates an optimal estimate of the desired quantities given the set of measurements.
- Optimal? For a linear system with white Gaussian errors, the Kalman filter is the "best" estimate based on all previous measurements; for a non-linear system, optimality is "qualified".
- Recursive? It does not need to store all previous measurements and reprocess all the data at each time step.

Conceptual Overview
- A simple example to motivate the workings of the Kalman filter.
- Theoretical justification comes later; for now, just focus on the concept.
- Important: prediction and correction.

Conceptual Overview
- Lost on a one-dimensional line; position y(t).
- Assume Gaussian-distributed measurements.

Conceptual Overview
- Sextant measurement at t1: mean z1 and variance σ²_z1.
- The optimal estimate of position is ŷ(t1) = z1.
- Variance of the error in the estimate: σ²_x(t1) = σ²_z1.
- The boat is in the same position at time t2, so the predicted position is z1.

Conceptual Overview
- So we have the prediction ŷ⁻(t2).
- GPS measurement at t2: mean z2 and variance σ²_z2.
- We need to correct the prediction using the measurement to get ŷ(t2).
- The estimate should move closer to the more trusted measurement. Linear interpolation?

(Sketch: the prediction ŷ⁻(t2) and the measurement z(t2) shown as two Gaussians.)

Conceptual Overview
- The corrected mean is the new optimal estimate of position.
- The new variance is smaller than either of the previous two variances.

(Sketch: the prediction ŷ⁻(t2), the measurement z(t2), and the corrected optimal estimate ŷ(t2).)

Conceptual Overview
Lessons so far:
- Make a prediction based on previous data: ŷ⁻, σ⁻.
- Take a measurement: zk, σz.
- Optimal estimate: ŷ = prediction + K · (measurement − prediction), where K is the Kalman gain.
- Variance of the estimate: variance of the prediction · (1 − K).
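The scalar blend described in these lessons can be sketched in a few lines of Python; the numeric values below are made up purely for illustration, not taken from the slides.

```python
# Scalar Kalman update: blend a prediction and a measurement.
# All numbers here are illustrative.

def scalar_update(y_pred, var_pred, z, var_z):
    """Return the corrected estimate and its (smaller) variance."""
    K = var_pred / (var_pred + var_z)   # Kalman gain
    y = y_pred + K * (z - y_pred)       # prediction + K * (measurement - prediction)
    var = var_pred * (1.0 - K)          # variance shrinks after blending
    return y, var

# Prediction at 10.0 (variance 4.0); measurement at 12.0 (variance 1.0).
y, var = scalar_update(y_pred=10.0, var_pred=4.0, z=12.0, var_z=1.0)
print(y, var)   # estimate lands closer to the more trusted measurement
```

Note that the resulting variance is smaller than both input variances, matching the "smaller variance" lesson above.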

Conceptual Overview
- At time t3, the boat moves with velocity dy/dt = u.
- Naive approach: shift the probability distribution to the right to predict.
- This would work if we knew the velocity exactly (a perfect model).

(Sketch: ŷ(t2) and the naive prediction ŷ⁻(t3).)

Conceptual Overview
- Better to assume an imperfect model by adding Gaussian noise: dy/dt = u + w.
- The distribution for the prediction moves and spreads out.

(Sketch: ŷ(t2), the naive prediction ŷ⁻(t3), and the noisy prediction ŷ⁻(t3).)

Conceptual Overview
- Now we take a measurement at t3.
- We need to once again correct the prediction, same as before.

(Sketch: the prediction ŷ⁻(t3), the measurement z(t3), and the corrected optimal estimate ŷ(t3).)

Conceptual Overview
Lessons learnt from the conceptual overview:
- Initial conditions: ŷ_k-1 and σ_k-1.
- Prediction (ŷ⁻_k, σ⁻_k): use the initial conditions and a model (e.g. constant velocity) to make a prediction.
- Measurement (z_k): take a measurement.
- Correction (ŷ_k, σ_k): use the measurement to correct the prediction by "blending" the prediction and the residual. It is always a case of merging just two Gaussians, giving an optimal estimate with smaller variance.
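The full predict-then-correct cycle from these lessons can be sketched for the one-dimensional boat example; the velocity, noise levels, and measurement below are all assumed values chosen for illustration.

```python
# One predict-correct cycle on the 1-D line (illustrative numbers).

def predict(y, var, u, dt, q):
    """Shift the mean by the modelled velocity u and spread by process noise q."""
    return y + u * dt, var + q

def correct(y_pred, var_pred, z, r):
    """Blend the prediction with measurement z (variance r)."""
    K = var_pred / (var_pred + r)
    return y_pred + K * (z - y_pred), var_pred * (1.0 - K)

# Initial conditions (assumed): position 0, variance 1.
y, var = 0.0, 1.0
y, var = predict(y, var, u=1.0, dt=1.0, q=0.5)   # imperfect constant-velocity model
y, var = correct(y, var, z=1.2, r=0.4)           # measurement at the next time step
print(round(y, 3), round(var, 3))
```

Prediction grows the variance (the distribution spreads out); correction shrinks it again, exactly as in the sketches above.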

Theoretical Basis
Process to be estimated:
  y_k = A y_k-1 + B u_k + w_k-1
  z_k = H y_k + v_k
where the process noise w has covariance Q and the measurement noise v has covariance R.

The Kalman filter:
- Predicted: ŷ⁻_k = A ŷ_k-1 + B u_k is the estimate based on the measurements at previous time steps, with error covariance P⁻_k = A P_k-1 Aᵀ + Q.
- Corrected: ŷ_k = ŷ⁻_k + K (z_k − H ŷ⁻_k) has the additional information of the measurement at time k, with Kalman gain K = P⁻_k Hᵀ (H P⁻_k Hᵀ + R)⁻¹ and updated covariance P_k = (I − K H) P⁻_k.

Blending Factor
- If we are sure about the measurements: the measurement error covariance R decreases toward zero, so K increases and weights the residual more heavily than the prediction.
- If we are sure about the prediction: the prediction error covariance P⁻_k decreases toward zero, so K decreases and weights the prediction more heavily than the residual.
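In the scalar case the gain reduces to K = P⁻/(P⁻ + R), which makes the two limits easy to check numerically; the values below are illustrative.

```python
# Scalar Kalman gain K = P / (P + R): who gets trusted?

def gain(p_pred, r):
    return p_pred / (p_pred + r)

# Trusted measurement: R -> 0 drives K toward 1, so the residual dominates.
print(gain(1.0, 1e-6))

# Trusted prediction: P -> 0 drives K toward 0, so the prediction dominates.
print(gain(1e-6, 1.0))
```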

Theoretical Basis
Prediction (time update):
  (1) Project the state ahead: ŷ⁻_k = A ŷ_k-1 + B u_k
  (2) Project the error covariance ahead: P⁻_k = A P_k-1 Aᵀ + Q
Correction (measurement update):
  (1) Compute the Kalman gain: K = P⁻_k Hᵀ (H P⁻_k Hᵀ + R)⁻¹
  (2) Update the estimate with measurement z_k: ŷ_k = ŷ⁻_k + K (z_k − H ŷ⁻_k)
  (3) Update the error covariance: P_k = (I − K H) P⁻_k
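The five equations can be collected into a short NumPy sketch. The constant-velocity model and every number below are assumptions chosen only to exercise the equations; they do not come from the slides.

```python
import numpy as np

def kalman_step(y, P, z, A, B, u, H, Q, R):
    """One predict-correct cycle of the standard Kalman filter equations."""
    # Prediction (time update)
    y_pred = A @ y + B @ u                     # (1) project the state ahead
    P_pred = A @ P @ A.T + Q                   # (2) project the error covariance ahead
    # Correction (measurement update)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)   # (1) Kalman gain
    y_new = y_pred + K @ (z - H @ y_pred)      # (2) update estimate with measurement
    P_new = (np.eye(len(y)) - K @ H) @ P_pred  # (3) update error covariance
    return y_new, P_new

# Assumed 1-D constant-velocity model: state = [position, velocity].
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.zeros((2, 1)); u = np.zeros((1,))       # no control input in this sketch
H = np.array([[1.0, 0.0]])                     # only position is measured
Q = 0.01 * np.eye(2); R = np.array([[0.25]])
y = np.zeros(2); P = np.eye(2)

y, P = kalman_step(y, P, z=np.array([1.0]), A=A, B=B, u=u, H=H, Q=Q, R=R)
print(y, P)
```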

Quick Example – Constant Model

(Block diagram, as before: external controls and system error sources drive the system black box; measuring devices with measurement error sources produce the observed measurements, and an estimator outputs the optimal estimate of the system state.)

A simple example
- Estimate a random constant: a "voltage" reading from a source.
- It has a constant value of a V (volts), so there is no control signal u_k.
- The standard deviation of the measurement noise is 0.1 V.
- It is a one-dimensional signal problem, so A and H are the constant 1.
- Assume the error covariance P_0 is initially 1 and the initial state X_0 is 0.
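This setup can be run end-to-end with the scalar filter (A = H = 1, no control input, no process noise). Since the slide only calls the constant "a", the true voltage of 0.5 V and the number of samples below are assumptions for the simulation.

```python
import random

random.seed(42)

TRUE_V = 0.5        # assumed value for the constant "a" volts
R = 0.1 ** 2        # measurement noise variance (std dev 0.1 V)
x, P = 0.0, 1.0     # initial state X_0 = 0, error covariance P_0 = 1

for _ in range(200):
    z = TRUE_V + random.gauss(0.0, 0.1)   # noisy voltmeter reading
    # Predict: A = 1, no control, no process noise -> state carries over.
    x_pred, P_pred = x, P
    # Correct: H = 1.
    K = P_pred / (P_pred + R)
    x = x_pred + K * (z - x_pred)
    P = (1.0 - K) * P_pred

print(round(x, 2))   # should settle near TRUE_V, with P shrinking each step
```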

A simple example – Part 1

(Plot: measured value versus time.)

A simple example – Part 2

A simple example – Part 3

Result of the example

References
1. Kalman, R. E., "A New Approach to Linear Filtering and Prediction Problems", Transactions of the ASME, Journal of Basic Engineering, March 1960.
2. Maybeck, P. S., "Stochastic Models, Estimation, and Control, Volume 1", Academic Press, Inc.
3. Welch, G. and Bishop, G., "An Introduction to the Kalman Filter".