Nonlinear State Estimation


Kalman Filters

[Figure: signal-flow graph of the process and measurement equations, with unit-delay blocks z^{-1}I, state transition matrix F(n+1, n), observation matrix C(n), process noise v1(n), and measurement noise v2(n).]

Signal (process) model: x(n+1) = F(n+1, n) x(n) + v1(n). This models the state x(n) as the output of a linear system excited by the white noise v1(n).

Observation (measurement) model: y(n) = C(n) x(n) + v2(n) = s(n) + v2(n). This relates the observable output of the system y(n) to the state x(n); s(n) = C(n) x(n) is the clean signal.

F(n+1, n) denotes the state transition matrix of the linear system, relating the state at time n+1 to the state at time n. The two time arguments in the notation emphasize that the general Kalman filter setup can model non-stationary systems, in which the state transition matrix changes with time. x̂(n | Y_n) represents the state estimate at time n given the n previous observations y1, y2, ..., yn.

The filtering problem is posed as follows: using the set of observed data y1, y2, ..., yn, find for each n the MMSE estimate of the state vector x(n), and then estimate the clean signal s(n) from the signal model.
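
To make the models concrete, here is a minimal numpy sketch (mine, not from the slides) that simulates the two equations above; the matrices F, C, Q, R are hypothetical values chosen only for illustration:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical time-invariant system matrices (illustrative values only).
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # state transition F(n+1, n)
C = np.array([[1.0, 0.0]])   # observation matrix C(n)
Q = 0.01 * np.eye(2)         # covariance of the process noise v1(n)
R = np.array([[0.25]])       # covariance of the measurement noise v2(n)

x = np.zeros(2)
for n in range(50):
    # Process equation: x(n+1) = F(n+1, n) x(n) + v1(n)
    x = F @ x + rng.multivariate_normal(np.zeros(2), Q)
    # Measurement equation: y(n) = C(n) x(n) + v2(n)
    y = C @ x + rng.multivariate_normal(np.zeros(1), R)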

Nonlinear State Estimation

[Slide equations: prediction of the state vector x̂_k^-, the Kalman gain K_k, prediction of the observation ŷ_k^-, the innovation, and the a posteriori estimate x̂_k.]

When propagating a GRV through a linear model (Kalman filtering), it is easy to compute the expectation E[·]. However, if the same GRV passes through a nonlinear model, the resulting estimate may no longer be Gaussian, and the calculation of E[·] is no longer easy. It is in the calculation of E[·] in the recursive filtering process that we now need to estimate the pdf of the propagating random vector. Here F and H are (nonlinear) vector functions describing the process and measurement models.

Kalman filtering proceeds recursively as follows:
(a) Predict the state vector at time k from the previous state estimate and the process noise; this prediction is denoted x̂_k^-. Likewise, predict the observation vector at time k from the predicted state vector and the measurement noise; this is denoted ŷ_k^-.
(b) The actual observation at time k, y_k, brings some additional information relative to the previous k-1 observations. This extra information is represented by the innovations process, the difference between the actual observation y_k and the predicted observation ŷ_k^-. The innovation is multiplied by the Kalman gain and added to the prediction to give an accurate estimate of the state at time k.

Hence the superscript '-' distinguishes a 'predicted' value from an 'estimated' (filtered) value; we are ultimately interested in the estimated state values at each time step. P (in the expression for the Kalman gain) denotes the state error covariance matrix.

Some notes on the EKF: it is possible to extend the linear state-space Kalman filter to systems exhibiting nonlinear dynamic behavior through a linearization procedure. Most other linear techniques cannot be extended this way; Wiener filtering, for example, is limited to linear systems because the concept of an impulse response (the basic notion of Wiener filtering) is meaningful only in the context of linear systems.
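
The recursion in (a) and (b) can be written as a short function. This is a minimal sketch assuming the linear model of the previous slide; x_est and P are the filtered estimate and covariance carried over from time k-1, and all names are mine:

import numpy as np

def kalman_step(x_est, P, y_k, F, C, Q, R):
    # (a) Predict the state and observation: the '-' (prior) quantities.
    x_pred = F @ x_est                      # predicted state x_k^-
    P_pred = F @ P @ F.T + Q                # predicted covariance
    y_pred = C @ x_pred                     # predicted observation y_k^-
    # (b) Innovation: the new information carried by the observation y_k.
    innov = y_k - y_pred
    S = C @ P_pred @ C.T + R                # innovations covariance
    K = P_pred @ C.T @ np.linalg.inv(S)     # Kalman gain
    # Filtered (a posteriori) estimate: prediction + gain * innovation.
    x_filt = x_pred + K @ innov
    P_filt = (np.eye(len(x_est)) - K @ C) @ P_pred
    return x_filt, P_filt

Calling kalman_step once per incoming observation yields the filtered state sequence.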

Unscented Kalman Filters (motivation: where Extended Kalman Filters fail)

Extended Kalman Filters (EKFs) avoid this difficulty by linearizing the state-space equations and reducing the estimation equations to point estimates instead of expectations. The covariance matrices (P) are computed by linearizing the dynamic equations. In the EKF, the state distribution is approximated by a GRV and propagated analytically through a 'first-order linearization' of the nonlinear system. Two problems follow (see the sketch below):
- If the assumption of local linearity is violated, the resulting filter can become unstable.
- The linearization process itself involves computing Jacobians (and, in second-order variants, Hessians), which is nontrivial for most systems.
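
A hedged sketch of one EKF cycle, to make the linearization concrete; f, h and their Jacobians jac_f, jac_h are assumed to be supplied by the user, and the names are mine:

import numpy as np

def ekf_step(x_est, P, y_k, f, h, jac_f, jac_h, Q, R):
    # Propagate the point estimate directly through the nonlinearity f...
    x_pred = f(x_est)
    # ...but propagate the covariance through a first-order linearization.
    A = jac_f(x_est)                     # Jacobian of f at the estimate
    P_pred = A @ P @ A.T + Q
    H = jac_h(x_pred)                    # Jacobian of h at the prediction
    S = H @ P_pred @ H.T + R             # innovations covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_filt = x_pred + K @ (y_k - h(x_pred))
    P_filt = (np.eye(len(x_est)) - K @ H) @ P_pred
    return x_filt, P_filt

If f is strongly nonlinear around x_est, the first-order approximation above is exactly what breaks down.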

The Unscented Transform

Problem statement: develop a method for calculating the statistics of a random variable which has undergone a nonlinear transformation.

The intuition: it is easier to approximate a probability distribution than it is to approximate an arbitrary nonlinear transformation.

Consider a nonlinearity y = f(x). The unscented transform deterministically chooses a set of 'sigma' points in the original (x) space which, when mapped into the transformed (y) space, generate accurate sample means and covariance matrices in the transformed space. An accurate estimate of these allows us to use the Kalman filter in the usual setup, with the expectations replaced by sample means and covariance matrices as appropriate.

The Unscented Transform (illustration)

[Figure: a 2-D illustration of the unscented transform, showing the input covariance Px and two estimates of the output covariance, Py and Py_ut (the unscented-transform estimate):
Px    = [0.2812 0.0046; 0.0046 0.2014]
Py    = [1.2692 0.0060; 0.0060 0.8818]
Py_ut = [0.2709 0.0510; 0.0510 0.1607]]

Unscented Kalman Filters (UKFs)

Instead of propagating a GRV through the 'linearized' system, the UKF technique focuses on an efficient estimation of the statistics of the random vector being propagated through a nonlinear function. To compute the statistics of y = f(x), we 'cleverly' sample this function at 2L+1 deterministically chosen sigma points about the mean of x, where L is the dimension of the state. It can be shown that weighted sample means and covariance matrices computed from these sigma points match the true values to second order.

We use this set of equations to predict the states and observations in the nonlinear state-estimation model: the expected values E[·] and the covariance matrices are replaced with their weighted sample versions.
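
A minimal sketch of the unscented transform itself, using the scaled sigma-point parameterization of Wan and van der Merwe; alpha, beta, and kappa are tuning constants (the defaults below are conventional choices, not values from the slides):

import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    L = len(mean)
    lam = alpha**2 * (L + kappa) - L
    # 2L+1 deterministically chosen sigma points about the mean:
    # the mean itself, plus/minus the columns of sqrt((L+lam) * cov).
    S = np.linalg.cholesky((L + lam) * cov)
    sigma = np.vstack([mean, mean + S.T, mean - S.T])
    # Weights for the sample mean (Wm) and covariance (Wc).
    Wm = np.full(2 * L + 1, 0.5 / (L + lam))
    Wc = Wm.copy()
    Wm[0] = lam / (L + lam)
    Wc[0] = Wm[0] + (1.0 - alpha**2 + beta)
    # Propagate each sigma point through the nonlinearity.
    Y = np.array([f(s) for s in sigma])
    y_mean = Wm @ Y
    d = Y - y_mean
    y_cov = (Wc[:, None] * d).T @ d
    return y_mean, y_cov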

Unscented Kalman Filters (UKFs): algorithm development

The algorithm proceeds in the following steps (a code sketch follows the notes below):
1. Augment the state vector to include the process and measurement noise, and initialize the mean and covariance matrix of the augmented vector.
2. Compute the sigma points about the previously estimated mean vector.
3. Compute the predicted value of the state vector and covariance matrix at time k.
4. Predict the current observation given the previous observed samples (mapped from the sigma points).
5. Compute the innovations covariance matrix and the cross-covariance matrix, hence the Kalman gain, and update the current estimate of the state at time k.

Notes on the steps:
- Step 1 is only initialization (no expectation calculation); compute the expectation only if we have some prior information about the statistics of the state vector. The state vector is augmented with two terms (initialized as zero vectors) that allow the filter to include v and n when the covariance matrices of these two are not known beforehand. This has the flavor of an adaptive Kalman filter, where we lack knowledge of the noise covariance matrices: the augmented vector accommodates systems in which the process and measurement noise statistics also need to be updated recursively, along with the state vectors. In many applications this is unnecessary, namely when the process and measurement noise statistics are known a priori.
- At every time step k, the sigma points are computed afresh: the i-th sigma point is the state mean vector x̂^a plus (or minus) the i-th column of the square root of (L + λ)P.
- Step 3 computes the prediction of the states as the expected value, i.e., the weighted sum, of the nonlinearity F evaluated at the sigma points. The measurement update is computed by applying the measurement equation to the sigma points. The Kalman gain is calculated after the appropriate covariance matrices have been computed.
- Reminder: anything with superscript '-' is a predicted estimate. When the Kalman gain times the innovation is added to it, the predicted estimate becomes the 'filtered' estimate of the state, the most accurate estimate given the k observations y1, y2, ..., yk.

Reference: Eric A. Wan and Rudolph van der Merwe, "The Unscented Kalman Filter for Nonlinear Estimation".
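
Putting the steps together, here is a hedged sketch of one UKF cycle in the simpler additive-noise form, i.e., without the state augmentation described above (a shortcut commonly used when Q and R are known a priori); f and h are the process and measurement functions, and all names are mine:

import numpy as np

def ukf_step(x_est, P, y_k, f, h, Q, R, alpha=1e-3, beta=2.0, kappa=0.0):
    L = len(x_est)
    lam = alpha**2 * (L + kappa) - L
    # Sigma points: the mean, plus/minus the columns of sqrt((L+lam) * P).
    S = np.linalg.cholesky((L + lam) * P)
    sigma = np.vstack([x_est, x_est + S.T, x_est - S.T])
    Wm = np.full(2 * L + 1, 0.5 / (L + lam))
    Wc = Wm.copy()
    Wm[0] = lam / (L + lam)
    Wc[0] = Wm[0] + (1.0 - alpha**2 + beta)
    # Predict the state: weighted sum of f evaluated at the sigma points.
    X = np.array([f(s) for s in sigma])
    x_pred = Wm @ X
    dX = X - x_pred
    P_pred = (Wc[:, None] * dX).T @ dX + Q
    # Predict the observation from the propagated sigma points.
    # (More rigorous variants redraw sigma points from x_pred, P_pred.)
    Yp = np.array([h(s) for s in X])
    y_pred = Wm @ Yp
    dY = Yp - y_pred
    Pyy = (Wc[:, None] * dY).T @ dY + R    # innovations covariance
    Pxy = (Wc[:, None] * dX).T @ dY        # cross-covariance
    # Kalman gain and filtered (a posteriori) estimate.
    K = Pxy @ np.linalg.inv(Pyy)
    x_filt = x_pred + K @ (y_k - y_pred)
    P_filt = P_pred - K @ Pyy @ K.T
    return x_filt, P_filt

Note that no Jacobians appear anywhere: the only interface to f and h is evaluation at the sigma points.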

Unscented Kalman Filters (UKFs) vs. Particle Filters (PFs)

The unscented transformation in UKFs allows the calculation of accurate sample means and covariance matrices using a few 'cleverly' chosen samples. Particle filtering employs Monte Carlo techniques to obtain an estimate of the statistics of the propagated random vector. The two techniques are conceptually very close, but there is a subtle difference: in UKF-based implementations the sigma points are chosen deterministically, whereas in PF-based techniques the sampling itself is done randomly. Particle filtering is more accurate, since it is not restricted to Gaussian approximations of the posterior, but this comes at the cost of the complexity introduced by the Monte Carlo simulation. The complexity of UKF-based implementations is about the same as that of regular Kalman filters.
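
For contrast with the deterministic sigma points above, here is a sketch of one cycle of a bootstrap particle filter, the simplest PF variant; sample_process and likelihood are hypothetical user-supplied functions for the process model and the measurement density:

import numpy as np

def pf_step(particles, y_k, sample_process, likelihood, rng):
    # Propagate each particle with a *random* draw from the process model
    # (Monte Carlo sampling, unlike the UKF's deterministic sigma points).
    particles = np.array([sample_process(p, rng) for p in particles])
    # Weight each particle by how well it explains the observation y_k.
    w = np.array([likelihood(y_k, p) for p in particles])
    w = w / w.sum()
    # Resample to concentrate particles in high-probability regions.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

The per-step cost grows with the number of particles, which is the complexity penalty noted above; the UKF always uses just 2L+1 points.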