Mobile Intelligent Systems 2004 Course Responsibility: Ola Bengtsson

Course Examination
5 exercises
– Matlab exercises
– Written report (groups of two students; individual reports)
1 written exam
– At the end of the period

Course information
Lectures
Exercises (w. 15, 17-20) – will be available from the course web page
Paper discussions (w , 17-21) – questions will be available from the course web page => Course literature
Web page:

The dream of fully autonomous robots: R2D2, C3PO, MARVIN … ?

Autonomous robots of today
Stationary robots: assembly, sorting, packing
Mobile robots: service robots, guarding, cleaning, hazardous environments, humanoid robots
(Image: Trilobite ZA1)

A typical mobile system (block diagram)
Sensors: laser (bearing + range), laser (bearing), encoders
Actuation and control: motors, panels, micro controllers, control systems, kinematic model, emergency stop

Mobile Int. Systems – Basic questions
1. Where am I? (Self localization)
2. Where am I going? (Navigation)
3. How do I get there? (Decision making / path planning)
4. How do I interact with the environment? (Obstacle avoidance)
5. How do I interact with the user? (Human–Machine interface)

Sensors – How to use them Sensor fusion

Course contents
Basic statistics
Kinematic models, error predictions
Sensors for mobile robots
Different methods for sensor fusion: Kalman filter, Bayesian methods and others
Research issues in mobile robotics

Basic statistics – Statistical representation – Stochastic variable
Battery lasting time, X = 5 hours ± 1 hour
X can have many different values
Continuous – the variable can have any value within the bounds
Discrete – the variable can have specific (discrete) values

Basic statistics – Statistical representation – Stochastic variable
Another way of describing the stochastic variable, i.e. by another form of bounds (its probability distribution):
In 68%: x11 < X < x12
In 95%: x21 < X < x22
In 99%: x31 < X < x32
In 100%: -∞ < X < ∞
The value to expect is the mean value => Expected value
How much X varies from its expected value => Variance

Expected value and Variance
The standard deviation σ_X is the square root of the variance.
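The defining formulas were shown as images in the original slides and are not in the transcript; a standard reconstruction for a continuous stochastic variable X with density f_X is:

```latex
E[X] = m_X = \int_{-\infty}^{\infty} x\, f_X(x)\, dx,
\qquad
\mathrm{Var}[X] = \sigma_X^2 = E\big[(X - m_X)^2\big]
= \int_{-\infty}^{\infty} (x - m_X)^2 f_X(x)\, dx
```

For a discrete variable the integrals are replaced by sums over the possible values.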

Gaussian (Normal) distribution
By far the most widely used probability distribution, because of its nice statistical and mathematical properties.
What does it mean if a specification says that a sensor measures a distance [mm] and has an error that is normally distributed with zero mean and σ = 100 mm?
Normal distribution: ~68.3% (±1σ), ~95% (±2σ), ~99% (±3σ), etc.
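The density itself is not in the transcript; its standard form, using the slide's notation m_X and σ, is:

```latex
f_X(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left(-\frac{(x - m_X)^2}{2\sigma^2}\right)
```

For the sensor above this means roughly 68% of the measurements fall within ±100 mm of the true distance and roughly 95% within ±200 mm.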

Estimate of the expected value and the variance from observations
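The estimation formulas were shown as images; the usual estimates from N observations x_1, …, x_N are:

```latex
\hat{m}_X = \frac{1}{N}\sum_{i=1}^{N} x_i,
\qquad
\hat{\sigma}_X^2 = \frac{1}{N-1}\sum_{i=1}^{N} \big(x_i - \hat{m}_X\big)^2
```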

Linear combinations (1)
X1 ~ N(m1, σ1), X2 ~ N(m2, σ2), independent
Y = X1 + X2 ~ N(m1 + m2, sqrt(σ1² + σ2²))
This property, that Y remains Gaussian when the stochastic variables are combined linearly, is one of the great properties of the Gaussian distribution!
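More generally (a standard result, not spelled out on the slide), for independent Gaussian variables and constants a and b:

```latex
Y = aX_1 + bX_2
\;\Rightarrow\;
Y \sim N\!\left(a m_1 + b m_2,\; \sqrt{a^2\sigma_1^2 + b^2\sigma_2^2}\right)
```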

Linear combinations (2)
We measure a distance with a device that has normally distributed errors. Do we gain anything by making a lot of measurements and using the average value instead?
What will the expected value of Y be?
What will the variance (and standard deviation) of Y be?
If you are using a sensor that gives a large error, how would you best use it?
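A sketch of the answer, assuming n independent measurements X_i ~ N(m, σ):

```latex
Y = \frac{1}{n}\sum_{i=1}^{n} X_i
\quad\Rightarrow\quad
E[Y] = m,
\qquad
\mathrm{Var}[Y] = \frac{\sigma^2}{n},
\qquad
\sigma_Y = \frac{\sigma}{\sqrt{n}}
```

Averaging reduces the standard deviation by a factor sqrt(n), so a noisy sensor is best used by taking repeated measurements.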

Linear combinations (3)
d_i is the mean value and ε_d ~ N(0, σ_d)
α_i is the mean value and ε_α ~ N(0, σ_α)
With ε_d and ε_α uncorrelated => V[ε_d, ε_α] = 0 (actually the covariance, which is defined later)

Linear combinations (4)
D = {the total distance} is calculated as before, as this is only the sum of all d's.
The expected value and the variance become:
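The formulas on this slide were images; a reconstruction for N steps with uncorrelated errors ε_d ~ N(0, σ_d) is:

```latex
D = \sum_{i=1}^{N} d_i,
\qquad
E[D] = \sum_{i=1}^{N} \bar{d}_i,
\qquad
\mathrm{Var}[D] = N\,\sigma_d^2
```

Analogously for the next slide's heading angle: Θ = Σ α_i, with Var[Θ] = N σ_α².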

Linear combinations (5)
Θ = {the heading angle} is calculated as before, as this is only the sum of all α's, i.e. the sum of all changes in heading.
The expected value and the variance become:
What if we want to predict X (distance parallel to the ground) (or Y (height above ground)) from our measured d's and α's?

Non-linear combinations (1)
X(N) is the previous value of X plus the latest movement (in the X direction). The estimate of X(N) becomes:
This equation is non-linear as it contains the term: and for X(N) to be Gaussian distributed, this equation must be replaced with a linear approximation around the expected values. To do this we can use a first-order Taylor expansion. With this approximation we also assume that the error is rather small!
With perfectly known Θ_(N-1) and α_(N-1) the equation would have been linear!
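The update equation itself was an image on the slide; a reconstruction consistent with the surrounding text, with Θ_N the accumulated heading and d_N the latest measured distance, is:

```latex
X(N) = X(N-1) + d_N \cos\Theta_N
```

The non-linear term is d_N cos(Θ_N), since both d_N and Θ_N are uncertain.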

Non-linear combinations (2)
Use a first-order Taylor expansion and linearize X(N) around the expected values.
This equation is linear, as all error terms are multiplied by constants, and we can calculate the expected value and the variance as we did before.
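A sketch of the linearization around the expected values X̄(N-1), d̄_N, Θ̄_N (the exact expression on the slide was an image; ε denotes the deviation of each variable from its expected value):

```latex
X(N) \approx \bar{X}(N-1) + \bar{d}_N\cos\bar{\Theta}_N
+ \varepsilon_{X(N-1)}
+ \cos\bar{\Theta}_N\,\varepsilon_{d}
- \bar{d}_N\sin\bar{\Theta}_N\,\varepsilon_{\Theta}
```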

Non-linear combinations (3)
The variance becomes (calculated exactly as before):
Two really important things should be noticed: first, the linearization only affects the calculation of the variance; second (which is even more important), the above expression is the sum of the partial derivatives of X(N) with respect to our uncertain parameters, squared and multiplied by their variances!
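In general form this is the law of error propagation (reconstructed here, since the slide showed it as an image): for f(x_1, …, x_n) with uncorrelated errors,

```latex
\sigma_f^2 \approx \sum_{i=1}^{n} \left(\frac{\partial f}{\partial x_i}\right)^{\!2} \sigma_{x_i}^2
```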

Non-linear combinations (4)
This result is very good => an easy way of calculating the variance => the law of error propagation.
The partial derivatives of X(N) become:
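For the update equation sketched above, the partial derivatives (shown as an image on the slide; reconstructed under the same assumptions) are:

```latex
\frac{\partial X(N)}{\partial X(N-1)} = 1,
\qquad
\frac{\partial X(N)}{\partial d_N} = \cos\Theta_N,
\qquad
\frac{\partial X(N)}{\partial \Theta_N} = -d_N \sin\Theta_N
```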

Non-linear combinations (5)
The plot shows the variance of X for time steps 1, …, 20, and as can be noticed the variance (and thus the standard deviation) is constantly increasing. (σ_d = 1/10, σ_α = 5/360)
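A minimal Python sketch that reproduces this behaviour. The per-step standard deviations σ_d = 1/10 and σ_α = 5/360 come from the slide; the nominal step length and turn (d_mean, a_mean) are not given there and are assumed here purely for illustration.

```python
import numpy as np

# Per-step standard deviations from the slide; nominal step length and turn
# are ASSUMED values (not stated on the slide).
sigma_d, sigma_a = 1 / 10, 5 / 360
d_mean, a_mean = 1.0, 0.1
N = 20

var_x = [0.0]                  # variance of X after each time step
theta, theta_var = 0.0, 0.0
for _ in range(N):
    theta += a_mean
    theta_var += sigma_a ** 2  # heading variance accumulates: N * sigma_a^2
    # Law of error propagation for X(N) = X(N-1) + d*cos(theta);
    # the correlation between X(N-1) and theta is neglected in this simple sketch.
    var_x.append(var_x[-1]
                 + (np.cos(theta) * sigma_d) ** 2
                 + (d_mean * np.sin(theta)) ** 2 * theta_var)

print(var_x)  # monotonically increasing, as in the slide's plot
```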

Multidimensional Gaussian distributions (1)
The Gaussian distribution can easily be extended to several dimensions by replacing the variance (σ²) by a covariance matrix (Σ) and the scalars (x and m_X) by column vectors.
The covariance matrix describes (consists of):
1) the variances of the individual dimensions => diagonal elements
2) the covariances between the different dimensions => off-diagonal elements
! Symmetric ! Positive definite
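The density (an image on the slide) has the standard form, for an n-dimensional x with mean vector m_X and covariance matrix Σ:

```latex
f_X(x) = \frac{1}{(2\pi)^{n/2}\,|\Sigma|^{1/2}}
\exp\!\left(-\tfrac{1}{2}(x - m_X)^{\mathsf T}\Sigma^{-1}(x - m_X)\right)
```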

MGD (2) Eigenvalues => variances along the principal axes (their square roots give the standard deviations) Eigenvectors => rotation of the ellipses
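A short sketch (with a hypothetical 2×2 covariance matrix, not taken from the slides) of how the ellipse parameters follow from the eigendecomposition:

```python
import numpy as np

C = np.array([[0.25, 0.10],           # hypothetical covariance matrix for (X, Y)
              [0.10, 0.50]])

eigvals, eigvecs = np.linalg.eigh(C)  # symmetric matrix => use eigh
axes_std = np.sqrt(eigvals)           # std devs along the ellipse's principal axes
angle = np.arctan2(eigvecs[1, 0], eigvecs[0, 0])  # rotation of one principal axis

print(axes_std, np.degrees(angle))
```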

MGD (3)
The covariance between two stochastic variables is calculated as:
which for discrete variables becomes:
and for continuous variables becomes:
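The three formulas were images; standard reconstructions are:

```latex
\begin{aligned}
\mathrm{Cov}[X, Y] &= E\big[(X - m_X)(Y - m_Y)\big] \\
&= \sum_{i,j} (x_i - m_X)(y_j - m_Y)\, p(x_i, y_j) && \text{(discrete)} \\
&= \iint (x - m_X)(y - m_Y)\, f_{XY}(x, y)\, dx\, dy && \text{(continuous)}
\end{aligned}
```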

MGD (4) - Non-linear combinations
The state variables (x, y, Θ) at time k+1 become:
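The transition equations were an image; a common odometry model consistent with the earlier slides, with measured distance d(k) and heading change α(k) per step (the exact form used in the course may differ slightly), is:

```latex
\begin{aligned}
x(k+1) &= x(k) + d(k)\cos\!\big(\Theta(k) + \alpha(k)\big) \\
y(k+1) &= y(k) + d(k)\sin\!\big(\Theta(k) + \alpha(k)\big) \\
\Theta(k+1) &= \Theta(k) + \alpha(k)
\end{aligned}
```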

MGD (5) - Non-linear combinations
We know that to calculate the variance (or covariance) at time step k+1 we must linearize Z(k+1) by e.g. a Taylor expansion - but we also know that this is done by the law of error propagation, which for matrices becomes:
where ∇f_X and ∇f_U are the Jacobian matrices (w.r.t. our uncertain variables) of the state transition function.
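The matrix form was an image; reconstructed here with C_X(k) denoting the state covariance and C_U the covariance of the measured inputs (d and α):

```latex
C_Z(k+1) = \nabla f_X\, C_X(k)\, \nabla f_X^{\mathsf T}
         + \nabla f_U\, C_U\, \nabla f_U^{\mathsf T}
```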

MGD (6) - Non-linear combinations
The uncertainty ellipses for X and Y (for time step ) are shown in the figure.