Presentation on theme: "Probability in Robotics Trends in Robotics Research Reactive Paradigm (mid-80’s) no models relies heavily on good sensing Probabilistic Robotics (since."— Presentation transcript:

1

2 Probability in Robotics

3 Trends in Robotics Research
Classical Robotics (mid-70's): exact models, no sensing necessary
Reactive Paradigm (mid-80's): no models, relies heavily on good sensing
Hybrids (since 90's): model-based at higher levels, reactive at lower levels
Probabilistic Robotics (since mid-90's): seamless integration of models and sensing; inaccurate models, inaccurate sensors

4 Advantages of Probabilistic Paradigm
Can accommodate inaccurate models
Can accommodate imperfect sensors
Robust in real-world applications
Best known approach to many hard robotics problems
Pays tribute to inherent uncertainty: know your own ignorance
Scalability: no need for a "perfect" world model; relieves programmers

5 Limitations of Probability
Computationally inefficient – must consider entire probability densities
Approximation – representing continuous probability distributions

6 Uncertainty Representation

7 Five Sources of Uncertainty
Environment dynamics
Random action effects
Sensor limitations
Inaccurate models
Approximate computation

8 Nature of Sensor Data: odometry data and range data

9 Sensor inaccuracy; environmental uncertainty

10 How do we solve localization uncertainty?
Represent beliefs as a probability density
Markov assumption: the pose distribution at time t is conditioned on the pose distribution at time t-1, the movement at time t-1, and the sensor readings at time t
Discretize the density by sampling

11 Probabilistic Action Model
Continuous probability density Bel(s_t) after moving 40 m (left figure) and 80 m (right figure); darker areas have higher probability. The motion model is the conditional density p(s_t | a_t-1, s_t-1).
At every time step t:
UPDATE each sample's new location based on the movement
RESAMPLE the pose distribution based on the sensor readings

12 Localization
Initial state: detects nothing
Moves and detects a landmark
Moves and detects nothing
Moves and detects a landmark
(the figures show the belief after each step)

13 Globalization (global localization): localization without knowledge of the start location

14 Probabilistic Robotics: Basic Idea
Key idea: explicit representation of uncertainty using probability theory
Perception = state estimation
Action = utility optimization

15 Advantages and Pitfalls
Advantages: can accommodate inaccurate models, can accommodate imperfect sensors, robust in real-world applications, best known approach to many hard robotics problems
Pitfalls: computationally demanding, false assumptions, approximate

16 Axioms of Probability Theory
Pr(A) denotes the probability that proposition A is true.
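The axioms themselves are not transcribed; the standard statement used in probabilistic robotics texts is:
0 <= Pr(A) <= 1
Pr(True) = 1, Pr(False) = 0
Pr(A or B) = Pr(A) + Pr(B) - Pr(A and B)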

17 A Closer Look at Axiom 3 (diagram of events A and B)

18 Using the Axioms
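The worked equations are not transcribed; a typical use of the axioms is deriving Pr(¬A):
Pr(A or ¬A) = Pr(A) + Pr(¬A) - Pr(A and ¬A)
Pr(True) = Pr(A) + Pr(¬A) - Pr(False)
1 = Pr(A) + Pr(¬A), hence Pr(¬A) = 1 - Pr(A)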

19 Discrete Random Variables
X denotes a random variable. X can take on a finite number of values in {x_1, x_2, …, x_n}.
P(X = x_i), or P(x_i), is the probability that the random variable X takes on the value x_i.
P(·) is called the probability mass function.

20 Continuous Random Variables
X takes on values in the continuum.
p(X = x), or p(x), is a probability density function (the slide shows an example plot of p(x) over x).

21 Joint and Conditional Probability
P(X = x and Y = y) = P(x, y)
If X and Y are independent, then P(x, y) = P(x) P(y)
P(x | y) is the probability of x given y
P(x | y) = P(x, y) / P(y)
P(x, y) = P(x | y) P(y)
If X and Y are independent, then P(x | y) = P(x)

22 Law of Total Probability (discrete case and continuous case)
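The formulas are not transcribed; the standard forms are:
Discrete case: P(x) = sum_y P(x, y) = sum_y P(x | y) P(y)
Continuous case: p(x) = ∫ p(x, y) dy = ∫ p(x | y) p(y) dy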

23 Thomas Bayes (1702-1761) Clergyman and mathematician who first used probability inductively and established a mathematical basis for probabilistic inference

24 Bayes Formula
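The formula is not transcribed; in the notation used above it reads:
P(x | y) = P(y | x) P(x) / P(y) = (likelihood · prior) / evidence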

25 Normalization
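The equations are not transcribed; the usual point of this slide is that the evidence P(y) acts as a normalizer:
P(x | y) = η P(y | x) P(x), with η = 1 / P(y) = 1 / sum_x P(y | x) P(x)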

26 Conditioning: total probability, and Bayes rule with background knowledge.
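The corresponding formulas (standard forms; the slide's own equations are not transcribed):
Total probability: P(x) = ∫ P(x | z) P(z) dz
Bayes rule with background knowledge z: P(x | y, z) = P(y | x, z) P(x | z) / P(y | z)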

27 Simple Example of State Estimation
Suppose a robot obtains a measurement z. What is P(open | z)?

28 Causal vs. Diagnostic Reasoning
P(open | z) is diagnostic. P(z | open) is causal.
Often causal knowledge is easier to obtain (e.g. by counting frequencies).
Bayes rule allows us to use causal knowledge.

29 Example
P(z | open) = 0.6, P(z | ¬open) = 0.3
P(open) = P(¬open) = 0.5
z raises the probability that the door is open.
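The calculation itself is not transcribed; applying Bayes rule with the numbers above gives
P(open | z) = P(z | open) P(open) / (P(z | open) P(open) + P(z | ¬open) P(¬open)) = (0.6 · 0.5) / (0.6 · 0.5 + 0.3 · 0.5) = 0.30 / 0.45 = 2/3 ≈ 0.67.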

30 Combining Evidence
Suppose our robot obtains another observation z_2. How can we integrate this new information?
More generally, how can we estimate P(x | z_1, ..., z_n)?

31 Recursive Bayesian Updating
Markov assumption: z_n is independent of z_1, ..., z_n-1 if we know x.
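The resulting update rule (standard form; not transcribed) is:
P(x | z_1, ..., z_n) = P(z_n | x) P(x | z_1, ..., z_n-1) / P(z_n | z_1, ..., z_n-1) = η P(z_n | x) P(x | z_1, ..., z_n-1)
i.e. each new measurement multiplies the current belief by its likelihood, and the result is renormalized.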

32 Example: Second Measurement
P(z_2 | open) = 0.5, P(z_2 | ¬open) = 0.6
P(open | z_1) = 2/3
z_2 lowers the probability that the door is open.
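Filling in the calculation with the recursive update above:
P(open | z_2, z_1) = (0.5 · 2/3) / (0.5 · 2/3 + 0.6 · 1/3) = (1/3) / (8/15) = 5/8 = 0.625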

33 Actions
Often the world is dynamic, since
– actions carried out by the robot,
– actions carried out by other agents,
– or just time passing by
change the world.
How can we incorporate such actions?

34 Typical Actions
The robot turns its wheels to move
The robot uses its manipulator to grasp an object
Actions are never carried out with absolute certainty.
In contrast to measurements, actions generally increase the uncertainty.

35 Modeling Actions
To incorporate the outcome of an action u into the current "belief", we use the conditional pdf P(x | u, x').
This term specifies the probability that executing u changes the state from x' to x.

36 Example: Closing the door

37 State Transitions P(x|u,x’) for u = “close door”: If the door is open, the action “close door” succeeds in 90% of all cases.
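The transition diagram is not transcribed; with the stated 90% success rate, and assuming (as in the usual version of this example) that an already closed door stays closed, the probabilities are:
P(closed | close door, open) = 0.9, P(open | close door, open) = 0.1
P(closed | close door, closed) = 1, P(open | close door, closed) = 0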

38 Integrating the Outcome of Actions (continuous case and discrete case):
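By the law of total probability (the slide's formulas are not transcribed):
Continuous case: P(x | u) = ∫ P(x | u, x') P(x') dx'
Discrete case: P(x | u) = sum_x' P(x | u, x') P(x')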

39 Example: The Resulting Belief
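A worked version of the missing calculation, assuming the belief before the action is P(open) = 5/8 from the previous example and the transition model sketched above:
P(closed | u) = 0.9 · 5/8 + 1 · 3/8 = 15/16
P(open | u) = 0.1 · 5/8 + 0 · 3/8 = 1/16

The whole door example fits in a few lines of code. This is a minimal Python sketch (not from the slides; the function names are mine) of the measurement and action updates used above:

def measurement_update(belief, likelihood):
    # Bayes rule: multiply the prior by the measurement likelihood, then normalize
    unnormalized = {s: likelihood[s] * belief[s] for s in belief}
    eta = 1.0 / sum(unnormalized.values())
    return {s: eta * p for s, p in unnormalized.items()}

def action_update(belief, transition):
    # Total probability: sum over the previous state x'
    return {s: sum(transition[s][sp] * belief[sp] for sp in belief) for s in belief}

belief = {"open": 0.5, "closed": 0.5}
belief = measurement_update(belief, {"open": 0.6, "closed": 0.3})   # P(open) -> 2/3
belief = measurement_update(belief, {"open": 0.5, "closed": 0.6})   # P(open) -> 5/8
close_door = {"open":   {"open": 0.1, "closed": 0.0},               # assumed transition model
              "closed": {"open": 0.9, "closed": 1.0}}
belief = action_update(belief, close_door)                          # P(open) -> 1/16
print(belief)

Running it reproduces the sequence 1/2 -> 2/3 -> 5/8 -> 1/16 for P(open).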

40 Robot Environment Interaction
State transition probability: p(x_t | u_t, x_t-1)
Measurement probability: p(z_t | x_t)

41 How does all of this relate to sensors and navigation? Sensor fusion.

42 Basic statistics – Statistical representation – Stochastic variable
Travel time: X = 5 hours ± 1 hour. X can take many different values.
Continuous – the variable can have any value within the bounds
Discrete – the variable can only have specific (discrete) values

43 Basic statistics – Statistical representation – Stochastic variable
Another way of describing the stochastic variable is by another form of bounds (its probability distribution):
In 68% of cases: x_11 < X < x_12
In 95% of cases: x_21 < X < x_22
In 99% of cases: x_31 < X < x_32
In 100% of cases: -∞ < X < ∞
The value to expect is the mean value => expected value.
How much X varies from its expected value => variance.

44 Expected value and Variance
The standard deviation σ_X is the square root of the variance.
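The defining formulas are not transcribed; in the usual notation:
Expected value: E[X] = m_X = sum_i x_i p(x_i) (discrete) or ∫ x p(x) dx (continuous)
Variance: V[X] = σ_X² = E[(X - m_X)²]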

45 Gaussian (Normal) distribution
By far the most widely used probability distribution, because of its nice statistical and mathematical properties.
What does it mean if a specification says that a sensor measures a distance [mm] with an error that is normally distributed with zero mean and σ = 100 mm?
Normal distribution: roughly 68.3% of the probability mass lies within ±1σ of the mean, about 95% within ±2σ, about 99.7% within ±3σ, etc.

46 Estimate of the expected value and the variance from observations
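The estimators are not transcribed; the usual sample estimates from N observations x_1, ..., x_N are:
m_X ≈ (1/N) sum_i x_i
σ_X² ≈ (1/(N-1)) sum_i (x_i - m_X)²   (or with 1/N, depending on the convention used)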

47 Linear combinations (1)
X_1 ~ N(m_1, σ_1), X_2 ~ N(m_2, σ_2)
Y = X_1 + X_2 ~ N(m_1 + m_2, sqrt(σ_1² + σ_2²))
Since a linear combination of Gaussian variables is another Gaussian variable, Y remains Gaussian if the stochastic variables are combined linearly!

48 Linear combinations (2)
We measure a distance with a device that has normally distributed errors.
Do we gain anything by making many measurements and using the average value Y instead?
What will the expected value of Y be? What will the variance (and standard deviation) of Y be?
If you are using a sensor that gives a large error, how would you best use it? (See the answer sketched below.)
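A short answer, not spelled out in the transcript: averaging N independent measurements X_i ~ N(m, σ) gives
Y = (1/N) sum_i X_i,  E[Y] = m,  V[Y] = σ²/N,  σ_Y = σ/sqrt(N)
so a noisy sensor is best used by averaging repeated measurements.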

49 Linear combinations (3)
Each measured distance d_i has a mean value and an error Δd ~ N(0, σ_d); each measured heading change α_i has a mean value and an error Δα ~ N(0, σ_α).
With Δd and Δα uncorrelated => V[Δd, Δα] = 0 (the covariance is zero)

50 Linear combinations (4)
D = {the total distance} is calculated as before, as this is only the sum of all d's. The expected value and the variance become:
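Assuming N independent distance errors with equal variance σ_d² (the slide's own expressions are not transcribed):
D = sum_i d_i,  E[D] = sum_i E[d_i],  V[D] = N σ_d²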

51 Linear combinations (5)
θ = {the heading angle} is calculated as before, as this is only the sum of all α's, i.e. the sum of all changes in heading. The expected value and the variance become:
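Correspondingly, assuming N independent heading increments with variance σ_α² (the slide's own expressions are not transcribed):
θ = sum_i α_i,  E[θ] = sum_i E[α_i],  V[θ] = N σ_α²
What if we want to predict X and Y from our measured d's and α's?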

52 Non-linear combinations (1)
X(N) is the previous value of X plus the latest movement (in the X direction). The estimate of X(N) becomes (assuming the simple motion model used in the later slides): X(N) = X(N-1) + d_N cos(θ_N-1).
This equation is non-linear as it contains the product d_N cos(θ_N-1), and for X(N) to become Gaussian distributed, this equation must be replaced with a linear approximation around the expected values. To do this we can use a first-order Taylor expansion. By this approximation we also assume that the error is rather small!
With a perfectly known heading θ_N-1 the equation would have been linear!

53 Non-linear combinations (2)
Use a first-order Taylor expansion and linearize X(N) around the expected values of the uncertain variables.
This equation is linear, as all error terms are multiplied by constants, and we can calculate the expected value and the variance as we did before.

54 Non-linear combinations (3)
The variance becomes (calculated exactly as before):
Two really important things should be noticed: first, the linearization only affects the calculation of the variance; and second (which is even more important), the resulting expression is the sum of the partial derivatives of X(N) with respect to our uncertain parameters, squared and multiplied by the corresponding variances!

55 Non-linear combinations (4)
This result is very good => an easy way of calculating the variance => the law of error propagation.
The partial derivatives of X(N) become:
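Assuming the motion model X(N) = X(N-1) + d_N cos(θ_N-1) used above (the slide's own expressions are not transcribed):
∂X(N)/∂X(N-1) = 1,  ∂X(N)/∂d_N = cos(θ_N-1),  ∂X(N)/∂θ_N-1 = -d_N sin(θ_N-1)
so V[X(N)] ≈ V[X(N-1)] + cos²(θ_N-1) σ_d² + d_N² sin²(θ_N-1) V[θ_N-1], and since V[θ_N-1] grows with every step, so does V[X(N)].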

56 Non-linear combinations (5)
The plot shows the variance of X for the time steps 1, …, 20, and as can be noticed the variance (or standard deviation) is constantly increasing.
σ_d = 1/10, σ_α = 5/360

57 The Error Propagation Law
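The law itself is not transcribed; the general statement is: for Y = f(X_1, ..., X_n) with uncorrelated errors,
σ_Y² ≈ sum_i (∂f/∂X_i)² σ_X_i²
and in matrix form, for Y = f(X) with covariance C_X,
C_Y ≈ F_X C_X F_X^T, where F_X is the Jacobian of f.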

58

59

60 Multidimensional Gaussian distributions MGD (1)
The Gaussian distribution can easily be extended to several dimensions by replacing the variance (σ²) by a covariance matrix (C) and the scalars (x and m_X) by column vectors.
The covariance matrix (CVM) describes (consists of):
1) the variances of the individual dimensions => diagonal elements
2) the covariances between the different dimensions => off-diagonal elements
! Symmetric ! Positive definite

61 A 1-d Gaussian distribution is given by: An n-d Gaussian distribution is given by:
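The density functions are not transcribed; the standard forms are:
1-d: p(x) = 1 / (sqrt(2π) σ) · exp( -(x - m_X)² / (2σ²) )
n-d: p(x) = 1 / ((2π)^(n/2) |C|^(1/2)) · exp( -(1/2) (x - m_X)^T C^(-1) (x - m_X) )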

62 MGD (2)
Eigenvalues of the covariance matrix => variances along the principal axes (their square roots are the standard deviations)
Eigenvectors => rotation (orientation) of the uncertainty ellipses

63 MGD (3) The co-variance between two stochastic variables is calculated as: Which for a discrete variable becomes: And for a continuous variable becomes:
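The formulas are not transcribed; the standard definitions are:
C_XY = E[(X - m_X)(Y - m_Y)]
Discrete: C_XY = sum_i sum_j (x_i - m_X)(y_j - m_Y) P(x_i, y_j)
Continuous: C_XY = ∫∫ (x - m_X)(y - m_Y) p(x, y) dx dy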

64 MGD (4) - Non-linear combinations
The state variables (x, y, θ) at time k+1 become:
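The update equations are not transcribed; a standard odometry model consistent with the earlier slides is:
x(k+1) = x(k) + d(k) cos( θ(k) )
y(k+1) = y(k) + d(k) sin( θ(k) )
θ(k+1) = θ(k) + Δθ(k)
(some formulations use θ(k) + Δθ(k)/2 inside the trigonometric terms)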

65 MGD (5) - Non-linear combinations
We know that to calculate the variance (or covariance) at time step k+1 we must linearize Z(k+1) by e.g. a Taylor expansion - but we also know that this is done by the law of error propagation, which for matrices becomes the expression below, where ∇f_X and ∇f_U are the Jacobian matrices (w.r.t. our uncertain variables) of the state transition function.
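Written out, the matrix form of the law of error propagation reads
C(k+1) = ∇f_X C_X(k) ∇f_X^T + ∇f_U C_U(k) ∇f_U^T

A minimal Python sketch (not from the lecture) of this covariance propagation, assuming the simple odometry model x += d cos θ, y += d sin θ, θ += Δθ from above:

import numpy as np

def propagate(pose, cov_pose, d, dtheta, cov_u):
    x, y, theta = pose
    # Jacobian of the state transition w.r.t. the previous state (grad_f_X)
    Fx = np.array([[1.0, 0.0, -d * np.sin(theta)],
                   [0.0, 1.0,  d * np.cos(theta)],
                   [0.0, 0.0,  1.0]])
    # Jacobian w.r.t. the uncertain inputs (d, dtheta) (grad_f_U)
    Fu = np.array([[np.cos(theta), 0.0],
                   [np.sin(theta), 0.0],
                   [0.0,           1.0]])
    new_pose = np.array([x + d * np.cos(theta), y + d * np.sin(theta), theta + dtheta])
    new_cov = Fx @ cov_pose @ Fx.T + Fu @ cov_u @ Fu.T   # law of error propagation
    return new_pose, new_cov

pose, cov = np.zeros(3), np.zeros((3, 3))
cov_u = np.diag([0.1**2, np.radians(5.0)**2])   # assumed sigma_d and sigma_alpha
for _ in range(20):
    pose, cov = propagate(pose, cov, d=1.0, dtheta=0.0, cov_u=cov_u)
print(cov)   # the pose uncertainty grows with every step, as in the plots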

66 MGD (6) - Non-linear combinations
The uncertainty ellipses for X and Y (for time steps 1 .. 20) are shown in the figure.

67 Circular Error Problem
If we have a map: we can localize!
If we can localize: we can make a map!
NOT THAT SIMPLE! (in practice we start with neither, so localization and mapping have to be solved together)

68 Expectation-Maximization (EM) Algorithm
Initialize: make a random guess for the lines
Repeat:
– Find the line closest to each point and group the points into two sets (Expectation step)
– Find the best-fit lines to the two sets (Maximization step)
– Iterate until convergence
The algorithm is guaranteed to converge to some local optimum; a code sketch follows below.
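A minimal Python sketch (not the lecture's code) of this two-line EM procedure; the y = a·x + b line parameterization and the vertical point-to-line distance are simplifying assumptions:

import numpy as np

def fit_line(pts):
    # Least-squares fit of y = a*x + b to an (N, 2) array of points
    A = np.column_stack([pts[:, 0], np.ones(len(pts))])
    a, b = np.linalg.lstsq(A, pts[:, 1], rcond=None)[0]
    return a, b

def em_two_lines(points, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    lines = [(rng.standard_normal(), rng.standard_normal()) for _ in range(2)]  # random guess
    for _ in range(iters):
        # E-step: assign each point to the nearer line
        dists = np.stack([np.abs(a * points[:, 0] + b - points[:, 1]) for a, b in lines])
        labels = np.argmin(dists, axis=0)
        # M-step: refit each line to its group of points
        new_lines = [fit_line(points[labels == k]) if np.any(labels == k) else lines[k]
                     for k in range(2)]
        if np.allclose(new_lines, lines):
            break   # converged (to a local optimum)
        lines = new_lines
    return lines

Calling em_two_lines on a point cloud sampled around two lines returns estimates of the two sets of line parameters, illustrating the E/M alternation shown on the following slides.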

69 Example:

70

71

72

73 Converged!

74 Probabilistic Mapping (Maximum Likelihood Estimation)
E-step: use the current best map and the data to find the belief probabilities
M-step: compute the most likely map based on the probabilities computed in the E-step
Alternate the steps to get better map and localization estimates
Convergence is guaranteed as before.

75 The E-Step
P(s_t | d, m) = α_t · β_t
α_t = P(s_t | o_1, a_1, …, o_t, m) = Bel(s_t), i.e. Markov localization
β_t = P(s_t | a_t, …, o_T, m), analogous to α_t but computed backward in time

76 The M-Step
Updates the occupancy grid:
P(m_xy = l | d) = (# of times l was observed at <x, y>) / (# of times something was observed at <x, y>)

77 Probabilistic Mapping
Addresses the Simultaneous Localization and Mapping problem (SLAM)
Robust
Hacks for easing the computational and processing burden:
– Caching
– Selective computation
– Selective memorization

78

79

80

81 Markov Assumption
The future is independent of the past given the current state
"Assume a static world"

82 Probabilistic Model: action data and observation data

83 Derivation: Markov Localization (steps: Bayes rule, Markov assumption, total probability)
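The derivation steps themselves are not transcribed; the standard recursive form they arrive at is
Bel(x_t) = η P(z_t | x_t) ∫ P(x_t | u_t, x_t-1) Bel(x_t-1) dx_t-1
obtained by applying Bayes rule to the newest measurement, the Markov assumption to drop the history, and total probability over the previous state.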

84 Mobile Robot Localization
Proprioceptive sensors (encoders, IMU): odometry, dead reckoning
Exteroceptive sensors (laser, camera): global and local correlation
Scan matching: correlate range measurements to estimate displacement; iterate from an initial guess through point correspondence to a displacement estimate between Scan 1 and Scan 2
Can improve (or even replace) odometry – Roumeliotis TAI-14
Previous work: vision community and Lu & Milios [97]

85 Weighted Approach
Explicit models of uncertainty & noise sources for each scan point:
– Sensor noise & errors: range noise, angular uncertainty, bias
– Point correspondence uncertainty: correspondence errors
(combined uncertainties; the figure shows a 1 m scan region magnified x500)
Improvement vs. the unweighted method:
– More accurate displacement estimate
– More realistic covariance estimate
– Increased robustness to initial conditions
– Improved convergence

86 Weighted Formulation
Error between the k-th scan point pair: the measured range data from poses i and j consist of the true range plus sensor noise plus bias.
Goal: estimate the displacement (p_ij, φ_ij), where φ_ij is the rotation between the two poses.
The total error decomposes into a correspondence error, a noise error, and a bias error.

87 Covariance of Error Estimate
1) Sensor noise: the covariance of the error between the k-th scan point pair (terms for correspondence, sensor noise, and bias; the figure shows pose i)
2) Sensor bias: neglected for now

88 3) Correspondence error = c_ij^k
Estimate bounds on c_ij^k from the geometry of the boundary and the robot poses
Assume a uniform distribution
The maximum error follows from this geometry (the expression itself is not transcribed)

89 Finding the incidence angles α_i^k and α_j^k
Hough Transform
– Fits lines to range data
– Local incidence angle estimated from the line tangent and the scan angle
– Common technique in the vision community (Duda & Hart [72])
– Can be extended to fit simple curves
(figure: scan points and fitted lines)

90 Maximum Likelihood Estimation
Likelihood of obtaining the errors {ε_ij^k} given the displacement: a non-linear optimization problem
Position displacement estimate obtained in closed form
Orientation estimate found using 1-D numerical optimization, or series-expansion approximation methods

91 Experimental Results
Increased robustness to inaccurate initial displacement guesses
Fewer iterations for convergence
Weighted vs. unweighted matching of two poses: 512 trials with different initial displacements within
+/- 15 degrees of the actual angular displacement
+/- 150 mm of the actual spatial displacement
(figure panels: initial displacements, unweighted estimates, weighted estimates)

92 Unweighted vs. weighted results (figures)

93 Displacement estimate errors at the end of an eight-step, 22-meter path:
Odometry = 950 mm, Unweighted = 490 mm, Weighted = 120 mm
More accurate covariance estimate
– Improved knowledge of measurement uncertainty
– Better fusion with other sensors

94 Uncertainty From Sensor Noise and Correspondence Error (the figure shows a 1 m scan region magnified x500)

