Lecture 9: Behavior Languages


Lecture 9: Behavior Languages CS 344R: Robotics Benjamin Kuipers

Alternative Approaches to Sequencers. Roger Brockett: MDL. Hristu-Varsakelis & Andersson: MDLe. Jim Firby: RAPs. There are others; the right answer is not completely clear.

Motion Description Languages. Problem: describe continuous motion in a complex environment as a finite set of symbolic elements. Applicability governs sequencing; termination is a condition or a time-out. Roger Brockett defined MDL; Manikonda, Krishnaprasad, and Hendler extended it to MDLe.

This is an instance of our framework for control laws. A local control law is a triple (A, Hi, ξ): an applicability predicate A(y), a control policy u = Hi(y), and a termination predicate ξ(y).
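
A minimal sketch of this triple in Python (all names here are illustrative, not from the lecture):

    from dataclasses import dataclass
    from typing import Callable, Sequence

    Obs = Sequence[float]   # observed features y

    @dataclass
    class ControlLaw:
        applicable: Callable[[Obs], bool]          # A(y): may this law run now?
        policy: Callable[[Obs], Sequence[float]]   # u = Hi(y)
        terminated: Callable[[Obs], bool]          # xi(y): should it stop?

    def step(law: ControlLaw, y: Obs):
        # Return a motor command while the law applies and has not terminated.
        if law.applicable(y) and not law.terminated(y):
            return law.policy(y)
        return None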

The Kinetic State Machine. The MDLe state evolution model is dx/dt = f(x) + G(x) U(t, x), with observations y = h(x). This is an instance of our general model. There are also a set of timers Ti and a set of boolean features ξi(y). U(t, x) is a general control law, which can be suspended by the timer Ti or the interrupt ξi(y).

The Kinetic State Machine [figure: diagram of the kinetic state machine].

Q: What is the role of G(x)? In the state evolution model, x is in R^n. The motor vector U(t, x) is in R^k. G is an n×k matrix whose columns gi are vector fields in R^n. Each column represents the effect on x of one component of the motor vector.
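
As a concrete illustration (a standard unicycle model, not the lecture's own example): for state x = (x, y, θ) in R^3 and motor vector u = (v, ω) in R^2, G(x) is the 3×2 matrix below, and each column is the vector field excited by one motor component.

    import numpy as np

    def G(x):
        # Unicycle: columns are the vector fields for forward speed v
        # and turn rate omega.
        theta = x[2]
        return np.array([[np.cos(theta), 0.0],
                         [np.sin(theta), 0.0],
                         [0.0,           1.0]])   # n x k = 3 x 2

    def xdot(x, u):
        # Drift-free kinetic state machine: dx/dt = f(x) + G(x) u, with f(x) = 0.
        return G(x) @ np.asarray(u)

    print(xdot(np.array([0.0, 0.0, np.pi / 4]), [1.0, 0.1]))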

MDL Programs. The simplest MDL program is an atom (U, ξ, T). To run an atom, apply U to the kinetic state machine model until the interrupt function ξ(y) goes false, or until T units of time elapse.
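
A sketch of running one atom by Euler integration (the integrator, time step, and signatures are assumptions, not the lecture's code):

    import numpy as np

    def run_atom(U, xi, T, x, f, G, h, dt=0.01):
        # Apply U to dx/dt = f(x) + G(x) U(t, x) until the interrupt xi(y)
        # goes false or T units of time elapse; return the final state.
        t = 0.0
        while t < T and xi(h(x)):
            x = x + dt * (f(x) + G(x) @ np.asarray(U(t, x)))
            t += dt
        return x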

Compose Atoms to Behaviors. Given atoms σ1, …, σk, define the behavior b = (σ1 σ2 … σk, ξb, Tb), which means to do the atoms sequentially until the interrupt ξb or the time-out Tb occurs. Behaviors nest recursively to make plans.
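
A sketch of this sequential composition with a behavior-level interrupt and time-out (illustrative names; a real MDLe interpreter will differ):

    import time

    def run_behavior(parts, xi_b, T_b, x, run_part, h):
        # Run sub-plans (atoms or nested behaviors) in order, abandoning the
        # whole behavior when its interrupt xi_b goes false or T_b seconds pass.
        start = time.monotonic()
        for part in parts:
            if not xi_b(h(x)) or time.monotonic() - start >= T_b:
                break
            x = run_part(part, x)   # recursion point: a part may itself be a behavior
        return x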

Example Interrupts: (bumper), (wait T), (atIsection b). Here b specifies four bits: whether an obstacle is required at each of the front, left, back, and right. The interrupt occurs when a location with that structure is detected.

Example Atoms, each of the form (Atom interrupt_condition control_law): (Atom (wait T) (rotate θ)); (Atom (bumper OR atIsection(b)) (go v, ψ)); (Atom (wait T) (goAvoid ψ, kf, kt)); (Atom (ri(t) == rj(t)) (align ri rj)). Select ideas from here for your controllers. goAvoid moves in direction ψ, with gains kf and kt on the forward and turn controllers, responding to distances to obstacles.

Environment Model. A graph of local maps; edges in the graph represent behaviors. (We will study local metrical maps, and likewise topological maps, later.) This is compact and effective: local metrical maps are reliable, and geometry is described only where necessary.
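
One way to picture the data structure (a purely hypothetical sketch; the place names and behaviors are made up):

    # Nodes carry local metrical maps; edges carry the behaviors that traverse them.
    places = {"lab": "local map of the lab",
              "hall": "local map of the hallway",
              "office": "local map of the office"}
    edges = {("lab", "hall"): "behavior: exit door, follow wall",
             ("hall", "office"): "behavior: follow corridor to second door"}

    def behavior_for(src, dst):
        # Look up the behavior that travels one edge of the graph.
        return edges.get((src, dst))

    print(behavior_for("lab", "hall"))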

Experiment. They built a model of three places in their laboratory, and demonstrated MDLe plans for travel between pairs of places.

Limitations. Simple sequential FSM model: no parallelism or combination of control laws; no success/failure exits from control laws; much must be packed into the interrupt conditions. Limited evaluation: no exploration or learning; no test of reliability.

Next: Observers. Probabilistic estimates of the true state, given the observations. Basic concepts: probability distributions; the Gaussian model; expectations.

Estimates and Uncertainty. The estimate and its uncertainty are described by a conditional probability density function: the distribution of the true state x, conditioned on the observations y.

Gaussian (Normal) Distribution. Completely described by N(μ, σ): mean μ; standard deviation σ (variance σ²).
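
For reference, the density being described is

    p(x) = (1 / (σ √(2π))) exp(-(x - μ)² / (2σ²))

so μ and σ really are the only parameters.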

The Central Limit Theorem. The sum of many independent random variables with the same mean, but with arbitrary density functions, converges to a Gaussian density function. If a model omits many small unmodeled effects, then the resulting error should converge to a Gaussian density function.
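
A quick numerical check (an assumed setup using numpy, not from the lecture): sums of uniform draws, which individually are far from Gaussian, already look Gaussian after a few dozen terms.

    import numpy as np

    rng = np.random.default_rng(0)
    # 10,000 experiments, each summing 30 uniform(0, 1) draws.
    sums = rng.uniform(0.0, 1.0, size=(10_000, 30)).sum(axis=1)
    print(sums.mean())   # close to 30 * 0.5 = 15
    print(sums.std())    # close to sqrt(30 / 12), about 1.58
    # A histogram of `sums` is visually indistinguishable from a Gaussian.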

Expectations. Let x be a random variable. The expected value E[x] is the mean, E[x] = ∫ x p(x) dx: the probability-weighted mean of all possible values. The sample mean (1/N) Σ xi approaches it. The expected value of a vector x is taken component by component.
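
A small check of that claim (hypothetical numbers): the probability-weighted mean of a discrete random variable against the mean of many samples.

    import numpy as np

    values = np.array([0.0, 1.0, 2.0])
    probs = np.array([0.2, 0.5, 0.3])
    e_x = float(values @ probs)        # probability-weighted mean: 1.1

    rng = np.random.default_rng(1)
    samples = rng.choice(values, size=100_000, p=probs)
    print(e_x, samples.mean())         # the sample mean approaches E[x]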

Variance and Covariance. The variance is E[(x - E[x])²]. The covariance matrix is E[(x - E[x])(x - E[x])^T].

Covariance Matrix. Along the diagonal, the Cii are variances. The off-diagonal Cij are covariances; normalized by the corresponding standard deviations, they give the correlations.
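
Computing the matrix directly from its definition (a numpy sketch; the sample size and seed are assumptions):

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.normal(size=(1000, 2)) * [1.0, 3.0]    # columns with sigma = 1 and 3
    d = X - X.mean(axis=0)                         # x - E[x]
    C = d.T @ d / (len(X) - 1)                     # E[(x - E[x])(x - E[x])^T]
    print(np.diag(C))                              # variances, near [1, 9]
    print(C[0, 1] / np.sqrt(C[0, 0] * C[1, 1]))    # normalized: the correlation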

Independent Variation. x and y are independent Gaussian random variables (N = 100 samples), generated with σx = 1 and σy = 3. The sample covariance matrix is approximately [[1, 0], [0, 9]]: the variances on the diagonal, near-zero covariance off the diagonal.

Dependent Variation. c and d are random variables, generated as c = x + y and d = x - y. The sample covariance matrix is approximately [[10, -8], [-8, 10]]: Var(c) = Var(d) = Var(x) + Var(y) = 10, and Cov(c, d) = Var(x) - Var(y) = -8.
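
Reproducing both experiments in numpy (the seed is an assumption; σx = 1, σy = 3, and N = 100 are from the slides):

    import numpy as np

    rng = np.random.default_rng(3)
    x = rng.normal(0.0, 1.0, 100)   # sigma_x = 1
    y = rng.normal(0.0, 3.0, 100)   # sigma_y = 3
    c, d = x + y, x - y

    print(np.cov(x, y))   # roughly [[1, 0], [0, 9]]: independent
    print(np.cov(c, d))   # roughly [[10, -8], [-8, 10]]: strongly dependent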