Large Scale Navigation Based on Perception Maria Joao Rendas I3S, CNRS-UNSA.


Problem
Control the motion of a robot operating in an open region:
– with little or no a priori knowledge about the region: no need for pre-mission preparation
– with no global positioning: no returns to the surface (stealth...)
– without getting lost: guaranteeing the return to a pre-specified (homing) region

Approach
Map along: identify a set of relevant features and regularly reset the positioning error by returning to them
Exploit: if the nominal plan takes the robot along uninformative (homogeneous) regions, exploit neighbouring regions in search of more features
Quit if needed: refuse to execute a mission if it would put the robot's safety at risk

Underwater environment
(good) features are rare (widely spaced apart), unstable, and similar to each other
the robot's sensors are myopic
no continuous perceptual guidance; high-ambiguity situations

Previous work (I3S)
Architecture based on the definition of a (Semi-)Markov Decision Process, corresponding to a partition of the robot's configuration space determined by the discrete environment features
Extensive use of statistical signal processing and of the theory of (sample path properties of) Markov processes to characterise the transition density of the chain
Guidance conditioned on the current probability of getting lost (absorbing state of the Markov chain)
(mostly with terrestrial robots)
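As a hedged illustration (not the original I3S implementation), the probability of ending in the absorbing "lost" state of such a chain, versus reaching a "home" region, can be computed from the transition matrix with the standard absorbing-chain formulas; the states and numbers below are hypothetical placeholders.

```python
import numpy as np

# Hypothetical 5-state chain: states 0-2 are transient feature regions,
# state 3 is the absorbing "home" region, state 4 is the absorbing "lost" state.
P = np.array([
    [0.60, 0.20, 0.10, 0.05, 0.05],
    [0.10, 0.65, 0.10, 0.10, 0.05],
    [0.05, 0.15, 0.60, 0.10, 0.10],
    [0.00, 0.00, 0.00, 1.00, 0.00],   # home is absorbing
    [0.00, 0.00, 0.00, 0.00, 1.00],   # lost is absorbing
])

Q = P[:3, :3]                        # transient -> transient transitions
R = P[:3, 3:]                        # transient -> absorbing transitions
N = np.linalg.inv(np.eye(3) - Q)     # fundamental matrix of the chain
B = N @ R                            # B[i, k] = P(absorbed in state k | start in i)

print("P(reach home | start state):", B[:, 0])
print("P(get lost   | start state):", B[:, 1])
```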

Illustration

Underlying tools
uncertainty characterisation
– present and future states
– probability of absorption in the lost state
– ambiguity
mapping
– update a manageable representation of the features (contours)
exploitation / observation strategies & behaviours
– search for and acquisition of features

Major limitations
Strong Markov property: requires full identifiability of the reached object
– the architecture is based on an updated state estimate with an associated uncertainty around it: this contradicts the ambiguous nature of the environment and the possibility of large positioning errors
Assumption of the existence of discrete, bounded features
– natural environments are mostly of a continuous nature: more often continuous than discontinuous
– features can be unbounded: when to stop observing? How much is enough?

Work in progress
Makes full use of a Bayesian approach:
Ambiguity: propagate a higher-order approximation to the pdf of the robot state
Environment representation: instead of the locations of individual features, learn a model of their spatial distribution and shape attributes (presentation by Stefan Rolfes)
Guidance/exploration: explicitly incorporate the learned model of the environment into the cost functional of the state controller

Ambiguity problem
In large-scale environments (and with myopic sensors...) each single feature may be (locally) indistinguishable from another one
Common control architectures are based on a single state estimate, obtained with Extended Kalman Filters: wrong associations of measurements to features lead to divergence of the filter and may lead to loss of the robot
– approximate the pdf of the state configuration (given the observations) by a mixture of Gaussian kernels
  efficient implementation (bank of dynamically updated EKFs)
  convenient representation of truly ambiguous solutions (multi-modal pdfs)
– use it to characterise a (partially observable) Markov chain, which is the adequate tool to choose optimal disambiguating manoeuvres
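A minimal sketch of the Gaussian-mixture idea, using a plain linear Kalman update as a stand-in for each EKF in the bank; the models F, H, Q, R and the function names are placeholders, not the actual architecture.

```python
import numpy as np

# Each hypothesis in the mixture is (weight w, mean x, covariance P).

def predict(hyps, F, Q):
    """Time update of every mixture component."""
    return [(w, F @ x, F @ P @ F.T + Q) for (w, x, P) in hyps]

def update(hyps, z, H, R):
    """Measurement update: Kalman-correct each component and re-weight it
    by the likelihood of the measurement under that hypothesis."""
    out = []
    for (w, x, P) in hyps:
        S = H @ P @ H.T + R                     # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
        v = z - H @ x                           # innovation
        lik = np.exp(-0.5 * v @ np.linalg.solve(S, v)) / np.sqrt(np.linalg.det(2 * np.pi * S))
        out.append((w * lik, x + K @ v, (np.eye(len(x)) - K @ H) @ P))
    total = sum(w for (w, _, _) in out)         # renormalise the weights
    return [(w / total, x, P) for (w, x, P) in out]
```

A truly ambiguous (multi-modal) posterior then simply shows up as several components keeping significant weight.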

Exploration / observation
Use the learned (spatial) statistical model to drive the robot to the most informative regions of the workspace (those that are, with high probability, most relevant with respect to its goals)
– Case study: acquisition of current maps (in cooperation with MUMM)

Problem: observation of natural (oceanic) parameters in extended areas
Common survey strategy?
Guidance by prior information

Goal: efficiently use statistical knowledge about the observed field (which constrains the set of field patterns that can actually occur)
The efficiency gain comes from being able to extrapolate across spatial regions and to direct the sensor to the most informative regions

Problem: map a natural field (currents at the mouth of the river Rhone)
Framework: Bayesian (use prior knowledge to characterise the set of possible observed maps)
Prior knowledge: 41 maps (15 x 22 grid) provided by MUMM (Brussels, Belgium); 10 maps reserved for testing

Geometric model
Use the singular value decomposition of the matrix of maps M = [col(m_1) col(m_2) … col(m_41)]
In our case we retain L = 28 singular vectors of M:
c = V α + U β,   with V^T U = 0
Statistical model
α ~ N(μ_α, diag(λ_i))
β ~ N(0, λ_{L+1} I)
c ~ N(V μ_α, V diag(λ_i) V^T + λ_{L+1} U U^T)
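A minimal sketch of how such a subspace model could be learned from the training maps, assuming each map is flattened into one column of M; the synthetic data and variable names below are placeholders for the real MUMM maps.

```python
import numpy as np

# Placeholder data: in the real case each map (15 x 22 grid of current values,
# flattened) would be a column of M.
rng = np.random.default_rng(0)
n_cells, n_maps, L = 15 * 22, 41, 28
M = rng.standard_normal((n_cells, n_maps))

U_full, s, _ = np.linalg.svd(M, full_matrices=False)
V = U_full[:, :L]            # retained singular vectors (the model subspace)
U = U_full[:, L:]            # remaining directions, with V.T @ U = 0

# Empirical Gaussian prior on the subspace coefficients alpha = V.T @ m
alpha = V.T @ M
mu_alpha = alpha.mean(axis=1)
lam = alpha.var(axis=1)      # diagonal entries lambda_i of the prior covariance
```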

Such a model allows extrapolation of local observations
Observation model: z = S c + n
The maximum a posteriori estimate of c combines the prior model with the local observations z
S (the observation points) can be chosen to optimise performance
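A sketch of the corresponding linear-Gaussian MAP estimate, assuming the prior mean and covariance of c implied by the model above and i.i.d. Gaussian observation noise; the function and variable names are hypothetical.

```python
import numpy as np

def map_estimate(m_c, Sigma_c, S, z, sigma_n):
    """MAP (= posterior mean) of the field c under the linear-Gaussian model
    c ~ N(m_c, Sigma_c),  z = S c + n,  n ~ N(0, sigma_n^2 I),
    where S is a selection matrix picking the observed grid points."""
    G = S @ Sigma_c @ S.T + sigma_n**2 * np.eye(S.shape[0])
    K = Sigma_c @ S.T @ np.linalg.inv(G)
    c_hat = m_c + K @ (z - S @ m_c)              # extrapolates to unobserved cells
    Sigma_post = Sigma_c - K @ S @ Sigma_c       # posterior covariance
    return c_hat, Sigma_post
```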

Information guidance (perception driven)
If only a specific feature is of interest, its uncertainty can be computed and the vehicle guided in order to optimise its observation accuracy
Example: map the line of constant current intensity ||c|| = Cte

Local minimax criterion: optimise the accuracy of the worst-estimated neighbouring contour point
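One possible (hypothetical) reading of this criterion as a greedy selection rule: for each candidate observation point, evaluate the rank-one posterior-covariance update its measurement would produce, and pick the point that minimises the largest remaining variance over the contour points of interest; all indices and names below are assumptions.

```python
import numpy as np

def select_next_point(Sigma_c, contour_idx, candidate_idx, sigma_n):
    """Greedy local-minimax choice of the next observation point."""
    best, best_worst = None, np.inf
    for j in candidate_idx:
        s = Sigma_c[:, j]                                  # cross-covariance with cell j
        gain = np.outer(s, s) / (Sigma_c[j, j] + sigma_n**2)
        post_var = np.diag(Sigma_c - gain)[contour_idx]    # variances after observing cell j
        if post_var.max() < best_worst:                    # worst-case accuracy over the contour
            best, best_worst = j, post_var.max()
    return best, best_worst
```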

Approach: combining on-line sensor guidance with prior statistical models provides the ability to extrapolate local observations to unobserved regions and to determine the points most informative with respect to the features of interest
Future work:
– drop the constraint on the observed points (presently on the same grid as the learning maps)
– consider the effect of positioning errors
– consider other types of fields (random closed set models)