An Introduction to Mobile Robotics (CSE350/450-011): Sensor Systems (Continued), 2 Sep 03

Objectives for Today
– Any questions?
– Finish discussion of inertial navigation
– Brief review of discrete/numerical integration
– Review of TOF sensors
– A few thoughts on modeling sensor noise and the Gaussian distribution

Assumptions in our Inertial Navigation Model
– Coriolis effects negligible
– Flat earth
– No effects from vehicle vibration
– “Perfect” sensor orientation
  – With respect to the vehicle
  – With respect to the earth’s surface

Transforming Accelerations into Position Estimates
In a perfect world, position follows from double integration of the measured acceleration:
x(t) = x(0) + v(0)\,t + \int_0^t \int_0^\tau a(s)\,ds\,d\tau
It’s not a perfect world. We have noise and bias in our acceleration measurements:
\tilde{a}(t) = a(t) + b_a + \nu(t)
As a result, the ERROR TERMS are integrated twice along with the signal: a constant bias b_a alone contributes a position error of \frac{1}{2} b_a t^2, which grows quadratically with time.

But what about Orientation?
In a perfect world, heading follows from integrating the measured angular rate:
\theta(t) = \theta(0) + \int_0^t \omega(s)\,ds
It’s not a perfect world. We have noise and bias in our gyroscopic measurements:
\tilde{\omega}(t) = \omega(t) + b_g + \nu(t)
As a result, a constant gyro bias b_g alone contributes a heading error of b_g t, which grows linearly with time.

From Local Sensor Measurements to Inertial Frame Position Estimates
[Figure: a local frame (x, y) attached to the sensor, moving in the plane; a FIXED inertial frame with axes E and N.]
The local frame is attached to the SENSOR and moves in the plane; the inertial frame is FIXED. Accelerations measured in the local frame must be rotated through the heading angle θ into the inertial frame before integration.
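
Below is a minimal MATLAB sketch of this frame change, assuming a known heading estimate theta; the variable names and numeric values are illustrative, not from the lecture.

% Rotate a body-frame acceleration measurement into the fixed
% inertial (E,N) frame through the heading angle theta.
theta  = pi/6;                      % current heading estimate [rad] (assumed)
a_body = [0.2; -0.1];               % measured acceleration, sensor frame [m/s^2]
R      = [cos(theta) -sin(theta);   % 2-D rotation matrix, body -> inertial
          sin(theta)  cos(theta)];
a_EN   = R * a_body;                % acceleration resolved along E and N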

The Impact of Orientation Bias
Ignoring noise, let’s assume that our sensor frame is oriented in an eastwardly direction and ω = 0, so the only heading error is the integrated gyro bias, \hat{\theta}(t) = b_g t. Resolving the measured acceleration through this erroneous heading and integrating twice, the POSITION ERROR SCALES CUBICALLY!!!
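
A worked expansion of that claim (a sketch under the slide’s stated assumptions: true heading due east, ω = 0, constant gyro bias b_g, constant acceleration a along E):

\hat{\theta}(t) = b_g t
\quad\Rightarrow\quad
a_N^{err}(t) = a \sin(b_g t) \approx a\, b_g\, t
\quad\Rightarrow\quad
x_N^{err}(t) = \int_0^t \int_0^\tau a\, b_g\, s \; ds\, d\tau = \frac{a\, b_g}{6}\, t^3

The small-angle approximation holds for b_g t \ll 1, which is exactly the short-horizon regime where inertial navigation is useful.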

Inertial Navigation Strategy
– Noise and bias cannot be eliminated
– Bias in accelerometers/gyros induces errors in position that scale quadratically/cubically with time
– The impact of bias can be reduced through frequent recalibrations to zero out the current bias
– Bottom line:
  – Inertial navigation provides reasonable position estimates over short distances/time periods
  – Inertial navigation must be combined with other sensor inputs for extended position estimation
QUESTION: How do we perform the integrations with a discrete sensor? (See the sketch below.)
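
One common answer (a sketch, not necessarily the lecture’s prescribed method) is cumulative trapezoidal integration of the sampled signal, e.g. with MATLAB’s cumtrapz; all signals below are simulated placeholders.

% Dead reckoning along one axis from sampled accelerometer data.
dt     = 0.01;                     % 100 Hz sample period [s] (assumed)
t      = (0:dt:10)';
a_true = 0.1*sin(t);               % hypothetical true acceleration [m/s^2]
b_a    = 0.05;                     % constant accelerometer bias [m/s^2]
a_meas = a_true + b_a + 0.01*randn(size(t));   % biased, noisy measurement
v      = cumtrapz(t, a_meas);      % first integration:  velocity
x      = cumtrapz(t, v);           % second integration: position
x_true = cumtrapz(t, cumtrapz(t, a_true));
plot(t, x - x_true);               % error dominated by (1/2)*b_a*t.^2
xlabel('time [s]'); ylabel('position error [m]');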

Time-of-Flight Sensors: Ultrasonic (aka SONAR)
– Emits high-frequency sound; a receiver captures the echo
– Rigidly mounted to provide distance at a fixed relative bearing
– Inexpensive and lightweight
– Range to ≈ 10 meters
– Error ≈ 2%
– Potential error sources:
  – Specular reflection
  – Crosstalk
  – Multi-path
[Images: iRobot® B21R robot; Polaroid™ transducer]
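
For reference, the range computation behind a TOF sonar is just the round-trip echo time scaled by the speed of sound; a minimal sketch (the timing value is hypothetical, and 343 m/s assumes air at roughly 20 °C):

c_sound = 343;                 % speed of sound in air [m/s] (assumed conditions)
t_echo  = 0.0292;              % measured round-trip time [s] (hypothetical)
range   = c_sound * t_echo / 2 % divide by 2 for the one-way distance (~5 m)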

Time-of-Flight Sensors: Laser Range Finders (LRF)
[Image: SICK® LMS-200]
– “Most accurate” exteroceptive sensor available
– Relies on detecting the backscatter from a pulsed IR laser beam
– Range: 80 meters
– 180° scans at 0.25° resolution
– Error: 5 mm SD at ranges < 8 meters
– Negatives:
  – Weight ≈ 10 pounds
  – Cost ≈ $5K
  – Power consumption ≈ 20 W / 160 W
  – Difficulty detecting transparent or dark matte surfaces
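
A planar LRF scan is naturally processed by converting each (bearing, range) pair to Cartesian coordinates in the sensor frame; a minimal sketch using the slide’s scan parameters (180° at 0.25° steps), with fabricated range data:

ang    = (0:0.25:180)' * pi/180;   % beam bearings [rad], 721 beams
ranges = 5 + 0.5*sin(4*ang);       % placeholder range readings [m]
x = ranges .* cos(ang);            % Cartesian coordinates of each return
y = ranges .* sin(ang);
plot(x, y, '.'); axis equal;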

Modeling Sensor Noise: Some Initial Ideas
– Assume that we can remove sensor bias through calibration
– All sensor measurements are still wrong, as they are corrupted by random sensor noise
– Goal: develop algorithms which are robust to sensor noise
– Problem: how do we model the noise if its distribution is unknown?

Modeling Sensor Noise: Some Initial Ideas (cont’d)
Some possible solutions:
– Collect empirical data and develop a consistent model
– Gaussian assumption: v ~ N(μ, σ²)
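
For the empirical route, a minimal MATLAB sketch of fitting a Gaussian model by estimating the sample mean and variance of collected residuals (the data here are simulated placeholders):

v       = 0.02 + 0.1*randn(1000,1);   % simulated sensor residuals
mu_hat  = mean(v);                    % estimated bias, should be near 0.02
var_hat = var(v);                     % estimated noise variance, near 0.01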

Some Other Definitions
Population mean:
\mu = E[X] = \int_{-\infty}^{\infty} x\, p(x)\, dx
The population mean is also referred to as the expected value of the distribution.

Some Other Definitions (cont’d)
Population variance:
\sigma^2 = E[(X - \mu)^2] = \int_{-\infty}^{\infty} (x - \mu)^2\, p(x)\, dx
σ is referred to as the standard deviation, and is always positive.

Why a Gaussian?
– Central Limit Theorem
– Mathematical convenience (an example of the addition property appears below):
  – Gaussian addition distribution
  – Gaussian subtraction distribution
  – Gaussian ratio distribution
  – “Invariance” to convolution
  – “Invariance” to linear transformation
– Empirical data / experimental support
– “It’s the normal distribution”
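
As an example of the addition property noted above: for independent Gaussian random variables, the sum is again Gaussian,

X \sim N(\mu_1, \sigma_1^2),\; Y \sim N(\mu_2, \sigma_2^2)
\;\Rightarrow\;
X + Y \sim N(\mu_1 + \mu_2,\; \sigma_1^2 + \sigma_2^2).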

The “Standard” 2-D Gaussian
p(x, y) = \frac{1}{2\pi \sigma_x \sigma_y} \exp\!\left( -\frac{(x-\mu_x)^2}{2\sigma_x^2} - \frac{(y-\mu_y)^2}{2\sigma_y^2} \right)
This formula is only valid when the principal axes of the distribution are aligned with the x-y coordinate frame (more later).
QUESTION: Assuming that our accelerometers are corrupted by Gaussian noise, would we expect the distribution for position to be Gaussian as well?
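
A minimal MATLAB sketch evaluating this axis-aligned density on a grid (all parameter values are illustrative):

mx = 0; my = 0; sx = 1; sy = 2;    % means and standard deviations (assumed)
[xg, yg] = meshgrid(-5:0.1:5);
p = 1/(2*pi*sx*sy) * exp(-(xg-mx).^2/(2*sx^2) - (yg-my).^2/(2*sy^2));
surf(xg, yg, p); shading interp;   % bell surface, elongated along y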

So how can I sample a Gaussian distribution in MATLAB? The randn function:
>> help randn
RANDN Normally distributed random numbers.
RANDN(N) is an N-by-N matrix with random entries, chosen from a normal distribution with mean zero, variance one and standard deviation one.
>> x = randn
>> x = randn(2,3)
QUESTION: How do I generate a general 1D Gaussian distribution from this?
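
One standard answer (a sketch, not spelled out on the slide): scale the unit-variance output of randn by σ and shift by μ.

mu    = 1.5;                       % desired mean (assumed value)
sigma = 0.3;                       % desired standard deviation (assumed value)
v = mu + sigma * randn(10000,1);   % v ~ N(mu, sigma^2)
% Sanity check: mean(v) is close to 1.5 and std(v) is close to 0.3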

A (Very) Brief Overview of Computer Vision Systems
Cameras are the natural extension of biological vision to robotics.
Advantages of cameras:
– Tremendous amounts of information
– Natural medium for human interfaces
– Small size
– Passive
– Low power consumption
Disadvantages of cameras:
– Explicit estimates for parameters of interest (e.g. range, bearing) are computationally expensive to obtain
– Accuracy of estimates is strongly tied to calibration
– Calibration can be quite cumbersome
[Images: Pulnix™ and Point Grey™ cameras]

Sample Robotics Application: Obstacle Avoidance

Single Camera System (Perspective Camera Model)
After an appropriate calibration, every pixel on the CCD can be associated with a unique ray in space, with an associated azimuth angle θ and elevation angle φ. An individual camera provides NO EXPLICIT DISTANCE INFORMATION.
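
A minimal sketch of that pixel-to-ray mapping for an ideal pinhole camera; the intrinsics (focal length f in pixels, principal point (cx, cy)) and the pixel are assumed values, not calibration results from the lecture:

f  = 500; cx = 320; cy = 240;      % assumed intrinsics [px]
u  = 400; v  = 200;                % pixel of interest [px]
theta = atan2(u - cx, f);                 % azimuth of the ray [rad]
phi   = atan2(cy - v, hypot(u - cx, f));  % elevation [rad]; image v grows downward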

Stereo Vision Geometry
[Figure: two pinhole cameras with optical centers separated by baseline B, each with a CCD at focal length f. A world point (X, Y, Z) projects to (x_i^L, y_i^L) on the left CCD and (x_i^R, y_i^R) on the right CCD.]

Stereo Geometry (cont’d)
A world point (X, Y, Z) appears at (x_i^L, y_i^L) in the left image and (x_i^R, y_i^R) in the right image; the disparity is defined as (x_i^R - x_i^L). Depth follows from the disparity as Z = f B / |x_i^R - x_i^L|.
NOTE: This formulation assumes that the two images are already rectified.
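
A minimal sketch of the resulting triangulation, written with the common convention d = xL - xR (positive for a point in front of a rectified pair; the slide’s sign convention depends on its frame placement). f, B, and the matched coordinates are illustrative values:

f  = 500;  B = 0.12;               % focal length [px], baseline [m] (assumed)
xL = 26; xR = 6; yL = 15;          % coordinates relative to image centers [px]
d  = xL - xR;                      % disparity [px]
Z  = f * B / d;                    % depth [m], here 3 m
X  = xL * Z / f;                   % lateral offset, left-camera frame [m]
Y  = yL * Z / f;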

Sample Stereo Reconstruction
* Images from a Point Grey Bumblebee™ camera

Next Time… How do we extract features from images?
– Edge segmentation
– Color segmentation
– Corner extraction