Autonomous Navigation Based on 2-Point Correspondence using ROS Submitted By: Li-tal Kupperman, Ran Breuer Advisors: Majd Srour, Prof. Ehud Rivlin Intelligence Systems Laboratory, CS, Technion.

The Problem Navigating a robot to a given target using only visual tools, in an unknown, planar indoor environment. The target is given as an image taken from the target pose.

The Method At each step, a picture is captured from the current position of the robot and compared to the target picture. The algorithm then computes the rotation and translation to the target pose.

What Makes It So Unique We use a special structure of the Essential Matrix for a planar scene, requiring only 2-point correspondences, which gives precision and numerical stability.

Our Components

Background Epipolar Geometry Epipolar Point Epipolar Line Homography Essential Matrix

Epipolar Geometry

Center Of Projection

Epipolar Point

Epipolar Lines

Homography A 3x3 matrix describing the relation between a plane and an image of the plane. One image – warping and rotating the image so it is aligned with the plane. Two images – describes how two images of the same plane are related to one another, mapping corresponding points (and epipolar lines) between them.
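As a small illustration of the point-mapping role of a homography, the sketch below applies a hypothetical 3x3 homography (a rotation plus translation in the image plane, chosen here only for the example) to a 2D point using homogeneous coordinates:

```python
import numpy as np

# A hypothetical homography: rotate by 30 degrees and translate in the plane.
theta = np.deg2rad(30)
H = np.array([
    [np.cos(theta), -np.sin(theta), 10.0],
    [np.sin(theta),  np.cos(theta),  5.0],
    [0.0,            0.0,            1.0],
])

def apply_homography(H, p):
    """Map a 2D point through H using homogeneous coordinates."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]  # de-homogenize by the third coordinate

p = (4.0, 2.0)
q = apply_homography(H, p)
```

The de-homogenization step is what distinguishes a general homography from an affine map: the third coordinate is not 1 in general.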

Essential Matrix A 3x3 matrix relating corresponding points in two images: the calibrated version of the Fundamental matrix. It satisfies p'^T E p = 0, where p', p are corresponding points in the two images. Useful for determining the translation and rotation between two images: the rotation matrix can be fully recovered, the translation only up to scale.
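The epipolar constraint can be checked numerically. The sketch below (with an arbitrary, made-up relative pose) builds E = [t]x R from a rotation and translation, projects one simulated 3D point into both calibrated cameras, and verifies that p'^T E p vanishes:

```python
import numpy as np

def skew(t):
    """Cross-product matrix: skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Hypothetical relative pose: rotation about Y plus a translation.
theta = np.deg2rad(15)
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([0.5, 0.0, 0.2])

E = skew(t) @ R  # essential matrix for this pose

# One 3D point seen by both calibrated cameras (camera 2 at X2 = R X + t).
X = np.array([1.0, 0.5, 4.0])
p1 = X / X[2]          # normalized image coordinates in camera 1
X2 = R @ X + t
p2 = X2 / X2[2]        # normalized image coordinates in camera 2
# p2^T E p1 should vanish (up to floating point) for a true correspondence.
```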

Essential Matrix in the Planar Model Movement is limited to the plane, so rotation is around the Y axis only. The rotation matrix and translation vector therefore take a restricted form, which gives the Essential Matrix its special planar structure.
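The restricted form can be seen directly: with yaw-only rotation and translation confined to the XZ plane (t = (tx, 0, tz)), the product [t]x R leaves only four nonzero entries of E, arranged in a cross pattern. A sketch (the specific angle and translation are arbitrary):

```python
import numpy as np

def planar_essential(theta, tx, tz):
    """Essential matrix for planar motion: yaw about Y, translation in XZ."""
    R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(theta), 0.0, np.cos(theta)]])
    t_cross = np.array([[0.0, -tz, 0.0],
                        [tz, 0.0, -tx],
                        [0.0, tx, 0.0]])  # [t]_x for t = (tx, 0, tz)
    return t_cross @ R

E = planar_essential(np.deg2rad(25), 0.7, 0.3)
# Only the "cross" entries survive; the four corners and the center are zero.
zero_mask = (abs(E) < 1e-12)
```

Since only four entries are unknown, and E is defined up to scale, far fewer correspondences are needed than for a general essential matrix, which is what makes the 2-point estimation possible.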

Algorithm & Implementation

Program Workflow Input: target image. 1. Capture an image from the current pose. 2. Extract and match features from both images. 3. Filter the largest inlier set using PROSAC. 4. Using the best estimated model, extract the matrices R and T. 5. Calculate the distance to the target. 6. Move to a new pose and repeat from step 1 until the target pose is reached.
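The workflow above can be sketched as a simple control loop. The capture, matching, estimation, and motion functions here are hypothetical stand-ins for the ROS-backed modules described later, and the toy run at the bottom just drives the loop with a distance that halves after each move:

```python
# A minimal sketch of the navigation loop; all callbacks are stand-ins.
def navigate(target_image, capture_image, match_features,
             estimate_pose_prosac, move, threshold=0.05, max_steps=20):
    for step in range(max_steps):
        current = capture_image()                        # 1. capture image
        matches = match_features(current, target_image)  # 2. extract & match
        R, t, dist = estimate_pose_prosac(matches)       # 3-5. PROSAC, R/T, distance
        if dist < threshold:
            return step                                  # target pose reached
        move(R, t)                                       # 6. move, then repeat
    return max_steps

# Toy run: the "distance to target" halves after each move.
state = {"d": 0.3}
steps = navigate("target", lambda: None, lambda a, b: [],
                 lambda m: (None, None, state["d"]),
                 lambda R, t: state.update(d=state["d"] / 2))
```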

Starting Position & Correspondence Finding a good starting point – scanning and covering 360° of the room. Finding correspondences – using the SIFT method and brute-force matching, then eliminating bad matches.
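The slides do not say how bad matches are eliminated; one common approach with brute-force matching is Lowe's ratio test, sketched below on synthetic descriptors (in the real system the descriptors would come from SIFT):

```python
import numpy as np

def brute_force_match(desc1, desc2, ratio=0.8):
    """Brute-force nearest-neighbor matching with Lowe's ratio test:
    keep a match only when the best distance is clearly smaller than
    the second best, which eliminates most ambiguous (bad) matches."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        best, second = np.argsort(dists)[:2]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Toy descriptors: desc1 holds the first two rows of desc2, slightly perturbed.
desc2 = np.array([[1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
desc1 = np.array([[1.01, 0.0], [0.0, 0.99]])
matches = brute_force_match(desc1, desc2)
```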

Estimating the Essential Matrix Only 2 point correspondences are needed to estimate E. Using a variation of PROSAC, we find the largest inlier set that agrees with E.

Estimating the Essential Matrix (cont.) Each point correspondence contributes a row to a homogeneous linear system Ax = 0, where x holds the unknown entries of E. The planar structure of E adds a constraint on x, giving the constrained version of the problem.
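The slides omit the actual equations, but the standard way to solve a homogeneous system Ax = 0 in the least-squares sense is via SVD: take the right singular vector associated with the smallest singular value. A sketch on a toy system with a known null vector:

```python
import numpy as np

def solve_homogeneous(A):
    """Least-squares solution of A x = 0 with ||x|| = 1: the right
    singular vector of A with the smallest singular value."""
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]

# Toy system: both rows of A are orthogonal to the unit vector x0.
x0 = np.array([1.0, -2.0, 2.0]) / 3.0
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
x = solve_homogeneous(A)
if x[0] < 0:   # singular vectors are defined up to sign
    x = -x
```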

Estimating the Essential Matrix (cont.) The constrained problem reduces to intersecting the unit circle with an ellipse, solved using SVD. Possible outcomes: 2 (or 4) intersection points – candidate solutions for E; no intersection points – no solution; infinitely many intersection points – pure rotation.

What's the Next Step? Pure rotation – the robot is at the target, but with the wrong orientation. Calculating distance – done with the help of a third image; 2 methods were implemented: the triangle method and the cross-ratio method.
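The cross-ratio method rests on the fact that the cross-ratio of four collinear points is invariant under any projective transformation. A small numeric check of that invariance, using a hypothetical 1D projective map (the points and map are arbitrary):

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear points (scalar coordinates on the line)."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def projective_map(x, M):
    """1D projective transform x -> (m00 x + m01) / (m10 x + m11)."""
    return (M[0, 0] * x + M[0, 1]) / (M[1, 0] * x + M[1, 1])

pts = [0.0, 1.0, 2.0, 5.0]
M = np.array([[2.0, 1.0], [0.5, 3.0]])  # an arbitrary invertible map
before = cross_ratio(*pts)
after = cross_ratio(*[projective_map(x, M) for x in pts])
# before and after agree: the camera projection cannot change the cross-ratio,
# which is what lets a third image be used to recover distance.
```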

Design and Implementation ROS nodes are used to communicate between the modules, the robot, and the camera. Auto Navigator – the main module. Robot Controller. Image Capturer + Undistorter. Epipolar Solver.

Auto Navigator Interface Diagram

Using ROS Based on nodes and services; each node subscribes or publishes to a Topic. ROSARIA – controls the robot. Axis Camera – provides the photo stream for capturing images.

Challenges Eliminating bad images. Algorithm divergence near the target (triangle method). Computing the distance to the target. Restarting after a bad step. The algorithm struggles when a dominant plane fills the scene.

Experimental Test Results Convergence: 3-8 iterations, ending 1-20 cm from the target. The triangle method diverges when close to the target. The two distance-calculation methods were compared.

Target Image

Starting Position

Midway – 2nd iteration

The End

Target vs. The End

Measurables Distance to target by iteration

Measurables

Summary Vision-based navigation can deduce orientation and distance using feature matching and the Essential Matrix. Planar movement gives the Essential Matrix a unique structure, allowing estimation from only 2-point correspondences. The algorithm converges quickly close to the target – both pose and orientation.

Future Work Solving divergence at the target pose. Better elimination of "bad" image sources (wrong poses). A qualifier for selecting better matches and features, for more efficient estimation at each step. Solving the issue of a dominant-plane scene.