SLAM with SIFT (Se, Lowe, and Little), presented by Matt Loper


SLAM with SIFT (aka "Mobile Robot Localization and Mapping with Uncertainty using Scale-Invariant Visual Landmarks") by Se, Lowe, and Little. Presented by Matt Loper. CS296-3: Robot Learning and Autonomy, Brown University, April 4th, 2007.

Objective
SLAM: Simultaneous Localization and Mapping.
Inputs: odometry and sensor readings, over time.
Outputs: a map, and the set of locations traversed on it.
Uncertainty modeling:
- on the inputs is essential, since sensors and odometry are noisy
- in the output is great if you can do it (this paper does)

Overview
- SIFT Stereo
- Ego-motion estimation: improving on odometry
- Landmark tracking: remembering what we've seen
- First results
- Heuristic improvements: tricks to improve success
- Error modeling
- More results

SIFT Stereo
Typical sensors for SLAM include:
- laser
- sonar
- vision, with an instrumented environment
Instead, this paper uses the Digiclops, a trinocular camera rig.

SIFT Stereo
Goal: find features in the images, and compute their 3D positions relative to the camera.
Three steps:
1. Detect unique features in each of the 3 cameras
2. Match features across cameras
3. Recover feature positions in 3D, relative to the camera

SIFT Stereo
Step 1 of 3: Detect unique features in each of the 3 cameras.
SIFT allows detection and characterization of image features, as in the sketch below.
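
A minimal sketch of SIFT detection, assuming OpenCV's implementation rather than the authors' original code; each keypoint carries the position, scale, and orientation used by the matching constraints in the next step.

```python
# SIFT keypoint detection with OpenCV (an assumption; the paper predates
# this API). Input: one grayscale image from the trinocular rig.
import cv2

sift = cv2.SIFT_create()

def detect_features(image):
    """Return SIFT keypoints (position, scale, orientation) and 128-D descriptors."""
    keypoints, descriptors = sift.detectAndCompute(image, None)
    return keypoints, descriptors
```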

SIFT Stereo
Step 2 of 3: Match features across cameras.
Given our three input images, we could simply match every feature against every other. But we can constrain the matches further:
- Epipolar constraint
- Disparity constraint: from the rightmost camera, objects should appear more to the left
- Orientation and scale should be about the same in the two images
- There should be only one good match
A sketch of this constrained matching follows.
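
A hedged sketch of constraint-based matching between two of the three views, for rectified images so that the epipolar constraint becomes "same scanline". The threshold values here are illustrative assumptions, not the paper's.

```python
import numpy as np

def match_stereo(kps_l, des_l, kps_r, des_r,
                 max_vertical=1.0, max_disparity=100.0,
                 scale_ratio=1.3, max_dtheta=20.0):
    """Match left-camera keypoints to right-camera keypoints under the
    epipolar, disparity, scale, orientation, and uniqueness constraints."""
    matches = []
    for i, (kp_l, d_l) in enumerate(zip(kps_l, des_l)):
        candidates = []
        for j, (kp_r, d_r) in enumerate(zip(kps_r, des_r)):
            dx = kp_l.pt[0] - kp_r.pt[0]        # horizontal disparity
            dy = abs(kp_l.pt[1] - kp_r.pt[1])   # vertical offset
            if dy > max_vertical:                # epipolar: same scanline
                continue
            if not (0.0 < dx < max_disparity):   # right view: shifted left
                continue
            if max(kp_l.size, kp_r.size) / min(kp_l.size, kp_r.size) > scale_ratio:
                continue                         # scales roughly agree
            dtheta = abs(kp_l.angle - kp_r.angle) % 360.0
            if min(dtheta, 360.0 - dtheta) > max_dtheta:
                continue                         # orientations roughly agree
            candidates.append((np.linalg.norm(d_l - d_r), j))
        if len(candidates) == 1:                 # keep only unambiguous matches
            matches.append((i, candidates[0][1]))
    return matches
```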

SIFT Stereo
[Figure: constraint demonstration, matched features between the left and right camera images]

SIFT Stereo
Step 3 of 3: Recover feature positions in 3D, relative to the camera.
Approach #1 (what the authors did): the size of the disparity gives the distance; the trinocular rig yields two disparities (horizontal and vertical), so we can take their average.
Approach #2: project rays through the pixels and find their closest intersection.
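
A short sketch of the standard depth-from-disparity relation behind Approach #1 (textbook stereo geometry, not code from the paper); the focal length f, baseline b, and principal point (cx, cy) are assumed rig calibration parameters.

```python
# Back-project a pixel with a measured disparity into camera coordinates,
# using the pinhole relations Z = f*b/d, X = (u-cx)*Z/f, Y = (v-cy)*Z/f.
def triangulate(u, v, disparity, f, b, cx, cy):
    z = f * b / disparity      # depth grows as disparity shrinks
    x = (u - cx) * z / f
    y = (v - cy) * z / f
    return x, y, z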

SIFT Stereo
[Figure: left and right camera images; note the vertical and horizontal disparities]

Ego-motion estimation
Odometry gives us the change in position/orientation. Let's refine these moment-to-moment estimates of that change.
Note: we will still accumulate error by summing noisy frame-to-frame differences; the drift will just be less extreme.

Ego-motion estimation
Find matches across two frames. Just as we had constraints between two cameras, we can apply constraints between two frames: position, scale, orientation, disparity.
Given >= 3 matches, we can solve for the change in camera position/orientation.
Using odometry as the initializer, minimize the reprojection error (using, for example, Matlab's lsqnonlin); a sketch follows.
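
A hedged sketch of that refinement as nonlinear least squares, using SciPy's least_squares in place of Matlab's lsqnonlin. The planar [x, y, yaw] pose and the residual form are illustrative simplifications; the paper estimates the full camera motion.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(pose, pts_prev, pts_curr):
    """Transform the previous frame's matched 2-D points by a candidate
    planar pose and compare against where they appear in the current frame."""
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    predicted = pts_prev @ R.T + np.array([x, y])
    return (predicted - pts_curr).ravel()

def refine_egomotion(odometry_guess, pts_prev, pts_curr):
    # Odometry seeds the optimizer, so it only needs to correct small errors.
    result = least_squares(residuals, odometry_guess, args=(pts_prev, pts_curr))
    return result  # result.x is the refined pose; result.jac is reused later
```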

Landmark Tracking
Let's turn SIFT features into landmarks. For each 3D SIFT feature found, we associate some data with it:
- X, Y, Z: 3D position in the real world
- s: scale of the landmark in the real world
- o: orientation of the landmark in the real world (viewed from the top down)
- l: a count of the number of times this landmark has gone unseen when it should have been seen; this acts as a measure of error for the landmark
A sketch of this record appears below.
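
A minimal sketch of that landmark record; the fields follow the slide, while the dataclass itself is an illustrative assumption.

```python
from dataclasses import dataclass

@dataclass
class Landmark:
    x: float            # 3-D position in the world frame
    y: float
    z: float
    scale: float        # real-world scale of the SIFT feature
    orientation: float  # top-down orientation
    missed: int = 0     # times unseen when it should have been seen
```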

Landmark Tracking
Rules for landmarks (sketched in code below):
- If a landmark is expected to be seen but is not seen, its "missed" count goes up by one.
- If a landmark is expected to be seen and is seen, its missed count resets to zero.
- If a new landmark is found, add it to the database and set its missed count to zero.
- If the missed count goes above 20, remove the landmark.
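
A hedged sketch of those rules, operating on the Landmark record from the previous sketch. The threshold of 20 comes from the slide; the function signature and dict-based database are illustrative assumptions.

```python
MAX_MISSES = 20

def update_landmarks(database, expected_ids, observed_ids, new_landmarks):
    for lid in expected_ids:
        if lid in observed_ids:
            database[lid].missed = 0      # seen: reset the miss count
        else:
            database[lid].missed += 1     # expected but unseen
    for lid, lm in new_landmarks.items():  # newly found landmarks
        lm.missed = 0
        database[lid] = lm
    # prune landmarks that have been missed too many times
    for lid in [k for k, lm in database.items() if lm.missed > MAX_MISSES]:
        del database[lid]
```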

First results
- Generated map
- Starting point at the cross
- Path shown as a dotted line
- The robot comes back to the starting point, and the error is measured
- That error is very small

Heuristic improvements
- Only insert new landmarks once they've been seen for a few frames.
- Build "permanent" landmarks when the environment is clear.
- Associate a view vector with each landmark: if we're "behind" the landmark, or more than 20 degrees to the side of it, don't expect to see it (see the sketch below).
- Allow multiple features per landmark (i.e., multiple scales, orientations, and view vectors).
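
A minimal sketch of the view-vector visibility test, assuming the view vector stores the direction from which the landmark was first observed; the 20-degree threshold comes from the slide, the vector math is an assumption.

```python
import numpy as np

def landmark_visible(view_vec, current_view_dir, max_angle_deg=20.0):
    """Expect to see the landmark only when the current viewing direction
    is within max_angle_deg of the stored view vector."""
    v1 = view_vec / np.linalg.norm(view_vec)
    v2 = current_view_dir / np.linalg.norm(current_view_dir)
    cos_angle = np.clip(np.dot(v1, v2), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle)) <= max_angle_deg
```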

Error Modeling: Robot Position
Uses a Kalman filter. During each frame, we maintain:
- x: the state, i.e. position (x, y) and angle (yaw, pitch, roll)
- P: the uncertainty over that state, a covariance matrix (i.e. an ellipsoidal Gaussian)
A sketch of such a filter follows.
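
A hedged sketch of a Kalman filter over the robot pose. The 5-D state [x, y, yaw, pitch, roll] follows the slide; the predict/update equations are the standard linear Kalman form with an identity measurement model, not the paper's exact filter.

```python
import numpy as np

class PoseFilter:
    def __init__(self):
        self.x = np.zeros(5)          # state: x, y, yaw, pitch, roll
        self.P = np.eye(5) * 1e-2     # covariance over the state

    def predict(self, u, Q):
        """Apply an odometry increment u with process noise covariance Q."""
        self.x = self.x + u
        self.P = self.P + Q

    def update(self, z, R):
        """Fuse a direct pose measurement z (e.g., from SIFT stereo)
        with measurement covariance R."""
        K = self.P @ np.linalg.inv(self.P + R)   # Kalman gain (H = I)
        self.x = self.x + K @ (z - self.x)
        self.P = (np.eye(5) - K) @ self.P
```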

Error Modeling: Robot Position
x_LS is the pose predicted by SIFT stereo.
P_LS is the covariance of this prediction, computed as the inverse of J^T J (with J the Jacobian of the least-squares residuals); see the sketch below.
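
A minimal sketch of recovering that covariance from the least-squares fit, reusing the `result` returned by `refine_egomotion` above; scaling (J^T J)^(-1) by the residual variance is a common convention and an assumption here.

```python
import numpy as np

def pose_covariance(result):
    """Approximate P_LS = s^2 * (J^T J)^(-1) from a SciPy least_squares result."""
    J = result.jac                               # Jacobian at the solution
    dof = max(len(result.fun) - len(result.x), 1)
    s2 = result.fun @ result.fun / dof           # residual variance estimate
    return s2 * np.linalg.inv(J.T @ J)
```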

Error Modeling: Landmarks
- Each landmark has a covariance matrix.
- It gets updated on a frame-by-frame basis.
- The math is in the paper.

More results: just spinning around.

More results: going in a path around the room.

More results: uncertainty over the robot position.