Distinctive Image Features from Scale-Invariant Keypoints By David G. Lowe, University of British Columbia Presented by: Tim Havinga, Joël van Neerbos and Robert van der Linden

Organization: Introduction, Keypoint extraction, Applications

Introduction Matching images across affine transformations and across changes in lighting and 3D viewpoint.

Introduction: motion tracking, object and scene recognition, stereo correspondence

Extracting features: 1. Extrema detection, 2. Keypoint localization, 3. Orientation assignment, 4. Local image descriptor

Extrema detection Blur copies of the image with broadening Gaussian filters.

Extrema detection Subtract these (DoG) to find local extrema.
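The construction above can be sketched in a few lines of Python; this assumes NumPy and SciPy, and uses 1.6 and 3 scales per octave as illustrative values matching the paper's reported settings (it is not the paper's incremental-blurring implementation):

import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(image, sigma0=1.6, scales_per_octave=3):
    # Blur copies of the image with progressively broader Gaussians;
    # k is the constant factor between successive scales.
    k = 2 ** (1.0 / scales_per_octave)
    sigmas = [sigma0 * k ** i for i in range(scales_per_octave + 3)]
    blurred = [gaussian_filter(image.astype(np.float32), s) for s in sigmas]
    # Subtracting adjacent blurred images gives the DoG stack.
    dogs = [b2 - b1 for b1, b2 in zip(blurred, blurred[1:])]
    return np.stack(dogs), sigmas

Repeating this per octave, with the image downsampled by a factor of 2 each time, yields the full DoG pyramid.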

Extrema detection Calculate the DoGs for different Gaussians; this is repeated per octave, with the image downsampled by a factor of 2 each time.

Extrema detection [figure: progressively Gaussian-blurred images]

Keypoint localization Select keypoints that are higher or lower than their 26 neighbours.
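A sketch of this test on the DoG stack from the earlier sketch (dogs has shape (scales, height, width)); the centre value is compared against the full 3 x 3 x 3 cube around it, i.e. its 26 neighbours across the scale below, its own scale and the scale above:

def is_local_extremum(dogs, s, y, x):
    # The cube includes the centre itself, so >= max / <= min means the
    # centre is the (possibly tied) largest or smallest of the 27 values.
    value = dogs[s, y, x]
    cube = dogs[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
    return value >= cube.max() if value > 0 else value <= cube.min()

Scanning every interior position (skipping the image borders and the outermost scales) gives the candidate keypoints.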

Keypoint localization Reject all points where the contrast is too low.

Keypoint localization Reject all points that lie on an edge.
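Both rejection tests can be sketched for a single DoG level dog (a 2-D array with values scaled to [0, 1]); the 0.03 contrast threshold and curvature ratio r = 10 are the values reported in the paper, and the sub-pixel refinement of the extremum that the paper applies first is omitted here:

def keep_keypoint(dog, y, x, contrast_thresh=0.03, r=10.0):
    # Contrast check: reject extrema with a weak DoG response.
    if abs(dog[y, x]) < contrast_thresh:
        return False
    # Edge check: estimate the 2x2 Hessian of the DoG surface; points on
    # an edge have one large and one small principal curvature, which
    # makes trace^2 / determinant large.
    dxx = dog[y, x + 1] - 2 * dog[y, x] + dog[y, x - 1]
    dyy = dog[y + 1, x] - 2 * dog[y, x] + dog[y - 1, x]
    dxy = (dog[y + 1, x + 1] - dog[y + 1, x - 1]
           - dog[y - 1, x + 1] + dog[y - 1, x - 1]) / 4.0
    trace, det = dxx + dyy, dxx * dyy - dxy * dxy
    if det <= 0:
        return False  # curvatures differ in sign: not a stable keypoint
    return trace * trace / det < (r + 1) ** 2 / r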

Effects of this elimination Extrema detection

Effects of this elimination Contrast check

Effects of this elimination Edge check

Extracting features: 1. Extrema detection, 2. Keypoint localization, 3. Orientation assignment, 4. Local image descriptor

Orientation assignment Assign an orientation to a keypoint to make its descriptor invariant to rotation

Orientation assignment The orientation of a keypoint is determined in four steps: 1. Determine sample points; 2. Determine the gradient magnitude and orientation of each sample point; 3. Create an orientation histogram of the sample points; 4. Extract the dominant directions from the histogram.

Step 1: Determine sample points The source image is the Gaussian-smoothed image with the closest scale; use all pixels within a certain radius around the keypoint. [table: actual scale vs. used Gaussian]

Step 2: Determine gradient magnitude and orientation of each sample point With L the Gaussian-smoothed image at the keypoint's scale: gradient magnitude m(x, y) = \sqrt{(L(x+1, y) - L(x-1, y))^2 + (L(x, y+1) - L(x, y-1))^2}, and gradient orientation \theta(x, y) = \tan^{-1}\big((L(x, y+1) - L(x, y-1)) / (L(x+1, y) - L(x-1, y))\big). Both are computed from pixel differences in a small neighbourhood around each sample point.
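These finite differences vectorise directly; a minimal NumPy sketch over a whole Gaussian-smoothed image L (interior pixels only):

import numpy as np

def gradient_magnitude_orientation(L):
    dx = L[1:-1, 2:] - L[1:-1, :-2]   # L(x+1, y) - L(x-1, y)
    dy = L[2:, 1:-1] - L[:-2, 1:-1]   # L(x, y+1) - L(x, y-1)
    magnitude = np.sqrt(dx ** 2 + dy ** 2)
    # atan2 keeps the full 0..360 degree range needed for the histogram.
    orientation = np.degrees(np.arctan2(dy, dx)) % 360.0
    return magnitude, orientation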

Step 3: Create an orientation histogram The histogram has 36 bins, each covering 10 degrees. Each sample is weighted by its gradient magnitude and by a Gaussian-weighted circular window.

Step 4: Extract dominant directions Take the peak(s) from the orientation histogram; use all peaks greater than 80% of the highest peak. Every direction gets its own keypoint.
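Steps 3 and 4 can be sketched as follows for one keypoint, given flat arrays of sample magnitudes, orientations in degrees and Gaussian window weights; the 80% rule follows the slide above, while the parabolic interpolation of peak positions used in the paper is omitted:

import numpy as np

def dominant_orientations(magnitudes, orientations_deg, gaussian_weights):
    # Step 3: 36 bins of 10 degrees, each sample weighted by its gradient
    # magnitude and its Gaussian circular-window weight.
    bins = (orientations_deg // 10).astype(int) % 36
    hist = np.zeros(36)
    np.add.at(hist, bins, magnitudes * gaussian_weights)
    # Step 4: keep every local peak within 80% of the highest peak; each
    # direction becomes its own keypoint with the same location and scale.
    threshold = 0.8 * hist.max()
    peaks = []
    for b in range(36):
        left, right = hist[(b - 1) % 36], hist[(b + 1) % 36]
        if hist[b] >= threshold and hist[b] > left and hist[b] > right:
            peaks.append(b * 10.0 + 5.0)  # bin centre, in degrees
    return peaks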

The Local image descriptor Every keypoint now has a location, scale and orientation, from which a repeatable 2D grid can be determined. We want distinctive descriptor vectors, partially invariant to illumination and viewpoint changes.

Computing the Local image descriptor Take the 16 x 16 sample array around the keypoint. Compute 4 x 4 orientation histograms from this array. Use 8 bins per histogram: 4 x 4 x 8 = 128 features.
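A simplified sketch of this layout, assuming the 16 x 16 patch of gradient magnitudes and orientations has already been rotated to the keypoint orientation; the Gaussian weighting and trilinear interpolation used in the paper are omitted:

import numpy as np

def raw_descriptor(patch_mag, patch_ori_deg):
    # Split the 16x16 patch into a 4x4 grid of 4x4-pixel cells and build
    # an 8-bin orientation histogram per cell: 4 * 4 * 8 = 128 features.
    descriptor = []
    for cy in range(4):
        for cx in range(4):
            mag = patch_mag[4 * cy:4 * cy + 4, 4 * cx:4 * cx + 4]
            ori = patch_ori_deg[4 * cy:4 * cy + 4, 4 * cx:4 * cx + 4]
            bins = (ori // 45).astype(int) % 8
            hist = np.zeros(8)
            np.add.at(hist, bins.ravel(), mag.ravel())
            descriptor.extend(hist)
    return np.array(descriptor)  # length 128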

Local image descriptor optimizations Normalize the obtained feature vector to enhance invariance to illumination changes. Reduce the influence of large gradient magnitudes by capping the normalized features to 0.2. Normalize again.
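The three operations map directly onto code; a minimal sketch (the small eps guards against division by zero and is not part of the paper):

import numpy as np

def normalize_descriptor(vec, cap=0.2, eps=1e-7):
    # Unit length gives invariance to affine illumination changes.
    vec = vec / (np.linalg.norm(vec) + eps)
    # Capping at 0.2 damps large gradient magnitudes caused by
    # non-linear lighting effects; then renormalize.
    vec = np.minimum(vec, cap)
    return vec / (np.linalg.norm(vec) + eps)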

Possible applications for SIFT We have a feature extraction method which yields useful keypoints; what's next? Some applications: object recognition in images, panorama stitching, 3D scene modelling, 3D human action tracking (for example for security surveillance), robot localisation and mapping.

Panorama stitching

Panorama stitching (Brown, ICCV 2003)

3D modelling (from Sudderth et al., 2006)

Application: SIFT for object recognition We can apply SIFT to recognize objects in images. Suppose we have an image that contains an object; how do we recognize it? Key idea: compare keypoints; if they are similar, it is likely to be the same object. First problem: many features arise from background clutter. How do we remove these? Possible approaches: look for clusters of matching features, and compare the distance of the closest match to that of the second-closest match.
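A brute-force sketch of the distance-ratio test, assuming desc_a and desc_b are N x 128 NumPy arrays of descriptors from the two images (the 0.8 ratio is the threshold discussed in the paper; a real implementation would use an approximate nearest-neighbour index rather than computing all distances):

import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    matches = []
    for i, d in enumerate(desc_a):
        # Distances from this descriptor to every descriptor in image B.
        dists = np.linalg.norm(desc_b - d, axis=1)
        nearest, second = np.argsort(dists)[:2]
        # Keep the match only if it is clearly better than the runner-up;
        # this filters most matches caused by background clutter.
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, nearest))
    return matches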

Efficiently locating the nearest neighbour Each keypoint has a 128-dimensional feature vector; in such a high-dimensional space no exact algorithm is known to beat exhaustive search, so the paper uses an approximate best-bin-first (BBF) search. However, as few as 3 consistent feature matches are enough to locate an object, for example when it is partially occluded. A Hough transform is used to cluster keypoint matches and let them 'vote' for the pose of an object, described by location, orientation and scale.
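A simplified sketch of that pose voting: each match between a model keypoint and an image keypoint implies a similarity transform, and matches are binned by the pose they predict. The bin widths (30 degrees in orientation, a factor of 2 in scale, 0.25 of the projected model size in location) follow the paper, but the paper's additional vote into neighbouring bins is omitted, and the keypoint tuple layout here is an assumption of this sketch:

import math
from collections import defaultdict

def hough_pose_votes(matches, model_size, loc_bin_frac=0.25):
    # matches: list of (model_kp, image_kp), each keypoint given as
    # (x, y, scale, orientation_deg).
    bins = defaultdict(list)
    for model_kp, image_kp in matches:
        mx, my, ms, mo = model_kp
        ix, iy, is_, io = image_kp
        d_theta = (io - mo) % 360.0
        scale = is_ / ms
        # Predict where the model origin lands in the image under the
        # similarity transform implied by this single match.
        rad = math.radians(d_theta)
        ox = ix - scale * (math.cos(rad) * mx - math.sin(rad) * my)
        oy = iy - scale * (math.sin(rad) * mx + math.cos(rad) * my)
        loc_bin = loc_bin_frac * model_size * scale
        key = (int(d_theta // 30),
               int(round(math.log2(scale))),
               int(ox // loc_bin),
               int(oy // loc_bin))
        bins[key].append((model_kp, image_kp))
    # Bins with at least 3 consistent matches are candidate object poses.
    return {k: v for k, v in bins.items() if len(v) >= 3}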

Application: robot vision, localization and mapping S. Se, D. G. Lowe, and J. Little, Vision-based Mobile Robot Localization And Mapping using Scale-Invariant Features, 2001. Application of SIFT to mobile robotics: SIFT features are combined with Simultaneous Localization And Map Building (SLAMB). Recognizing landmarks gives an estimate of the robot's position. Experiments in a 10m by 10m lab collected about 3000 features. Preliminary results: quite good.

Conclusions from the paper The keypoints SIFT extracts are indeed invariant to image rotation and scale, and robust to affine distortion, noise and changes in illumination. SIFT can be optimized to run in real time. The proposed approach (SIFT combined with a Hough transform for object recognition) has been shown to work reliably.

Discussion Is the SIFT method for keypoint extraction the best way to get distinctive features from images? Is SIFT biologically plausible? Is it important to have biologically inspired methods in object recognition / localization?

References Main article: Distinctive Image Features from Scale-Invariant Keypoints, D. G. Lowe, International Journal of Computer Vision 60, 2 (2004), pp. 91-110. Other articles: Depth from Familiar Objects: A Hierarchical Model for 3D Scenes, Sudderth et al., Proceedings of the 2006 IEEE Conference on Computer Vision and Pattern Recognition, volume II, 2006. Vision-based Mobile Robot Localization And Mapping using Scale-Invariant Features, S. Se, D. G. Lowe, and J. Little, 2001.