1
Distinctive Image Features from Scale-Invariant Keypoints By David G. Lowe, University of British Columbia Presented by: Tim Havinga, Joël van Neerbos and Robert van der Linden
2
Organization Introduction Keypoint extraction Applications
3
Introduction Matching images across affine transformations, changes in lighting, and changes in 3D viewpoint.
4
Introduction Motion tracking Object and scene recognition Stereo correspondence
5
Extracting features Extrema detection Keypoint localization Orientation assignment Local image descriptor
6
Extrema detection Blur copies of the image with broadening Gaussian filters.
7
Extrema detection Subtract adjacent blurred images (difference of Gaussians, DoG) to find local extrema.
8
Extrema detection Calculate the DoGs at a series of scales within each octave; between octaves, downsample the image by a factor of 2 and repeat.
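A minimal sketch of this stage in Python, assuming a grayscale image as a numpy array; sigma = 1.6 follows the paper, while k, num_scales and num_octaves here are illustrative choices rather than the paper's tuned values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_octave(image, sigma=1.6, num_scales=5, k=2 ** 0.5):
    """Blur copies of the image with broadening Gaussians, then
    subtract adjacent pairs to get the DoG stack for one octave."""
    blurred = [gaussian_filter(image, sigma * k ** i) for i in range(num_scales)]
    dogs = np.stack([blurred[i + 1] - blurred[i] for i in range(num_scales - 1)])
    return dogs, blurred

def dog_pyramid(image, num_octaves=4):
    """Between octaves, downsample by a factor of 2 and repeat."""
    image = image.astype(float)
    pyramid = []
    for _ in range(num_octaves):
        dogs, blurred = dog_octave(image)
        pyramid.append(dogs)
        image = blurred[-1][::2, ::2]  # take every second pixel
    return pyramid
```

Downsampling between octaves keeps the cost of covering large scales low, since each octave works on a quarter of the previous octave's pixels.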
10
Extrema detection [Figure: the image at successive levels of blur]
11
Keypoint localization Select keypoints that are higher or lower than all 26 of their neighbours: 8 in the same scale, plus 9 in each of the scales above and below.
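A sketch of that check, assuming dogs is a stacked (scale, row, column) DoG array like the one built above and that (s, y, x) is not on a border:

```python
import numpy as np

def is_extremum(dogs, s, y, x):
    """True if DoG sample (s, y, x) is strictly higher or lower than all
    26 neighbours in the 3 x 3 x 3 cube around it (scale, row, column)."""
    cube = dogs[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
    centre = dogs[s, y, x]
    others = np.delete(cube.ravel(), 13)  # drop the centre sample itself
    return centre > others.max() or centre < others.min()
```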
12
Keypoint localization Reject all points where the contrast is too low.
13
Keypoint localization Reject all points that lie on an edge.
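A hedged sketch of both rejection tests from the last two slides; the contrast threshold of 0.03 (for pixel values in [0, 1]) and the curvature ratio r = 10 are the values given in the paper, but for simplicity the contrast check here uses the raw sample rather than the interpolated extremum the paper refines first:

```python
def passes_contrast(dog, y, x, threshold=0.03):
    """Reject low-contrast points (threshold assumes values in [0, 1])."""
    return abs(dog[y, x]) >= threshold

def passes_edge_check(dog, y, x, r=10.0):
    """Reject points on edges via the ratio of principal curvatures of
    the 2x2 Hessian of the DoG image (the paper uses r = 10)."""
    dxx = dog[y, x + 1] + dog[y, x - 1] - 2 * dog[y, x]
    dyy = dog[y + 1, x] + dog[y - 1, x] - 2 * dog[y, x]
    dxy = (dog[y + 1, x + 1] - dog[y + 1, x - 1]
           - dog[y - 1, x + 1] + dog[y - 1, x - 1]) / 4.0
    trace, det = dxx + dyy, dxx * dyy - dxy * dxy
    return det > 0 and trace ** 2 / det < (r + 1) ** 2 / r
```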
14
Effects of this elimination Extrema detection
15
Effects of this elimination Contrast check
16
Effects of this elimination Edge check
17
Extracting features Extrema detection Keypoint localization Orientation assignment Local image descriptor
18
Orientation assignment Assign an orientation to a keypoint to make its descriptor invariant to rotation
19
Orientation assignment The orientation of a keypoint is determined in four steps:
1. Determine sample points
2. Determine the gradient magnitude and orientation of each sample point
3. Create an orientation histogram of the sample points
4. Extract the dominant directions from the histogram
20
Step 1: Determine sample points The source image is the Gaussian-smoothed image with the closest scale; use all pixels within a certain radius of the keypoint. [Figure: the keypoint's actual scale vs. the Gaussian scale used]
21
Step 2: Determine gradient magnitude and orientation of each sample point
Gradient magnitude: m(x, y) = √((L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²)
Gradient orientation: θ(x, y) = tan⁻¹((L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)))
where L is the Gaussian-smoothed image at the keypoint's scale.
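A direct translation of the formulas above, using arctan2 so the orientation keeps its full 360° range:

```python
import numpy as np

def gradient(L, y, x):
    """Pixel-difference gradient of the Gaussian-smoothed image L at (x, y)."""
    dx = L[y, x + 1] - L[y, x - 1]
    dy = L[y + 1, x] - L[y - 1, x]
    return np.hypot(dx, dy), np.arctan2(dy, dx)  # magnitude, orientation
```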
23
Step 3: Create an orientation histogram The histogram has 36 bins, each covering 10 degrees. Each sample is weighted by its gradient magnitude and by a Gaussian-weighted circular window around the keypoint.
24
Step 4: Extract dominant directions Take the peak(s) from the orientation histogram: use all peaks greater than 80% of the highest peak, and give every such direction its own keypoint.
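A combined sketch of steps 3 and 4, assuming flat arrays of per-sample magnitudes, orientations (in radians) and precomputed Gaussian window weights; the paper additionally interpolates peak positions between bins, which is omitted here:

```python
import numpy as np

def dominant_orientations(magnitudes, orientations, window_weights, num_bins=36):
    """Histogram sample orientations into 36 bins of 10 degrees, weighting
    each sample by gradient magnitude times its Gaussian window weight,
    then return every local peak reaching at least 80% of the highest."""
    degrees = np.degrees(orientations) % 360
    bins = (degrees // (360 / num_bins)).astype(int)
    hist = np.zeros(num_bins)
    np.add.at(hist, bins, magnitudes * window_weights)
    threshold = 0.8 * hist.max()
    peaks = []
    for b in range(num_bins):
        left, right = hist[b - 1], hist[(b + 1) % num_bins]  # circular
        if hist[b] >= threshold and hist[b] >= left and hist[b] >= right:
            peaks.append(b * (360 / num_bins))  # direction in degrees
    return peaks
```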
25
The Local image descriptor Every keypoint now has a location, scale and orientation, from which a repeatable 2D grid can be determined. We want distinctive descriptor vectors that are partially invariant to illumination and viewpoint changes.
26
Computing the Local image descriptor Take the 16 × 16 sample array around the keypoint and compute a 4 × 4 array of orientation histograms from it, one per 4 × 4 subregion. With 8 bins per histogram, that gives 4 × 4 × 8 = 128 features.
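A simplified sketch of that folding step, assuming 16 × 16 arrays of gradient magnitudes and orientations sampled on the keypoint's grid; the paper's Gaussian weighting and trilinear interpolation between bins are omitted:

```python
import numpy as np

def raw_descriptor(magnitudes, orientations, keypoint_orientation):
    """Fold 16x16 gradient samples into 4x4 subregion histograms with
    8 orientation bins each: 4 x 4 x 8 = 128 features. Orientations are
    taken relative to the keypoint's own orientation, which is what
    makes the descriptor rotation invariant."""
    rel = (orientations - keypoint_orientation) % (2 * np.pi)
    bins = (rel // (2 * np.pi / 8)).astype(int) % 8
    features = np.zeros((4, 4, 8))
    for y in range(16):
        for x in range(16):
            features[y // 4, x // 4, bins[y, x]] += magnitudes[y, x]
    return features.ravel()  # 128-dimensional descriptor
```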
27
Local image descriptor optimizations Normalize the obtained feature vector to enhance invariance to illumination changes. Reduce the influence of large gradient magnitudes by capping the normalized features at 0.2, then normalize again.
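That final step is short enough to state directly (0.2 is the cap from the paper):

```python
import numpy as np

def normalize_descriptor(v, cap=0.2):
    """Normalize to unit length, cap each feature at 0.2 to damp large
    gradient magnitudes, then normalize again."""
    v = v / np.linalg.norm(v)
    v = np.minimum(v, cap)
    return v / np.linalg.norm(v)
```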
28
Possible applications for SIFT We have a feature extraction method which yields useful keypoints; what's next? Some applications: object recognition in images, panorama stitching, 3D scene modelling, 3D human action tracking (for example for security surveillance), robot localisation and mapping.
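In practice one would typically use a library implementation; a minimal example with OpenCV's SIFT (the image filename is a hypothetical placeholder):

```python
import cv2

img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)  # any grayscale photo
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)
print(len(keypoints), "keypoints; descriptor array shape:", descriptors.shape)
```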
29
Panorama stitching
30
Panorama stitching (Brown, ICCV 2003)
31
3D modelling (from Sudderth et al., 2006)
32
Application: SIFT for object recognition We can apply SIFT to recognize objects in images. Say we have an image which contains an object: how do we recognize it? Key idea: compare keypoints; if these are similar, it is likely the same object. First problem: a lot of features arise from background clutter. How do we remove these? Possible approaches:
- Look for clusters of matching features
- Compare the distance of the closest match to that of the second-closest match (see the sketch below)
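A minimal sketch of that second approach, assuming two numpy arrays of 128-dimensional descriptors; the 0.8 ratio is the threshold suggested in the paper:

```python
import numpy as np

def ratio_test_matches(desc1, desc2, ratio=0.8):
    """Match each descriptor in desc1 to its nearest neighbour in desc2,
    keeping a match only when the closest distance is clearly smaller
    than the second-closest distance."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        nearest, second = np.argsort(dists)[:2]
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, nearest))
    return matches
```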
33
Efficiently locating the nearest neighbour With a 128-dimensional feature vector for each keypoint, no exact method does much better than exhaustive search for the nearest neighbour, so the paper falls back on an approximate best-bin-first search. But as few as 3 features are enough to locate an object, for example when it is occluded. The Hough transform is then used to let clusters of matching keypoints 'vote' for the pose of an object, described by location, orientation and scale.
35
Application: robot vision, localization and mapping (Se, Lowe and Little, 2001) Application of SIFT to mobile robotics: SIFT features combined with Simultaneous Localization And Map Building (SLAM). Recognizing landmarks: a 10 m × 10 m lab was mapped, with about 3000 features collected. Preliminary results: quite good.
37
Conclusions from the paper The keypoints SIFT extracts are indeed invariant to image rotation and scale, and robust to affine distortion, noise and changes in illumination. SIFT can be optimized to run in real time. The proposed approach (SIFT combined with the Hough transform for object recognition) has been shown to work reliably.
38
Discussion Is the SIFT method for keypoint extraction the best way to get distinctive features from images? Is SIFT biologically plausible? Is it important to have biologically inspired methods in object recognition / localization?
39
References
Main article: D. G. Lowe, Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision 60(2), 91-110, 2004.
Other articles:
M. Brown and D. G. Lowe, Recognising Panoramas. Proceedings of the International Conference on Computer Vision (ICCV), 2003.
E. Sudderth et al., Depth from Familiar Objects: A Hierarchical Model for 3D Scenes. Proceedings of the 2006 IEEE Conference on Computer Vision and Pattern Recognition, volume II, 2410-2417, 2006.
S. Se, D. G. Lowe and J. Little, Vision-based Mobile Robot Localization And Mapping using Scale-Invariant Features, 2001.