Feature description and matching

Matching feature points
We know how to detect good points. The next question: how do we match them?
Two interrelated questions:
- How do we describe each feature point?
- How do we match descriptions?

Feature descriptor
(Figure: feature points in the first image with descriptors x_1, x_2; points in the second image with descriptors y_1, y_2.)

Feature matching
Measure the distance (or similarity) between every pair of descriptors:

          y_1           y_2
x_1   d(x_1, y_1)   d(x_1, y_2)
x_2   d(x_2, y_1)   d(x_2, y_2)
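In NumPy this all-pairs table can be built in one shot; a minimal sketch (the function name and the one-descriptor-per-row layout are assumptions for illustration):

```python
import numpy as np

def pairwise_distances(X, Y):
    """Euclidean distance between every descriptor in X and every one in Y.

    X: (m, d) array of descriptors from image 1
    Y: (n, d) array of descriptors from image 2
    Returns an (m, n) matrix D with D[i, j] = ||X[i] - Y[j]||.
    """
    # ||x - y||^2 = ||x||^2 - 2 x.y + ||y||^2, computed for all pairs at once
    sq = (X**2).sum(1)[:, None] - 2 * X @ Y.T + (Y**2).sum(1)[None, :]
    return np.sqrt(np.maximum(sq, 0))
```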

Invariance vs. discriminability
Invariance: the distance between descriptors should be small even if the image is transformed.
Discriminability: the descriptor should be highly unique for each point (far from the descriptors of other points in the image).

Image transformations
Geometric: rotation, scale
Photometric: intensity change

Invariance
Most feature descriptors are designed to be invariant to translation, 2D rotation, and scale.
They can usually also handle:
- limited 3D rotations (SIFT works up to about 60 degrees)
- limited affine transformations (some are fully affine invariant)
- limited illumination/contrast changes

How to achieve invariance
Design an invariant feature descriptor.
Simplest descriptor: a single 0. What is this invariant to? Is it discriminative?
Next simplest descriptor: a single pixel.

The aperture problem

The aperture problem
Use a whole patch instead of a pixel?

SSD
Use the whole patch as the descriptor and match descriptors using squared Euclidean distance (the sum of squared differences):
d(x, y) = ||x − y||²
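A small sketch of SSD on image patches, assuming grayscale NumPy arrays of equal size:

```python
import numpy as np

def ssd(patch_a, patch_b):
    """Sum of squared differences between two equal-size patches."""
    diff = patch_a.astype(np.float64) - patch_b.astype(np.float64)
    return np.sum(diff**2)
```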

NCC - Normalized Cross Correlation
Lighting and color changes alter pixel intensities. Example: increasing brightness/contrast: I′ = αI + β
Subtract the patch mean (invariance to β):  x′ = x − ⟨x⟩
Divide by the vector's norm (invariance to α):  x″ = x′ / ||x′||
similarity = x″ · y″
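A minimal sketch of this normalization, assuming equal-size grayscale patches (the small epsilon is my guard against flat patches):

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two equal-size patches.

    Subtracting the mean removes the offset beta; dividing by the norm
    removes the gain alpha, so the score is invariant to I' = alpha*I + beta.
    """
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a = a - a.mean()
    b = b - b.mean()
    a /= np.linalg.norm(a) + 1e-12
    b /= np.linalg.norm(b) + 1e-12
    return np.dot(a, b)
```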

Basic correspondence
Image patch as descriptor, NCC as similarity.
Invariant to: photometric transformations? Translation? Rotation?

Rotation invariance for feature descriptors
Find the dominant orientation of the image patch. This is given by x_max, the eigenvector of the second moment matrix M corresponding to λ_max (the larger eigenvalue). Rotate the patch according to this angle.
Figure by Matthew Brown

Multiscale Oriented PatcheS descriptor
Take a 40x40 square window around the detected feature:
- Scale to 1/5 size (using prefiltering)
- Rotate to horizontal
- Sample an 8x8 square window centered at the feature
- Intensity normalize the window by subtracting the mean and dividing by the standard deviation
(40 pixels down to 8 pixels)
CSE 576: Computer Vision. Adapted from a slide by Matthew Brown.
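A rough sketch of the downsampling and normalization steps, assuming the oriented 40x40 patch has already been cut out (cv2.resize with INTER_AREA supplies the prefiltered downsample; the function name is mine):

```python
import cv2
import numpy as np

def mops_descriptor(patch40):
    """Turn an oriented 40x40 patch into a normalized 8x8 MOPS descriptor."""
    # Downsample 40x40 -> 8x8; INTER_AREA prefilters to avoid aliasing.
    small = cv2.resize(patch40.astype(np.float32), (8, 8),
                       interpolation=cv2.INTER_AREA)
    # Normalize intensities: zero mean, unit standard deviation.
    return (small - small.mean()) / (small.std() + 1e-8)
```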

Detections at multiple scales

Detour: image transformations
What does it mean to rotate a patch? Each pixel has coordinates (x, y), and a rotation is represented by a matrix R. The pixel's new coordinates are (x′, y′) = R(x, y), and the rotated image satisfies I′(x′, y′) = I(x, y).

Detour: image transformations
What if the destination pixel is fractional? Flip the computation (inverse warping): for every destination pixel, figure out its source pixel, then set I′(x′, y′) = I(x, y). Use interpolation if the source location is fractional.
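A deliberately slow, didactic sketch of inverse warping with bilinear interpolation (in practice you would call cv2.warpAffine, as the next slides do; the function name is an assumption):

```python
import numpy as np

def rotate_patch(img, angle):
    """Rotate a grayscale image by `angle` radians using inverse warping.

    For every destination pixel, apply the inverse rotation (about the
    patch center) to find the source location, then bilinearly
    interpolate its four integer neighbors.
    """
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    out = np.zeros_like(img, dtype=np.float64)
    c, s = np.cos(angle), np.sin(angle)
    for yd in range(h):
        for xd in range(w):
            # Inverse rotation of the destination coordinates.
            xs = c * (xd - cx) + s * (yd - cy) + cx
            ys = -s * (xd - cx) + c * (yd - cy) + cy
            x0, y0 = int(np.floor(xs)), int(np.floor(ys))
            if 0 <= x0 < w - 1 and 0 <= y0 < h - 1:
                ax, ay = xs - x0, ys - y0
                out[yd, xd] = ((1-ax)*(1-ay)*img[y0, x0] + ax*(1-ay)*img[y0, x0+1]
                               + (1-ax)*ay*img[y0+1, x0] + ax*ay*img[y0+1, x0+1])
    return out
```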

MOPS descriptor
You can combine the transformations to get the final transformation matrix T, then pass T to cv2.warpAffine:

Translate: T = M_T1

Rotate: T = M_R M_T1

Scale: T = M_S M_R M_T1

Translate: T = M_T2 M_S M_R M_T1

Crop: cv2.warpAffine also takes care of the cropping.
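A hypothetical sketch of composing these four matrices in homogeneous coordinates (the function name and parameterization are illustrative, and the prefiltering mentioned earlier is omitted; warpAffine interpolates bilinearly here):

```python
import cv2
import numpy as np

def mops_window(img, x, y, angle, size=40, out=8):
    """Cut an oriented, scaled 8x8 MOPS window around feature (x, y).

    Composes T = M_T2 * M_S * M_R * M_T1 in homogeneous coordinates and
    hands the top two rows to cv2.warpAffine, which also does the crop.
    """
    def H(m):  # promote a 2x3 matrix to its 3x3 homogeneous form
        return np.vstack([m, [0, 0, 1]])

    t1 = H(np.float32([[1, 0, -x], [0, 1, -y]]))              # feature -> origin
    c, s = np.cos(angle), np.sin(angle)
    r = H(np.float32([[c, s, 0], [-s, c, 0]]))                # undo dominant orientation
    sc = H(np.float32([[out/size, 0, 0], [0, out/size, 0]]))  # scale 40 -> 8 (1/5)
    t2 = H(np.float32([[1, 0, out/2], [0, 1, out/2]]))        # origin -> window center
    T = t2 @ sc @ r @ t1
    return cv2.warpAffine(img, T[:2], (out, out), flags=cv2.INTER_LINEAR)
```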

Invariance of MOPS
Intensity, scale, rotation.

Color and Lighting

Out-of-plane rotation

Better representation than color: edges
(Figure: types of edges - depth discontinuity, normal discontinuity, albedo edge, shadow.)

Towards a better feature descriptor
Match the pattern of edges; edge orientation is a clue to shape.
Be resilient to small deformations: deformations may move pixels around, but only slightly, and may change edge orientations, but only slightly.

Invariance to deformation by quantization
Example: orientations of 37° and 42° both fall into the same quantized bin, "between 30° and 45°", so a small deformation does not change the descriptor.

Spatial invariance by histograms
Example: "2 blue balls, one red box". A histogram records the counts (balls: 2, boxes: 1) but not the positions, so small spatial shifts do not change it.

Rotation Invariance by Orientation Normalization [Lowe, SIFT, 1999]
Compute the orientation histogram (over 0 to 2π), select the dominant orientation, and normalize: rotate the patch to a fixed orientation.
T. Tuytelaars, B. Leibe
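A minimal sketch of picking the dominant orientation from such a histogram (36 bins is the count Lowe uses for this step; the function name is mine):

```python
import numpy as np

def dominant_orientation(patch, n_bins=36):
    """Dominant gradient orientation of a patch, from a weighted histogram.

    Orientations are quantized into n_bins over [0, 2*pi) and weighted by
    gradient magnitude; the center of the strongest bin wins.
    """
    gy, gx = np.gradient(patch.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)
    hist, edges = np.histogram(ang, bins=n_bins, range=(0, 2 * np.pi),
                               weights=mag)
    k = int(np.argmax(hist))
    return 0.5 * (edges[k] + edges[k + 1])   # bin center, in radians
```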

The SIFT descriptor [Lowe, IJCV 2004]
Do scale-invariant feature detection in scale space using the Difference of Gaussians (DoG).

Scale Invariant Feature Transform
Basic idea: DoG for scale-space feature detection.
- Take a 16x16 square window around the detected feature
- Compute the gradient orientation for each pixel
- Throw out weak edges (threshold the gradient magnitude)
- Create a histogram of the surviving edge orientations (angle axis from 0 to 2π)
Adapted from a slide by David Lowe

SIFT descriptor
Create the histogram: divide the 16x16 window into a 4x4 grid of cells (the figure shows the 2x2 case) and compute an orientation histogram for each cell.
16 cells × 8 orientations = 128-dimensional descriptor.
Adapted from a slide by David Lowe
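A simplified sketch of that cell layout, omitting SIFT's Gaussian weighting and the interpolation discussed on the next slides:

```python
import numpy as np

def sift_like_descriptor(window):
    """Simplified SIFT layout: 4x4 grid of cells x 8 orientation bins = 128-D.

    `window` is a 16x16 grayscale patch already rotated to its dominant
    orientation.
    """
    gy, gx = np.gradient(window.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)
    desc = []
    for i in range(0, 16, 4):            # 4x4 grid of 4x4-pixel cells
        for j in range(0, 16, 4):
            h, _ = np.histogram(ang[i:i+4, j:j+4], bins=8,
                                range=(0, 2 * np.pi),
                                weights=mag[i:i+4, j:j+4])
            desc.append(h)
    return np.concatenate(desc)          # shape (128,)
```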

SIFT vector formation
Computed on a rotated and scaled version of the window: resample the window according to the computed orientation and scale.
Based on gradients weighted by a Gaussian.

Ensure smoothness
Trilinear interpolation: a given gradient contributes to 8 bins (4 in space × 2 in orientation).

Reduce effect of illumination
Normalize the 128-dim vector to unit length.
Threshold gradient magnitudes to avoid excessive influence of high gradients: after normalization, clamp values > 0.2, then renormalize.
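A sketch of this normalize-clamp-renormalize step (the epsilon is my guard against zero vectors):

```python
import numpy as np

def normalize_descriptor(desc, clamp=0.2):
    """SIFT illumination normalization: normalize, clamp at 0.2, renormalize."""
    desc = desc / (np.linalg.norm(desc) + 1e-12)   # unit length
    desc = np.minimum(desc, clamp)                 # limit high-gradient influence
    return desc / (np.linalg.norm(desc) + 1e-12)   # renormalize
```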

Properties of SIFT
An extraordinarily robust matching technique:
- Can handle changes in viewpoint (up to about 60 degrees of out-of-plane rotation)
- Can handle significant changes in illumination (sometimes even day vs. night)
- Fast and efficient; can run in real time
Lots of code available: http://people.csail.mit.edu/albert/ladypack/wiki/index.php/Known_implementations_of_SIFT

Summary
Keypoint detection: repeatable and distinctive (corners, blobs, stable regions; Harris, DoG).
Descriptors: robust and selective (spatial histograms of orientation).
SIFT and its variants are typically good for stitching and recognition, but you need not stick to one.

Which features match?

Feature matching
Given a feature in I1, how do we find the best match in I2?
Define a distance function that compares two descriptors, then test all the features in I2 and pick the one with the minimum distance.

Feature distance
How do we define the difference between two features f1 and f2?
Simple approach: L2 distance, ||f1 − f2||. Problem: it can give good scores to ambiguous (incorrect) matches.

Feature distance
Better approach: the ratio distance = ||f1 − f2|| / ||f1 − f2′||, where
- f2 is the best SSD match to f1 in I2
- f2′ is the 2nd-best SSD match to f1 in I2
This gives large values for ambiguous matches.
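A sketch of this ratio test over descriptor arrays (the 0.8 threshold is the value suggested in Lowe's IJCV 2004 paper; the function name and layout are illustrative):

```python
import numpy as np

def match_ratio_test(desc1, desc2, threshold=0.8):
    """Match descriptors with the ratio test.

    For each descriptor in desc1 (one per row), find its two nearest
    neighbors in desc2 and keep the match only if the ratio of best to
    second-best distance is below the threshold.
    """
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)   # distance to every descriptor
        j, k = np.argsort(dists)[:2]                # best and second-best
        if dists[j] / (dists[k] + 1e-12) < threshold:
            matches.append((i, j))
    return matches
```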