Image Features Computer Vision Bill Freeman Image and shape descriptors: Harris corner detectors and SIFT features. With slides from Darya Frolova, Denis Simakov, David Lowe szeliski 2.1.5

2 A magical feature finder: take a photo; it finds distinctive features and matches their appearance in another image of the same thing (under different lighting, viewpoint, etc.), or matches the appearance of a feature on a similar thing. The features found should be rich enough to be distinctive. What could you do with such a magical feature finder? Make panoramas (easy), 3D reconstruction (harder), matches over time (harder), matches within an object category (harder).

Uses for feature point detectors and descriptors in computer vision and graphics. –Image alignment and building panoramas –3D reconstruction –Motion tracking –Object recognition –Indexing and database retrieval –Robot navigation –… other

How do we build a panorama? We need to match (align) images

Matching with Features Detect feature points in both images

Matching with Features Detect feature points in both images Find corresponding pairs

Matching with Features Detect feature points in both images Find corresponding pairs Use these matching pairs to align images - the required mapping is called a homography.

Matching with Features Problem 1: –Detect the same point independently in both images. If a point is detected in only one image (counter-example shown), there is no chance to match! We need a repeatable detector.

Matching with Features Problem 2: –For each point correctly recognize the corresponding one ? We need a reliable and distinctive descriptor

Building a Panorama M. Brown and D. G. Lowe. Recognising Panoramas. ICCV 2003

Last time We assumed we knew feature correspondences between images. Align the images into a virtual wider-angle view –the mapping p' = Hp is a homography –given 4 pairs of points, it is a linear system. Today: automatic correspondence.
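The "given 4 pairs of points, linear system" step can be sketched with the standard direct linear transform (DLT). A minimal sketch with NumPy; the function name and the use of SVD to solve the homogeneous system are my choices, not from the slides:

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src (homogeneous coords)
    from >= 4 point correspondences, via the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # each correspondence gives two linear equations in the 9 entries of H
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # h is the null vector of A: the right singular vector of smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

# usage: 4 corners of a unit square, translated by (2, 3)
H = homography_from_points([(0, 0), (1, 0), (0, 1), (1, 1)],
                           [(2, 3), (3, 3), (2, 4), (3, 4)])
p = H @ np.array([2.0, 2.0, 1.0])
p = p / p[2]  # dehomogenize; a 5th point (2, 2) should map to (4, 5)
```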

21 What correspondences would you pick? Goal: click on the same scene point in both images.

22 Discussion In this lecture, we'll seek to derive detectors and descriptors that work perfectly. But in reality, they will never be 100% perfect. Next time, we'll see how we can use the large number of feature points we can extract to reject outliers.

Preview Detector: detect the same scene points independently in both images. Descriptor: encode the local neighboring window –note how the scale & rotation of the window are the same in both images (but computed independently). Correspondence: find the most similar descriptor in the other image. (Note: here the viewpoint is different; this is not a panorama.)

24 Outline Feature point detection –Harris corner detector –finding a characteristic scale Local image description –SIFT features

25 Important We run the detector and compute descriptors independently in the various images. It is only at the end, given a bag of feature points and their descriptors, that we compute correspondences. But for the derivation, we will look at how the detector and descriptor behave for corresponding scene points, to check for consistency.

Selecting Good Features What is a “good feature”? –Satisfies brightness constancy –Has sufficient texture variation –Does not have too much texture variation –Corresponds to a “real” surface patch –Does not deform too much over time

27 Goals Detector (location of feature point) –precisely localized in space Descriptor (used to match between images)

More motivation… Feature points are used also for: –Image alignment (homography, fundamental matrix) –3D reconstruction –Motion tracking –Object recognition –Indexing and database retrieval –Robot navigation –… other

Contents Harris Corner Detector –Algorithm –Analysis Detectors –Rotation invariant –Scale invariant Descriptors –Rotation invariant –Scale invariant

Harris corner detector C.Harris, M.Stephens. “A Combined Corner and Edge Detector”. 1988. Darya Frolova, Denis Simakov The Weizmann Institute of Science

The Basic Idea We should be able to easily localize the point by looking through a small window: shifting the window in any direction should give a large change in pixel intensities within the window –this makes the location precisely defined. Darya Frolova, Denis Simakov The Weizmann Institute of Science

Corner Detector: Basic Idea “flat” region: no change in all directions “edge”: no change along the edge direction “corner”: significant change in all directions Darya Frolova, Denis Simakov The Weizmann Institute of Science

The Moravec Detector Compute the SSD (sum of squared differences) between the window and its 8 neighbor windows (shifted by 1 pixel). Corner strength: the minimum difference with the neighbors. Problem: not rotation invariant, because the diagonal shifts take a bigger step and only a small number of directions (8) is checked.
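The Moravec idea can be sketched in a few lines of NumPy. The window radius and the synthetic test image below are illustrative choices, not from the slides:

```python
import numpy as np

def moravec_strength(img, r=2):
    """Moravec corner strength: the minimum SSD between the window around each
    pixel and the 8 windows shifted by one pixel."""
    img = img.astype(float)
    strength = np.full(img.shape, np.inf)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if (dy, dx) == (0, 0):
                continue
            diff = (np.roll(img, (dy, dx), axis=(0, 1)) - img) ** 2
            # box-window sum of the squared differences, via an integral image
            p = np.pad(diff, r, mode='edge')
            c = np.zeros((p.shape[0] + 1, p.shape[1] + 1))
            c[1:, 1:] = p.cumsum(axis=0).cumsum(axis=1)
            n = 2 * r + 1
            ssd = c[n:, n:] - c[:-n, n:] - c[n:, :-n] + c[:-n, :-n]
            strength = np.minimum(strength, ssd)
    return strength

# usage: a white square on black -- corners score high, edges and flat regions low,
# because a shift along an edge leaves the window almost unchanged
img = np.zeros((40, 40))
img[10:30, 10:30] = 1.0
S = moravec_strength(img)
```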

Contents Harris Corner Detector –Algorithm –Analysis Detectors –Rotation invariant –Scale invariant Descriptors –Rotation invariant –Scale invariant

Harris Detector: Mathematics Window-averaged squared change of intensity induced by shifting the image data by [u,v]: E(u,v) = Σx,y w(x,y) [I(x+u, y+v) − I(x,y)]², where I(x,y) is the intensity, I(x+u, y+v) the shifted intensity, and the window function w(x,y) is either 1 in the window and 0 outside, or a Gaussian. Darya Frolova, Denis Simakov The Weizmann Institute of Science

Taylor series approximation to shifted image gives quadratic form for error as function of image shifts.

Harris Detector: Mathematics Expanding I(x,y) in a Taylor series, we have, for small shifts [u,v], a quadratic approximation to the error surface between a patch and itself, shifted by [u,v]: E(u,v) ≈ [u v] M [u v]ᵀ, where M is a 2×2 matrix computed from image derivatives: M = Σx,y w(x,y) [Ix² IxIy; IxIy Iy²]. M is often called the structure tensor. Darya Frolova, Denis Simakov The Weizmann Institute of Science
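Written out, the step from the windowed SSD to the quadratic form uses the first-order Taylor expansion I(x+u, y+v) ≈ I(x,y) + u Ix + v Iy:

```latex
E(u,v) = \sum_{x,y} w(x,y)\,\bigl[I(x+u,\,y+v) - I(x,y)\bigr]^2
       \approx \sum_{x,y} w(x,y)\,(u I_x + v I_y)^2
       = \begin{bmatrix} u & v \end{bmatrix} M \begin{bmatrix} u \\ v \end{bmatrix},
\qquad
M = \sum_{x,y} w(x,y)
\begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}.
```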

Harris Detector: Mathematics Intensity change for a shifting window: eigenvalue analysis. λ1, λ2 – eigenvalues of M. The iso-contour of the squared error, E(u,v) = const, is an ellipse whose axes lie along the directions of the slowest and fastest change, with lengths proportional to (λmax)^(−1/2) and (λmin)^(−1/2). Darya Frolova, Denis Simakov The Weizmann Institute of Science

Selecting Good Features  1 and  2 are large Image patch Error surface Darya Frolova, Denis Simakov The Weizmann Institute of Science 12x10^5

Selecting Good Features large λ1, small λ2. Image patch and its error surface (peak ≈ 9×10^5). Darya Frolova, Denis Simakov The Weizmann Institute of Science

Selecting Good Features small λ1, small λ2 (contrast auto-scaled). Image patch and its error surface (vertical scale exaggerated relative to previous plots; peak ≈ 200). Darya Frolova, Denis Simakov The Weizmann Institute of Science

Harris Detector: Mathematics 11 22 “Corner”  1 and  2 are large,  1 ~  2 ; E increases in all directions  1 and  2 are small; E is almost constant in all directions “Edge”  1 >>  2 “Edge”  2 >>  1 “Flat” region Classification of image points using eigenvalues of M: Darya Frolova, Denis Simakov The Weizmann Institute of Science

Harris Detector: Mathematics Measure of corner response: R = det M − k (trace M)², with det M = λ1 λ2 and trace M = λ1 + λ2 (k – empirical constant, k ≈ 0.04–0.06). (Shi-Tomasi variation: use min(λ1, λ2) instead of R.) Darya Frolova, Denis Simakov The Weizmann Institute of Science
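Numerically, R separates the three cases (k = 0.05 here, within the empirical range):

```python
def corner_response(l1, l2, k=0.05):
    # R = det(M) - k * trace(M)^2, written in terms of the eigenvalues of M
    return l1 * l2 - k * (l1 + l2) ** 2

print(corner_response(10.0, 10.0))  # both eigenvalues large: R large and positive (corner)
print(corner_response(10.0, 0.1))   # one large, one small: R negative (edge)
print(corner_response(0.1, 0.1))    # both small: |R| small (flat region)
```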

Harris Detector: Mathematics 11 22 “Corner” “Edge” “Flat” R depends only on eigenvalues of M R is large for a corner R is negative with large magnitude for an edge |R| is small for a flat region R > 0 R < 0 |R| small Darya Frolova, Denis Simakov The Weizmann Institute of Science

Harris Detector The Algorithm: –Find points with large corner response function R (R > threshold) –Take the points of local maxima of R

49 Harris corner detector algorithm Compute image gradients Ix, Iy for all pixels. For each pixel –compute M by looping over the neighbors x,y in the window: M = Σx,y w(x,y) [Ix² IxIy; IxIy Iy²] –compute R = det M − k (trace M)². Find points with large corner response function R (R > threshold). Take the points of locally maximum R as the detected feature points (i.e., pixels where R is bigger than for all the 4 or 8 neighbors). Darya Frolova, Denis Simakov The Weizmann Institute of Science
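The algorithm above can be sketched in NumPy. This is a minimal sketch: a box window instead of a Gaussian, and illustrative values for the window size, k, and the threshold:

```python
import numpy as np

def harris_response(img, window=5, k=0.05):
    """Corner response R = det(M) - k*trace(M)^2 at every pixel,
    with M summed over a box window (a Gaussian window is also common)."""
    Iy, Ix = np.gradient(img.astype(float))  # image gradients
    r = window // 2

    def box_sum(a):
        # windowed sum via an integral image
        p = np.pad(a, r, mode='edge')
        c = np.zeros((p.shape[0] + 1, p.shape[1] + 1))
        c[1:, 1:] = p.cumsum(axis=0).cumsum(axis=1)
        n = 2 * r + 1
        return c[n:, n:] - c[:-n, n:] - c[n:, :-n] + c[:-n, :-n]

    Sxx, Syy, Sxy = box_sum(Ix * Ix), box_sum(Iy * Iy), box_sum(Ix * Iy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

def harris_corners(img, window=5, k=0.05, rel_thresh=0.01):
    """Threshold R, then keep local maxima over the 8-neighborhood."""
    R = harris_response(img, window, k)
    Rp = np.pad(R, 1, mode='constant', constant_values=-np.inf)
    neighbors = np.stack([Rp[1 + dy:Rp.shape[0] - 1 + dy, 1 + dx:Rp.shape[1] - 1 + dx]
                          for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                          if (dy, dx) != (0, 0)])
    mask = (R > rel_thresh * R.max()) & (R >= neighbors.max(axis=0))
    return np.argwhere(mask)  # (row, col) feature points

# usage: the corners of a white square on a black background
img = np.zeros((40, 40))
img[10:30, 10:30] = 1.0
corners = harris_corners(img)
```

Note how edges are rejected automatically: there R is negative, so the threshold on R removes them before the local-maximum step.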

Harris Detector The Algorithm: –Find points with large corner response function R (R > threshold) –Take the points of local maxima of R

Harris Detector: Workflow Darya Frolova, Denis Simakov The Weizmann Institute of Science

Harris Detector: Workflow Compute corner response R Darya Frolova, Denis Simakov The Weizmann Institute of Science

Harris Detector: Workflow Find points with large corner response: R>threshold Darya Frolova, Denis Simakov The Weizmann Institute of Science

Harris Detector: Workflow Take only the points of local maxima of R Darya Frolova, Denis Simakov The Weizmann Institute of Science

Harris Detector: Workflow Darya Frolova, Denis Simakov The Weizmann Institute of Science

Harris Detector: Summary Average intensity change in direction [u,v] can be expressed as a bilinear form: E(u,v) ≈ [u v] M [u v]ᵀ. Describe a point in terms of the eigenvalues of M: measure of corner response R = det M − k (trace M)². A good (corner) point should have a large intensity change in all directions, i.e. R should be large and positive. Darya Frolova, Denis Simakov The Weizmann Institute of Science

59 Comments on Harris detector We haven't analyzed how consistent in position the local-max-of-R feature is. We'll show some results on that later. If we reverse the image contrast, the local-max-of-R feature gives the exact same feature positions--do we want that? How might you address the problem below: a bad feature (indistinguishable locally from a good feature) vs. a good feature, seen from Camera A and Camera B.

60 Analysis of Harris corner detector invariance properties Geometry –rotation –scale Photometry –intensity change

Contents Harris Corner Detector –Algorithm –Analysis Detectors –Rotation invariant –Scale invariant Descriptors –Rotation invariant –Scale invariant

62 Possible changes between two images from the same viewpoint Geometry –rotation about the optical axis (2D rotation) –perspective distortion (wide angle) –scale (different zoom, or local scale due to perspective distortion) –radial distortion (lens aberration) Photometry (pixel values) –lighting changes, clouds move –exposure not the same –vignetting

Models of Image Change Geometry –Rotation –Similarity (rotation + uniform scale) –Affine (scale dependent on direction) valid for: orthographic camera, locally planar object Photometry –Affine intensity change ( I → a I + b)

We want to: detect the same interest points regardless of image changes

Harris Detector: Some Properties Rotation invariance?

Harris Detector: Some Properties Rotation invariance Ellipse rotates but its shape (i.e. eigenvalues) remains the same Corner response R is invariant to image rotation Eigen analysis allows us to work in the canonical frame of the linear form.

Rotation Invariant Detection Harris Corner Detector C.Schmid et.al. “Evaluation of Interest Point Detectors”. IJCV 2000 This gives us rotation invariant detection, but we’ll need to do more to ensure a rotation invariant descriptor... ImpHarris: derivatives are computed more precisely by replacing the [ ] mask with derivatives of a Gaussian (sigma = 1).

Evaluation plots are from this paper

Harris Detector: Some Properties Invariance to image intensity change?

Harris Detector: Some Properties Partial invariance to additive and multiplicative intensity changes. Only derivatives are used => invariance to intensity shift I → I + b. Intensity scaling I → a I is fine, except for the threshold that's used to specify when R is large enough (plots: R vs. image coordinate before and after scaling, against a fixed threshold). Darya Frolova, Denis Simakov The Weizmann Institute of Science

Harris Detector: Some Properties Invariant to image scale? (original image vs. zoomed image)

Harris Detector: Some Properties Not invariant to image scale! At a fine scale, all points along the curve are classified as edges; only at a coarser scale is the corner detected as a corner. Darya Frolova, Denis Simakov The Weizmann Institute of Science

Harris Detector: Some Properties Quality of the Harris detector for different scale changes. Repeatability rate = # correspondences / # possible correspondences. C.Schmid et.al. “Evaluation of Interest Point Detectors”. IJCV 2000. Darya Frolova, Denis Simakov The Weizmann Institute of Science

75 Note So far we have not talked about how to compute correspondences. We just want to make sure that the same points are detected in both images, so that the later correspondence step has a chance to work.

Contents Harris Corner Detector –Description –Analysis Detectors –Rotation invariant –Scale invariant –Affine invariant Descriptors –Rotation invariant –Scale invariant –Affine invariant

Contents Harris Corner Detector –Algorithm –Analysis Detectors –Rotation invariant –Scale invariant Descriptors –Rotation invariant –Scale invariant

78 Goals for handling scale/rotation Ensure invariance –Same scene points must be detected in all images Prepare terrain for descriptor –Descriptors are based on a local neighborhood –Find canonical scale and angle for neighborhood (independently in all images)

Contents Harris Corner Detector –Algorithm –Analysis Detectors –Rotation invariant –Scale invariant Descriptors –Rotation invariant –Scale invariant

Scale Invariant Detection Consider regions (e.g. circles) of different sizes around a point Regions of corresponding sizes will look the same in both images The features look the same to these two operators. Darya Frolova, Denis Simakov The Weizmann Institute of Science

Scale Invariant Detection The problem: how do we choose corresponding circles independently in each image? Do objects in the image have a characteristic scale that we can identify? Darya Frolova, Denis Simakov The Weizmann Institute of Science

Detection over scale Requires a method to repeatably select points in location and scale: The only reasonable scale-space kernel is a Gaussian (Koenderink, 1984; Lindeberg, 1994) An efficient choice is to detect peaks in the difference of Gaussian pyramid (Burt & Adelson, 1983; Crowley & Parker, 1984 – but examining more scales) Difference-of-Gaussian with constant ratio of scales is a close approximation to Lindeberg’s scale-normalized Laplacian (can be shown from the heat diffusion equation)

Scale Invariant Detection Functions for determining scale. Kernels: Laplacian L = σ² |Gxx(x,y,σ) + Gyy(x,y,σ)| (2nd derivative of Gaussian), and DoG = |G(x,y,kσ) − G(x,y,σ)| (Difference of Gaussians), where G(x,y,σ) = 1/(2πσ²) · exp(−(x²+y²)/(2σ²)) is the Gaussian. Note: both kernels are invariant to scale and rotation. Darya Frolova, Denis Simakov The Weizmann Institute of Science

Scale Invariant Detectors Harris-Laplacian 1 Find local maximum of: –Harris corner detector in space (image coordinates) –Laplacian in scale (search over x, y, and scale). 1 K.Mikolajczyk, C.Schmid. “Indexing Based on Scale Invariant Interest Points”. ICCV 2001. Darya Frolova, Denis Simakov The Weizmann Institute of Science

85 Laplacian of Gaussian for selection of characteristic scale

Scale Invariant Detectors Harris-Laplacian 1 Find local maximum of: –Harris corner detector in space (image coordinates) –Laplacian in scale. SIFT (Lowe) 2 Find local maximum of: –Difference of Gaussians in space and scale. 1 K.Mikolajczyk, C.Schmid. “Indexing Based on Scale Invariant Interest Points”. ICCV 2001. 2 D.Lowe. “Distinctive Image Features from Scale-Invariant Keypoints”. IJCV 2004. Darya Frolova, Denis Simakov The Weizmann Institute of Science

87 Scale-space example: 3 bumps of different widths. (1-d bumps, blurred with Gaussians of increasing width; the scale space is displayed as an image.)

Scale Invariant Detection Solution: –Design a function on the region (circle) which is “scale invariant” (the same response for corresponding, scaled regions, even if they are at different scales). Example: average intensity; for corresponding regions (even of different sizes) it will be the same. –For a point in one image, we can consider it as a function of region size (circle radius): plot f vs. region size for Image 1 and Image 2 (here, scale = 1/2). Darya Frolova, Denis Simakov The Weizmann Institute of Science

Scale Invariant Detection Common approach: take a local maximum (w.r.t. radius) of this function f of region size. Observation: the region sizes s1, s2 at which the maximum is achieved in the two images should be invariant to image scale and only sensitive to the underlying image data. Important: this scale-invariant region size is found in each image independently! Darya Frolova, Denis Simakov The Weizmann Institute of Science

Scale Invariant Detection A “good” function for scale detection has one stable, sharp peak as a function of region size; functions with broad or multiple peaks are bad. For usual images, a good function is one which responds to contrast (sharp local intensity change). Darya Frolova, Denis Simakov The Weizmann Institute of Science

93 Gaussian and difference-of-Gaussian filters

94 The bumps, filtered by difference-of-Gaussian filters (scale space).

95 The bumps, filtered by difference-of-Gaussian filters (panels a, b, c; cross-sections along the red lines are plotted on the next slide).

96 Scales of peak responses are proportional to bump width (the characteristic scale of each bump): [1.7, 3, 5.2] ./ [5, 9, 15] ≈ [0.34, 0.33, 0.35]. (Panels a, b, c: the difference-of-Gaussian filters giving peak response have sigma = 1.7, 3, and 5.2, respectively.)

97 Scales of peak responses are proportional to bump width (the characteristic scale of each bump): [1.7, 3, 5.2] ./ [5, 9, 15] ≈ 0.34 in each case. Note that each max-response filter has the same relationship to the bump that it favors (the zero crossings of the filter are about at the bump edges). So the scale-space analysis correctly picks out the “characteristic scale” for each of the bumps. More generally, this happens for the features of the images we analyze.
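The bump experiment can be reproduced in 1-D with NumPy. The sigma grid, the DoG scale ratio k = 1.26, and the bump construction are illustrative choices, not the slides' exact setup:

```python
import numpy as np

def gauss1d(sigma):
    """Normalized 1-D Gaussian kernel."""
    radius = max(1, int(4 * sigma))
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    return g / g.sum()

def characteristic_scale(signal, sigmas, k=1.26):
    """Sigma at which the difference-of-Gaussian response peaks."""
    responses = []
    for s in sigmas:
        g1 = np.convolve(signal, gauss1d(s), mode='same')
        g2 = np.convolve(signal, gauss1d(k * s), mode='same')
        responses.append(np.max(np.abs(g2 - g1)))  # strongest DoG response at this scale
    return sigmas[int(np.argmax(responses))]

def bump(width, n=200):
    sig = np.zeros(n)
    sig[n // 2 - width // 2: n // 2 + (width + 1) // 2] = 1.0
    return sig

sigmas = np.linspace(0.5, 8.0, 76)
scales = [characteristic_scale(bump(w), sigmas) for w in (5, 9, 15)]
# wider bumps peak at proportionally larger sigmas
```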

Scale Invariant Detectors Experimental evaluation of detectors w.r.t. scale change. Repeatability rate = # correspondences / # possible correspondences. K.Mikolajczyk, C.Schmid. “Indexing Based on Scale Invariant Interest Points”. ICCV 2001. Darya Frolova, Denis Simakov The Weizmann Institute of Science

Scale Invariant Detection Compare to human vision: eye’s response Shimon Ullman, Introduction to Computer and Human Vision Course, Fall Darya Frolova, Denis Simakov The Weizmann Institute of Science

blob detection; Marr 1982; Voorhees and Poggio 1987; Blostein and Ahuja 1989; … (trace and det measures plotted across scale; from Lindeberg 1998)

Scale Invariant Detection: Summary Given: two images of the same scene with a large scale difference between them. Goal: find the same interest points independently in each image. Solution: search for maxima of suitable functions in scale and in space (over the image). Methods: 1. Harris-Laplacian [Mikolajczyk, Schmid]: maximize Laplacian over scale, Harris’ measure of corner response over the image. 2. SIFT [Lowe]: maximize Difference of Gaussians over scale and space.

Scale space processed one octave at a time Darya Frolova, Denis Simakov The Weizmann Institute of Science

Repeatability vs number of scales sampled per octave David G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, 60, 2 (2004), pp

Some details of key point localization over scale and space Detect maxima and minima of difference-of-Gaussian in scale space. Fit a quadratic to surrounding values for sub-pixel and sub-scale interpolation (Brown & Lowe, 2002). Taylor expansion around the point, with x = (x, y, σ)ᵀ: D(x) = D + (∂D/∂x)ᵀ x + ½ xᵀ (∂²D/∂x²) x. Offset of extremum (use finite differences for derivatives): x̂ = −(∂²D/∂x²)⁻¹ (∂D/∂x). Darya Frolova, Denis Simakov The Weizmann Institute of Science
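In 1-D (per dimension, using the finite differences the slide mentions), the quadratic fit through three neighboring samples gives a closed-form sub-pixel offset. A minimal sketch:

```python
def subpixel_offset(f_minus, f_0, f_plus):
    """Offset of the extremum of the parabola through (-1, f_minus), (0, f_0),
    (1, f_plus); this is -D'/D'' with finite-difference derivatives. It lies in
    [-0.5, 0.5] when the middle sample is a true local extremum."""
    deriv = (f_plus - f_minus) / 2.0
    second = f_plus - 2.0 * f_0 + f_minus
    return -deriv / second

# usage: sample y = -(x - 0.3)^2 at x = -1, 0, 1; the fit recovers the peak at 0.3
offset = subpixel_offset(-(-1 - 0.3) ** 2, -(0 - 0.3) ** 2, -(1 - 0.3) ** 2)
```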

Scale and Rotation Invariant Detection: Summary Given: two images of the same scene with a large scale difference and/or rotation between them Goal: find the same interest points independently in each image Solution: search for maxima of suitable functions in scale and in space (over the image). Also, find characteristic orientation. Methods: 1.Harris-Laplacian [Mikolajczyk, Schmid]: maximize Laplacian over scale, Harris’ measure of corner response over the image 2.SIFT [Lowe]: maximize Difference of Gaussians over scale and space

Example of keypoint detection

Recap We have scale & rotation invariant distinctive feature We have a characteristic scale We have a characteristic orientation Now we need to find a descriptor of this point –For matching with other images

Recall: Matching with Features Problem 1: –Detect the same point independently in both images no chance to match! We need a repeatable detector: DONE Good Darya Frolova, Denis Simakov The Weizmann Institute of Science

Recall: Matching with Features Problem 2: –For each point correctly recognize the corresponding one ? We need a reliable and distinctive descriptor Darya Frolova, Denis Simakov The Weizmann Institute of Science

Contents Harris Corner Detector –Algorithm –Analysis Detectors –Rotation invariant –Scale invariant Descriptors –Rotation invariant –Scale invariant

Point Descriptors We know how to detect points Next question: How to match them? ? Point descriptor should be: 1.Invariant 2.Distinctive
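Once descriptors exist, matching is nearest-neighbor search in descriptor space. The sketch below adds Lowe's ratio test for distinctiveness; the 0.8 threshold is a common choice from the SIFT paper, not from these slides:

```python
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.8):
    """For each descriptor in desc1, find its nearest neighbor in desc2.
    Accept the match only if the nearest distance is clearly smaller than the
    second-nearest (Lowe's ratio test) -- this enforces distinctiveness."""
    matches = []
    for i, d in enumerate(desc1):
        dist = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dist)
        if dist[order[0]] < ratio * dist[order[1]]:
            matches.append((i, int(order[0])))
    return matches

# usage: three toy 3-d "descriptors" and a shuffled copy of them
d1 = np.eye(3)
d2 = np.array([d1[1], d1[2], d1[0]])
pairs = match_descriptors(d1, d2)
```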

Contents Harris Corner Detector –Overview –Analysis Detectors –Rotation invariant –Scale invariant –Affine invariant Descriptors –Rotation invariant –Scale invariant –Affine invariant

Descriptors Invariant to Rotation (1) Harris corner response measure: depends only on the eigenvalues of the matrix M C.Harris, M.Stephens. “A Combined Corner and Edge Detector”. 1988

Descriptors Invariant to Rotation (2) Image moments in polar coordinates: m_kl = ∫∫ r^k e^(ilθ) I(r,θ) dr dθ. Rotation in polar coordinates is translation of the angle: θ → θ + θ0. This transformation changes only the phase of the moments, not their magnitude. A rotation-invariant descriptor consists of the magnitudes of the moments; matching is done by comparing the vectors [|m_kl|]_k,l. J.Matas et.al. “Rotational Invariants for Wide-baseline Stereo”. Research Report of CMP, 2003

Descriptors Invariant to Rotation (3) Find local orientation Dominant direction of gradient Compute image derivatives relative to this orientation 1 K.Mikolajczyk, C.Schmid. “Indexing Based on Scale Invariant Interest Points”. ICCV D.Lowe. “Distinctive Image Features from Scale-Invariant Keypoints”. Accepted to IJCV 2004

Contents Harris Corner Detector –Overview –Analysis Detectors –Rotation invariant –Scale invariant –Affine invariant Descriptors –Rotation invariant –Scale invariant –Affine invariant

Descriptors Invariant to Scale Use the scale determined by the detector to compute the descriptor in a normalized frame. For example: moments integrated over an adapted window; derivatives adapted to the scale: s·Ix.

Contents Harris Corner Detector –Overview –Analysis Detectors –Rotation invariant –Scale invariant –Affine invariant Descriptors –Rotation invariant –Scale invariant –Affine invariant

Feature detector and descriptor summary Stable (repeatable) feature points can be detected regardless of image changes –Scale: search for correct scale as maximum of appropriate function –Affine: approximate regions with ellipses (this operation is affine invariant) Invariant and distinctive descriptors can be computed –Invariant moments –Normalizing with respect to scale and affine transformation

123 Outline Feature point detection –Harris corner detector –finding a characteristic scale Local image description –SIFT features

Recall: Matching with Features Problem 1: –Detect the same point independently in both images no chance to match! We need a repeatable detector Good

Recall: Matching with Features Problem 2: –For each point correctly recognize the corresponding one ? We need a reliable and distinctive descriptor

Assume we've found a Rotation & Scale Invariant Point Harris-Laplacian 1 Find local maximum of: –Harris corner detector in space (image coordinates) –Laplacian in scale. SIFT (Lowe) 2 Find local maximum of: –Difference of Gaussians in space and scale. 1 K.Mikolajczyk, C.Schmid. “Indexing Based on Scale Invariant Interest Points”. ICCV 2001. 2 D.Lowe. “Distinctive Image Features from Scale-Invariant Keypoints”. IJCV 2004.

Freeman et al,

128 Freeman et al,

CVPR 2003 Tutorial Recognition and Matching Based on Local Invariant Features David Lowe Computer Science Department University of British Columbia

Advantages of invariant local features Locality: features are local, so robust to occlusion and clutter (no prior segmentation) Distinctiveness: individual features can be matched to a large database of objects Quantity: many features can be generated for even small objects Efficiency: close to real-time performance Extensibility: can easily be extended to wide range of differing feature types, with each adding robustness

SIFT vector formation Computed on rotated and scaled version of window according to computed orientation & scale –resample a 16x16 version of the window Based on gradients weighted by a Gaussian with standard deviation half the window width (for smooth falloff)

SIFT vector formation 4x4 array of gradient orientation histograms –not a plain count histogram: votes are weighted by gradient magnitude 8 orientations x 4x4 array = 128 dimensions Motivation: some sensitivity to spatial layout, but not too much. (Figure shows only 2x2, but the actual array is 4x4.)

SIFT vector formation Thresholded image gradients are sampled over a 16x16 array of locations in scale space Create array of orientation histograms 8 orientations x 4x4 histogram array = 128 dimensions (Figure shows only 2x2, but the actual array is 4x4.)

Ensure smoothness Gaussian weight Trilinear interpolation –a given gradient contributes to 8 bins: 4 in space times 2 in orientation

Reduce effect of illumination 128-dim vector normalized to 1 Threshold gradient magnitudes to avoid excessive influence of high gradients –after normalization, clamp gradients >0.2 –renormalize
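The descriptor steps above (magnitude-weighted orientation histograms, Gaussian falloff, then normalize, clamp at 0.2, and renormalize) can be sketched as follows. This is a simplified illustration: it assumes an already rotated and scaled 16x16 patch and skips the trilinear interpolation, and the helper name is ours.

```python
import numpy as np

def sift_descriptor(patch):
    """Simplified SIFT descriptor: 4x4 grid of 8-bin orientation histograms."""
    assert patch.shape == (16, 16)
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ori = np.arctan2(gy, gx) % (2 * np.pi)
    # Gaussian weight centered on the patch, sigma = half the window width.
    ys, xs = np.mgrid[0:16, 0:16] - 7.5
    mag = mag * np.exp(-(xs ** 2 + ys ** 2) / (2 * 8.0 ** 2))
    # Accumulate magnitude-weighted votes into a 4x4x8 histogram array.
    desc = np.zeros((4, 4, 8))
    bins = (ori / (2 * np.pi) * 8).astype(int) % 8
    for y in range(16):
        for x in range(16):
            desc[y // 4, x // 4, bins[y, x]] += mag[y, x]
    v = desc.ravel()                        # 128 dimensions
    v = v / (np.linalg.norm(v) + 1e-12)     # normalize to unit length
    v = np.minimum(v, 0.2)                  # clamp large gradients
    return v / (np.linalg.norm(v) + 1e-12)  # renormalize
```

The clamp-then-renormalize step is what reduces the influence of a few very strong gradients (e.g., from illumination edges) on the final vector.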

Tuning and evaluating the SIFT descriptors Database images were subjected to rotation, scaling, affine stretch, brightness and contrast changes, and added noise. Feature point detectors and descriptors were compared before and after the distortions, and evaluated for: –Sensitivity to number of histogram orientations and subregions –Stability to noise –Stability to affine change –Feature distinctiveness

Sensitivity to number of histogram orientations and subregions, n.

Feature stability to noise Match features after random change in image scale & orientation, with differing levels of image noise Find nearest neighbor in database of 30,000 features

Feature stability to noise Match features after random change in image scale & orientation, with differing levels of image noise Find nearest neighbor in database of 30,000 features

Feature stability to affine change Match features after random change in image scale & orientation, with 2% image noise, and affine distortion Find nearest neighbor in database of 30,000 features

Affine Invariant Descriptors Find an affine normalized frame, then compute a rotation-invariant descriptor in this normalized frame J. Matas et al. “Rotational Invariants for Wide-baseline Stereo”. Research Report of CMP, 2003
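One way to realize "find an affine normalized frame" is to whiten the region by the inverse square root of its shape matrix, so the ellipse becomes a unit circle; two such frames then differ only by a rotation. This is a sketch of that idea under the assumption that the region is described by a symmetric positive-definite second-moment matrix M; the helper name is ours.

```python
import numpy as np

def affine_normalize(points, M):
    """Map points of the ellipse {x : x^T M^-1 x <= 1} into a frame
    where the ellipse becomes the unit circle (x' = M^{-1/2} x)."""
    vals, vecs = np.linalg.eigh(M)
    A = vecs @ np.diag(vals ** -0.5) @ vecs.T  # symmetric M^{-1/2}
    return points @ A.T
```

For example, with M = diag(4, 1) the ellipse semi-axes are 2 and 1, and the map sends (2, 0) to (1, 0) while leaving (0, 1) fixed.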

Distinctiveness of features Vary size of database of features, with 30 degree affine change, 2% image noise Measure % correct for single nearest neighbor match

Application of invariant local features to object (instance) recognition. Image content is transformed into local feature coordinates that are invariant to translation, rotation, scale, and other imaging parameters SIFT Features

Ratio of distances (nearest to second-nearest neighbor) is reliable for matching
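The point of this slide can be shown in a small sketch of the ratio test: accept a match only when the nearest descriptor is much closer than the second-nearest, which is far more reliable than thresholding the absolute distance. The 0.8 ratio and the brute-force loop are illustrative choices.

```python
import numpy as np

def ratio_test_match(desc1, desc2, ratio=0.8):
    """Match rows of desc1 to rows of desc2 using Lowe-style ratio test."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)  # distance to every candidate
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Accept only unambiguous matches: best clearly beats runner-up.
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```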

SIFT features impact A good SIFT features tutorial by Estrada, Jepson, and Fleet. The original SIFT paper: D. Lowe. “Distinctive Image Features from Scale-Invariant Keypoints”. International Journal of Computer Vision 60(2), 91–110, 2004. Cited by 6537 (Google Scholar).

Recap Stable (repeatable) feature points can be detected regardless of image changes –Scale: search for correct scale as maximum of appropriate function Invariant and distinctive descriptors can be computed

Now we have Well-localized feature points Distinctive descriptor Now we need to –match pairs of feature points in different images –Robustly compute homographies (in the presence of errors/outliers)

Select canonical orientation Create histogram of local gradient directions computed at selected scale Assign canonical orientation at peak of smoothed histogram Each key specifies stable 2D coordinates (x, y, scale, orientation) One option: use maximum eigenvector in Harris Alternative
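The orientation-assignment steps above can be sketched as follows: histogram the local gradient directions (magnitude-weighted), smooth the histogram, and take its peak as the canonical orientation. The 36-bin count is typical of SIFT implementations; the smoothing kernel and helper name are illustrative.

```python
import numpy as np

def canonical_orientation(patch, n_bins=36):
    """Return the dominant gradient direction of a patch, in radians."""
    gy, gx = np.gradient(patch.astype(float))
    ori = np.arctan2(gy, gx) % (2 * np.pi)
    mag = np.hypot(gx, gy)
    hist, _ = np.histogram(ori, bins=n_bins, range=(0, 2 * np.pi), weights=mag)
    # Circular smoothing with a small box filter before taking the peak.
    hist = (np.roll(hist, 1) + hist + np.roll(hist, -1)) / 3.0
    peak = np.argmax(hist)
    return (peak + 0.5) * 2 * np.pi / n_bins  # bin center of the peak
```

A patch whose intensity ramps up to the right, for instance, has gradients pointing along +x, so the returned orientation is near zero.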

SIFT – Scale Invariant Feature Transform 1 Empirically found 2 to show very good performance, invariant to image rotation, scale, intensity change, and to moderate affine transformations (e.g., scale = 2.5, rotation = 45°) 1 D. Lowe. “Distinctive Image Features from Scale-Invariant Keypoints”. IJCV 2004 2 K. Mikolajczyk, C. Schmid. “A Performance Evaluation of Local Descriptors”. CVPR 2003

Feature matching Exhaustive search –for each feature in one image, compare against all features in the other image(s) –usually not so bad Hashing –compute a short descriptor from each feature vector, or hash longer descriptors (randomly) Nearest neighbor techniques –k-d trees and their variants (Best Bin First)

Nearest neighbor techniques k-D tree and Best Bin First (BBF): “Indexing Without Invariants in 3D Object Recognition”, Beis and Lowe, PAMI’99
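To illustrate the k-d tree approach: instead of an exhaustive scan, index one image's descriptors once and answer each query in sublinear time. SciPy's `cKDTree` performs the exact query; Best Bin First is an approximate variant of the same search. The data here are random stand-ins for real descriptors.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
db = rng.random((1000, 128))   # stand-in for a database of 128-d descriptors
queries = db[:5] + 0.001       # slightly perturbed copies of five entries

tree = cKDTree(db)             # build the index once
dists, idx = tree.query(queries, k=1)  # nearest neighbor for each query
```

Because the perturbation is tiny relative to the spacing of random 128-d points, each query recovers the index of its original entry.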