Interest Points and Corners. Computer Vision, CS 143, Brown. James Hays. Slides from Rick Szeliski, Svetlana Lazebnik, Derek Hoiem, and the Grauman & Leibe 2008 AAAI tutorial. Reading: Szeliski 4.1.

Correspondence across views. Correspondence: matching points, patches, edges, or regions across images.

Example: estimating the "fundamental matrix" that relates two views. Slide from Silvio Savarese.

Example: structure from motion

Applications Feature points are used for: – Image alignment – 3D reconstruction – Motion tracking – Robot navigation – Indexing and database retrieval – Object recognition

This class: interest points Note: “interest points” = “keypoints”, also sometimes called “features” Many applications – tracking: which points are good to track? – recognition: find patches likely to tell us something about object category – 3D reconstruction: find correspondences across different views

This class: interest points. Suppose you have to click on some points, go away, and come back after I deform the image, then click on the same points again. – Which points would you choose? (original vs. deformed image)

Overview of Keypoint Matching (K. Grauman, B. Leibe): 1. Find a set of distinctive keypoints. 2. Define a region around each keypoint. 3. Extract and normalize the region content. 4. Compute a local descriptor from the normalized region. 5. Match local descriptors.
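
A minimal sketch of this five-step pipeline in Python using OpenCV's ORB detector/descriptor and a brute-force matcher (illustrative only: the image file names are placeholders, and any repeatable detector/descriptor pair could be substituted):

    import cv2

    # Steps 1-4: ORB finds keypoints, assigns each a region (scale/orientation),
    # and computes a binary descriptor from the normalized patch.
    img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Step 5: match local descriptors (Hamming distance suits ORB's binary descriptors).
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    print(len(matches), "tentative correspondences")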

Goals for Keypoints Detect points that are repeatable and distinctive

Key trade-offs. Detection of interest points: more points (robust to occlusion, works with less texture) vs. more repeatable (robust detection, precise localization). Description of patches: more distinctive (minimize wrong matches) vs. more flexible (robust to expected variations, maximize correct matches).

Invariant Local Features. Image content is transformed into local feature coordinates that are invariant to translation, rotation, scale, and other imaging parameters. (Figure: detected features and their descriptors.)

Choosing distinctive interest points. If you wanted to meet a friend, would you say (a) "Let's meet on campus," (b) "Let's meet on Green Street," or (c) "Let's meet at Green and Wright"? – Corner detection. Or if you were in a secluded area: (a) "Let's meet in the Plains of Akbar," (b) "Let's meet on the side of Mt. Doom," or (c) "Let's meet on top of Mt. Doom"? – Blob (valley/peak) detection.

Choosing interest points Where would you tell your friend to meet you?

Feature extraction: Corners. (9300 Harris Corners Pkwy, Charlotte, NC.) Slides from Rick Szeliski, Svetlana Lazebnik, and Kristen Grauman.

Why extract features? Motivation: panorama stitching We have two images – how do we combine them?

Local features: main components. 1) Detection: identify the interest points. 2) Description: extract a vector feature descriptor around each interest point. 3) Matching: determine correspondences between descriptors in two views. Kristen Grauman.

Characteristics of good features. Repeatability: the same feature can be found in several images despite geometric and photometric transformations. Saliency: each feature is distinctive. Compactness and efficiency: many fewer features than image pixels. Locality: a feature occupies a relatively small area of the image; robust to clutter and occlusion.

Goal: interest operator repeatability. We want to detect (at least some of) the same points in both images, yet we have to be able to run the detection procedure independently per image. If the detected points do not repeat across images, there is no chance to find true matches! Kristen Grauman.

Goal: descriptor distinctiveness. We want to be able to reliably determine which point goes with which. The descriptor must provide some invariance to geometric and photometric differences between the two views. Kristen Grauman.

Local features: main components. 1) Detection: identify the interest points. 2) Description: extract a vector feature descriptor around each interest point. 3) Matching: determine correspondences between descriptors in two views.

Many Existing Detectors Available (K. Grauman, B. Leibe): Hessian & Harris [Beaudet '78, Harris '88]; Laplacian, DoG [Lindeberg '98, Lowe '99]; Harris-/Hessian-Laplace [Mikolajczyk & Schmid '01]; Harris-/Hessian-Affine [Mikolajczyk & Schmid '04]; EBR and IBR [Tuytelaars & Van Gool '04]; MSER [Matas '02]; Salient Regions [Kadir & Brady '01]; others…
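
Several of these detectors (or close relatives) ship with common libraries. A hedged sketch of instantiating a few of them in OpenCV (class availability varies with the OpenCV version; SIFT corresponds to Lowe's DoG detector and MSER to Matas '02; the file name is a placeholder):

    import cv2

    img = cv2.imread("building.jpg", cv2.IMREAD_GRAYSCALE)
    detectors = {
        "SIFT (DoG)": cv2.SIFT_create(),
        "MSER": cv2.MSER_create(),
    }
    for name, det in detectors.items():
        keypoints = det.detect(img, None)
        print(name, "->", len(keypoints), "keypoints")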

What points would you choose? Kristen Grauman

Corner Detection: Basic Idea. We should easily recognize the point by looking through a small window: shifting the window in any direction should give a large change in intensity. "Edge": no change along the edge direction. "Corner": significant change in all directions. "Flat" region: no change in any direction. Source: A. Efros.

Finding Corners. Key property: in the region around a corner, the image gradient has two or more dominant directions. Corners are repeatable and distinctive. C. Harris and M. Stephens, "A Combined Corner and Edge Detector," Proceedings of the 4th Alvey Vision Conference, pages 147–151, 1988.

Corner Detection: Mathematics. Change in appearance of window w(x,y) for the shift [u,v]: E(u,v) = \sum_{x,y} w(x,y) [I(x+u, y+v) - I(x,y)]^2. (Figure: the image I(x,y), the window w(x,y), and the error surface E(u,v), with E(3,2) marked.)

Corner Detection: Mathematics. Change in appearance of window w(x,y) for the shift [u,v]: E(u,v) = \sum_{x,y} w(x,y) [I(x+u, y+v) - I(x,y)]^2. (Figure: the same quantities, with E(0,0) = 0 marked.)

Corner Detection: Mathematics. Change in appearance of window w(x,y) for the shift [u,v]: E(u,v) = \sum_{x,y} w(x,y) [I(x+u, y+v) - I(x,y)]^2, where w(x,y) is the window function, I(x+u, y+v) the shifted intensity, and I(x,y) the intensity. The window function is either 1 inside the window and 0 outside, or a Gaussian. Source: R. Szeliski.
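
A brute-force way to evaluate this error surface for integer shifts, as a sketch in NumPy (the image and window here are synthetic placeholders):

    import numpy as np

    def shift_error(I, w, u, v):
        """E(u,v) = sum_{x,y} w(x,y) * (I(x+u, y+v) - I(x,y))^2 for one integer shift."""
        shifted = np.roll(I, shift=(v, u), axis=(0, 1))   # shifted intensity
        return float(np.sum(w * (shifted - I) ** 2))

    rng = np.random.default_rng(0)
    I = rng.random((64, 64))                   # placeholder image
    w = np.zeros_like(I)
    w[24:40, 24:40] = 1.0                      # 1 inside a 16x16 window, 0 outside
    E = np.array([[shift_error(I, w, u, v) for u in range(-3, 4)]
                  for v in range(-3, 4)])
    print(E[3, 3])                             # E(0,0) = 0 by construction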

Corner Detection: Mathematics. Change in appearance of window w(x,y) for the shift [u,v]: we want to find out how E(u,v) behaves for small shifts. (Figure: the error surface E(u,v).)

Corner Detection: Mathematics. We want to find out how E(u,v) behaves for small shifts. A local quadratic approximation of E(u,v) in the neighborhood of (0,0) is given by its second-order Taylor expansion.

Corner Detection: Mathematics. Second-order Taylor expansion of E(u,v) about (0,0):
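
The equations on these slides were figures in the original deck; a LaTeX reconstruction of the standard derivation they follow:

    E(u,v) = \sum_{x,y} w(x,y)\,[I(x+u, y+v) - I(x,y)]^2

    % Second-order Taylor expansion about (0,0):
    E(u,v) \approx E(0,0)
        + [u\ v] \begin{bmatrix} E_u(0,0) \\ E_v(0,0) \end{bmatrix}
        + \tfrac{1}{2} [u\ v]
          \begin{bmatrix} E_{uu}(0,0) & E_{uv}(0,0) \\ E_{uv}(0,0) & E_{vv}(0,0) \end{bmatrix}
          \begin{bmatrix} u \\ v \end{bmatrix}

    % At (0,0): E(0,0) = 0 and E_u(0,0) = E_v(0,0) = 0, while
    E_{uu}(0,0) = 2\sum_{x,y} w(x,y)\, I_x^2, \quad
    E_{vv}(0,0) = 2\sum_{x,y} w(x,y)\, I_y^2, \quad
    E_{uv}(0,0) = 2\sum_{x,y} w(x,y)\, I_x I_y

    % Hence the quadratic approximation
    E(u,v) \approx [u\ v]\, M \begin{bmatrix} u \\ v \end{bmatrix}, \qquad
    M = \sum_{x,y} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}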

Corner Detection: Mathematics. The quadratic approximation simplifies to E(u,v) ≈ [u v] M [u v]^T, where M is a second moment matrix computed from image derivatives: M = \sum_{x,y} w(x,y) [I_x^2, I_x I_y; I_x I_y, I_y^2].
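
A sketch of computing the entries of M at every pixel with NumPy/SciPy, using Sobel derivatives and a Gaussian window (the smoothing scale sigma is an illustrative choice):

    import numpy as np
    from scipy import ndimage

    def second_moment_matrices(I, sigma=1.5):
        """Per-pixel entries of M: returns (Sxx, Sxy, Syy)."""
        I = I.astype(np.float64)
        Ix = ndimage.sobel(I, axis=1)              # derivative along x (columns)
        Iy = ndimage.sobel(I, axis=0)              # derivative along y (rows)
        # Products of derivatives, summed over the window w = Gaussian.
        Sxx = ndimage.gaussian_filter(Ix * Ix, sigma)
        Sxy = ndimage.gaussian_filter(Ix * Iy, sigma)
        Syy = ndimage.gaussian_filter(Iy * Iy, sigma)
        return Sxx, Sxy, Syy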

Corners as distinctive interest points. M is a 2 x 2 matrix of image derivatives (averaged in the neighborhood of a point). Notation: I_x = ∂I/∂x, I_y = ∂I/∂y, and I_x I_y = (∂I/∂x)(∂I/∂y).

Interpreting the second moment matrix. The surface E(u,v) is locally approximated by a quadratic form. Let's try to understand its shape.

Interpreting the second moment matrix. First, consider the axis-aligned case (gradients are either horizontal or vertical), where M = [λ1, 0; 0, λ2]. If either λ is close to 0, then this is not a corner, so look for locations where both are large.

Interpreting the second moment matrix. Consider a horizontal "slice" of E(u,v): the set of shifts with [u v] M [u v]^T = const. This is the equation of an ellipse.

Interpreting the second moment matrix. Consider a horizontal "slice" of E(u,v): [u v] M [u v]^T = const is the equation of an ellipse. Diagonalization of M: M = R^{-1} [λ_max, 0; 0, λ_min] R. The axis lengths of the ellipse are determined by the eigenvalues, (λ_max)^{-1/2} along the direction of fastest change and (λ_min)^{-1/2} along the direction of slowest change, and the orientation is determined by R.
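
For a single 2x2 second moment matrix, an eigen-decomposition recovers exactly these quantities; a small NumPy check with illustrative numbers:

    import numpy as np

    M = np.array([[10.0, 4.0],
                  [ 4.0, 2.0]])              # an example (positive definite) M

    lams, R = np.linalg.eigh(M)               # eigenvalues in ascending order
    lam_min, lam_max = lams
    print("fast-change axis length ~", lam_max ** -0.5)   # (lambda_max)^(-1/2)
    print("slow-change axis length ~", lam_min ** -0.5)   # (lambda_min)^(-1/2)
    print("direction of fastest change:", R[:, 1])        # eigenvector for lambda_max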

Visualization of second moment matrices

Interpreting the eigenvalues. Classification of image points using the eigenvalues of M: "Corner": λ1 and λ2 are large, λ1 ~ λ2; E increases in all directions. "Edge": λ1 >> λ2, or λ2 >> λ1. "Flat" region: λ1 and λ2 are small; E is almost constant in all directions.

Corner response function: R = det(M) − α trace(M)^2 = λ1 λ2 − α (λ1 + λ2)^2, where α is a constant (0.04 to 0.06). "Corner": R > 0. "Edge": R < 0. "Flat" region: |R| small.
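
The response is usually computed directly from the entries of M, since det(M) = λ1 λ2 and trace(M) = λ1 + λ2; a one-function sketch building on the illustrative Sxx, Sxy, Syy maps above:

    def harris_response(Sxx, Sxy, Syy, alpha=0.05):
        """R = det(M) - alpha * trace(M)^2, evaluated at every pixel."""
        det_M = Sxx * Syy - Sxy * Sxy
        trace_M = Sxx + Syy
        return det_M - alpha * trace_M ** 2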

Harris corner detector. 1) Compute the M matrix for each image window to get its cornerness score. 2) Find points whose surrounding window gave a large corner response (R > threshold). 3) Take the points of local maxima, i.e., perform non-maximum suppression. C. Harris and M. Stephens, "A Combined Corner and Edge Detector," Proceedings of the 4th Alvey Vision Conference, pages 147–151, 1988.
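
Putting the three steps together, a compact sketch that reuses the illustrative helpers defined above (the relative threshold and the non-maximum-suppression window size are arbitrary choices):

    import numpy as np
    from scipy import ndimage

    def harris_corners(I, sigma=1.5, alpha=0.05, rel_thresh=0.01, nms_size=7):
        # 1) Second moment matrix entries at every pixel.
        Sxx, Sxy, Syy = second_moment_matrices(I, sigma)
        # 2) Corner response; keep only large responses.
        R = harris_response(Sxx, Sxy, Syy, alpha)
        strong = R > rel_thresh * R.max()
        # 3) Non-maximum suppression: keep points that are maximal in a local window.
        local_max = R == ndimage.maximum_filter(R, size=nms_size)
        ys, xs = np.nonzero(strong & local_max)
        return np.stack([xs, ys], axis=1)      # (x, y) corner coordinates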

Harris Detector [Harris88]. 1) Image derivatives I_x, I_y (optionally, blur first). 2) Square of derivatives: I_x^2, I_y^2, I_x I_y. 3) Gaussian filter g(·): g(I_x^2), g(I_y^2), g(I_x I_y). 4) Cornerness function, requiring both eigenvalues to be strong: har = det[M] − α [trace(M)]^2 = g(I_x^2) g(I_y^2) − [g(I_x I_y)]^2 − α [g(I_x^2) + g(I_y^2)]^2. 5) Non-maxima suppression.
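
OpenCV packages essentially this pipeline into a single call; a hedged usage sketch (the block size, Sobel aperture, and k = α values are illustrative, and the file name is a placeholder):

    import cv2
    import numpy as np

    gray = cv2.imread("building.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)
    R = cv2.cornerHarris(gray, 3, 3, 0.05)         # args: blockSize, ksize (Sobel), k (= alpha)
    candidates = np.argwhere(R > 0.01 * R.max())   # (row, col) of strong responses
    print(len(candidates), "candidate corners before non-maximum suppression")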

Harris Detector: Steps

Harris Detector: Steps. Compute corner response R.

Harris Detector: Steps. Find points with large corner response: R > threshold.

Harris Detector: Steps Take only the points of local maxima of R

Harris Detector: Steps

Invariance and covariance We want corner locations to be invariant to photometric transformations and covariant to geometric transformations Invariance: image is transformed and corner locations do not change Covariance: if we have two transformed versions of the same image, features should be detected in corresponding locations

Affine intensity change. Only derivatives are used, so the response is invariant to an intensity shift I → I + b. Under intensity scaling I → a I the response R is scaled, so a fixed threshold on R may select different points. Overall: partially invariant to affine intensity change I → a I + b. (Figure: R vs. x (image coordinate) before and after scaling, with the threshold marked.)
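
A quick numeric check of this behavior, reusing the illustrative second_moment_matrices and harris_response helpers from earlier:

    import numpy as np

    rng = np.random.default_rng(1)
    I = rng.random((64, 64))
    R = harris_response(*second_moment_matrices(I))

    # Intensity shift I -> I + b: derivatives are unchanged, so R is unchanged.
    R_shift = harris_response(*second_moment_matrices(I + 20.0))
    print(np.allclose(R, R_shift))             # True

    # Intensity scaling I -> a*I: derivatives scale by a, so R scales by a^4;
    # a fixed threshold on R can therefore select a different set of points.
    R_scale = harris_response(*second_moment_matrices(3.0 * I))
    print(np.allclose(R_scale, 3.0 ** 4 * R))  # True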

Image translation Derivatives and window function are shift-invariant Corner location is covariant w.r.t. translation

Image rotation Second moment ellipse rotates but its shape (i.e. eigenvalues) remains the same Corner location is covariant w.r.t. rotation

Scaling. When the image is scaled up, all points along the (former) corner will be classified as edges. Corner location is not covariant to scaling!

Next Lecture How do we represent the patches around the interest points? How do we make sure that representation is invariant?