TP14 - Local features: detection and description Computer Vision, FCUP, 2014 Miguel Coimbra Slides by Prof. Kristen Grauman

Today
Local invariant features
– Detection of interest points: Harris corner detection; scale-invariant blob detection (LoG)
– Description of local patches: SIFT (histograms of oriented gradients)

Local features: main components
1) Detection: identify the interest points.
2) Description: extract a vector feature descriptor surrounding each interest point.
3) Matching: determine correspondence between descriptors in two views.
Kristen Grauman

Goal: interest operator repeatability
We want to detect (at least some of) the same points in both images. Yet we have to be able to run the detection procedure independently per image. If the detections do not repeat, there is no chance to find true matches!

Goal: descriptor distinctiveness
We want to be able to reliably determine which point goes with which. The descriptor must provide some invariance to geometric and photometric differences between the two views.

Local features: main components
1) Detection: identify the interest points.
2) Description: extract a vector feature descriptor surrounding each interest point.
3) Matching: determine correspondence between descriptors in two views.

Recall: Corners as distinctive interest points
The second moment matrix M is a 2×2 matrix of image derivatives, averaged in a neighborhood of each point.
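Written out, using the standard second moment matrix notation, with I_x, I_y the image partial derivatives and w(x, y) a window weighting:

```latex
M = \sum_{x, y} w(x, y)
\begin{bmatrix}
I_x^2 & I_x I_y \\
I_x I_y & I_y^2
\end{bmatrix}
```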

Recall: Corners as distinctive interest points
Since M is symmetric, it can be diagonalized as M = R⁻¹ diag(λ1, λ2) R. The eigenvalues of M reveal the amount of intensity change in the two principal orthogonal gradient directions in the window.

Recall: Corners as distinctive interest points
“Flat” region: λ1 and λ2 are small.
“Edge”: λ1 >> λ2 or λ2 >> λ1.
“Corner”: λ1 and λ2 are large, λ1 ~ λ2.
One way to score the cornerness:
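Two standard ways to turn the eigenvalues into a single cornerness score f are the harmonic-mean form and the original Harris–Stephens response (with α typically 0.04–0.06):

```latex
f = \frac{\lambda_1 \lambda_2}{\lambda_1 + \lambda_2} = \frac{\det(M)}{\operatorname{trace}(M)}
\qquad \text{or} \qquad
f = \det(M) - \alpha \operatorname{trace}(M)^2 = \lambda_1 \lambda_2 - \alpha (\lambda_1 + \lambda_2)^2
```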

Harris corner detector
1) Compute the M matrix for the image window surrounding each pixel to get its cornerness score.
2) Find points with a large corner response (f > threshold).
3) Take only the points that are local maxima, i.e., perform non-maximum suppression.
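A minimal sketch of these three steps, assuming OpenCV and SciPy are available; the window sizes, Harris k, and threshold are illustrative defaults rather than values from the slides:

```python
import cv2
import numpy as np
from scipy.ndimage import maximum_filter

def harris_corners(gray, block_size=2, ksize=3, k=0.04, rel_thresh=0.01, nms_size=5):
    """Return (row, col) coordinates of Harris corners in a grayscale image."""
    gray = np.float32(gray)
    # Step 1: cornerness response f at every pixel (M is built internally by OpenCV)
    f = cv2.cornerHarris(gray, block_size, ksize, k)
    # Step 2: keep only points with a large corner response
    strong = f > rel_thresh * f.max()
    # Step 3: non-maximum suppression -- keep pixels equal to the local maximum of f
    local_max = (f == maximum_filter(f, size=nms_size))
    return np.argwhere(strong & local_max)

# Usage with a hypothetical image file:
# img = cv2.imread("building.jpg", cv2.IMREAD_GRAYSCALE)
# print(harris_corners(img)[:10])
```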

Harris Detector: Steps

Compute corner response f

Harris Detector: Steps Find points with large corner response: f > threshold

Harris Detector: Steps Take only the points of local maxima of f

Harris Detector: Steps

Properties of the Harris corner detector
Rotation invariant? Yes.
Scale invariant?

Properties of the Harris corner detector
Rotation invariant? Yes.
Scale invariant? No: when the image is scaled up, the corner spreads over many detection windows, each of which sees only a gently curving edge, so all points will be classified as edges rather than a corner.

Scale invariant interest points How can we independently select interest points in each image, such that the detections are repeatable across different scales?

Automatic scale selection
Intuition: find the scale that gives a local maximum of some function f in both position and scale.
[Plot: f vs. region size for Image 1 and Image 2; the maxima occur at scales s1 and s2.]

What can be the “signature” function?

Recall: Edge detection
Convolve the signal f with the derivative of a Gaussian; an edge corresponds to a maximum of the derivative response.
Source: S. Seitz

Recall: Edge detection
Convolve the signal f with the second derivative of a Gaussian (the Laplacian); an edge corresponds to a zero crossing of the second derivative.
Source: S. Seitz

From edges to blobs
Edge = ripple; blob = superposition of two ripples.
Spatial selection: the magnitude of the Laplacian response will achieve a maximum at the center of the blob, provided the scale of the Laplacian is “matched” to the scale of the blob.
Slide credit: Lana Lazebnik

Blob detection in 2D Laplacian of Gaussian: Circularly symmetric operator for blob detection in 2D
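The scale-normalized Laplacian-of-Gaussian operator (the σ² factor keeps responses comparable across scales):

```latex
\nabla^2_{\mathrm{norm}}\, g = \sigma^2 \left( \frac{\partial^2 g}{\partial x^2} + \frac{\partial^2 g}{\partial y^2} \right),
\qquad
g(x, y, \sigma) = \frac{1}{2\pi\sigma^2} \, e^{-(x^2 + y^2)/2\sigma^2}
```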

Blob detection in 2D: scale selection
Laplacian-of-Gaussian = “blob” detector: convolve the image with the filter at a range of scales.
Bastian Leibe

Blob detection in 2D
We define the characteristic scale as the scale that produces the peak of the Laplacian response.
Slide credit: Lana Lazebnik

Example Original image at ¾ the size Kristen Grauman

Original image at ¾ the size Kristen Grauman

Scale invariant interest points
Interest points are local maxima in both position and scale: compute squared filter response maps over a range of scales, then output a list of (x, y, σ) triples.
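A minimal sketch of this search over position and scale, assuming SciPy; the σ range, the relative threshold, and the 3×3×3 local-maximum test are illustrative choices:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

def log_blobs(gray, sigmas=1.6 * 1.26 ** np.arange(10), rel_thresh=0.2):
    """Detect blobs as local maxima of the squared, scale-normalized LoG response
    over (x, y, sigma). Returns a list of (row, col, sigma) triples."""
    gray = gray.astype(np.float32)
    # Squared, scale-normalized Laplacian response map at each scale
    stack = np.stack([(s ** 2 * gaussian_laplace(gray, s)) ** 2 for s in sigmas])
    # Keep points that are local maxima in the (scale, y, x) neighborhood and strong enough
    local_max = (stack == maximum_filter(stack, size=3))
    strong = stack > rel_thresh * stack.max()
    return [(y, x, sigmas[k]) for k, y, x in np.argwhere(local_max & strong)]
```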

Scale-space blob detector: Example Image credit: Lana Lazebnik

Technical detail
We can approximate the Laplacian with a difference of Gaussians, which is more efficient to implement.
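The standard approximation (Lowe 2004) relating the difference of two Gaussians at nearby scales to the scale-normalized Laplacian:

```latex
G(x, y, k\sigma) - G(x, y, \sigma) \;\approx\; (k - 1)\, \sigma^2 \nabla^2 G
```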

Local features: main components
1) Detection: identify the interest points.
2) Description: extract a vector feature descriptor surrounding each interest point.
3) Matching: determine correspondence between descriptors in two views.

Geometric transformations e.g. scale, translation, rotation

Photometric transformations Figure from T. Tuytelaars ECCV 2006 tutorial

Raw patches as local descriptors
The simplest way to describe the neighborhood around an interest point is to write down the list of intensities to form a feature vector. But this is very sensitive to even small shifts and rotations.

SIFT descriptor [Lowe 2004]
Use histograms to bin pixels within sub-patches according to their gradient orientation (binned from 0 to 2π).
Why sub-patches? Why does SIFT have some illumination invariance?
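A minimal usage sketch with OpenCV (SIFT is in the main package from OpenCV 4.4 onward; the image file name is a placeholder):

```python
import cv2

# Hypothetical input image
img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()                         # DoG keypoint detector + SIFT descriptor
keypoints, descriptors = sift.detectAndCompute(img, None)

# Each keypoint carries a position, scale, and dominant orientation;
# each descriptor is a 128-dimensional histogram-of-gradients vector.
print(len(keypoints), descriptors.shape)
```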

Making the descriptor rotation invariant
Rotate each patch according to its dominant gradient orientation; this puts the patches into a canonical orientation.
Image from Matthew Brown (CSE 576: Computer Vision)

SIFT descriptor [Lowe 2004]
Extraordinarily robust matching technique:
– Can handle changes in viewpoint (up to about 60 degrees of out-of-plane rotation)
– Can handle significant changes in illumination (sometimes even day vs. night)
– Fast and efficient; can run in real time
– Lots of code available
Steve Seitz

Example NASA Mars Rover images

Example
NASA Mars Rover images with SIFT feature matches
Figure by Noah Snavely

SIFT properties
Invariant to:
– Scale
– Rotation
Partially invariant to:
– Illumination changes
– Camera viewpoint
– Occlusion, clutter

Local features: main components
1) Detection: identify the interest points.
2) Description: extract a vector feature descriptor surrounding each interest point.
3) Matching: determine correspondence between descriptors in two views.

Matching local features Kristen Grauman

Matching local features
To generate candidate matches, find patches that have the most similar appearance (e.g., lowest SSD).
Simplest approach: compare them all and take the closest (or the closest k, or all within a thresholded distance).
Kristen Grauman

Ambiguous matches
At what SSD value do we have a good match?
To add robustness to matching, we can consider the ratio: distance to best match / distance to second-best match.
If the ratio is low, the first match looks good; if it is high, the match could be ambiguous.
Kristen Grauman

Matching SIFT Descriptors
– Nearest neighbor (Euclidean distance)
– Threshold the ratio of nearest to 2nd-nearest descriptor distance
Lowe IJCV 2004
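A minimal sketch of this strategy using OpenCV's brute-force matcher; des1 and des2 are assumed to be SIFT descriptor arrays from two images (e.g., from the sketch above), and 0.75 is a commonly used ratio threshold:

```python
import cv2

def ratio_test_matches(des1, des2, ratio=0.75):
    """Return nearest-neighbor matches that pass the ratio test."""
    bf = cv2.BFMatcher(cv2.NORM_L2)          # Euclidean distance for SIFT descriptors
    knn = bf.knnMatch(des1, des2, k=2)       # two nearest neighbors per query descriptor
    good = []
    for pair in knn:
        if len(pair) < 2:
            continue
        best, second = pair
        # Keep the match only if the best neighbor is clearly closer than the second
        if best.distance < ratio * second.distance:
            good.append(best)
    return good
```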

Recap: robust feature-based alignment Source: L. Lazebnik

Recap: robust feature-based alignment Extract features Source: L. Lazebnik

Recap: robust feature-based alignment Extract features Compute putative matches Source: L. Lazebnik

Recap: robust feature-based alignment Extract features Compute putative matches Loop: Hypothesize transformation T (small group of putative matches that are related by T) Source: L. Lazebnik

Recap: robust feature-based alignment Extract features Compute putative matches Loop: Hypothesize transformation T (small group of putative matches that are related by T) Verify transformation (search for other matches consistent with T) Source: L. Lazebnik

Recap: robust feature-based alignment Extract features Compute putative matches Loop: Hypothesize transformation T (small group of putative matches that are related by T) Verify transformation (search for other matches consistent with T) Source: L. Lazebnik
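A minimal sketch of the hypothesize-and-verify loop using OpenCV's RANSAC-based homography estimation; pts1 and pts2 are assumed to be matched keypoint coordinates (e.g., taken from the ratio-test matches above), and the 5-pixel reprojection threshold is an illustrative choice:

```python
import cv2
import numpy as np

def align_images(pts1, pts2, reproj_thresh=5.0):
    """Robustly estimate a homography from putative matches; returns (H, inlier_mask)."""
    pts1 = np.float32(pts1).reshape(-1, 1, 2)
    pts2 = np.float32(pts2).reshape(-1, 1, 2)
    # RANSAC repeatedly hypothesizes H from a minimal group of matches (4 pairs)
    # and verifies it by counting matches consistent with H within the threshold.
    H, mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, reproj_thresh)
    return H, mask.ravel().astype(bool)
```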

Applications of local invariant features
– Wide baseline stereo
– Motion tracking
– Panoramas
– Mobile robot navigation
– 3D reconstruction
– Recognition
– …

Automatic mosaicing

Wide baseline stereo [Image from T. Tuytelaars ECCV 2006 tutorial]

Recognition of specific objects, scenes
Rothganger et al.; Lowe 2002; Schmid and Mohr 1997; Sivic and Zisserman 2003
Kristen Grauman

Summary
Interest point detection
– Harris corner detector
– Laplacian of Gaussian, automatic scale selection
Invariant descriptors
– Rotation according to dominant gradient direction
– Histograms for robustness to small shifts and translations (SIFT descriptor)