776 Computer Vision Jared Heinly Spring 2014 (slides borrowed from Jan-Michael Frahm, Svetlana Lazebnik, and others)


Image alignment Image from

A look into the past

A look into the past Leningrad during the blockade

Bing streetside images application-streetside-photos.aspx

Image alignment: Applications Panorama stitching Recognition of object instances

Image alignment: Challenges Small degree of overlap Occlusion, clutter Intensity changes

Image alignment Two families of approaches: o Direct (pixel-based) alignment Search for alignment where most pixels agree o Feature-based alignment Search for alignment where extracted features agree Can be verified using pixel-based alignment

2D transformation models Similarity (translation, scale, rotation) Affine Projective (homography)

Let’s start with affine transformations Simple fitting procedure (linear least squares) Approximates viewpoint changes for roughly planar objects and roughly orthographic cameras Can be used to initialize fitting for more complex models

Fitting an affine transformation Assume we know the correspondences, how do we get the transformation?

Fitting an affine transformation Linear system with six unknowns Each match gives us two linearly independent equations: need at least three to solve for the transformation parameters
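This linear system can be solved directly with least squares. A minimal sketch (assuming NumPy; the function and variable names are illustrative): each match contributes two rows, so three non-collinear matches already determine the six parameters.

```python
import numpy as np

def fit_affine(src, dst):
    """Fit x' = A x + t from n >= 3 point correspondences.

    src, dst: (n, 2) arrays of matching points.
    Returns the 2x3 matrix [A | t].
    Each match gives two linearly independent equations,
    so 3 matches yield 6 equations for the 6 unknowns.
    """
    n = src.shape[0]
    M = np.zeros((2 * n, 6))
    b = dst.reshape(-1)          # [x1', y1', x2', y2', ...]
    M[0::2, 0:2] = src           # x' = a11*x + a12*y + tx
    M[0::2, 4] = 1
    M[1::2, 2:4] = src           # y' = a21*x + a22*y + ty
    M[1::2, 5] = 1
    p, *_ = np.linalg.lstsq(M, b, rcond=None)
    return np.array([[p[0], p[1], p[4]],
                     [p[2], p[3], p[5]]])
```

With more than three matches the same call returns the least-squares estimate.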

Fitting a plane projective transformation Homography: plane projective transformation (transformation taking a quad to another arbitrary quad)

Homography The transformation between two views of a planar surface The transformation between images from two cameras that share the same center

Application: Panorama stitching Source: Hartley & Zisserman

Fitting a homography Recall: homogeneous coordinates Converting to homogeneous image coordinates Converting from homogeneous image coordinates

Fitting a homography Recall: homogeneous coordinates Equation for homography: Converting to homogeneous image coordinates Converting from homogeneous image coordinates

Fitting a homography Equation for homography: x_i′ and H x_i are scaled versions of the same vector (the scale is unknown), giving 3 equations of which only 2 are linearly independent Each row of H is a 3-vector; the rows of H can be stacked into a 9x1 column vector of unknowns

Direct linear transform H has 8 degrees of freedom (9 parameters, but scale is arbitrary) One match gives us two linearly independent equations Four matches needed for a minimal solution (null space of 8x9 matrix) More than four: homogeneous least squares
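A minimal sketch of the direct linear transform (assuming NumPy; exact correspondences, and without the coordinate normalization that practical DLT implementations add). Each match contributes the two rows shown; the solution is the right singular vector of the design matrix with the smallest singular value.

```python
import numpy as np

def fit_homography(src, dst):
    """DLT: estimate H (up to scale) from n >= 4 correspondences.

    src, dst: (n, 2) arrays. Each match (x, y) <-> (u, v) gives two
    linearly independent equations in the 9 entries of H; the
    estimate is the null-space direction (smallest right singular
    vector) of the 2n x 9 design matrix.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.array(rows)
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]   # fix the arbitrary scale
```

With exactly four matches the null space of the 8x9 matrix is the minimal solution; with more, the same SVD gives the homogeneous least-squares estimate.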

Robust feature-based alignment So far, we’ve assumed that we are given a set of “ground-truth” correspondences between the two images we want to align What if we don’t know the correspondences?

Robust feature-based alignment So far, we’ve assumed that we are given a set of “ground-truth” correspondences between the two images we want to align What if we don’t know the correspondences? ?

Robust feature-based alignment

Extract features

Robust feature-based alignment Extract features Compute putative matches

Alignment as fitting Least Squares Total Least Squares

Least Squares Line Fitting Data: (x_1, y_1), …, (x_n, y_n) Line equation: y_i = m x_i + b Find (m, b) to minimize E = Σ_i (y_i − m x_i − b)^2

Problem with “Vertical” Least Squares Not rotation-invariant Fails completely for vertical lines

Total Least Squares Distance between point (x_i, y_i) and line ax + by = d (with a^2 + b^2 = 1): |a x_i + b y_i − d| Find (a, b, d) to minimize the sum of squared perpendicular distances E = Σ_i (a x_i + b y_i − d)^2 Unit normal: N = (a, b)
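Total least squares has a closed-form solution; a sketch (assuming NumPy): the optimal line passes through the centroid, and the unit normal (a, b) is the singular vector of the centered data with smallest singular value.

```python
import numpy as np

def tls_line(pts):
    """Total least squares line fit: minimize the sum of squared
    perpendicular distances to the line a x + b y = d, a^2 + b^2 = 1.

    The optimal line passes through the centroid, and (a, b) is the
    direction of least variance of the centered points, i.e. the
    right singular vector with the smallest singular value.
    Unlike 'vertical' least squares, this handles vertical lines.
    """
    centroid = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - centroid)
    a, b = Vt[-1]                      # unit normal
    d = a * centroid[0] + b * centroid[1]
    return a, b, d
```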

Least Squares: Robustness to Noise Least squares fit to the red points:

Least Squares: Robustness to Noise Least squares fit with an outlier: Problem: squared error heavily penalizes outliers

Robust Estimators General approach: find model parameters θ that minimize Σ_i ρ(r_i(x_i, θ); σ) r_i(x_i, θ) – residual of the i-th point w.r.t. model parameters θ ρ – robust function with scale parameter σ The robust function ρ behaves like squared distance for small values of the residual u but saturates for larger values of u
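One common choice of robust function, assuming the form ρ(u; σ) = u^2 / (σ^2 + u^2) used in Forsyth & Ponce, is sketched below to show the quadratic-then-saturating behavior:

```python
import numpy as np

def rho(u, sigma):
    """Robust function rho(u; sigma) = u^2 / (sigma^2 + u^2).

    For |u| << sigma this behaves like (u / sigma)^2 (quadratic,
    like squared distance); for |u| >> sigma it saturates toward 1,
    so a single gross outlier contributes at most ~1 to the
    objective instead of dominating it.
    """
    u = np.asarray(u, dtype=float)
    return u ** 2 / (sigma ** 2 + u ** 2)
```

The scale σ controls where the transition happens, which is why the next slides show fits with σ chosen too small, too large, and just right.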

Choosing the scale: Just right The effect of the outlier is minimized

Choosing the scale: Too small The error value is almost the same for every point and the fit is very poor

Choosing the scale: Too large Behaves much the same as least squares

RANSAC Robust fitting can deal with a few outliers – what if we have very many? Random sample consensus (RANSAC): Very general framework for model fitting in the presence of outliers Outline o Choose a small subset of points uniformly at random o Fit a model to that subset o Find all remaining points that are “close” to the model and reject the rest as outliers o Do this many times and choose the best model M. A. Fischler, R. C. Bolles. “Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography.” Comm. of the ACM, Vol. 24, 1981.

RANSAC for line fitting example Source: R. Raguram

RANSAC for line fitting example Least-squares fit Source: R. Raguram

RANSAC for line fitting example 1.Randomly select minimal subset of points Source: R. Raguram

RANSAC for line fitting example 1.Randomly select minimal subset of points 2.Hypothesize a model Source: R. Raguram

RANSAC for line fitting example 1.Randomly select minimal subset of points 2.Hypothesize a model 3.Compute error function Source: R. Raguram

RANSAC for line fitting example 1.Randomly select minimal subset of points 2.Hypothesize a model 3.Compute error function 4.Select points consistent with model Source: R. Raguram

RANSAC for line fitting example 1. Randomly select minimal subset of points 2. Hypothesize a model 3. Compute error function 4. Select points consistent with model 5. Repeat hypothesize-and-verify loop Source: R. Raguram

RANSAC for line fitting example 1. Randomly select minimal subset of points 2. Hypothesize a model 3. Compute error function 4. Select points consistent with model 5. Repeat hypothesize-and-verify loop Source: R. Raguram

RANSAC for line fitting example 1. Randomly select minimal subset of points 2. Hypothesize a model 3. Compute error function 4. Select points consistent with model 5. Repeat hypothesize-and-verify loop Uncontaminated sample Source: R. Raguram

RANSAC for line fitting example 1. Randomly select minimal subset of points 2. Hypothesize a model 3. Compute error function 4. Select points consistent with model 5. Repeat hypothesize-and-verify loop Source: R. Raguram

RANSAC for line fitting Repeat N times: Draw s points uniformly at random Fit line to these s points Find inliers to this line among the remaining points (i.e., points whose distance from the line is less than t ) If there are d or more inliers, accept the line and refit using all inliers
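The loop above can be sketched as follows (assuming NumPy; parameter defaults and names are illustrative, with s = 2 for a line):

```python
import numpy as np

def ransac_line(pts, n_iters=200, t=0.1, seed=0):
    """RANSAC line fitting: repeat, draw s = 2 points, fit a line,
    find inliers within distance t, keep the model with the most
    inliers, then refit to the full consensus set."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        p, q = pts[i], pts[j]
        direction = q - p
        norm = np.hypot(direction[0], direction[1])
        if norm == 0:
            continue                      # degenerate sample
        # Line through p, q as a x + b y = d with unit normal (a, b)
        a, b = -direction[1] / norm, direction[0] / norm
        d = a * p[0] + b * p[1]
        dist = np.abs(pts @ np.array([a, b]) - d)
        inliers = dist < t
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit with total least squares on all inliers
    P = pts[best_inliers]
    c = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - c)
    a, b = Vt[-1]
    return a, b, a * c[0] + b * c[1], best_inliers
```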

Choosing the parameters Initial number of points s o Typically minimum number needed to fit the model Distance threshold t o Choose t so probability for inlier is p (e.g. 0.95) o Zero-mean Gaussian noise with std. dev. σ: t^2 = 3.84 σ^2 Number of samples N o Choose N so that, with probability p, at least one random sample is free from outliers (e.g. p = 0.99) (outlier ratio: e) Source: M. Pollefeys

Choosing the parameters [Table: number of samples N required for p = 0.99, as a function of the sample size s and the proportion of outliers e (5%, 10%, 20%, 25%, 30%, 40%, 50%); the table entries did not survive transcription] Source: M. Pollefeys
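The entries of the N-versus-(s, e) table follow from the relation (1 − (1 − e)^s)^N = 1 − p. A minimal sketch that regenerates them (assuming p = 0.99; the function name is illustrative):

```python
import math

def num_samples(s, e, p=0.99):
    """Number of RANSAC samples N such that, with probability p, at
    least one sample of size s is outlier-free, given outlier ratio e.

    A sample of s points is all-inlier with probability (1 - e)^s,
    so solve (1 - (1 - e)^s)^N = 1 - p for N and round up.
    """
    return math.ceil(math.log(1 - p) / math.log(1 - (1 - e) ** s))

# Regenerate the table: rows s = 2..8, columns e = 5%..50%
for s in range(2, 9):
    print(s, [num_samples(s, e) for e in (0.05, 0.1, 0.2, 0.25, 0.3, 0.4, 0.5)])
```

Note how quickly N grows with s, which is one reason to prefer the minimal model (see the Advice slide below).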

Adaptively determining the number of samples Inlier ratio e is often unknown a priori, so pick worst case, e.g. 50%, and adapt if more inliers are found, e.g. 80% inliers would yield e = 0.2 Adaptive procedure: o N = ∞, sample_count = 0 o While N > sample_count Choose a sample and count the number of inliers If inlier ratio is highest of any found so far, set e = 1 – (number of inliers)/(total number of points) Recompute N from e: N = log(1 − p) / log(1 − (1 − e)^s) Increment the sample_count by 1 Source: M. Pollefeys
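The recompute step can be sketched as a small helper (name illustrative), using the standard sample-count formula with the current best inlier count:

```python
import math

def update_N(n_inliers, n_points, s, p=0.99):
    """Recompute the required number of samples from the current best
    consensus set: e = 1 - inliers/total, then
    N = log(1 - p) / log(1 - (1 - e)^s), rounded up."""
    e = 1 - n_inliers / n_points
    if e == 0:
        return 1   # all points are inliers: one sample suffices
    return math.ceil(math.log(1 - p) / math.log(1 - (1 - e) ** s))

# Sketch of the adaptive loop (model fitting elided):
#   N, sample_count = inf, 0
#   while sample_count < N:
#       draw a sample, fit, count inliers
#       if best so far: N = update_N(best_inliers, n_points, s)
#       sample_count += 1
```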

RANSAC pros and cons Pros o Simple and general o Applicable to many different problems o Often works well in practice Cons o Lots of parameters to tune o Doesn’t work well for low inlier ratios (too many iterations, or can fail completely) o Can’t always get a good initialization of the model based on the minimum number of samples

Alignment as fitting Previous lectures: fitting a model to features in one image Find model M that minimizes Σ_i residual(x_i, M)

Alignment as fitting Previous lectures: fitting a model to features in one image: find model M that minimizes Σ_i residual(x_i, M) Alignment: fitting a model to a transformation between pairs of features (matches) in two images: find transformation T that minimizes Σ_i residual(T(x_i), x_i′)

Robust feature-based alignment Extract features Compute putative matches Loop: o Hypothesize transformation T

Robust feature-based alignment Extract features Compute putative matches Loop: o Hypothesize transformation T o Verify transformation (search for other matches consistent with T)

Robust feature-based alignment Extract features Compute putative matches Loop: o Hypothesize transformation T o Verify transformation (search for other matches consistent with T)

RANSAC The set of putative matches contains a very high percentage of outliers RANSAC loop: 1. Randomly select a seed group of matches 2. Compute transformation from seed group 3. Find inliers to this transformation 4. If the number of inliers is sufficiently large, re-compute least-squares estimate of transformation on all of the inliers Keep the transformation with the largest number of inliers
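A sketch of this loop for the simplest model, a 2D translation, where a single match (s = 1) fully determines the transformation (assuming NumPy; index-aligned putative matches, names illustrative):

```python
import numpy as np

def ransac_translation(pts1, pts2, n_iters=100, t=1.0, seed=0):
    """RANSAC with a translation model.

    pts1[i] <-> pts2[i] are index-aligned putative matches, many of
    which may be wrong. One seed match hypothesizes the translation;
    matches consistent with it (residual < t) are its inliers.
    """
    rng = np.random.default_rng(seed)
    best_count, best_inl = -1, None
    for _ in range(n_iters):
        i = rng.integers(len(pts1))
        trans = pts2[i] - pts1[i]                     # hypothesize
        resid = np.linalg.norm(pts1 + trans - pts2, axis=1)
        inliers = resid < t                           # verify
        if inliers.sum() > best_count:
            best_count, best_inl = int(inliers.sum()), inliers
    # Least-squares refit on all inliers: the mean displacement
    return (pts2[best_inl] - pts1[best_inl]).mean(axis=0), best_inl
```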

RANSAC example: Translation Putative matches

RANSAC example: Translation Select one match, count inliers

RANSAC example: Translation Select one match, count inliers

RANSAC example: Translation Select translation with the most inliers

Summary Detect feature points (to be covered later) Describe feature points (to be covered later) Match feature points (to be covered later) RANSAC o Transformation model (affine, homography, etc.)

Advice Use the minimum model necessary to define the transformation on your data o Affine in favor of homography o Homography in favor of 3D transform (not discussed yet) o Make selection based on your data Fewer RANSAC iterations Fewer degrees of freedom → less ambiguity or sensitivity to noise

Generating putative correspondences ?

Need to compare feature descriptors of local patches surrounding interest points [figure: are the two patches’ feature descriptors equal?]

Simplest descriptor: vector of raw intensity values How to compare two such vectors? o Sum of squared differences (SSD) Not invariant to intensity change o Normalized correlation Invariant to affine intensity change Feature descriptors
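A sketch of the two comparisons (assuming NumPy): SSD changes under an intensity change, while normalized correlation is invariant to affine intensity changes I → αI + β because it subtracts the mean and divides by the norm.

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences: not invariant to intensity change."""
    return np.sum((a - b) ** 2)

def ncc(a, b):
    """Normalized correlation: subtracting the mean cancels an offset
    beta, dividing by the norm cancels a gain alpha, so the score is
    invariant to affine intensity changes I -> alpha * I + beta."""
    a = a - a.mean()
    b = b - b.mean()
    return np.dot(a.ravel(), b.ravel()) / (np.linalg.norm(a) * np.linalg.norm(b))
```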

Disadvantage of intensity vectors as descriptors Small deformations can affect the matching score a lot

Feature descriptors: SIFT Descriptor computation: o Divide patch into 4x4 sub-patches o Compute histogram of gradient orientations (8 reference angles) inside each sub-patch o Resulting descriptor: 4x4x8 = 128 dimensions David G. Lowe. “Distinctive image features from scale-invariant keypoints.” IJCV 60 (2), 2004.

Feature descriptors: SIFT Descriptor computation: o Divide patch into 4x4 sub-patches o Compute histogram of gradient orientations (8 reference angles) inside each sub-patch o Resulting descriptor: 4x4x8 = 128 dimensions Advantage over raw vectors of pixel values o Gradients less sensitive to illumination change o Pooling of gradients over the sub-patches achieves robustness to small shifts, but still preserves some spatial information David G. Lowe. “Distinctive image features from scale-invariant keypoints.” IJCV 60 (2), 2004.
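A much-simplified sketch of this pooling scheme on a 16x16 patch (assuming NumPy; real SIFT additionally applies Gaussian weighting, trilinear interpolation, rotation to a dominant orientation, and a clipped renormalization):

```python
import numpy as np

def sift_like_descriptor(patch):
    """Simplified SIFT-style descriptor for a 16x16 patch:
    4x4 sub-patches x 8 orientation bins = 128 dimensions.
    Each bin accumulates gradient magnitude for gradients whose
    orientation falls in that bin. Not full SIFT: weighting,
    interpolation, and orientation normalization are omitted."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)           # [0, 2*pi)
    bins = (ang / (2 * np.pi) * 8).astype(int) % 8   # 8 reference angles
    desc = np.zeros((4, 4, 8))
    for i in range(4):           # 4x4 grid of 4x4-pixel sub-patches
        for j in range(4):
            sub_b = bins[4*i:4*i+4, 4*j:4*j+4]
            sub_m = mag[4*i:4*i+4, 4*j:4*j+4]
            for k in range(8):   # magnitude-weighted orientation histogram
                desc[i, j, k] = sub_m[sub_b == k].sum()
    desc = desc.ravel()
    n = np.linalg.norm(desc)
    return desc / n if n > 0 else desc
```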

Feature matching ? Generating putative matches: for each patch in one image, find a short list of patches in the other image that could match it based solely on appearance

Feature space outlier rejection How can we tell which putative matches are more reliable? Heuristic: compare distance of nearest neighbor to that of second nearest neighbor o Ratio of closest distance to second-closest distance will be high for features that are not distinctive o Threshold of 0.8 provides good separation David G. Lowe. “Distinctive image features from scale-invariant keypoints.” IJCV 60 (2), 2004.
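The ratio test can be sketched with brute-force distances (assuming NumPy; descriptor arrays are (n, d), names illustrative):

```python
import numpy as np

def ratio_test_matches(desc1, desc2, ratio=0.8):
    """Keep a putative match only if the distance to the nearest
    descriptor in the other image is less than `ratio` times the
    distance to the second nearest. Ambiguous (non-distinctive)
    features, whose two best matches are nearly as good, are
    rejected."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]    # nearest and second nearest
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, j1))
    return matches
```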