776 Computer Vision Jan-Michael Frahm, Enrique Dunn Spring 2013

Dolly Zoom

Sample Dolly Zoom Video

Radial Distortion

Brown's distortion model
o accounts for radial distortion
o accounts for tangential distortion (distortion caused by lens placement errors)
Typically only K1 is used, or K1, K2, K3, P1, P2
(xu, yu): undistorted image point, as in an ideal pinhole camera
(xd, yd): distorted image point of a camera with radial distortion
(xc, yc): distortion center
Kn: n-th radial distortion coefficient
Pn: n-th tangential distortion coefficient
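
The following is a minimal Python sketch (mine, not from the slides) of applying Brown's model in the forward direction; the function name brown_distort, the coordinate convention, and the argument defaults are illustrative assumptions.

```python
import numpy as np

def brown_distort(xu, yu, xc=0.0, yc=0.0, K=(0.0, 0.0, 0.0), P=(0.0, 0.0)):
    """Map an undistorted (ideal pinhole) point (xu, yu) to its distorted
    position (xd, yd) with Brown's model. K = (K1, K2, K3) are the radial
    and P = (P1, P2) the tangential coefficients; (xc, yc) is the
    distortion center."""
    x, y = xu - xc, yu - yc
    r2 = x * x + y * y
    # Radial part: 1 + K1 r^2 + K2 r^4 + K3 r^6
    radial = 1.0 + K[0] * r2 + K[1] * r2**2 + K[2] * r2**3
    # Tangential part models lens-placement (decentering) errors
    dx = 2.0 * P[0] * x * y + P[1] * (r2 + 2.0 * x * x)
    dy = P[0] * (r2 + 2.0 * y * y) + 2.0 * P[1] * x * y
    return xc + x * radial + dx, yc + y * radial + dy
```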

Last Class
There are two principled ways of finding correspondences:
1. Matching
o Independent detection of features in each frame
o Correspondence search over detected features
+ Extends to large transformations between images
– High error rate for correspondences
2. Tracking
o Detect features in one frame
o Retrieve the same features in the next frame by searching for the equivalent feature (tracking)
+ Very precise correspondences
+ High percentage of correct correspondences
– Only works for small changes between frames

RANSAC
Robust fitting can deal with a few outliers – what if we have very many?
Random sample consensus (RANSAC): a very general framework for model fitting in the presence of outliers
Outline:
o Choose a small subset of points uniformly at random
o Fit a model to that subset
o Find all remaining points that are "close" to the model and reject the rest as outliers
o Do this many times and choose the best model
M. A. Fischler and R. C. Bolles. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Comm. of the ACM, Vol. 24, No. 6, pp. 381–395, 1981.

RANSAC for line fitting example

Least-squares fit

RANSAC for line fitting example 1. Randomly select minimal subset of points

RANSAC for line fitting example 1. Randomly select minimal subset of points 2. Hypothesize a model

RANSAC for line fitting example 1. Randomly select minimal subset of points 2. Hypothesize a model 3. Compute error function (Source: R. Raguram)

RANSAC for line fitting example 1. Randomly select minimal subset of points 2. Hypothesize a model 3. Compute error function 4. Select points consistent with model

RANSAC for line fitting example 1. Randomly select minimal subset of points 2. Hypothesize a model 3. Compute error function 4. Select points consistent with model 5. Repeat hypothesize-and-verify loop

RANSAC for line fitting example 1. Randomly select minimal subset of points 2. Hypothesize a model 3. Compute error function 4. Select points consistent with model 5. Repeat hypothesize-and-verify loop

RANSAC for line fitting example 1. Randomly select minimal subset of points 2. Hypothesize a model 3. Compute error function 4. Select points consistent with model 5. Repeat hypothesize-and-verify loop (uncontaminated sample)

RANSAC for line fitting example 1. Randomly select minimal subset of points 2. Hypothesize a model 3. Compute error function 4. Select points consistent with model 5. Repeat hypothesize-and-verify loop

RANSAC for line fitting
Repeat N times:
o Draw s points uniformly at random
o Fit a line to these s points
o Find inliers to this line among the remaining points (i.e., points whose distance from the line is less than t)
o If there are d or more inliers, accept the line and refit using all inliers
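
A compact NumPy sketch of this loop (an illustration under my own naming and defaults, not course code); s = 2 for a line, and the final refit is a total-least-squares fit via SVD over all inliers.

```python
import numpy as np

def ransac_line(points, n_iters=1000, t=1.0, d=50, seed=None):
    """RANSAC for a line a*x + b*y + c = 0 with unit normal (a, b).
    points: (N, 2) array; n_iters, t, d play the roles of N, t, d above."""
    rng = np.random.default_rng(seed)
    best_line, best_inliers = None, np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        # Draw a minimal sample: s = 2 points define a line
        p, q = points[rng.choice(len(points), size=2, replace=False)]
        direction = q - p
        norm = np.hypot(direction[0], direction[1])
        if norm == 0:
            continue  # degenerate sample; redraw
        a, b = -direction[1] / norm, direction[0] / norm
        c = -(a * p[0] + b * p[1])
        # Inliers: points whose distance to the line is below t
        inliers = np.abs(points @ np.array([a, b]) + c) < t
        if inliers.sum() >= d and inliers.sum() > best_inliers.sum():
            best_line, best_inliers = (a, b, c), inliers
    if best_line is not None:
        # Refit using all inliers (total least squares via SVD)
        pts = points[best_inliers]
        centroid = pts.mean(axis=0)
        a, b = np.linalg.svd(pts - centroid)[2][-1]  # line normal
        best_line = (a, b, -(a * centroid[0] + b * centroid[1]))
    return best_line, best_inliers
```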

Choosing the parameters
Initial number of points s
o Typically the minimum number needed to fit the model
Distance threshold t
o Choose t so the probability for an inlier is p (e.g. 0.95)
o Zero-mean Gaussian noise with std. dev. σ: t² = 3.84 σ²
Number of samples N
o Choose N so that, with probability p, at least one random sample is free from outliers (e.g. p = 0.99; outlier ratio: e):
  N = log(1 – p) / log(1 – (1 – e)^s)

Choosing the parameters
Initial number of points s
o Typically the minimum number needed to fit the model
Distance threshold t
o Choose t so the probability for an inlier is p (e.g. 0.95)
o Zero-mean Gaussian noise with std. dev. σ: t² = 3.84 σ²
Number of samples N
o Choose N so that, with probability p, at least one random sample is free from outliers (e.g. p = 0.99; outlier ratio: e)
N for p = 0.99, by sample size s and proportion of outliers e:

s \ e    5%   10%   20%   25%   30%   40%   50%
2         2     3     5     6     7    11    17
3         3     4     7     9    11    19    35
4         3     5     9    13    17    34    72
5         4     6    12    17    26    57   146
6         4     7    16    24    37    97   293
7         4     8    20    33    54   163   588
8         5     9    26    44    78   272  1177

Choosing the parameters
Initial number of points s
o Typically the minimum number needed to fit the model
Distance threshold t
o Choose t so the probability for an inlier is p (e.g. 0.95)
o Zero-mean Gaussian noise with std. dev. σ: t² = 3.84 σ²
Number of samples N
o Choose N so that, with probability p, at least one random sample is free from outliers (e.g. p = 0.99; outlier ratio: e)
Consensus set size d
o Should match the expected inlier ratio

Adaptively determining the number of samples
The outlier ratio e is often unknown a priori, so pick a pessimistic worst case (e.g. only 10% inliers) and adapt as more inliers are found (e.g. an 80% inlier fraction would yield e = 0.2)
Adaptive procedure:
o N = ∞, sample_count = 0
o While N > sample_count:
  - Choose a sample and count the number of inliers
  - Set e = 1 – (number of inliers) / (total number of points)
  - Recompute N from e: N = log(1 – p) / log(1 – (1 – e)^s)
  - Increment sample_count by 1
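
In code, the recomputation of N is essentially one line; here is a hedged helper (my naming, not course code) implementing the slide's formula with guards for the degenerate cases.

```python
import math

def adaptive_num_samples(n_inliers, n_points, s, p=0.99):
    """N = log(1 - p) / log(1 - (1 - e)^s), with e = 1 - inliers/points."""
    e = 1.0 - n_inliers / n_points
    contaminated = 1.0 - (1.0 - e) ** s  # P(minimal sample has an outlier)
    if contaminated <= 0.0:
        return 1                  # no outliers: a single sample suffices
    if contaminated >= 1.0:
        return float("inf")       # no inliers seen yet: keep sampling
    return math.ceil(math.log(1.0 - p) / math.log(contaminated))
```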

RANSAC pros and cons
Pros:
o Simple and general
o Applicable to many different problems
o Often works well in practice
Cons:
o Lots of parameters to tune
o Doesn't work well for low inlier ratios (too many iterations, or can fail completely)
o Can't always get a good initialization of the model based on the minimum number of samples

USAC: A Universal Framework for Random Sample Consensus (Raguram et al., PAMI 2013)

Threshold-free Robust Estimation
Robust estimation: estimation of model parameters in the presence of noise and outliers
Common approach (e.g., homography):
1. Match points
2. Run RANSAC, threshold ≈ pixels

Motivation
For a 3D similarity transformation:
1. Match points
2. Run RANSAC, threshold = ?

Motivation
Robust estimation algorithms often require data- and/or model-specific threshold settings
o Performance degrades when parameter settings deviate from the "true" values (Torr and Zisserman 2000; Choi and Medioni 2009)
[Figure: example fits at different thresholds, e.g. t = 0.01 vs. t = 5.0]

Contributions
Threshold-free robust estimation:
o No user-supplied inlier threshold
o No assumption of inlier ratio
o Simple, efficient algorithm

RECON
Instead of per-model or point-based residual analysis, we inspect pairs of models
Sort points by distance to the model
[Figure: sorted residuals for "inlier" models, with inliers and outliers marked]

RECON
Instead of per-model or point-based residual analysis, we inspect pairs of models
Sort points by distance to the model
[Figure: sorted residuals for "outlier" models, with inliers and outliers marked]

RECON
Instead of per-model or point-based residual analysis, we inspect pairs of models
o Inlier models show "consistent" behaviour
o Outlier models look like random permutations of the data points
"Happy families are all alike; every unhappy family is unhappy in its own way" – Leo Tolstoy, Anna Karenina

RECON: How can we characterize this behaviour?
Assuming that the measurement error is Gaussian, the inlier residuals follow a known distribution (determined by the noise scale σ)
If σ is known, we can compute a threshold t that finds some fraction α of all inliers (α is typically set to 0.95 or 0.99)
At the "true" threshold, inlier models capture a stable set of points, so their inlier sets have large overlap

RECON: How can we characterize this behaviour?
What if σ is unknown?
o Hypothesize values of the noise scale σ, giving candidate thresholds t_1, t_2, ..., t_T
o For each hypothesized threshold t, compute θ_t, the fraction of points common to a pair of models

RECON: At the "true" value of the threshold
1. A fraction ≈ α² of the inliers will be common to a pair of inlier models
2. The inlier residuals correspond to the same underlying distribution

RECON: Outlier models
o Overlap is low at the true threshold
o Overlap can be high for a large overestimate of the noise scale, but then the residuals are unlikely to correspond to the same underlying distribution

Residual Consensus: Algorithm (similar to RANSAC)
1. Hypothesis generation: generate a model M_i from a random minimal sample
2. Model evaluation: compute and store the residuals of M_i to all data points
3. Test α-consistency: for a pair of models M_i and M_j,
o check whether the overlap ≈ α² for some threshold t
o check whether the residuals come from the same distribution (two-sample Kolmogorov–Smirnov test; reject the null hypothesis at level β)
Efficient implementation: use the residual ordering – finding consistent pairs of models takes a single pass over the sorted residuals
Repeat steps 1–3 until K pairwise-consistent models are found
4. Refine solution: recompute the final model and inliers
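
A rough sketch of the pairwise α-consistency test, assuming residuals of the two models have already been computed; the function name, the overlap definition, and the use of scipy.stats.ks_2samp are my assumptions, not the paper's released code.

```python
import numpy as np
from scipy.stats import ks_2samp

def alpha_consistent(res_i, res_j, t, alpha=0.95, beta=0.05):
    """Check the two RECON criteria for models M_i, M_j at threshold t."""
    in_i, in_j = res_i < t, res_j < t
    if in_i.sum() == 0 or in_j.sum() == 0:
        return False
    # 1. Overlap: at the true threshold, ~alpha^2 of the inliers are common
    overlap = (in_i & in_j).sum() / min(in_i.sum(), in_j.sum())
    if overlap < alpha ** 2:
        return False
    # 2. Same underlying distribution: two-sample Kolmogorov-Smirnov test;
    #    keep the pair only if we fail to reject at level beta
    _, p_value = ks_2samp(res_i[in_i], res_j[in_j])
    return p_value > beta
```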

Experiments: real data

RECON: Conclusion
RECON is:
o Flexible: no data- or model-dependent threshold settings (inlier threshold, inlier ratio)
o Accurate: achieves results of the same quality as RANSAC without a priori knowledge of the noise parameters
o Efficient: adapts to the inlier ratio in the data
o Extensible: can be combined with other RANSAC optimizations (non-uniform sampling, degeneracy handling, etc.)
o Code:

Fitting: The Hough transform

Voting schemes
o Let each feature vote for all the models that are compatible with it
o Hopefully the noise features will not vote consistently for any single model
o Missing data doesn't matter, as long as there are enough features remaining to agree on a good model

Hough transform
An early type of voting scheme
General outline:
o Discretize the parameter space into bins
o For each feature point in the image, put a vote in every bin in the parameter space that could have generated this point
o Find the bins that have the most votes
P.V.C. Hough, Machine Analysis of Bubble Chamber Pictures, Proc. Int. Conf. High Energy Accelerators and Instrumentation, 1959
[Figure: image space vs. Hough parameter space]

Parameter space representation
A line in the image corresponds to a point in Hough space
[Figure: image space vs. Hough parameter space]

Parameter space representation
What does a point (x0, y0) in the image space map to in the Hough space?
[Figure: image space vs. Hough parameter space]

Parameter space representation
What does a point (x0, y0) in the image space map to in the Hough space?
o Answer: the solutions of b = –x0 m + y0
o This is a line in Hough space
[Figure: image space vs. Hough parameter space]

Parameter space representation
Where is the line that contains both (x0, y0) and (x1, y1)?
[Figure: points (x0, y0) and (x1, y1) in image space; line b = –x1 m + y1 in Hough space]
Slide credit: S. Lazebnik

Parameter space representation
Where is the line that contains both (x0, y0) and (x1, y1)?
o It is the intersection of the lines b = –x0 m + y0 and b = –x1 m + y1
[Figure: image space vs. Hough parameter space]

Parameter space representation
Problems with the (m, b) space:
o Unbounded parameter domain
o Vertical lines require infinite m

Parameter space representation
Problems with the (m, b) space:
o Unbounded parameter domain
o Vertical lines require infinite m
Alternative: polar representation, x cos θ + y sin θ = ρ
Each point will add a sinusoid in the (θ, ρ) parameter space

Algorithm outline
Initialize accumulator H to all zeros
For each edge point (x, y) in the image
  For θ = 0 to 180
    ρ = x cos θ + y sin θ
    H(θ, ρ) = H(θ, ρ) + 1
  end
end
Find the value(s) of (θ, ρ) where H(θ, ρ) is a local maximum
o The detected line in the image is given by ρ = x cos θ + y sin θ
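
The accumulator loop above translates almost directly to NumPy; this sketch (my own, with illustrative bin resolutions) vectorizes the inner θ loop so each edge point casts one vote per θ bin.

```python
import numpy as np

def hough_lines(edge_points, img_shape, theta_res=1.0, rho_res=1.0):
    """Vote in (theta, rho) space; returns the accumulator and bin centers."""
    h, w = img_shape
    rho_max = np.hypot(h, w)                       # largest possible |rho|
    thetas = np.deg2rad(np.arange(0.0, 180.0, theta_res))
    rhos = np.arange(-rho_max, rho_max, rho_res)
    H = np.zeros((len(thetas), len(rhos)), dtype=np.int64)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in edge_points:
        rho = x * cos_t + y * sin_t                # one sinusoid per point
        bins = np.round((rho + rho_max) / rho_res).astype(int)
        bins = np.clip(bins, 0, len(rhos) - 1)
        H[np.arange(len(thetas)), bins] += 1       # one vote per theta bin
    return H, thetas, rhos
```

Local maxima of H then give the detected lines ρ = x cos θ + y sin θ.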

Basic illustration
[Figure: features (left) and votes (right)]
Slide credit: S. Lazebnik

Other shapes: square

Other shapes: circle

Several lines Slide credit: S. Lazebnik

A more complicated image

Effect of noise
[Figure: features (left) and votes (right)]
Slide credit: S. Lazebnik

Effect of noise: the peak gets fuzzy and hard to locate
[Figure: features (left) and votes (right)]
Slide credit: S. Lazebnik

Effect of noise
Number of votes for a line of 20 points with increasing noise:
[Plot: peak vote count vs. noise level]
Slide credit: S. Lazebnik

Random points
Uniform noise can lead to spurious peaks in the array
[Figure: features (left) and votes (right)]
Slide credit: S. Lazebnik

Random points
As the level of uniform noise increases, the maximum number of votes increases too:
[Plot: maximum vote count vs. noise level]
Slide credit: S. Lazebnik

Dealing with noise
Choose a good grid / discretization
o Too coarse: large vote counts obtained when too many different lines correspond to a single bucket
o Too fine: miss lines because points that are not exactly collinear cast votes for different buckets
Increment neighboring bins (smoothing in the accumulator array)
Try to get rid of irrelevant features
o Take only edge points with significant gradient magnitude
Slide credit: S. Lazebnik

Incorporating image gradients
Recall: when we detect an edge point, we also know its gradient direction
But this means that the line is uniquely determined!
Modified Hough transform:
For each edge point (x, y)
  θ = gradient orientation at (x, y)
  ρ = x cos θ + y sin θ
  H(θ, ρ) = H(θ, ρ) + 1
end
Slide credit: S. Lazebnik
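
A hedged sketch of this variant (the names and binning scheme are mine): each edge point now casts a single vote, with θ taken from its gradient orientation.

```python
import numpy as np

def hough_lines_gradient(edge_points, grad_angles, img_shape,
                         theta_res=1.0, rho_res=1.0):
    """One vote per edge point; grad_angles are gradient orientations (rad)."""
    h, w = img_shape
    rho_max = np.hypot(h, w)
    n_theta = int(round(180.0 / theta_res))
    n_rho = int(np.ceil(2.0 * rho_max / rho_res))
    H = np.zeros((n_theta, n_rho), dtype=np.int64)
    for (x, y), theta in zip(edge_points, grad_angles):
        theta = theta % np.pi                       # fold into [0, pi)
        rho = x * np.cos(theta) + y * np.sin(theta)
        t_bin = min(int(theta / np.deg2rad(theta_res)), n_theta - 1)
        r_bin = min(int((rho + rho_max) / rho_res), n_rho - 1)
        H[t_bin, r_bin] += 1                        # single vote per point
    return H
```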

Hough transform for circles How many dimensions will the parameter space have? Given an oriented edge point, what are all possible bins that it can vote for? Slide credit: S. Lazebnik

Hough transform for circles
[Figure: image space (x, y) and Hough parameter space (x, y, r)]
Slide credit: S. Lazebnik

Hough transform for circles
Conceptually equivalent procedure: for each (x, y, r), draw the corresponding circle in the image and compute its "support"
Is this more or less efficient than voting with features?
Slide credit: S. Lazebnik
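
For intuition, here is a minimal sketch (my own simplification) of circle voting when the radius r is known: an oriented edge point votes only for the two candidate centers along its gradient direction, instead of a full circle of centers.

```python
import numpy as np

def hough_circles_fixed_r(edge_points, grad_angles, img_shape, r):
    """Accumulate votes for circle centers (a, b) at a fixed radius r."""
    h, w = img_shape
    H = np.zeros((h, w), dtype=np.int64)
    for (x, y), theta in zip(edge_points, grad_angles):
        for sign in (1, -1):                       # center on either side
            a = int(round(x + sign * r * np.cos(theta)))
            b = int(round(y + sign * r * np.sin(theta)))
            if 0 <= a < w and 0 <= b < h:
                H[b, a] += 1
    return H
```

With an unknown radius, the same loop runs over hypothesized r values, which is exactly why the parameter space grows to three dimensions.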

Hough transform: Discussion
Pros:
o Can deal with non-locality and occlusion
o Can detect multiple instances of a model
o Some robustness to noise: noise points are unlikely to contribute consistently to any single bin
Cons:
o Search time complexity increases exponentially with the number of model parameters
o Non-target shapes can produce spurious peaks in parameter space
o It's hard to pick a good grid size
Hough transform vs. RANSAC
Slide credit: S. Lazebnik