
1 Robust Estimation Course web page: vision.cis.udel.edu/~cv April 23, 2003  Lecture 25

2 Announcements Read Forsyth & Ponce Chapters 11-11.1, 12-12.1.1, and Hartley & Zisserman Chapters 11-11.2 on triangulation and structure computation. HW 4 is due Monday – Affine rectification: choose H_A carefully for texture mapping.

3 Outline RANSAC

4 The Problem with Outliers Least squares is a technique for fitting a model to data that exhibit a Gaussian error distribution When there are outliers—data points that are not drawn from the same distribution— the estimation result can be biased Line fitting using regression is biased by outliers from Hartley & Zisserman
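The bias described above is easy to reproduce. A minimal stdlib-only sketch (the data and the single gross outlier are invented for illustration): ten points lying exactly on y = 2x + 1 are recovered perfectly by least squares, but adding one point not drawn from the same distribution pulls the slope far from 2.

```python
def fit_line_lsq(pts):
    """Ordinary least squares for y = a*x + b via the normal equations."""
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

inliers = [(x, 2 * x + 1) for x in range(10)]   # exactly on y = 2x + 1
a, b = fit_line_lsq(inliers)                    # recovers a = 2, b = 1

# One gross outlier, not drawn from the same distribution, biases the fit.
a_biased, b_biased = fit_line_lsq(inliers + [(9, 100)])   # slope jumps to ~5.3
```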

5 Robust Estimation View estimation as a two-stage process: –Classify data points as outliers or inliers –Fit model to inliers

6 RANSAC (RANdom SAmple Consensus) 1. Randomly choose the minimal subset of data points necessary to fit the model (a sample). 2. Points within some distance threshold t of the model are a consensus set; the size of the consensus set is the model's support. 3. Repeat for N samples; the model with the biggest support is the most robust fit. – Points within t of the best model are inliers. – Fit the final model to all inliers. Two samples and their supports for line-fitting from Hartley & Zisserman
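The steps above can be sketched for the line-fitting case. This is an illustrative stdlib-only implementation, not code from Hartley & Zisserman; the sample points, threshold, and sample count are arbitrary choices for the demo.

```python
import math
import random

def ransac_line(points, t, n_samples, rng):
    """RANSAC for 2-D line fitting: draw minimal samples (2 points),
    score each candidate line by its support, and keep the best."""
    best_line, best_inliers = None, []
    for _ in range(n_samples):
        (x1, y1), (x2, y2) = rng.sample(points, 2)   # minimal sample
        a, b = y2 - y1, x1 - x2                      # normal of the line through the pair
        norm = math.hypot(a, b)
        if norm == 0:
            continue                                 # degenerate sample
        a, b = a / norm, b / norm
        c = -(a * x1 + b * y1)       # ax + by + c = 0; |ax + by + c| = perpendicular distance
        consensus = [(x, y) for x, y in points if abs(a * x + b * y + c) <= t]
        if len(consensus) > len(best_inliers):       # support = consensus-set size
            best_line, best_inliers = (a, b, c), consensus
    return best_line, best_inliers

pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40), (7, -20)]  # 10 inliers, 2 outliers
line, inliers = ransac_line(pts, t=0.5, n_samples=50, rng=random.Random(0))
```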

7 RANSAC: Picking the Distance Threshold t Usually chosen empirically. But when the measurement error is known to be Gaussian with mean μ and variance σ², the sum of squared errors follows a χ² distribution with m DOF, where m is the DOF of the error measure (the codimension): m = 1 for line fitting because the error is a perpendicular distance; m = 2 for point distance. Examples for probability α = 0.95 that a point is an inlier:
m = 1 (line, fundamental matrix): t² = 3.84 σ²
m = 2 (homography, camera matrix): t² = 5.99 σ²
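The t² values above are just χ² quantiles. A small sketch (the helper name `chi2_threshold` is made up) that handles only the two codimensions used on this slide, inverting the corresponding χ² CDFs with the standard library:

```python
import math

def chi2_threshold(m, alpha=0.95):
    """Return t^2 / sigma^2, the chi-square quantile with m DOF: a true
    inlier's squared error falls below t^2 with probability alpha.
    Only the two codimensions used here (m = 1 and m = 2) are handled."""
    if m == 2:
        return -2.0 * math.log(1.0 - alpha)     # CDF is 1 - exp(-x/2); invert directly
    if m == 1:
        lo, hi = 0.0, 100.0                     # CDF is erf(sqrt(x/2)); invert by bisection
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if math.erf(math.sqrt(mid / 2.0)) < alpha:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)
    raise ValueError("only m = 1 or m = 2 in this sketch")
```

This reproduces the slide's table entries: roughly 3.84 for m = 1 and 5.99 for m = 2.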

8 RANSAC: How many samples? Using all possible samples is often infeasible. Instead, pick N to ensure a probability p that at least one sample (containing s points) is all inliers: N = log(1 − p) / log(1 − (1 − ε)^s), where ε is the probability that a point is an outlier. Typically p = 0.99.
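The formula can be checked directly with a one-line helper (the name `num_samples` is made up for this sketch):

```python
import math

def num_samples(s, eps, p=0.99):
    """N = log(1 - p) / log(1 - (1 - eps)^s): number of samples needed so
    that at least one size-s sample is outlier-free with probability p,
    when each point is an outlier with probability eps."""
    return math.ceil(math.log(1.0 - p) / math.log(1.0 - (1.0 - eps) ** s))

num_samples(s=2, eps=0.20)   # 5 samples for line fitting with 20% outliers
```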

9 RANSAC: Computed N (p = 0.99)
Sample size s   Proportion of outliers ε
                5%    10%    20%    25%    30%    40%    50%
2                2      3      5      6      7     11     17
3                3      4      7      9     11     19     35
4                3      5      9     13     17     34     72
5                4      6     12     17     26     57    146
6                4      7     16     24     37     97    293
7                4      8     20     33     54    163    588
8                5      9     26     44     78    272   1177
adapted from Hartley & Zisserman

10 Example: N for the line-fitting problem n = 12 points. Minimal sample size s = 2. 2 outliers ⇒ ε = 2/12 = 1/6 ≈ 17%, so use the 20% column. So N = 5 gives us a 99% chance of getting a pure-inlier sample – compared to N = C(12,2) = 66 by trying every pair of points. from Hartley & Zisserman

11 RANSAC: Determining N adaptively If the outlier fraction ε is not known initially, it can be estimated iteratively: 1. Set N = ∞ and the outlier fraction to the worst case – e.g., ε = 0.5. 2. For every sample, count the number of inliers (the support). 3. If this implies a lower outlier fraction than the previous estimate, update ε = 1 − (number of inliers) / (total number of points) and set the new value of N using the formula. 4. If the number of samples checked so far exceeds the current N, stop.
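A sketch of the adaptive loop, with the `fit` and `count_inliers` callbacks supplied by the caller; the demo data (points on y = x plus two outliers) and all names are illustrative assumptions, not from the slides.

```python
import math
import random

def adaptive_ransac(points, s, fit, count_inliers, p=0.99, rng=random):
    """Adaptive RANSAC: start from the worst-case outlier fraction and
    lower the required sample count N whenever a sample reveals a larger
    inlier set.  `fit` maps a minimal s-point sample to a model;
    `count_inliers` maps a model to its support."""
    eps = 0.5                                        # worst-case initial guess
    n_required = math.log(1 - p) / math.log(1 - (1 - eps) ** s)
    n_done = 0
    best_model, best_support = None, -1
    while n_done < n_required:
        model = fit(rng.sample(points, s))
        support = count_inliers(model)
        if support > best_support:
            best_model, best_support = model, support
        new_eps = 1.0 - support / len(points)
        if new_eps < eps:                            # only ever lower the estimate
            eps = new_eps
            n_required = (1 if eps == 0.0 else       # all inliers: one sample suffices
                          math.log(1 - p) / math.log(1 - (1 - eps) ** s))
        n_done += 1
    return best_model, best_support

pts = [(x, x) for x in range(20)] + [(5, 90), (6, -90)]   # line y = x plus 2 outliers

def fit(sample):
    (x1, y1), (x2, y2) = sample
    a, b = y2 - y1, x1 - x2
    n = math.hypot(a, b)
    return a / n, b / n, -(a * x1 + b * y1) / n

def count_inliers(model):
    a, b, c = model
    return sum(abs(a * x + b * y + c) <= 0.5 for x, y in pts)

model, support = adaptive_ransac(pts, 2, fit, count_inliers, rng=random.Random(1))
```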

12 After RANSAC RANSAC divides the data into inliers and outliers and yields an estimate computed from the minimal inlier sample with the greatest support. Improve this initial estimate with ML estimation over all inliers (i.e., standard minimization). But this may change the inlier set, so alternate model fitting with re-classification as inlier/outlier until the set stabilizes. from Hartley & Zisserman
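The alternation can be sketched as a small fixed-point loop. The line model, vertical-error distance, and demo data below are illustrative assumptions, not from the slides.

```python
def refine_inliers(points, inliers, fit, dist, t, max_iters=20):
    """Alternate (a) least-squares fitting to the current inliers and
    (b) re-classifying every point against the new model, until the
    inlier set stops changing."""
    inliers = set(inliers)
    model = fit(inliers)
    for _ in range(max_iters):
        new_inliers = {p for p in points if dist(model, p) <= t}
        if new_inliers == inliers:
            break
        inliers = new_inliers
        model = fit(inliers)
    return model, inliers

def fit_line(pts):
    """Ordinary least squares for y = a*x + b (vertical-error model)."""
    pts = list(pts); n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

points = [(x, 2 * x + 1) for x in range(10)] + [(0, 50)]    # one outlier
model, final_inliers = refine_inliers(
    points, points[:5], fit_line,                           # start from a partial inlier set
    dist=lambda m, p: abs(p[1] - (m[0] * p[0] + m[1])), t=0.5)
```

Starting from only five inliers, the refit line picks up all ten points on y = 2x + 1 while the outlier stays excluded.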

13 Automatic Fundamental Matrix F Estimation How to get correct correspondences without human intervention? from Hartley & Zisserman

14 Automatic F Estimation: Feature Extraction Find features in the pair of images using corner detection – e.g., where the minimum eigenvalue of the 2 x 2 gradient second-moment (autocorrelation) matrix is over a threshold. from Hartley & Zisserman

15 Automatic F Estimation: Finding Feature Matches Best match over threshold within square search window (here ±300 pixels) using SSD or normalized cross-correlation from Hartley & Zisserman

16 Automatic F Estimation: Finding Feature Matches Best match over threshold within square search window (here ±300 pixels) using SSD or normalized cross-correlation from Hartley & Zisserman
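The two similarity scores can be sketched on hypothetical 1-D patches. Note how normalized cross-correlation, unlike SSD, is invariant to an affine (gain + offset) change in brightness:

```python
def ssd(a, b):
    """Sum of squared differences between two equal-length patches."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def ncc(a, b):
    """Normalized cross-correlation: subtract each patch's mean and divide
    by the magnitudes, so the score ignores affine brightness changes."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    da = [x - ma for x in a]
    db = [y - mb for y in b]
    denom = (sum(x * x for x in da) * sum(y * y for y in db)) ** 0.5
    return sum(x * y for x, y in zip(da, db)) / denom

patch = [1.0, 2.0, 3.0, 4.0]            # hypothetical 1-D image patch
brighter = [2 * v + 5 for v in patch]   # same pattern, different lighting
```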

17 Automatic F Estimation: Initial Match Hypotheses 188 matched features in left image pointing to locations of corresponding right image features from Hartley & Zisserman

18 Automatic F Estimation: Applying RANSAC Sampling – Size: Recall that the fundamental matrix has 7 DOF, so s = 7 (9 entries in the 3 x 3 matrix, minus 1 for homogeneous scaling, minus 1 for the rank-2 constraint). – Choice: Disregard degenerate configurations; ensure the points have a good spatial distribution over the image. Distance measure – The obvious choice is the symmetric epipolar distance already defined. – A better choice is the reprojection error or an approximation of it, which involves simultaneous estimation of F and the 3-D point locations.
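The "obvious choice" above can be sketched from its definition, d² = (x'ᵀFx)² [1/((Fx)₁² + (Fx)₂²) + 1/((Fᵀx')₁² + (Fᵀx')₂²)]. The fundamental matrix used in the example below (a pure horizontal translation, for which corresponding points share a scanline) is an assumed illustration:

```python
def mat_vec(F, x):
    """3x3 matrix times homogeneous 3-vector."""
    return [sum(F[i][j] * x[j] for j in range(3)) for i in range(3)]

def sym_epipolar_dist2(F, x, xp):
    """Squared symmetric epipolar distance for a correspondence x <-> x'
    under fundamental matrix F: the residual x'^T F x weighted by the
    normals of the epipolar lines l' = F x and l = F^T x'."""
    Fx = mat_vec(F, x)                                # epipolar line in image 2
    Ft = [[F[i][j] for i in range(3)] for j in range(3)]
    Ftxp = mat_vec(Ft, xp)                            # epipolar line in image 1
    e = sum(xp[i] * Fx[i] for i in range(3))          # epipolar residual x'^T F x
    return e * e * (1.0 / (Fx[0] ** 2 + Fx[1] ** 2)
                    + 1.0 / (Ftxp[0] ** 2 + Ftxp[1] ** 2))

# Assumed example: F for a pure translation along x, so corresponding
# points lie on the same scanline (same y coordinate).
F = [[0, 0, 0], [0, 0, -1], [0, 1, 0]]
```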

19 Automatic F Estimation: Outliers & Inliers after RANSAC 407 samples used with t = 1.25 pixels – RMS pixel error with the estimated F was 0.34. 89 outliers (ε = 0.47). 99 inliers. from Hartley & Zisserman

