
1 Computing F Class 8 Read notes Section 4.2

2 Epipolar geometry
(Figure: camera centres C1, C2, scene point M with images m1, m2 along rays L1, L2, epipolar lines l1, l2, epipoles e1, e2.)
Fundamental matrix (3x3 rank-2 matrix):
1. Computable from corresponding points
2. Simplifies matching
3. Allows detection of wrong matches
4. Related to calibration
The underlying structure in a set of matches for rigid scenes: l2 = F m1 and m2^T F m1 = 0.
Canonical representation: P1 = [I | 0], P2 = [[e2]_x F | e2].

3 Other entities besides points?
Lines give no constraint for two-view geometry (but will for three and more views).
Curves and surfaces yield some constraints related to tangency (e.g. Sinha et al., CVPR'04).

4 The projective reconstruction theorem
If a set of point correspondences in two views determines the fundamental matrix uniquely, then the scene and cameras may be reconstructed from these correspondences alone, and any two such reconstructions are projectively equivalent.
This allows reconstruction from a pair of uncalibrated images!

5 Today's menu
Computation of F:
- Linear (8-point)
- Minimal (7-point)
- Robust (RANSAC)
- Non-linear refinement (MLE, ...)
Practical approach

6 Epipolar geometry: basic equation
m2^T F m1 = 0
Separate known from unknown: with m1 = (x1, y1, 1) and m2 = (x2, y2, 1),
[x2 x1, x2 y1, x2, y2 x1, y2 y1, y2, x1, y1, 1] (data) · [F11, F12, F13, F21, F22, F23, F31, F32, F33]^T (unknowns) = 0 (linear)
Stacking one such row per correspondence gives the homogeneous linear system A f = 0.
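A minimal numpy sketch of this step (the function names are mine, not from the notes): each correspondence contributes one row of the data matrix, and the linear least-squares estimate of f is the right singular vector of A with the smallest singular value.

import numpy as np

def epipolar_constraint_matrix(m1, m2):
    """One row per correspondence, multiplying f = (F11, ..., F33)^T.
    m1, m2: (n, 2) arrays of matching points so that m2_i^T F m1_i = 0."""
    x1, y1 = m1[:, 0], m1[:, 1]
    x2, y2 = m2[:, 0], m2[:, 1]
    return np.column_stack([x2 * x1, x2 * y1, x2,
                            y2 * x1, y2 * y1, y2,
                            x1, y1, np.ones(len(m1))])

def linear_f(m1, m2):
    """Least-squares solution of A f = 0 (needs >= 8 correspondences)."""
    _, _, Vt = np.linalg.svd(epipolar_constraint_matrix(m1, m2))
    return Vt[-1].reshape(3, 3)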

7 The singularity constraint
Take the SVD of the linearly computed F (in general rank 3): F = U diag(s1, s2, s3) V^T.
Compute the closest rank-2 approximation (in the Frobenius norm): F' = U diag(s1, s2, 0) V^T.
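A short follow-up to the sketch above (again just an illustration): project the linear estimate onto the rank-2 matrices by zeroing the smallest singular value.

import numpy as np

def enforce_rank2(F):
    """Closest rank-2 approximation in the Frobenius norm."""
    U, s, Vt = np.linalg.svd(F)
    s[2] = 0.0
    return U @ np.diag(s) @ Vt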

8

9 The minimum case: 7 point correspondences
The 7x9 system has a 2-dimensional null space, giving a one-parameter family of solutions lam F1 + (1 - lam) F2, but such a combination is not automatically rank 2.

10 The minimum case: impose rank 2
det(lam F1 + (1 - lam) F2) = 0 is a cubic equation in lam.
Compute the possible lam as the roots of this cubic (equivalently, when F2 is invertible, from the eigenvalues of -F2^(-1) F1); only real solutions are potential solutions.
From 7 points one obtains 1 or 3 solutions for F.
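A sketch of the 7-point computation under the same assumptions as the earlier snippets (it reuses epipolar_constraint_matrix from the slide-6 sketch); the cubic is recovered numerically by fitting det(lam F1 + (1 - lam) F2) at four sample values of lam.

import numpy as np

def seven_point(m1, m2):
    """7-point algorithm sketch: the 7x9 system has a 2-dimensional null space
    spanned by F1, F2; det(lam*F1 + (1-lam)*F2) = 0 is a cubic in lam and
    yields 1 or 3 real solutions."""
    _, _, Vt = np.linalg.svd(epipolar_constraint_matrix(m1, m2))
    F1, F2 = Vt[-1].reshape(3, 3), Vt[-2].reshape(3, 3)
    lams = np.array([0.0, 1.0, 2.0, 3.0])
    vals = [np.linalg.det(t * F1 + (1 - t) * F2) for t in lams]
    coeffs = np.polyfit(lams, vals, 3)            # exact fit through 4 samples
    solutions = []
    for root in np.roots(coeffs):
        if abs(root.imag) < 1e-9:                 # only real roots are valid
            lam = root.real
            solutions.append(lam * F1 + (1 - lam) * F2)
    return solutions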

11 The NOT normalized 8-point algorithm
With raw pixel coordinates, the columns of the data matrix differ by orders of magnitude (~10000 vs. ~100 vs. 1), so least squares yields poor results.

12 The normalized 8-point algorithm
Transform the image coordinates to roughly [-1,1] x [-1,1] (e.g. map the image corners (0,0) ... (700,500) to (-1,-1) ... (1,1)).
Least squares on the normalized coordinates yields good results (Hartley, PAMI'97).
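A sketch of the normalized algorithm; an assumption on my part: I use Hartley's centroid/average-distance normalization rather than the exact corner mapping drawn on the slide. linear_f and enforce_rank2 are the earlier sketches.

import numpy as np

def normalizing_transform(pts):
    """Similarity that moves the centroid to the origin and scales the
    average distance from the origin to sqrt(2)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    return np.array([[s, 0, -s * c[0]],
                     [0, s, -s * c[1]],
                     [0, 0, 1]])

def normalized_eight_point(m1, m2):
    """Normalize both point sets, solve linearly, enforce rank 2, denormalize."""
    T1, T2 = normalizing_transform(m1), normalizing_transform(m2)
    n1 = (np.column_stack([m1, np.ones(len(m1))]) @ T1.T)[:, :2]
    n2 = (np.column_stack([m2, np.ones(len(m2))]) @ T2.T)[:, :2]
    F = enforce_rank2(linear_f(n1, n2))
    return T2.T @ F @ T1          # so that m2^T F m1 = 0 in pixel coordinates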

13 Robust estimation
What if the set of matches contains gross outliers?
(To keep things simple, let's consider line fitting first.)

14 RANSAC
Objective: robust fit of a model to a data set S which contains outliers.
Algorithm:
(i) Randomly select a sample of s data points from S and instantiate the model from this subset.
(ii) Determine the set of data points Si which are within a distance threshold t of the model. The set Si is the consensus set of the sample and defines the inliers of S.
(iii) If the size of Si is greater than some threshold T, re-estimate the model using all the points in Si and terminate.
(iv) If the size of Si is less than T, select a new subset and repeat the above.
(v) After N trials the largest consensus set Si is selected, and the model is re-estimated using all the points in that subset.
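A compact sketch of steps (i)-(v) for the line-fitting warm-up (the function name and the default thresholds are illustrative, not from the notes):

import numpy as np

def ransac_line(points, t=2.0, N=100, seed=0):
    """RANSAC fit of a line a*x + b*y + c = 0 to 2-D points."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(N):
        p, q = points[rng.choice(len(points), 2, replace=False)]   # minimal sample, s = 2
        a, b = q[1] - p[1], p[0] - q[0]                            # line normal
        norm = np.hypot(a, b)
        if norm < 1e-12:
            continue
        a, b = a / norm, b / norm
        c = -(a * p[0] + b * p[1])
        inliers = np.abs(points @ np.array([a, b]) + c) < t        # consensus set
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # re-estimate from the largest consensus set (total least squares)
    sel = points[best_inliers]
    centroid = sel.mean(axis=0)
    _, _, Vt = np.linalg.svd(sel - centroid)
    return centroid, Vt[0], best_inliers          # point on line, direction, inlier mask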

15 Distance threshold
Choose t so that the probability of accepting an inlier is alpha (e.g. 0.95); often chosen empirically.
For zero-mean Gaussian noise with standard deviation sigma, the squared distance follows a chi-squared distribution with m = codimension of the model degrees of freedom (dimension + codimension = dimension of the space).

Codimension  Model    t^2
1            line, F  3.84 sigma^2
2            H, P     5.99 sigma^2
3            T        7.81 sigma^2
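The t^2 entries are the 95% quantiles of the chi-squared distribution with m degrees of freedom, times sigma^2; a quick check (assuming scipy is available):

from scipy.stats import chi2

for m in (1, 2, 3):
    print(m, round(chi2.ppf(0.95, df=m), 2))   # 3.84, 5.99, 7.81 -> t^2 = quantile * sigma^2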

16 How many samples?
Choose N so that, with probability p, at least one random sample is free from outliers (e.g. p = 0.99):
N = log(1 - p) / log(1 - (1 - e)^s)

sample size s | proportion of outliers e
              |   5%   10%   20%   25%   30%   40%   50%
      2       |   2     3     5     6     7    11    17
      3       |   3     4     7     9    11    19    35
      4       |   3     5     9    13    17    34    72
      5       |   4     6    12    17    26    57   146
      6       |   4     7    16    24    37    97   293
      7       |   4     8    20    33    54   163   588
      8       |   5     9    26    44    78   272  1177

Note: this assumes that inliers allow identifying the other inliers.
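The table follows directly from the formula; a small sketch reproducing one entry:

import numpy as np

def num_samples(e, s, p=0.99):
    """N = log(1-p) / log(1 - (1-e)^s): samples needed so that, with
    probability p, at least one sample of size s is outlier-free."""
    return int(np.ceil(np.log(1 - p) / np.log(1 - (1 - e) ** s)))

print(num_samples(e=0.5, s=7))   # 588, as in the row s = 7, e = 50%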

17 Acceptable consensus set?
Typically, terminate when the size of the consensus set reaches the expected number of inliers, e.g. T = (1 - e) n for n data points.

18 Adaptively determining the number of samples
e is often unknown a priori, so pick the worst case and adapt as more inliers are found (e.g. 80% inliers would yield e = 0.2):

N = infinity, sample_count = 0
while N > sample_count:
  choose a sample and count the number of inliers
  set e = 1 - (number of inliers) / (total number of points)
  recompute N from e
  increment sample_count by 1
terminate
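A sketch of this loop; the callback is a placeholder standing in for "choose a sample and count the number of inliers".

import numpy as np

def adaptive_ransac(draw_sample_and_count_inliers, n_points, s=7, p=0.99):
    """Start from the worst case (N = infinity) and lower N whenever a
    better inlier count, i.e. a smaller outlier ratio e, is observed."""
    N, sample_count, best = np.inf, 0, 0
    while sample_count < N:
        best = max(best, draw_sample_and_count_inliers())
        sample_count += 1
        w = best / n_points                       # inlier fraction, w = 1 - e
        if w >= 1:
            N = 0                                 # every point is an inlier
        elif w > 0:
            N = np.log(1 - p) / np.log(1 - w ** s)
    return sample_count, best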

19 Other robust algorithms
RANSAC maximizes the number of inliers.
LMedS minimizes the median error.
Not recommended: case deletion, iterative least squares, etc.
(Plot: residual (pixels) vs. inlier percentile.)

20 Non-linear refinement

21 Algebraic minimization
It is possible to iteratively minimize the algebraic distance subject to det F = 0 (see H&Z if interested).

22 Geometric distance
Gold standard
Symmetric epipolar distance

23 Gold standard
Maximum Likelihood Estimation (= least squares for Gaussian noise):
minimize sum_i d(m1_i, P X_i)^2 + d(m2_i, P' X_i)^2
Parameterize: P = [I | 0], P' = [M | t], and the points X_i (overparametrized).
Initialize: normalized 8-point, (P, P') from F, reconstruct the X_i.
Minimize the cost using Levenberg-Marquardt (preferably sparse LM, e.g. see H&Z).

24 Gold standard
Alternative: minimal parametrization of F (with a = 1); note that (x, y, 1) and (x', y', 1) are the epipoles.
Problems:
a = 0: pick the largest of a, b, c, d to fix to 1.
Epipole at infinity: pick the largest of x, y, w and of x', y', w'.
This gives 4 x 3 x 3 = 36 parametrizations! Reparametrize at every iteration, to be safe.

25 Zhang & Loop's approach (CVIU'01)

26 First-order geometric error (Sampson error)
d_S^2 = (m2^T F m1)^2 / ((F m1)_1^2 + (F m1)_2^2 + (F^T m2)_1^2 + (F^T m2)_2^2)
(One equation per point, so J J^T is a scalar.)
(Problem if some point is located at an epipole.)
Advantage: no subsidiary variables required.
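A sketch of this error under the same conventions as before (m1, m2 are (n, 2) pixel-coordinate arrays; the function name is mine):

import numpy as np

def sampson_error(F, m1, m2):
    """First-order geometric (Sampson) error per correspondence."""
    h1 = np.column_stack([m1, np.ones(len(m1))])
    h2 = np.column_stack([m2, np.ones(len(m2))])
    Fm1 = h1 @ F.T                                  # rows are F m1_i
    Ftm2 = h2 @ F                                   # rows are F^T m2_i
    num = np.einsum('ij,ij->i', h2, Fm1) ** 2       # (m2^T F m1)^2
    den = Fm1[:, 0]**2 + Fm1[:, 1]**2 + Ftm2[:, 0]**2 + Ftm2[:, 1]**2
    return num / den                                # degenerates if a point sits on an epipole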

27 Symmetric epipolar error
d^2 = d(m2, F m1)^2 + d(m1, F^T m2)^2
    = (m2^T F m1)^2 [ 1 / ((F m1)_1^2 + (F m1)_2^2) + 1 / ((F^T m2)_1^2 + (F^T m2)_2^2) ]
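For comparison with the Sampson sketch, a corresponding sketch of the symmetric epipolar distance (sum of squared point-to-epipolar-line distances in both images):

import numpy as np

def symmetric_epipolar_error(F, m1, m2):
    """d(m2, F m1)^2 + d(m1, F^T m2)^2 per correspondence."""
    h1 = np.column_stack([m1, np.ones(len(m1))])
    h2 = np.column_stack([m2, np.ones(len(m2))])
    Fm1 = h1 @ F.T
    Ftm2 = h2 @ F
    e2 = np.einsum('ij,ij->i', h2, Fm1) ** 2        # (m2^T F m1)^2
    return e2 / (Fm1[:, 0]**2 + Fm1[:, 1]**2) + e2 / (Ftm2[:, 0]**2 + Ftm2[:, 1]**2)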

28 Some experiments:

29

30

31 Residual error (evaluated for all points!)

32 Recommendations:
1. Do not use unnormalized algorithms.
2. Quick and easy to implement: normalized 8-point.
3. Better: enforce the rank-2 constraint during the minimization.
4. Best: Maximum Likelihood Estimation (minimal parameterization, sparse implementation).

33 Special case:
Enforce constraints for optimal results: pure translation (2 dof), planar motion (6 dof), calibrated case (5 dof).

34 The envelope of epipolar lines
What happens to an epipolar line if there is noise? Monte Carlo simulation for n = 10, 15, 25, 50 points.

35 Automatic computation of F
Step 1. Extract features.
Step 2. Compute a set of potential matches.
Step 3. Repeat (generate hypothesis):
  Step 3.1 Select a minimal sample (i.e. 7 matches).
  Step 3.2 Compute solution(s) for F.
  Step 3.3 Determine inliers (verify hypothesis).
until the confidence of having drawn an outlier-free sample, 1 - (1 - (#inliers/#matches)^7)^#samples, exceeds 95%:

#inliers    90%   80%   70%   60%   50%
#samples      5    13    35   106   382

Step 4. Compute F based on all inliers.
Step 5. Look for additional matches.
Step 6. Refine F based on all correct matches.

36 Abort verification early
Assume a percentage of inliers p: what is the probability P of finding n or more wrong matches among the first m checked? (P = cumulative binomial distribution function of (m - n, m, p).)
Stop if P < 0.05: the sample most probably contains an outlier.
To avoid problems this requires verifying the matches in random order too, but we already have a random sampler anyway.
(Inspired by Chum and Matas, BMVC 2002.)
Example verification sequences (I = inlier, O = wrong match):
O O O O O I O O I O O O O I I I O I I I I O O I O I I I I I I O I I I I I I O I O O I I I I ...
O O O O O O O O O O O I O O O O
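A small sketch of this test using scipy's cumulative binomial distribution (the function name and example numbers are mine):

from scipy.stats import binom

def probably_contains_outlier(n_wrong, m_checked, p_inlier, cutoff=0.05):
    """P = binom.cdf(m_checked - n_wrong, m_checked, p_inlier): the probability
    of seeing at most m_checked - n_wrong inliers (i.e. n_wrong or more wrong
    matches) among the first m_checked, if the hypothesis really had inlier
    ratio p_inlier.  If P is tiny, abort verification of this hypothesis."""
    return binom.cdf(m_checked - n_wrong, m_checked, p_inlier) < cutoff

# Like the second sequence above: 15 wrong matches among the first 16 checked,
# while expecting e.g. 70% inliers -> abort.
print(probably_contains_outlier(15, 16, 0.70))   # True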

37 Finding more matches
Restrict the search range to a neighborhood of the epipolar line (e.g. ±1.5 pixels).
Relax the disparity restriction (along the epipolar line).

38 Degenerate cases
Degenerate cases: planar scene, pure rotation.
No unique solution; the remaining DOF are filled by noise. Use a simpler model (e.g. a homography).
Solution 1: model selection (Torr et al. ICCV'98; Kanatani; Akaike): compare H and F according to the expected residual error (compensating for model complexity).
Solution 2: RANSAC: compare H and F according to inlier count (see next slide).

39 RANSAC for quasi-degenerate cases
Full model (8 pts, 1D solution): accept inliers to the solution F.
Planar model (6 pts, 3D solution): closest rank-6 approximation of the n x 9 matrix A for all plane inliers; accept inliers to the solutions F1, F2 & F3. Accept if a large number of inliers remain.
Plane + parallax model (plane + 2 pts): sample the out-of-plane points among the outliers.

40 More problems:
Absence of sufficient features (no texture).
Repeated structure ambiguity (Schaffalitzky and Zisserman, BMVC'98): the robust matcher also finds support for the wrong hypothesis; solution: detect repetition.

41 RANSAC for ambiguous matching
Include multiple candidate matches in the set of potential matches.
Select according to matching probability (~ matching score).
Helps for repeated structures or scenes with similar features, as it avoids an early commitment (Tordoff and Murray, ECCV'02).

42 Two-view geometry
The geometric relation between two views is fully described by the recovered 3x3 matrix F.

43 Next class: rectification and stereo
Image I(x,y), image I'(x',y'), disparity map D(x,y) with (x',y') = (x + D(x,y), y).

