
Exercise Class 11: Robust Techniques: RANSAC, Hough Transform


1 Exercise Class 11: Robust Techniques: RANSAC, Hough Transform

2 Continuing Ex. Class 10: Feature matching

3 Feature matching
Given a large number of feature descriptors, there are many approaches to finding correspondences between them:
Exhaustive search: for each feature in one image, look at all the other features in the other image(s).
Hashing: compute a short descriptor from each feature vector, or hash longer descriptors (randomly).
Nearest-neighbor techniques: k-d trees and their variants.
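As a rough illustration of the exhaustive-search option, combined with the common ratio test for rejecting ambiguous matches, a minimal sketch might look like this (the descriptor arrays and the 0.8 ratio threshold are illustrative assumptions, not part of the original slides):

```python
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.8):
    """Exhaustively match each descriptor in desc1 to its nearest
    neighbour in desc2, keeping only matches where the best candidate
    is clearly better than the second best (ratio test)."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)  # distance to every candidate
        j1, j2 = np.argsort(dists)[:2]             # two nearest neighbours
        if dists[j1] < ratio * dists[j2]:          # unambiguous match only
            matches.append((i, j1))
    return matches
```

For large descriptor sets, the same interface can be backed by a k-d tree instead of the O(n²) scan.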

4 What about outliers? Use a Robust Method
This pair of images has only a small overlapping area. Any match involving a point outside this area is necessarily wrong. Moreover, not all of the interest points in the overlapping area are detected in both images, and even detected points may be matched incorrectly for reasons such as an inappropriate distance measure or approximate nearest-neighbor search.

5 What are robust methods?
The goal of many algorithms is parametric fitting: extracting parametric information from an image, for example extracting edges, or extracting a motion model from corresponding points. In real images the data are inaccurate: occlusions, noise, wrong feature extraction. The data are also likely to include multiple models/structures. The goal of a robust method is to tolerate outliers and pseudo-outliers (points that belong to a different structure).
An important goal of many computer vision algorithms is to extract geometric information from an image, or from image sequences, and parametric models play a vital role in this and other activities in computer vision research. When engaged in parametric fitting in a computer vision context, it is important to recognize that data obtained from the image or image sequences may be inaccurate. It is almost unavoidable that the data are contaminated (due to faulty feature extraction, sensor noise, segmentation errors, etc.), and it is also likely that the data include multiple structures.

6 Example: Robust Computation of 2D Homography
from Hartley & Zisserman: 500 interest points per image; 268 assumed correspondences (best match, SSD < 20); of these, 117 outliers and 151 inliers; 262 inliers after 43 iterations.

7 Simple Model Fit Problem
Estimate a straight line fit to a set of 2D points: find the line that minimizes the sum of squared perpendicular distances, subject to the condition that no valid point deviates from this line by more than t units. We actually have 2 problems: fit the line to the data, and classify the data into inliers and outliers.
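Minimizing the sum of squared perpendicular distances is a total-least-squares problem, solvable in closed form: the line passes through the centroid, and its normal is the direction of least variance. A minimal sketch (function names are illustrative):

```python
import numpy as np

def fit_line_tls(points):
    """Total-least-squares line fit: minimises the sum of squared
    perpendicular distances. Returns (normal, offset) with n . x = c."""
    centroid = points.mean(axis=0)
    # the line direction is the principal axis of the centred points;
    # the normal is the right singular vector with the smallest singular value
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return normal, normal @ centroid

def perp_distances(points, normal, offset):
    """Perpendicular distance of each point from the line n . x = c."""
    return np.abs(points @ normal - offset)
```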

8 Simple Model Fit Problem
Estimate a straight line fit to a set of 2D points. Classical techniques for parameter estimation optimize the fit of the model (according to a specific objective function) to all of the presented data. Such techniques have no internal mechanism for detecting and rejecting gross errors. The least-squares (LS) error is not robust: a single outlier point can corrupt the fit badly.

9 A good LS fit to a set of points.
One outlier point leads to disaster. In a least-squares fit, the problem is the single point on the right: the error for that point is so large that it drags the line away from the other points (the total error is lower if the other points are some way from the line, because then the outlying point is not so dramatically far away).

10 Another outlier, another disaster.
Close-up of the poor fit to the “true” data.

11 Robust Norm
A first way of handling the problem: define a more robust error function. The idea is to replace (distance)² with a cost that looks like distance² for small distances but is approximately constant (bounded) for large distances. We use
ρ(d) = d² / (d² + s²)
for some parameter s (an unknown scale that we need to set); the cost is plotted against distance for different values of s.

12 Robust Norm
(The same cost d² / (d² + s²), plotted against distance for different values of the parameter s.)

13 The impact of tuning s: too small, just right, too large
If the parameter s is too small, the error for every point looks like distance², so outliers still dominate. If s is too large, the error value is about constant for every point, and there is very little relationship between the line and the points.

14 Simple Model Fit Problem
Estimate a straight line fit to a set of 2D points. How can we do better? Assumption: given a candidate model, we can validate it against the data.

15 RANSAC (RANdom SAmple Consensus)
The idea: repeatedly sample 2 points at random and check the support of the resulting model; choose the model with maximum support. The support of a line is the number of points that lie within a distance threshold of it.
Select 2 points at random; those points define a line. The support of this line is measured by the number of points that lie within a distance threshold. Repeat the selection process a number of times and choose the line with the largest support. The idea is that if one of the sampled points is an outlier, the resulting line will not gain much support. In addition, choosing the line with the largest support favors a better fit: for example, two inliers (like points A and D) may also yield a possible line, but not an optimal one. The last stage is to re-estimate the line from the set of inliers, to improve the initial estimate. Some other methods build a model from many points and then eliminate outliers; here we do it the other way round: choose a few points, define a model, and enlarge the consensus set with consistent points when possible.

16 RANSAC – The Algorithm
Iterate N times:
Randomly sample S points from the data.
Solve the problem using the S points.
Check the solution using the rest of the points (the support).
At the end: use all the points which “agree” to compute the exact model.
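The algorithm above, specialised to the line-fitting example (S = 2), can be sketched as follows. This is an illustrative implementation, with the threshold t and iteration count as assumed parameters:

```python
import numpy as np

def ransac_line(points, n_iters=100, t=0.1, rng=None):
    """RANSAC for 2D line fitting: repeatedly fit a line through two
    random points, keep the hypothesis with the largest support, then
    refit on the consensus set."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        p, q = points[rng.choice(len(points), size=2, replace=False)]
        d = q - p
        normal = np.array([-d[1], d[0]])       # perpendicular to the sample line
        norm = np.linalg.norm(normal)
        if norm == 0:
            continue                           # degenerate sample (identical points)
        normal /= norm
        dist = np.abs((points - p) @ normal)   # perpendicular distance of every point
        inliers = dist < t
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # final step: least-squares refit (total least squares via SVD) on the inliers
    pts = points[best_inliers]
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    return vt[-1], vt[-1] @ c, best_inliers
```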

17 Back to homography estimation
RANSAC loop: Select four feature pairs (at random) Compute homography H (exact) Compute inliers where SSD(pi’, Hpi) < ε Keep largest set of inliers Re-compute least-squares H estimate on all of the inliers
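The "compute homography H (exact)" step is typically done with the direct linear transform (DLT): each correspondence contributes two linear constraints on the nine entries of H, and the solution is the null vector of the resulting system. A minimal numpy sketch, assuming inhomogeneous (x, y) point arrays:

```python
import numpy as np

def homography_from_points(src, dst):
    """Direct linear transform: homography from >= 4 point pairs.
    Rows of src/dst are (x, y); returns 3x3 H with dst ~ H @ src."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # from u = (h1.p)/(h3.p) and v = (h2.p)/(h3.p), p = (x, y, 1):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.array(rows)
    # h is the null vector of A: the right singular vector
    # associated with the smallest singular value
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 3)
```

With exactly four pairs (no three collinear) this gives the exact H used inside the loop; on the final inlier set the same function acts as the least-squares estimate.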

18 Sufficient Number of Iterations (N)
How many iterations are needed? We want to ensure that with probability p, at least one of the samples of S points is free from outliers (for example p = 99%). If w is the probability that a point is an inlier, the probability that all N samples contain at least one outlier is (1 − w^S)^N. Requiring this to be at most 1 − p gives
N = log(1 − p) / log(1 − w^S)
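The formula translates directly into a one-line helper (function name is illustrative):

```python
import math

def ransac_iterations(p, w, s):
    """Number of RANSAC iterations N such that with probability p at least
    one sample of s points is outlier-free, given inlier probability w."""
    return math.ceil(math.log(1 - p) / math.log(1 - w ** s))
```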

19 Sufficient Number of Iterations (N)
p = 0.99; w: inlier probability; e = 1 − w: outlier probability.

Sample size s | proportion of outliers e
              |  5%  10%  20%  25%  30%  40%   50%
      2       |   2    3    5    6    7   11    17
      3       |   3    4    7    9   11   19    35
      4       |   3    5    9   13   17   34    72
      5       |   4    6   12   17   26   57   146
      6       |   4    7   16   24   37   97   293
      7       |   4    8   20   33   54  163   588
      8       |   5    9   26   44   78  272  1177

It can be seen that sampling 2 points per iteration is the cheapest choice, as the required number of iterations grows rapidly when more points are sampled in each step.

20 Adaptively determining the number of Iterations
In many cases we do not know the fraction of outliers in advance. e is often unknown a priori, so pick the worst case, e.g. 50%, and adapt if more inliers are found (e.g. 80% inliers would yield e = 0.2):
N = ∞, sample_count = 0
While N > sample_count, repeat:
  Choose a sample and count the number of inliers
  Set e = 1 − (number of inliers)/(total number of points)
  Recompute N from e
  Increment sample_count by 1
Terminate.
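The adaptive loop above can be sketched generically; the model is supplied through `fit` and `count_inliers` callbacks, which are illustrative names, not from the slides:

```python
import math
import random

def ransac_adaptive(data, sample_size, fit, count_inliers, p=0.99, seed=None):
    """Adaptive RANSAC: start with N = infinity (worst-case outlier ratio)
    and lower N whenever a sample reveals a larger inlier fraction."""
    rng = random.Random(seed)
    n = len(data)
    N, trials = float("inf"), 0
    best_model, best_count = None, 0
    while trials < N:
        model = fit(rng.sample(data, sample_size))
        k = count_inliers(model, data)
        if k > best_count:
            best_model, best_count = model, k
            w = k / n                        # observed inlier fraction
            if w >= 1.0:
                break                        # every point agrees: done
            # recompute N from the updated outlier estimate e = 1 - w
            N = math.log(1 - p) / math.log(1 - w ** sample_size)
        trials += 1
    return best_model, best_count
```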

21 RANSAC discussion Advantages:
– A general method suited for a wide range of model-fitting problems. – Easy to implement, and its failure rate is easy to calculate. • Disadvantages: – Only handles a moderate percentage of outliers before the cost blows up. – Many real problems have a high rate of outliers (but sometimes a selective choice of random subsets can help). The next method presented can deal with a larger percentage of outliers.

22 Hough Transform

23 Hough Transform (proposed by Hough in ~1959)
Finding shapes: • Straight lines • Rectangles • Circles • Others
More generally: a parametric voting scheme.

24 Applications
Find signs in images (rectangles, circles). 3D reconstruction of buildings (lines). Find the eye in a face image. Tracking. More…
The challenge: the image contains a lot of irrelevant information, there are partial occlusions, and the image might be noisy.

25 We will use line detection on edge maps as the main example of a Hough transform application.
The initial result of edge detection is often noisy and hard to analyze. We are looking for a ‘global’ operation that recognizes the line, not one based only on neighborhood information.

26 Edge Detection Review
INPUT IMAGE → 1) Noise smoothing → 2) Edge enhancement (vertical [-1 0 1] and horizontal [-1 0 1]ᵀ filters) → “GRADIENT” IMAGE → 3) Threshold → EDGE IMAGE

27 How do we detect lines from Edge Images?
Raw Image Edge Image Segmented Lanes

28 How do we detect lines from Edge Images?
Given an edge image, we would like to group sets of edge pixels to identify dominant lines in the image. One possibility is template matching in the edge image: generate a hypothetical line “kernel” and convolve it with the image. This requires a large set of masks to accurately localize lines, which is computationally expensive. A naive alternative: every 2 points define a line; check for every other point whether it lies on that line. Problem: O(n²) lines and O(n³) tests. We will instead perform an image transformation where edge pixels vote for lines, which eventually reduces the problem to vote thresholding.

29 The Parameter Space
The line equation: y = ax + b (slope-intercept representation). Each line can be represented by 2 parameters: (a, b). If we define a 2D parameter space, each line is a point in this space. A fixed image point (x, y) lies on infinitely many lines; in the parameter space these lines form the line b = −xa + y.

30 The Duality Property
A point is mapped to a line, and a line is mapped to a point. How can we use this duality? Take all edge features and build a table for the parameter space (the Hough image). Each edge point casts a line in the parameter space; cells in the parameter space with high scores correspond to lines in the image.

31 Hough Transform: use a voting scheme
Apply an edge detector to the image (gradient + threshold, or Canny’s method). Prepare a table h(a,b) for a, b in a certain range. Loop over the image: for each edge pixel (x, y), vote for the cells (a, b) which satisfy the equation y = ax + b (meaning, increment the cell accumulator: h(a,b)++). Choose the cell with the maximal value (or the cells which pass a given threshold).
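The voting scheme for the slope-intercept parameterization can be sketched as follows (the parameter ranges and grid size are illustrative assumptions):

```python
import numpy as np

def hough_lines(edge_points, a_range=(-5, 5), b_range=(-100, 100), bins=201):
    """Slope-intercept Hough transform: every edge pixel (x, y) votes for
    the (a, b) cells along its dual line b = y - a*x."""
    a_vals = np.linspace(a_range[0], a_range[1], bins)
    b_min, b_max = b_range
    acc = np.zeros((bins, bins), dtype=int)
    for x, y in edge_points:
        b = y - a_vals * x                  # dual line in parameter space
        cells = np.round((b - b_min) / (b_max - b_min) * (bins - 1)).astype(int)
        ok = (cells >= 0) & (cells < bins)  # discard votes outside the table
        acc[np.arange(bins)[ok], cells[ok]] += 1
    return acc, a_vals, np.linspace(b_min, b_max, bins)
```

Note how the two weaknesses discussed next already show up here: vertical lines cannot be represented, and the (a, b) ranges had to be chosen arbitrarily.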

32 Example: Slope-Intercept Representation
Original image; edge map; Hough image (before threshold); the two strongest lines.

33 Issues to think of… Representation: Vertical lines are not covered by the representation y=ax+b. Range: Selecting the range of values for the table (a and b are not bounded!) Quantization: choosing a grid. Thresholding: Similar lines will get similar scores.

34 A better representation
Use a polar representation instead: a line is parameterized by (d, θ), where θ is the direction of the line’s normal and d is the line’s distance from the origin (0,0), so that points on the line satisfy x cos θ + y sin θ = d.

35 In the polar representation…
Note the two intersection points in the parameter space: these are (d, θ) and (−d, π + θ), which represent the same line with normal (d, θ).

36 Polar Representation Example
1. Original image + edges; 2. Hough image (before threshold); 3. Image + detected lines.

37 Range and Quantization
In the polar representation it is easier to determine the range: the values of d and of θ are now bounded! The resolution can be application dependent; we have natural units (pixels for d, and degrees for θ). Quantization: Option 1: vote for the “nearest” cell (e.g., for each θ take the nearest d). Option 2: perform a weighted vote.
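With the bounded polar parameterization and the natural units above, the accumulator becomes straightforward; here is a nearest-cell sketch with one cell per degree and one per pixel of d (the angular range and d_max bound are assumptions):

```python
import numpy as np

def hough_polar(edge_points, n_theta=180, d_max=200):
    """Polar Hough transform: each edge pixel votes for the nearest
    d = x*cos(theta) + y*sin(theta) cell at every quantised angle."""
    thetas = np.deg2rad(np.arange(n_theta))      # one cell per degree
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    acc = np.zeros((n_theta, 2 * d_max + 1), dtype=int)
    for x, y in edge_points:
        d = x * cos_t + y * sin_t                # signed distance per angle
        cells = np.round(d).astype(int) + d_max  # shift so d in [-d_max, d_max]
        ok = (cells >= 0) & (cells < 2 * d_max + 1)
        acc[np.arange(n_theta)[ok], cells[ok]] += 1
    return acc, thetas
```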

38 Smoothing
Hough image without smoothing vs. Hough image with smoothing.

39 Making the Decision Q. How can we find the two most dominant lines in the image? The problem: if (d, θ) is the cell with the maximal value, then nearby cells (d + ε, θ + δ) will get a high score too.

40 Making the Decision(cont’)
Solution 1: Search only for local maxima. Solution 2: “Back Mapping”- Perform a second vote, where each point votes only for the best line passing through it.
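Solution 1 can be sketched as greedy peak picking with neighborhood suppression: take the strongest cell, zero out its surroundings so near-duplicate cells are skipped, and repeat (the function name and neighborhood size are illustrative):

```python
import numpy as np

def hough_peaks(acc, n_peaks, nhood=5):
    """Pick the n strongest accumulator cells, suppressing a square
    neighbourhood around each chosen peak to avoid near-duplicates."""
    acc = acc.copy()
    peaks = []
    for _ in range(n_peaks):
        i, j = np.unravel_index(int(acc.argmax()), acc.shape)
        if acc[i, j] == 0:
            break                         # no votes left
        peaks.append((int(i), int(j)))
        i0, i1 = max(0, i - nhood), i + nhood + 1
        j0, j1 = max(0, j - nhood), j + nhood + 1
        acc[i0:i1, j0:j1] = 0             # suppress the neighbourhood
    return peaks
```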

41 Using more image features
Example: using the local gradient. Hough image not using the gradient vs. Hough image using the gradient.


43 Detecting Complex Shapes – Circles
Naïve solution: construct a table h(a, b, r). For each edge pixel (x, y) in the image, vote for all the cells satisfying (x − a)² + (y − b)² = r². Problem: the size of the table is O(N³), and the voting for each pixel takes O(N²). Solution: use the gradient information (solve for the center (a, b) and then for r).
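With the gradient direction available, the center of a circle of known radius r must lie at distance r from the edge pixel along the gradient, so each pixel votes for just two candidate centers instead of a whole circle. An illustrative sketch (the interface, taking precomputed gradient angles, is an assumption):

```python
import numpy as np

def hough_circle_centers(edge_points, grad_angles, r, shape):
    """Vote for circle centres of a known radius r. Each edge pixel
    votes only along +/- its gradient direction (2 votes per pixel,
    instead of O(N) votes on a full circle of candidates)."""
    acc = np.zeros(shape, dtype=int)
    for (x, y), phi in zip(edge_points, grad_angles):
        for sign in (1, -1):              # gradient may point in or out
            a = int(round(x - sign * r * np.cos(phi)))
            b = int(round(y - sign * r * np.sin(phi)))
            if 0 <= a < shape[0] and 0 <= b < shape[1]:
                acc[a, b] += 1
    return acc
```

Once (a, b) is found, a second 1D vote over r recovers the radius, keeping every table below O(N³).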

44 Circle Example With no orientation, each token (point) votes for all possible circles. With orientation, each token can vote for a smaller number of circles.

45 Finding Coins Original Edges (note noise)

46 Finding Coins (Continued)
Penny Quarters

47 Finding Coins (Continued)
Note that because the quarters and penny are different sizes, a different Hough transform (with separate accumulators) was used for each circle size. Coin finding sample images from: Vivik Kwatra

