
Interest Point Detectors (see CS485/685 notes for more details) CS491Y/691Y Topics in Computer Vision Dr. George Bebis.


1 Interest Point Detectors (see CS485/685 notes for more details) CS491Y/691Y Topics in Computer Vision Dr. George Bebis

2 What is an Interest Point? A point in an image which has a well-defined position and can be robustly detected. Typically associated with a significant change of one or more image properties simultaneously (e.g., intensity, color, texture).

3 What is an Interest Point? (cont’d) Corners are a special case of interest points. However, interest points could be more generic than corners.

4 Why are interest points useful? They can be used to find corresponding points between images, which is very useful for numerous applications, e.g., stereo matching (left/right camera) and panorama stitching!

5 How to find corresponding points? Define a local patch around each interest point and extract a feature descriptor from every patch. Match feature descriptors to find corresponding points.
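A minimal sketch of this matching step, assuming each descriptor has already been extracted as a fixed-length NumPy vector (the distance measure and threshold are illustrative):

```python
import numpy as np

def match_descriptors(desc_a, desc_b, max_dist=0.5):
    """Match each descriptor in image A to its nearest neighbor in image B."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # Euclidean distance to every descriptor in B
        j = int(np.argmin(dists))
        if dists[j] < max_dist:                      # keep only sufficiently similar pairs
            matches.append((i, j))                   # index in A, index in B
    return matches
```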

6 Properties of good features Local: robust to occlusion and clutter. Accurate: precise localization. Covariant. Robust: noise, blur, compression, etc. Efficient: close to real-time performance. Repeatable.

7 Interest point detectors should be covariant Features should be detected in corresponding locations despite geometric or photometric changes.

8 Interest point descriptors should be invariant: descriptors should be similar despite geometric or photometric transformations.

9 Interest point candidates Use features with gradients in at least two significantly different orientations (e.g., corners, junctions, etc.).

10 Harris Corner Detector C. Harris and M. Stephens, "A Combined Corner and Edge Detector", Proceedings of the 4th Alvey Vision Conference, pages 147-151, 1988. Assuming a W x W window, it computes the matrix: A_W(x,y) = Σ_{(x,y) ∈ W} w(x,y) [ f_x²  f_x f_y ; f_x f_y  f_y² ] (rows separated by ";"). A_W(x,y) is a 2 x 2 matrix called the auto-correlation matrix, f_x, f_y are the horizontal and vertical derivatives, and w(x,y) is a smoothing function (e.g., Gaussian).

11 Why is the auto-correlation matrix useful? Describes the gradient distribution (i.e., local structure) inside the window!

12 Properties of the auto-correlation matrix A_W is symmetric and can be decomposed into its eigenvalues λ_max, λ_min and eigenvectors. We can visualize A_W as an ellipse with axis lengths (λ_max)^1/2 and (λ_min)^1/2 and directions determined by its eigenvectors.
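For example, a quick numeric check of this decomposition with a made-up symmetric 2 x 2 matrix:

```python
import numpy as np

A_W = np.array([[9.0, 3.0],
                [3.0, 5.0]])                 # illustrative auto-correlation matrix
eigvals, eigvecs = np.linalg.eigh(A_W)       # eigenvalues in ascending order
lam_min, lam_max = eigvals
print(np.sqrt(lam_max), np.sqrt(lam_min))    # ellipse axis lengths (lambda_max)^1/2, (lambda_min)^1/2
print(eigvecs[:, 1], eigvecs[:, 0])          # corresponding axis directions (eigenvectors)
```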

13 Harris Corner Detector (cont’d) The eigenvectors of A_W encode the directions of intensity change: v_1 is the direction of the fastest change, v_2 the direction of the slowest change. The eigenvalues of A_W encode the strength of intensity change, (λ_max)^1/2 and (λ_min)^1/2.

14 Harris Corner Detector (cont’d) Classification of pixels using the eigenvalues of A_W: "Corner": λ_1 and λ_2 are large and λ_1 ~ λ_2; intensity changes in all directions. "Flat" region: λ_1 and λ_2 are small; intensity is almost constant in all directions. "Edge": λ_1 >> λ_2 or λ_2 >> λ_1.

15 Harris Corner Detector (cont’d) To avoid computing the eigenvalues explicitly, the Harris detector uses the following function: R(A_W) = det(A_W) – α trace²(A_W), which is equal to: R(A_W) = λ_1 λ_2 – α (λ_1 + λ_2)², where α is a constant.

16 Harris Corner Detector (cont’d) Classification of image points using R(A_W) = det(A_W) – α trace²(A_W): "Corner": R > 0. "Edge": R < 0. "Flat" region: |R| small. α is usually between 0.04 and 0.06.
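A quick numeric check of this classification, with illustrative eigenvalue pairs and α = 0.05:

```python
def cornerness(lam1, lam2, alpha=0.05):
    """R = lambda_1 * lambda_2 - alpha * (lambda_1 + lambda_2)**2."""
    return lam1 * lam2 - alpha * (lam1 + lam2) ** 2

print(cornerness(100.0, 80.0))  # corner: both eigenvalues large and similar -> R = 6380.0 > 0
print(cornerness(100.0, 1.0))   # edge: one eigenvalue dominates -> R = -410.05 < 0
print(cornerness(0.5, 0.4))     # flat region: both eigenvalues small -> |R| small (0.1595)
```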

17 Harris Corner Detector (cont’d) Other functions:

18 Harris Corner Detector - Steps 1. Compute the horizontal and vertical (Gaussian) derivatives f_x, f_y (σ_D is called the "differentiation" scale). 2. Compute the three images involved in A_W (i.e., f_x², f_y², and f_x f_y).

19 Harris Detector - Steps (cont’d) 3. Convolve each of the three images with a larger Gaussian w(x,y) (σ_I is called the "integration" scale). 4. Determine cornerness: R(A_W) = det(A_W) – α trace²(A_W). 5. Find local maxima of R.
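A minimal sketch of steps 1-5, assuming NumPy/SciPy; the scale, α, window, and threshold values are illustrative rather than taken from the slides:

```python
import numpy as np
from scipy import ndimage

def harris_response(image, sigma_d=1.0, sigma_i=2.0, alpha=0.05):
    """Cornerness map R following steps 1-4 (parameter values are illustrative)."""
    img = image.astype(float)
    # 1. Gaussian derivatives at the differentiation scale sigma_D
    fx = ndimage.gaussian_filter(img, sigma_d, order=(0, 1))
    fy = ndimage.gaussian_filter(img, sigma_d, order=(1, 0))
    # 2. The three images involved in A_W
    fx2, fy2, fxfy = fx * fx, fy * fy, fx * fy
    # 3. Convolve each with a larger Gaussian (integration scale sigma_I)
    a = ndimage.gaussian_filter(fx2, sigma_i)
    b = ndimage.gaussian_filter(fy2, sigma_i)
    c = ndimage.gaussian_filter(fxfy, sigma_i)
    # 4. Cornerness: R = det(A_W) - alpha * trace(A_W)^2
    return (a * b - c * c) - alpha * (a + b) ** 2

def harris_corners(image, threshold=1e4, window=3):
    """5. Keep local maxima of R above a threshold (the threshold is image-dependent)."""
    r = harris_response(image)
    local_max = ndimage.maximum_filter(r, size=window)
    return np.argwhere((r == local_max) & (r > threshold))  # (row, col) coordinates
```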

20 Harris Corner Detector - Example

21 Harris Detector - Example

22 Compute corner response R

23 Find points with large corner response: R>threshold

24 Take only the points of local maxima of R

25 Map corners on the original image (for visualization)

26 Harris Corner Detector (cont’d) Rotation invariant. Sensitive to: scale change, significant viewpoint change, significant contrast change.

27 Multi-scale Harris Detector Detect interest points at varying scales: R(A_W) = det(A_W(x,y,σ_I,σ_D)) – α trace²(A_W(x,y,σ_I,σ_D)), where σ_n = k^n σ, σ_D = σ_n, and σ_I = γσ_D.
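A sketch of the scale sweep, reusing the harris_response function from the sketch above; the values of σ, k, and γ are assumptions for illustration:

```python
def multiscale_harris(image, sigma=1.2, k=1.4, n_scales=8, gamma=1.4, alpha=0.05):
    """Harris response at scales sigma_n = k**n * sigma, with sigma_D = sigma_n and
    sigma_I = gamma * sigma_D as on the slide."""
    responses = []
    for n in range(n_scales):
        sigma_n = (k ** n) * sigma
        sigma_d = sigma_n              # differentiation scale
        sigma_i = gamma * sigma_d      # integration scale
        responses.append(harris_response(image, sigma_d, sigma_i, alpha))
    return responses                   # one cornerness map per scale
```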

28 Multi-scale Harris Detector (cont’d) The same interest point will be detected at multiple consecutive scales. Interest point location will shift as scale increases (i.e., due to smoothing). In the figure, the size of each circle corresponds to the scale at which the interest point was detected.

29 How do we match them? Corresponding features might appear at different scales. How do we determine these scales? We need a scale selection mechanism!

30 Exhaustive Search Simple approach for scale selection but not efficient!

31 Characteristic Scale Find the characteristic scale of each feature (i.e., the scale revealing the spatial extent of an interest point). characteristic scale

32 Characteristic Scale (cont’d) Only a subset of interest points is selected using the characteristic scale of each feature. In the figure, the size of the circles is related to the scale at which the interest points were selected. Matching can be simplified!

33 Automatic Scale Selection – Main Idea Design a function F(x,σ_n) which provides some local measure. Select points at which F(x,σ_n) is maximal over σ_n; the maximum of F(x,σ_n) corresponds to the characteristic scale! T. Lindeberg, "Feature detection with automatic scale selection", International Journal of Computer Vision, vol. 30, no. 2, 1998.
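A sketch of this idea in Python, choosing F as the scale-normalized Laplacian of Gaussian (a choice anticipated by a later slide); the set of scales passed in is up to the caller:

```python
import numpy as np
from scipy import ndimage

def characteristic_scale(image, x, y, sigmas):
    """Evaluate F(x, sigma_n) = sigma_n**2 * |LoG(sigma_n)| at pixel (y, x) over the given
    scales and return the scale at which it is maximal (the characteristic scale)."""
    img = image.astype(float)
    responses = [abs((s ** 2) * ndimage.gaussian_laplace(img, s)[y, x]) for s in sigmas]
    return sigmas[int(np.argmax(responses))], responses
```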

34-41 Scale-selection illustrations (figure slides; slide from Tinne Tuytelaars, Lindeberg et al, 1996).

42 Automatic Scale Selection (cont’d) Using the characteristic scale, the spatial extent of interest points becomes covariant to scale transformations. The ratio σ_1/σ_2 reveals the scale factor between the images (here, σ_1/σ_2 = 2.5).

43 How to choose F(x,σ_n)? Typically, F(x,σ_n) is defined using derivatives, e.g.: the LoG (Laplacian of Gaussian) yielded the best results in a recent evaluation study; the DoG (Difference of Gaussians) was second best. C. Schmid, R. Mohr, and C. Bauckhage, "Evaluation of Interest Point Detectors", International Journal of Computer Vision, 37(2), 2000.

44 LoG and DoG LoG can be approximated by DoG: G(x,y,kσ) – G(x,y,σ) ≈ (k – 1) σ² ∇²G(x,y,σ).
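A numeric check of this approximation on any grayscale image (σ and k here are illustrative values):

```python
import numpy as np
from scipy import ndimage

def dog_vs_log(image, sigma=2.0, k=1.6):
    """Compare the DoG with the scale-normalized LoG; a small return value means
    the approximation is good for this image and scale."""
    img = image.astype(float)
    dog = ndimage.gaussian_filter(img, k * sigma) - ndimage.gaussian_filter(img, sigma)
    log = (k - 1) * (sigma ** 2) * ndimage.gaussian_laplace(img, sigma)
    return np.abs(dog - log).mean()
```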

45 Harris-Laplace Detector Multi-scale Harris with scale selection. Uses LoG maxima over σ_n to find the characteristic scale.

46 Harris-Laplace Detector (cont’d) (1) Find interest points at multiple scales using the Harris detector, where σ_D = σ_n and σ_I = γσ_D. Scales are chosen as follows: σ_n = k^n σ. At each scale, choose local maxima assuming a 3 x 3 window (i.e., non-maximum suppression).

47 Harris-Laplace Detector (cont’d) (2) Select points at which the scale-normalized LoG, |LoG(x,σ_n)| = σ_n² |L_xx(x,σ_n) + L_yy(x,σ_n)|, is maximal across scales (i.e., larger than at σ_(n-1) and σ_(n+1)) and the maximum is above a threshold. K. Mikolajczyk and C. Schmid, "Indexing based on scale invariant interest points", IEEE Int. Conference on Computer Vision, 2001.
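A sketch combining steps (1) and (2), reusing the harris_response function from the earlier sketch; all scale parameters and thresholds are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def harris_laplace(image, sigma=1.2, k=1.4, n_scales=8, gamma=1.4,
                   harris_thresh=1e4, log_thresh=10.0):
    """Return (x, y, scale) keypoints: Harris maxima per scale, kept only if the
    scale-normalized LoG peaks at that scale and exceeds a threshold."""
    img = image.astype(float)
    sigmas = [(k ** n) * sigma for n in range(n_scales)]
    # Scale-normalized LoG responses used in step (2)
    logs = [(s ** 2) * np.abs(ndimage.gaussian_laplace(img, s)) for s in sigmas]
    keypoints = []
    for n, s in enumerate(sigmas):
        # (1) Harris local maxima at this scale (sigma_D = sigma_n, sigma_I = gamma * sigma_D)
        r = harris_response(img, sigma_d=s, sigma_i=gamma * s)
        local_max = (r == ndimage.maximum_filter(r, size=3)) & (r > harris_thresh)
        for y, x in np.argwhere(local_max):
            # (2) Keep the point only if the normalized LoG is maximal at this scale
            val = logs[n][y, x]
            below = logs[n - 1][y, x] if n > 0 else -np.inf
            above = logs[n + 1][y, x] if n < n_scales - 1 else -np.inf
            if val > log_thresh and val >= below and val >= above:
                keypoints.append((x, y, s))
    return keypoints
```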

48 Example Interest points detected at each scale using Harris-Laplace, at scales σ = 1.2, 2.4, 4.8, 9.6 in both images (the images differ by a scale factor of 1.92). – Few correspondences between levels corresponding to the same σ. – More correspondences between levels having a σ ratio of 2.

49 Example (cont’d) -More than 2000 points would have been detected without scale selection. -Using scale selection, 190 and 213 points were detected in the left and right images, respectively. (same viewpoint – change in focal length and orientation)

50 Example (cont’d) 58 points are initially matched (some were not correct)

51 Example (cont’d) Reject outliers (i.e., inconsistent matches) using RANSAC. Left with 32 matches, all of which are correct. The estimated scale factor is 4.9 and the rotation angle is 19 degrees.

52 Harris-Laplace Detector (cont’d) Invariant to: scale, rotation, translation. Robust to: illumination changes, limited viewpoint changes. (Figure: repeatability.)

53 Harris-Laplace using DoG Look for local maxima in a DoG pyramid. David G. Lowe, "Distinctive image features from scale-invariant keypoints", Int. Journal of Computer Vision, 60(2), pp. 91-110, 2004.

54 Handling Affine Changes Similarity transformations cannot account for perspective distortions; an affine transformation can be used for planar surfaces. (Figure: similarity transform vs. affine transform.)

55 Harris Affine Detector Use an iterative approach: – Extract approximate locations and scales using the Harris-Laplace detector. – For each point, modify the scale and shape of its neighborhood in an iterative fashion. – Converges to stable points that are covariant to affine transformations.

56 Steps of Iterative Affine Adaptation 1. Detect initial locations and neighborhoods using Harris-Laplace. 2. Estimate the affine shape of each neighborhood using the 2nd order moment matrix μ(x, σ_I, σ_D).

57 Steps of Iterative Affine Adaptation (cont’d) 3. Normalize (i.e., de-skew) the affine region by mapping it to a circular one (i.e., "remove" perspective distortions). 4. Re-detect the new location and scale in the normalized image. 5. Go to step 2 if the eigenvalues of μ(x, σ_I, σ_D) for the new point are not equal (i.e., not yet adapted to the characteristic shape).

58 Iterative affine adaptation - Examples Initial points Example 1 Example 2

59 Iterative affine adaptation – Examples (cont’d) Iteration #1 Example 1 Example 2

60 Iterative affine adaptation – Examples (cont’d) Iteration #2 Example 1 Example 2

61 Iterative affine adaptation – Examples (cont’d) Iteration #3, #4, … K. Mikolajczyk and C. Schmid, "Scale and Affine invariant interest point detectors", International Journal of Computer Vision, 60(1), 2004. Example 1 Example 2

62 Iterative affine adaptation – Examples (cont’d)

63 De-skewing Consider a point x_L with 2nd order moment matrix M_L. The de-skewing transformation is defined as follows:

64 De-skewing corresponding regions Consider two points x_L and x_R which are related through an affine transformation:

65 De-skewing corresponding regions (cont’d) Normalized regions are related by a pure rotation R.
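A sketch of these de-skewing relations, following the standard Mikolajczyk-Schmid formulation (the slides' own equations are not in the transcript, so treat the exact notation as an assumption): each point is de-skewed with the square root of its 2nd order moment matrix, x_L' = M_L^(1/2) x_L and x_R' = M_R^(1/2) x_R. If the original regions are related by an affine transformation x_L = A x_R, the normalized regions are related by a pure rotation, x_L' = R x_R', so that A = M_L^(-1/2) R M_R^(1/2).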

66 Resolving orientation ambiguity Create a histogram of local gradient directions in the patch (0 to 2π, 36 bins). Smooth the histogram and assign the canonical orientation at the peak of the smoothed histogram: the dominant gradient direction!
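A minimal sketch of this step; weighting the histogram by gradient magnitude and the particular smoothing kernel are assumptions, not details given on the slide:

```python
import numpy as np

def dominant_orientation(patch, n_bins=36):
    """Canonical orientation from a smoothed 36-bin histogram of gradient directions."""
    p = patch.astype(float)
    gy, gx = np.gradient(p)
    angles = np.arctan2(gy, gx) % (2 * np.pi)       # gradient directions in [0, 2*pi)
    weights = np.hypot(gx, gy)                      # weight each direction by its magnitude
    hist, _ = np.histogram(angles, bins=n_bins, range=(0.0, 2 * np.pi), weights=weights)
    # smooth the histogram circularly before taking the peak
    kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    padded = np.concatenate([hist[-2:], hist, hist[:2]])
    smoothed = np.convolve(padded, kernel, mode="same")[2:-2]
    peak = int(np.argmax(smoothed))
    return (peak + 0.5) * 2 * np.pi / n_bins        # orientation of the peak bin (radians)
```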

67 Resolving orientation ambiguity (cont’d) Resolve the orientation ambiguity and compute the rotation R between the normalized regions.

68 Other Affine Invariant Blob/Region Detectors There are many other techniques for detecting affine-invariant blobs or regions, for example: – Intensity Extrema-Based Regions (IER) – Maximally Stable Extremal Regions (MSERs) These do not need an interest point detection step.

69 Intensity Extrema-Based Region Detector (1) Take a local intensity extremum as the initial point. (2) Examine the intensity along "rays" from the local extremum. (3) The point on each ray for which f_I(t) reaches an extremum is selected (i.e., invariant). (4) Linking these points together yields an affinely invariant region, to which an ellipse is fitted. Tuytelaars, T. and Van Gool, L., "Matching Widely Separated Views based on Affine Invariant Regions", International Journal on Computer Vision, 59(1):61-85, 2004. (d > 0 is a small constant.)
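The function f_I(t) evaluated along each ray is not in the transcript; a sketch of the standard definition from Tuytelaars and Van Gool (treat the exact form as an assumption): f_I(t) = |I(t) - I_0| / max( (1/t) ∫_0^t |I(s) - I_0| ds , d ), where I_0 is the intensity at the extremum, I(t) is the intensity at distance t along the ray, and d > 0 is the small constant that prevents division by zero on nearly flat rays.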

70 Intensity Extrema-Based Region Detector (cont’d) The regions found may not exactly correspond, so we approximate them with ellipses using 2 nd order moments.

71 Intensity Extrema-Based Region Detector (cont’d) Double the size of the ellipse to make regions more distinctive. Final ellipse is not necessarily centered at original anchor point.

72 Intensity Extrema-Based Region Detector (cont’d)

73 Region extraction is fairly robust to inaccurate localization of intensity extremum.

74 Maximally Stable Extremal Regions (MSERs) Consider all possible thresholdings of a gray-scale image: if I(x,y) > I_t then I(x,y) = 255; else I(x,y) = 0. Matas, J., Chum, O., Urban, M. and Pajdla, T., "Robust wide-baseline stereo from maximally stable extremal regions", British Machine Vision Conf, pp. 384–393, 2002.

75 Maximally Stable Extremal Regions (MSERs) Extremal Regions – All regions formed using different thresholds. – Can be extracted using connected components. Maximally Stable Extremal Regions – Regions that remain stable over a large range of thresholds. Approximate each MSER by an ellipse.
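A naive sketch of the threshold sweep for a single seed pixel, only to illustrate the stability idea (the actual detector uses an efficient union-find over all pixels and thresholds):

```python
import numpy as np
from scipy import ndimage

def extremal_region_areas(image, x, y):
    """Area of the bright extremal region containing pixel (y, x), as a function of the
    threshold; plateaus in this curve mark maximally stable regions."""
    img = image.astype(int)
    areas = []
    for t in range(256):
        binary = img > t                       # "if I(x,y) > I_t then 255 else 0"
        labels, _ = ndimage.label(binary)      # extremal regions = connected components
        lab = labels[y, x]
        areas.append(int((labels == lab).sum()) if lab > 0 else 0)
    return areas
```

Regions whose area stays nearly constant over a wide range of thresholds are the maximally stable ones.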

76 Example Extremal regions can be extracted in O(n log(log n)) time, where n is the number of pixels. Sensitive to image blur.

