Presentation on theme: "Interest Point Detectors (see CS485/685 notes for more details)"— Presentation transcript:
1 Interest Point Detectors (see CS485/685 notes for more details) CS491Y/691Y Topics in Computer Vision, Dr. George Bebis
2 What is an Interest Point? A point in an image which has a well-defined position and can be robustly detected. Typically associated with a significant change of one or more image properties simultaneously (e.g., intensity, color, texture).
3 What is an Interest Point? (cont’d) Corners are a special case of interest points. However, interest points can be more generic than corners.
4 Why are interest points useful? They can be used to find corresponding points between images, which is very useful for numerous applications (e.g., panorama stitching, stereo matching between left and right camera views)!
5 How to find corresponding points? Define local patches surrounding the interest points and extract a feature descriptor from every patch. Match feature descriptors to find corresponding points.
6 Properties of good features: Local: robust to occlusion and clutter. Accurate: precise localization. Covariant. Robust: noise, blur, compression, etc. Efficient: close to real-time performance. Repeatable.
7 Interest point detectors should be covariant: features should be detected in corresponding locations despite geometric or photometric changes.
8 Interest point descriptors should be invariant: they should be similar despite geometric or photometric transformations.
9 Interest point candidates: Use features with gradients in at least two, significantly different orientations (e.g., corners, junctions, etc.).
10 Harris Corner Detector Assuming a W x W window, it computes the matrix AW(x,y), a 2 x 2 matrix called the auto-correlation matrix, built from products of the derivatives fx, fy (the horizontal and vertical derivatives), weighted by a smoothing function w(x,y) (e.g., a Gaussian). C. Harris and M. Stephens, "A Combined Corner and Edge Detector", Proceedings of the 4th Alvey Vision Conference, pages 147–151, 1988.
11 Why is the auto-correlation matrix useful? It describes the gradient distribution (i.e., local structure) inside the window!
12 Properties of the auto-correlation matrix AW is symmetric and can be decomposed via its eigenvalues and eigenvectors. We can visualize AW as an ellipse with axis lengths (λmax)^1/2 and (λmin)^1/2 and axis directions determined by the eigenvectors.
13 Harris Corner Detector (cont’d) The eigenvectors of AW encode the directions of intensity change; the eigenvalues of AW encode the strength of intensity change. The eigenvector v1 associated with λmax gives the direction of the fastest change, and v2 associated with λmin the direction of the slowest change.
14 Harris Corner Detector (cont’d) Classification of pixels using the eigenvalues of AW: "Corner": λ1 and λ2 are both large, λ1 ~ λ2; intensity changes in all directions. "Edge": λ1 >> λ2 (or λ2 >> λ1). "Flat" region: λ1 and λ2 are small; intensity is almost constant in all directions.
15 Harris Corner Detector (cont’d) To avoid computing the eigenvalues explicitly, the Harris detector uses the following function: R(AW) = det(AW) − α trace²(AW), which is equal to: R(AW) = λ1 λ2 − α (λ1 + λ2)². α is a constant.
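The equivalence of the two forms of R follows from det(AW) = λ1 λ2 and trace(AW) = λ1 + λ2. A minimal NumPy check (the matrix values below are arbitrary, chosen only for illustration):

```python
import numpy as np

alpha = 0.05  # typical value, between 0.04 and 0.06

# an arbitrary symmetric 2x2 auto-correlation matrix (values are made up)
A = np.array([[2.0, 0.6],
              [0.6, 1.0]])

# form 1: no eigen-decomposition needed
R_det_trace = np.linalg.det(A) - alpha * np.trace(A) ** 2

# form 2: using the eigenvalues explicitly
l1, l2 = np.linalg.eigvalsh(A)
R_eigen = l1 * l2 - alpha * (l1 + l2) ** 2

# the two forms agree, since det = l1*l2 and trace = l1+l2
agree = np.isclose(R_det_trace, R_eigen)
```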
16 Harris Corner Detector (cont’d) Classification of image points using R(AW) = det(AW) − α trace²(AW): "Corner": R > 0. "Edge": R < 0. "Flat" region: |R| small. α is usually between 0.04 and 0.06.
17 Harris Corner Detector (cont’d) Other cornerness functions can also be used (e.g., the minimum eigenvalue min(λ1, λ2), as in Shi-Tomasi).
18 Harris Corner Detector - Steps 1. Compute the horizontal and vertical (Gaussian) derivatives fx, fy (σD is called the "differentiation" scale). 2. Compute the three images involved in AW: fx², fy², and fx fy.
19 Harris Detector - Steps 3. Convolve each of the three images with a larger Gaussian w(x,y) (σI is called the "integration" scale). 4. Determine cornerness: R(AW) = det(AW) − α trace²(AW). 5. Find local maxima.
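The five steps can be sketched in NumPy as follows (a minimal sketch, not the authors' implementation; all function names are my own, and the smoothing uses a truncated separable Gaussian):

```python
import numpy as np

def gauss1d(sigma):
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def smooth(img, sigma):
    # separable Gaussian convolution: rows, then columns
    k = gauss1d(sigma)
    out = np.apply_along_axis(np.convolve, 1, img, k, mode='same')
    return np.apply_along_axis(np.convolve, 0, out, k, mode='same')

def harris_response(img, sigma_d=1.0, sigma_i=2.0, alpha=0.05):
    """Steps 1-4: derivatives at sigma_d, integration at sigma_i, cornerness R."""
    g = smooth(img.astype(float), sigma_d)   # step 1: Gaussian derivatives
    fy, fx = np.gradient(g)
    Ixx = smooth(fx * fx, sigma_i)           # steps 2-3: the three images of A_W,
    Iyy = smooth(fy * fy, sigma_i)           # integrated with the larger Gaussian
    Ixy = smooth(fx * fy, sigma_i)
    det = Ixx * Iyy - Ixy ** 2               # step 4: R = det - alpha * trace^2
    tr = Ixx + Iyy
    return det - alpha * tr ** 2

# toy image: a bright square; the strongest responses should be near its corners
img = np.zeros((40, 40))
img[10:30, 10:30] = 1.0
R = harris_response(img)
y, x = np.unravel_index(np.argmax(R), R.shape)  # step 5 (peak of R)
```

Step 5 in full would keep all local maxima of R, not just the global one; the global peak here is enough to check the behavior on the toy square.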
28 Multi-scale Harris Detector (cont’d) The same interest point will be detected at multiple consecutive scales. The interest point location will shift as scale increases (i.e., due to smoothing). The size of each circle corresponds to the scale at which the interest point was detected.
29 How do we match them? Corresponding features might appear at different scales. How do we determine these scales? We need a scale selection mechanism!
30 Exhaustive Search A simple approach for scale selection, but not efficient!
31 Characteristic Scale Find the characteristic scale of each feature (i.e., the scale revealing the spatial extent of an interest point).
32 Characteristic Scale (cont’d) Only a subset of interest points are selected using the characteristic scale of each feature (the size of the circles is related to the scale at which the interest points were selected). Matching can be simplified!
33 Automatic Scale Selection – Main Idea Design a function F(x,σn) which provides some local measure. Select points at which F(x,σn) is maximal over σn; the maximum of F(x,σn) corresponds to the characteristic scale! T. Lindeberg, "Feature detection with automatic scale selection", International Journal of Computer Vision, vol. 30, no. 2, 1998.
34 Lindeberg et al., 1996. Slide from Tinne Tuytelaars.
42 Automatic Scale Selection (cont’d) Using the characteristic scale, the spatial extent of interest points becomes covariant to scale transformations. The ratio σ1/σ2 reveals the scale factor between the images (here σ1/σ2 = 2.5).
43 How to choose F(x,σn)? Typically, F(x,σn) is defined using derivatives. The LoG (Laplacian of Gaussian) yielded the best results in an evaluation study; the DoG (Difference of Gaussians) was second best. C. Schmid, R. Mohr, and C. Bauckhage, "Evaluation of Interest Point Detectors", International Journal of Computer Vision, 37(2), 2000.
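A sketch of F(x,σn) as the scale-normalized LoG, |σ² ∇²(Gσ ∗ I)|, evaluated at one point over a range of scales. For a binary disk of radius r this measure is known to peak near σ = r/√2, which the toy example below checks numerically (function names are my own; the Laplacian uses repeated finite differences):

```python
import numpy as np

def smooth(img, sigma):
    # separable Gaussian convolution with a truncated kernel
    r = int(3 * sigma)
    xs = np.arange(-r, r + 1)
    k = np.exp(-xs ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    out = np.apply_along_axis(np.convolve, 1, img, k, mode='same')
    return np.apply_along_axis(np.convolve, 0, out, k, mode='same')

def norm_log_center(img, sigmas):
    """|sigma^2 * Laplacian(G_sigma * I)| at the image center, per scale."""
    cy, cx = img.shape[0] // 2, img.shape[1] // 2
    vals = []
    for s in sigmas:
        g = smooth(img.astype(float), s)
        lap = (np.gradient(np.gradient(g, axis=0), axis=0) +
               np.gradient(np.gradient(g, axis=1), axis=1))
        vals.append(abs(s * s * lap[cy, cx]))
    return np.array(vals)

# binary disk of radius 8: characteristic scale should be near 8/sqrt(2) ~ 5.7
yy, xx = np.mgrid[:80, :80]
disk = (((yy - 40) ** 2 + (xx - 40) ** 2) <= 8 ** 2).astype(float)
sigmas = np.arange(2.0, 10.5, 0.5)
s_star = sigmas[np.argmax(norm_log_center(disk, sigmas))]
```

The σ² factor is what makes the measure comparable across scales; without it, larger smoothing always shrinks the Laplacian and no interior maximum appears.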
45 Harris-Laplace Detector Multi-scale Harris with scale selection: Harris maxima in space (x, y), LoG maxima over σn to find the characteristic scale.
46 Harris-Laplace Detector (cont’d) (1) Find interest points at multiple scales using the Harris detector. Scales are chosen as σn = kⁿσ. At each scale, choose local maxima assuming a 3 x 3 window (i.e., non-maximum suppression), where σD = σn and σI = γσD.
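The 3 x 3 non-maximum suppression in step (1) can be sketched as follows (a minimal sketch; the function name is my own):

```python
import numpy as np

def local_maxima_3x3(R, threshold):
    """Keep points above threshold that are strict maxima over their 8 neighbors."""
    h, w = R.shape
    # pad with -inf so border pixels compare correctly against "outside" neighbors
    padded = np.pad(R, 1, mode='constant', constant_values=-np.inf)
    keep = R > threshold
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            keep &= R > padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return keep

# toy response map with a single peak
R = np.zeros((5, 5))
R[2, 2] = 5.0
mask = local_maxima_3x3(R, threshold=0.5)
```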
47 Harris-Laplace Detector (cont’d) (2) Select points at which the normalized LoG is maximal across scales (comparing σn-1, σn, σn+1) and the maximum is above a threshold. K. Mikolajczyk and C. Schmid, "Indexing based on scale invariant interest points", IEEE Int. Conference on Computer Vision, 2001.
48 Example Interest points detected at each scale (σ = 1.2, 2.4, 4.8, 9.6) using Harris-Laplace. There are few correspondences between levels with the same σ, and more correspondences between levels having a σ ratio of 2 (the images differ by a scale factor of 1.92).
49 Example (cont’d) (same viewpoint – change in focal length and orientation) More than 2000 points would have been detected without scale selection. Using scale selection, 190 and 213 points were detected in the left and right images, respectively.
50 Example (cont’d) 58 points are initially matched (some of them incorrectly).
51 Example (cont’d) Reject outliers (i.e., inconsistent matches) using RANSAC; 32 matches remain, all of which are correct. The estimated scale factor is 4.9 and the rotation angle is 19 degrees.
53 Harris-Laplace using DoG Look for local maxima in a DoG pyramid. David G. Lowe, "Distinctive image features from scale-invariant keypoints", Int. Journal of Computer Vision, 60(2), 2004.
54 Handling Affine Changes Similarity transformations cannot account for perspective distortions; an affine transformation can be used as an approximation for planar surfaces.
55 Harris Affine Detector Use an iterative approach: Extract approximate locations and scales using the Harris-Laplace detector. For each point, modify the scale and shape of its neighborhood in an iterative fashion. This converges to stable points that are covariant to affine transformations.
56 Steps of Iterative Affine Adaptation 1. Detect initial locations and neighborhoods using Harris-Laplace. 2. Estimate the affine shape of each neighborhood using the 2nd order moment matrix μ(x, σI, σD).
57 Steps of Iterative Affine Adaptation (cont’d) 3. Normalize (i.e., de-skew) the affine region by mapping it to a circular one (i.e., "remove" perspective distortions). 4. Re-detect the new location and scale in the normalized image. 5. Go to step 2 if the eigenvalues of μ(x, σI, σD) for the new point are not equal (i.e., not yet adapted to the characteristic shape).
58 Iterative affine adaptation - Examples Initial points (Example 1, Example 2).
59 Iterative affine adaptation – Examples (cont’d) Iteration #1 (Example 1, Example 2).
60 Iterative affine adaptation – Examples (cont’d) Iteration #2 (Example 1, Example 2).
61 Iterative affine adaptation – Examples (cont’d) Iteration #3, #4, … (Example 1, Example 2). K. Mikolajczyk and C. Schmid, "Scale and Affine invariant interest point detectors", International Journal of Computer Vision, 60(1), 2004.
63 De-skewing Consider a point xL with 2nd order moment matrix ML. The de-skewing transformation maps the point through the square root of the moment matrix: x'L = ML^(1/2) xL.
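A sketch of the de-skewing step: for a symmetric positive-definite M, the matrix square root M^(1/2) maps the anisotropic region boundary (the ellipse x^T M x = 1) onto the unit circle. The moment matrix values below are hypothetical, chosen only to illustrate:

```python
import numpy as np

def sqrtm_spd(M):
    """Matrix square root of a symmetric positive-definite matrix via eigh."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(w)) @ V.T

# a hypothetical 2nd order moment matrix (values are made up)
M = np.array([[4.0, 1.0],
              [1.0, 2.0]])
A = sqrtm_spd(M)  # A @ A == M

# points on the ellipse x^T M x = 1 (the skewed region boundary):
# x = A^{-1} u for unit vectors u
t = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
ellipse = np.linalg.solve(A, np.stack([np.cos(t), np.sin(t)]))

# de-skewing with M^(1/2) maps the ellipse onto the unit circle
deskewed = A @ ellipse
radii = np.linalg.norm(deskewed, axis=0)
```

This is why equal eigenvalues of the moment matrix (step 5 of the iteration) signal convergence: for an isotropic region, M^(1/2) is already a scaled identity and de-skewing changes nothing.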
64 De-skewing corresponding regions Consider two points xL and xR which are related through an affine transformation.
65 De-skewing corresponding regions (cont’d) The normalized regions are related by a pure rotation R.
66 Resolving orientation ambiguity Create a histogram of local gradient directions in the patch (0 to 2π, 36 bins). Smooth the histogram and assign the canonical orientation at the peak of the smoothed histogram (the dominant gradient direction).
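The orientation histogram can be sketched as follows (a minimal sketch; the function name and the small binomial smoothing kernel are my own choices, not from the slides):

```python
import numpy as np

def dominant_orientation(patch, nbins=36):
    """Peak of a smoothed, magnitude-weighted gradient-orientation histogram (deg)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 360.0
    hist, _ = np.histogram(ang, bins=nbins, range=(0.0, 360.0), weights=mag)
    # circular smoothing: wrap two bins at each end, then convolve
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    padded = np.concatenate([hist[-2:], hist, hist[:2]])
    smoothed = np.convolve(padded, k, mode='valid')
    peak = int(np.argmax(smoothed))
    return (peak + 0.5) * 360.0 / nbins  # center of the peak bin

# intensity ramp increasing to the right: gradient points along 0 degrees
ramp = np.tile(np.arange(16.0), (16, 1))
theta = dominant_orientation(ramp)
```

Weighting the histogram by gradient magnitude makes strong edges dominate the vote, so the canonical orientation is robust to low-contrast noise in the patch.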
68 Other Affine Invariant Blob/Region Detectors There are many other techniques for detecting affine invariant blobs or regions, for example: Intensity Extrema-Based Regions (IER) and Maximally Stable Extremal Regions (MSERs). These need no interest point detection.
69 Intensity Extrema-Based Region Detector (1) Take a local intensity extremum as the initial point. (2) Examine the intensity along "rays" from the local extremum. (3) The point on each ray for which fI(t) reaches an extremum is selected (i.e., invariant); in fI(t), d > 0 is a small constant. (4) Linking these points together yields an affinely invariant region, to which an ellipse is fitted. Tuytelaars, T. and Van Gool, L., "Matching Widely Separated Views based on Affine Invariant Regions", International Journal on Computer Vision, 59(1):61–85, 2004.
70 Intensity Extrema-Based Region Detector (cont’d) The regions found may not exactly correspond, so we approximate them with ellipses using 2nd order moments.
71 Intensity Extrema-Based Region Detector (cont’d) Double the size of the ellipse to make regions more distinctive. The final ellipse is not necessarily centered at the original anchor point.
72 Intensity Extrema-Based Region Detector (cont’d)
73 Intensity Extrema-Based Region Detector (cont’d) Region extraction is fairly robust to inaccurate localization of the intensity extremum.
74 Maximally Stable Extremal Regions (MSERs) Consider all possible thresholdings of a gray-scale image: if I(x,y) > It then I(x,y) = 255; else I(x,y) = 0. Matas, J., Chum, O., Urban, M. and Pajdla, T., "Robust wide-baseline stereo from maximally stable extremal regions", British Machine Vision Conf., pp. 384–393, 2002.
75 Maximally Stable Extremal Regions (MSERs) Extremal regions: all regions formed using different thresholds; they can be extracted using connected components. Maximally stable extremal regions: regions that remain stable over a large range of thresholds. Each MSER is approximated by an ellipse.
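The stability criterion can be sketched as follows: sweep the threshold, extract the connected component containing a given pixel, and watch its area. For a high-contrast region the area stays constant over a wide threshold range, which is exactly what "maximally stable" rewards (a minimal sketch with a plain BFS labeling; function names are my own, and a real implementation uses the much faster union-find/watershed-style algorithm):

```python
import numpy as np
from collections import deque

def component_area(mask, seed):
    """Area of the 4-connected component of True pixels containing `seed`."""
    if not mask[seed]:
        return 0
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    q = deque([seed])
    seen[seed] = True
    area = 0
    while q:
        y, x = q.popleft()
        area += 1
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                seen[ny, nx] = True
                q.append((ny, nx))
    return area

# high-contrast bright square (200) on a dark background (50)
img = np.full((40, 40), 50)
img[10:30, 10:30] = 200

# sweep thresholds; the component containing the square's center keeps area 400
areas = [component_area(img > t, (20, 20)) for t in range(60, 200, 10)]
```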
76 Example Extremal regions can be extracted in O(n log(log n)) time, where n is the number of pixels. MSERs are sensitive to image blur.