
1 Instructor: Mircea Nicolescu Lecture 10 CS 485 / 685 Computer Vision

2 Why Are Interest Points Useful? For establishing corresponding points between images: stereo matching (left camera, right camera), panorama stitching.

3 How to Find Corresponding Points? Need to define local patches surrounding the interest points and extract feature descriptors from every patch. Match feature descriptors to find corresponding points.

4 Properties of Good Features
−Local: features are local, so they are robust to occlusion and clutter (no prior segmentation!)
−Accurate: precise localization
−Invariant (or covariant)
−Robust: noise, blur, compression, etc. do not have a big impact on the feature
−Distinctive: individual features can be matched to a large database of objects
−Efficient: close to real-time performance
−Repeatable

5 Invariance / Covariance A function f is invariant under a transformation T if its value does not change when the transformation is applied to its argument: if f(x) = y then f(T(x)) = y. A function f is covariant when it changes in a way consistent with the transformation T: if f(x) = y then f(T(x)) = T(f(x)) = T(y).

6 Interest Point Detectors Should Be Covariant Features should be detected in corresponding locations despite geometric or photometric changes.

7 Interest Point Descriptors Should Be Invariant Descriptors should be similar despite geometric or photometric transformations.

8 Interest Point Candidates Use features with gradients in at least two significantly different orientations (e.g., corners, junctions, etc.).

9 Aperture Problem A point on a line is hard to match; a corner is easier to match (frames t and t+1).

10 Interest Point Candidates (figure: auto-correlation)

11 Steps in Corner Detection
1. For each pixel, apply the corner operator to obtain a cornerness measure for that pixel.
2. Threshold the cornerness map to eliminate weak corners.
3. Apply non-maximal suppression to eliminate points whose cornerness measure is not larger than the cornerness values of all points within a certain distance.
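The last two steps are easy to prototype. The sketch below is illustrative (not the course's code): it assumes a precomputed cornerness map, and the threshold and suppression radius are placeholder parameters.

import numpy as np
from scipy.ndimage import maximum_filter

def select_corners(cornerness, thresh, radius=5):
    # Step 2: eliminate weak corners
    strong = cornerness > thresh
    # Step 3: non-maximal suppression -- keep a pixel only if its cornerness
    # equals the maximum within a (2*radius+1) x (2*radius+1) neighborhood
    local_max = cornerness == maximum_filter(cornerness, size=2 * radius + 1)
    rows, cols = np.nonzero(strong & local_max)
    return list(zip(rows, cols))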

12 Steps in Corner Detection (illustration)

13 Corner Detection Methods
Contour based
−Extract contours and search for maximal curvature or inflection points (i.e., curvature zero-crossings) along the contour.
Intensity based
−Compute a measure that indicates the presence of an interest point (1) directly from the gray (or color) values, or (2) by first fitting a parametric model to the gray (or color) values.
−Methods using parametric models can localize corners to sub-pixel accuracy but are more expensive.

14 Curvature Scale Space A contour-based approach: curvature scale space. Parametric contour representation: (x(t), y(t)). Curvature: κ(t) = (x'(t) y''(t) − y'(t) x''(t)) / (x'(t)² + y'(t)²)^(3/2)

15 Curvature Scale Space Curvature at scale σ: κ(t, σ) = (X_t Y_tt − X_tt Y_t) / (X_t² + Y_t²)^(3/2), where X(t, σ) = x(t) ∗ g(t, σ), Y(t, σ) = y(t) ∗ g(t, σ), g(t, σ) is a Gaussian of width σ, and ∗ denotes convolution.

16 Curvature Scale Space (figure: curvature zero-crossing positions plotted against σ) G. Bebis, G. Papadourakis, and S. Orphanoudakis, "Curvature Scale Space Driven Object Recognition with an Indexing Scheme Based on Artificial Neural Networks", Pattern Recognition, Vol. 32, No. 7, pp. 1175-1201, 1999.
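As a rough illustration of the idea (not the cited authors' implementation), the sketch below smooths the contour coordinates with Gaussians of increasing σ, computes the curvature at each scale, and records the zero-crossing positions that form the CSS image. The contour arrays and σ values are assumed inputs.

import numpy as np
from scipy.ndimage import gaussian_filter1d

def curvature(x, y, sigma):
    # Smooth the parametric coordinates x(t), y(t); 'wrap' because the contour is closed
    xs = gaussian_filter1d(x.astype(float), sigma, mode='wrap')
    ys = gaussian_filter1d(y.astype(float), sigma, mode='wrap')
    # First and second derivatives with respect to the contour parameter t
    xt, yt = np.gradient(xs), np.gradient(ys)
    xtt, ytt = np.gradient(xt), np.gradient(yt)
    # kappa(t, sigma) = (x' y'' - y' x'') / (x'^2 + y'^2)^(3/2)
    return (xt * ytt - yt * xtt) / ((xt**2 + yt**2) ** 1.5 + 1e-12)

def curvature_scale_space(x, y, sigmas):
    # For each scale, store the positions where the curvature changes sign
    return {s: np.nonzero(np.diff(np.sign(curvature(x, y, s))) != 0)[0] for s in sigmas}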

17 Zuniga-Haralick Detector Approximate the image function in the neighborhood of pixel (i, j) by a bicubic polynomial f(x, y) = c_1 + c_2 x + c_3 y + c_4 x² + c_5 xy + c_6 y² + c_7 x³ + c_8 x²y + c_9 xy² + c_10 y³. The measure of "cornerness" is the rate of change of the gradient direction along the edge: k(i, j) = −2 (c_2² c_6 − c_2 c_3 c_5 + c_3² c_4) / (c_2² + c_3²)^(3/2)

18 Corner Detection Using Edge Detection? Edge detectors are not stable at corners: the gradient is ambiguous at the corner tip (i.e., discontinuous near the corner).

19 Corner Detection Using Intensity The image gradient has two or more dominant directions near a corner, so shifting a window in any direction should give a large change in intensity.
−"edge": no change along the edge direction
−"corner": significant change in all directions
−"flat" region: no change in any direction

20 Moravec Detector Idea: measure the intensity variation at (x, y) by shifting a small window (3x3 or 5x5) by one pixel in each of the eight principal directions (horizontally, vertically, and along the four diagonals).

21 Moravec Detector The intensity variation S_W in a given direction (Δx, Δy) can be calculated as: S_W(Δx, Δy) = Σ_(x,y)∈W [ f(x+Δx, y+Δy) − f(x, y) ]²

22 Moravec Detector Moravec's detector calculates S_W in all 8 directions: (Δx, Δy) ∈ {−1, 0, 1} × {−1, 0, 1}, excluding (0, 0): S_W(−1,−1), S_W(−1,0), ..., S_W(1,1)

23 Moravec Detector The "cornerness" of a pixel is the minimum intensity variation found over the eight shift directions: Cornerness(x, y) = min{ S_W(−1,−1), S_W(−1,0), ..., S_W(1,1) }. (Figure: cornerness map, normalized. Note the response to isolated points!)
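A minimal NumPy sketch of the Moravec cornerness map (illustrative only; the default 5x5 window is an assumption):

import numpy as np
from scipy.ndimage import uniform_filter

def moravec_cornerness(image, window=5):
    img = image.astype(float)
    # The eight one-pixel shifts (horizontal, vertical, and the four diagonals)
    shifts = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    cornerness = np.full(img.shape, np.inf)
    for dx, dy in shifts:
        # Squared difference between the image and a copy shifted by (dx, dy)
        diff2 = (np.roll(img, (dy, dx), axis=(0, 1)) - img) ** 2
        # S_W(dx, dy): sum of the squared differences over the window around each pixel
        s_w = uniform_filter(diff2, size=window) * window**2
        # Cornerness is the minimum variation over the eight shifts
        cornerness = np.minimum(cornerness, s_w)
    return cornerness

Thresholding this map and applying non-maximal suppression (as in the earlier sketch) gives the final corners.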

24 Moravec Detector Non-maximal suppression yields the final corners.

25 Moravec Detector Does a reasonable job of finding the majority of true corners. However, edge points along directions other than the eight principal directions are assigned a relatively large cornerness value.

26 Moravec Detector The response is anisotropic because the intensity variation is only calculated at a discrete set of directions and shifts (i.e., it is not rotationally invariant).

27 Harris Detector Improves on the Moravec detector by: (1) avoiding the use of discrete directions and discrete shifts, and (2) using a Gaussian window instead of a binary square window (1 inside the window, 0 outside).

28 Harris Detector Idea: decompose S_W by factoring out Δx and Δy. Use a Taylor series expansion to "linearize" the shifted image function f(x+Δx, y+Δy).

29 Taylor Series Expansion – Review A representation of a function f(x) as an infinite sum of terms calculated from the values of its derivatives at a single point a. 1D: f(x) = f(a) + f'(a)(x−a) + (f''(a)/2!)(x−a)² + (f'''(a)/3!)(x−a)³ + ...

30 Taylor Series Expansion – Example The exponential function (in blue), and the sum of the first n+1 terms of its Taylor series at a = 0 (in red).

31 Taylor Series Expansion – Review Can be generalized to higher dimensions. 2D (second order): f(x + Δx) ≈ f(x) + ∇f(x)^T Δx + (1/2) Δx^T H(x) Δx, where ∇f is the gradient and H is the Hessian; the same form holds in n dimensions.
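A tiny numeric check of the first-order approximation, which is exactly the linearization the Harris derivation applies to the image function (the test function and displacement below are arbitrary choices):

import numpy as np

f  = lambda x, y: np.exp(x) * np.sin(y)            # test function
fx = lambda x, y: np.exp(x) * np.sin(y)            # partial derivative in x
fy = lambda x, y: np.exp(x) * np.cos(y)            # partial derivative in y

x0, y0 = 0.5, 1.0                                  # expansion point
dx, dy = 0.01, -0.02                               # small displacement

exact  = f(x0 + dx, y0 + dy)
linear = f(x0, y0) + fx(x0, y0) * dx + fy(x0, y0) * dy
print(exact, linear, abs(exact - linear))          # error is O(||(dx, dy)||^2)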

32 Harris Detector Using the first-order Taylor approximation: f(x+Δx, y+Δy) ≈ f(x, y) + f_x(x, y) Δx + f_y(x, y) Δy. Substitute this into S_W.

33 Harris Detector S_W(Δx, Δy) ≈ Σ_(x,y)∈W [ f_x(x, y) Δx + f_y(x, y) Δy ]². Rewrite using the dot product: S_W(Δx, Δy) ≈ Σ_(x,y)∈W ( [f_x(x, y)  f_y(x, y)] [Δx  Δy]^T )²

34 Harris Detector Since the squared dot product can be written as a quadratic form, (a^T b)² = b^T (a a^T) b, we get: S_W(Δx, Δy) ≈ [Δx Δy] ( Σ_(x,y)∈W [ f_x²  f_x f_y ; f_x f_y  f_y² ] ) [Δx Δy]^T

35 Harris Detector A_W(x, y) is a 2 x 2 matrix called the auto-correlation or 2nd-order moment matrix: A_W(x, y) = Σ_(u,v)∈W [f_x(u,v)  f_y(u,v)]^T [f_x(u,v)  f_y(u,v)] = Σ_(u,v)∈W [ f_x²(u,v)  f_x(u,v) f_y(u,v) ; f_x(u,v) f_y(u,v)  f_y²(u,v) ]. So: S_W(Δx, Δy) ≈ [Δx Δy] A_W(x, y) [Δx Δy]^T

36 Harris Detector Using a window function w(x, y) (1 inside the window, 0 outside), we rewrite S_W: S_W(Δx, Δy) = Σ_(x,y) w(x, y) [ f(x+Δx, y+Δy) − f(x, y) ]²

37 Harris Detector Then A_W becomes: A_W(x, y) = Σ_(u,v) w(u, v) [ f_x²(u,v)  f_x(u,v) f_y(u,v) ; f_x(u,v) f_y(u,v)  f_y²(u,v) ]

38 Harris Detector Harris uses a Gaussian window: w(x, y) = G(x, y, σ_I), where σ_I is called the "integration" scale.
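A sketch of computing the entries of A_W at every pixel with a Gaussian window. Computing the gradients as derivatives of a Gaussian at a separate "differentiation" scale is an assumption here (a Sobel operator would also do); sigma_i plays the role of the integration scale σ_I.

import numpy as np
from scipy.ndimage import gaussian_filter

def auto_correlation_matrix(image, sigma_d=1.0, sigma_i=2.0):
    img = image.astype(float)
    # Gradients f_x, f_y (derivative-of-Gaussian; axis 1 is x, axis 0 is y)
    fx = gaussian_filter(img, sigma_d, order=(0, 1))
    fy = gaussian_filter(img, sigma_d, order=(1, 0))
    # Gaussian-weighted sums of the gradient products: the entries of A_W
    a = gaussian_filter(fx * fx, sigma_i)
    b = gaussian_filter(fx * fy, sigma_i)
    c = gaussian_filter(fy * fy, sigma_i)
    return a, b, c   # A_W = [[a, b], [b, c]] at each pixel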

39 Properties of Auto-Correlation Matrix A_W does not depend on the shift (Δx, Δy). It describes the gradient distribution (i.e., the local structure) inside the window!

40 Properties of Auto-Correlation Matrix A_W is symmetric and can be decomposed as A_W = R^(-1) [ λ_max  0 ; 0  λ_min ] R. We can visualize A_W as an ellipse with axis lengths ((λ_max)^(1/2), (λ_min)^(1/2)) and directions determined by its eigenvalues and eigenvectors.

41 Harris Detector The eigenvectors of A_W encode the direction of intensity change; the eigenvalues of A_W encode the strength of intensity change. The eigenvectors v_1 and v_2 give the directions of the fastest and slowest change, with the corresponding axis lengths (λ_max)^(1/2) and (λ_min)^(1/2).

42 Harris Detector Classification of pixels using the eigenvalues of A_W:
−"Corner": λ_1 and λ_2 are large, λ_1 ~ λ_2; the intensity changes in all directions
−"Edge": λ_1 >> λ_2 (or λ_2 >> λ_1)
−"Flat" region: λ_1 and λ_2 are small; S_W is almost constant in all directions
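For a 2 x 2 symmetric matrix the eigenvalues have a closed form, so this classification can be applied per pixel directly to the entries returned by the earlier sketch; the threshold below is illustrative.

import numpy as np

def eigenvalues(a, b, c):
    # Eigenvalues of [[a, b], [b, c]]: (a+c)/2 +/- sqrt(((a-c)/2)^2 + b^2)
    mean = (a + c) / 2.0
    dev  = np.sqrt(((a - c) / 2.0) ** 2 + b ** 2)
    return mean + dev, mean - dev   # lambda_max, lambda_min

def classify(lmax, lmin, t=1e-2):
    labels = np.full(lmax.shape, 'flat', dtype=object)
    labels[(lmax > t) & (lmin <= t)] = 'edge'     # one large, one small eigenvalue
    labels[(lmax > t) & (lmin > t)]  = 'corner'   # both eigenvalues large
    return labels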

43 Distribution of f_x and f_y (figure: gradient distributions plotted with f_x and f_y on the axes)

44 Harris Detector One way to compute "good" corners is by thresholding min(λ_1, λ_2) (assume λ_1 > λ_2). J. Shi and C. Tomasi, "Good Features to Track", 9th IEEE Conference on Computer Vision and Pattern Recognition, June 1994.
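Given the closed-form eigenvalues above, the Shi-Tomasi cornerness is one line; thresholding it and applying non-maximal suppression (as in the earlier sketches) yields the corners. For reference, OpenCV's goodFeaturesToTrack uses this minimum-eigenvalue measure by default.

import numpy as np

def shi_tomasi_cornerness(a, b, c):
    # min(lambda_1, lambda_2) of A_W = [[a, b], [b, c]] at every pixel
    return (a + c) / 2.0 - np.sqrt(((a - c) / 2.0) ** 2 + b ** 2)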

