Interest Points EE/CSE 576 Linda Shapiro.

1 Interest Points EE/CSE 576 Linda Shapiro

2 Preview: Harris detector
Interest points extracted with Harris (~ 500 points)

3 How can we find corresponding points?

4 Not always easy NASA Mars Rover images

5 Answer below (look for tiny colored squares…)
NASA Mars Rover images. Figure by Noah Snavely.

6 Human eye movements What catches your interest? Your eyes don't see everything at once; they jump around, and you see only about 2 degrees with high resolution. Yarbus eye tracking

7 Interest points Suppose you have to click on some points in the original image, go away, come back after I deform the image, and click on the same points again. Which points would you choose? (original vs. deformed)

8 Intuition

9 Corners We should easily recognize the point by looking through a small window. Shifting the window in any direction should give a large change in intensity. Flat region: no change in all directions. Edge: no change along the edge direction. Corner: significant change in all directions. Source: A. Efros

10 Let's look at the gradient distributions
a. Moving around causes no difference (flat region). b. Moving up and down causes no difference, but moving left and right does (edge). c. Moving in either direction causes a difference (corner).

11 Principal Component Analysis
The principal component is the direction of highest variance. How to compute the PCA components: Subtract off the mean from each data point. Compute the covariance matrix. Compute its eigenvectors and eigenvalues. The components are the eigenvectors, ranked by their eigenvalues. Each subsequent component is the direction of highest variance orthogonal to the previous components.
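The recipe above can be sketched in a few lines of NumPy (a minimal sketch; the function name `pca` and the toy data are illustrative, not from the slides):

```python
import numpy as np

def pca(points):
    """PCA on an (N, d) array of data points.

    Returns the eigenvectors (as columns, ranked by decreasing
    variance) and the corresponding eigenvalues.
    """
    centered = points - points.mean(axis=0)   # 1. subtract off the mean
    cov = np.cov(centered, rowvar=False)      # 2. covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)    # 3. eigenvectors/eigenvalues
    order = np.argsort(eigvals)[::-1]         # 4. rank by eigenvalue
    return eigvecs[:, order], eigvals[order]

# An elongated 2-D point cloud: the first component should lie along x.
rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 2)) * [5.0, 1.0]
components, variances = pca(pts)
print(variances)  # variance along x (~25) much larger than along y (~1)
```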

12 Corners have … Both eigenvalues are large!

13 Second Moment Matrix or Harris Matrix
2 x 2 matrix of image derivatives smoothed by Gaussian weights. Notation: First compute Ix, Iy, and IxIy as 3 images; then apply a Gaussian to each. OR, first apply the Gaussian and then compute the derivatives.

14 The math
To compute the eigenvalues: 1. Compute the Harris matrix over a window: H = [ Σw·Ix²  Σw·IxIy ; Σw·IxIy  Σw·Iy² ], where the sums are weighted, typically with Gaussian weights w. In practice: the entries are the smoothed product images ΣIx², ΣIxIy, ΣIy². 2. Compute the eigenvalues of that matrix.
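The Harris matrix entries can be sketched with NumPy/SciPy (a minimal sketch; `harris_matrix` and the test image are illustrative names and data, not from the slides):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_matrix(image, sigma=1.0):
    """Per-pixel Harris matrix entries.

    Returns three arrays (Sxx, Sxy, Syy): the Gaussian-weighted
    sums of Ix^2, Ix*Iy, and Iy^2 around every pixel.
    """
    Iy, Ix = np.gradient(image.astype(float))  # derivatives (rows, cols)
    # Smooth each product image with a Gaussian: the window weights w(x, y).
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    return Sxx, Sxy, Syy

# A bright quadrant: a corner at (10, 10), edges along row 10 and col 10.
img = np.zeros((20, 20))
img[10:, 10:] = 1.0
Sxx, Sxy, Syy = harris_matrix(img, sigma=1.0)
```

The eigenvalues of the 2x2 matrix at any pixel follow from det = Sxx·Syy − Sxy² and trace = Sxx + Syy via λ = (trace ± sqrt(trace² − 4·det)) / 2.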

15 Corner Response Function
Computing eigenvalues is expensive, so the Harris corner detector uses the following alternative: R = det(H) − k·trace(H)². Reminder: det(H) = λ1·λ2 and trace(H) = λ1 + λ2, so R can be computed without explicitly finding the eigenvalues.

16 Harris detector: Steps
1. Compute derivatives Ix, Iy, and IxIy at each pixel and smooth them with a Gaussian. (Or smooth first and then take derivatives.) 2. Compute the Harris matrix H in a window around each pixel. 3. Compute the corner response function R. 4. Threshold R. 5. Find local maxima of the response function (nonmaximum suppression). C. Harris and M. Stephens. A Combined Corner and Edge Detector. Proceedings of the 4th Alvey Vision Conference, pages 147-151, 1988.
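The five steps can be sketched end to end (a hedged sketch assuming intensities in [0, 1]; `harris_corners` and `thresh` are illustrative choices, with k in the commonly used 0.04-0.06 range):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def harris_corners(image, sigma=1.0, k=0.04, thresh=1e-4):
    """Harris steps: derivatives -> smooth -> H -> R -> threshold -> NMS."""
    Iy, Ix = np.gradient(image.astype(float))
    # Steps 1-2: Gaussian-smoothed entries of the Harris matrix H.
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    # Step 3: corner response R = det(H) - k * trace(H)^2.
    R = (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2
    # Step 4: threshold. Step 5: keep only 3x3 local maxima (NMS).
    local_max = (R == maximum_filter(R, size=3))
    rows, cols = np.nonzero((R > thresh) & local_max)
    return list(zip(rows, cols))

img = np.zeros((30, 30))
img[10:20, 10:20] = 1.0      # a bright square: four corners
corners = harris_corners(img)
```

Edge pixels get a negative R (one small eigenvalue makes det tiny while trace is large), so thresholding removes them before the nonmaximum suppression step.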

17 Harris Detector: Steps

18 Harris Detector: Steps
Compute the corner response R (shown as a heat map).

19 Harris Detector: Steps
Find points with large corner response: R > threshold.

20 Harris Detector: Steps
Take only the points of local maxima of R.

21 Harris Detector: Results

22 Simpler Response Function
Instead of R = det(H) − k·trace(H)², we can use R = det(H)/trace(H) = λ1·λ2/(λ1 + λ2), which avoids the tuning parameter k.
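The two response functions can be compared directly given the eigenvalues (a sketch assuming the simpler function is det/trace, the common alternative; the function names are illustrative):

```python
import numpy as np

def response_harris(l1, l2, k=0.04):
    # R = det - k * trace^2 = l1*l2 - k*(l1 + l2)^2
    return l1 * l2 - k * (l1 + l2) ** 2

def response_ratio(l1, l2, eps=1e-12):
    # R = det / trace = l1*l2 / (l1 + l2); no tuning parameter k.
    # eps guards against division by zero in flat regions.
    return l1 * l2 / (l1 + l2 + eps)

# Both are large only when BOTH eigenvalues are large:
print(response_ratio(10.0, 10.0))   # corner: ~5
print(response_ratio(10.0, 0.01))   # edge:   ~0.01
```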

23 Properties of the Harris corner detector
Translation invariant? Yes. Rotation invariant? Yes. Scale invariant? No. What's the problem? When the image is scaled up, the corner spreads over many small windows, each of which looks locally like an edge, so all points will be classified as edges.

24 Scale Let’s look at scale first: What is the “best” scale?

25 Scale Invariance How can we independently select interest points in each image, such that the detections are repeatable across different scales? K. Grauman, B. Leibe

26 Differences between Inside and Outside
1. We can use a Laplacian function, which responds where the inside of a region differs from the outside.

27 Scale But we use a Gaussian. Why Gaussian?
It is invariant to scale change and has several other nice properties (Lindeberg, 1994). In practice, the Laplacian is approximated using a Difference of Gaussians (DoG).

28 Difference-of-Gaussian (DoG)
DoG = G(kσ) − G(σ): subtract a Gaussian at scale σ from a Gaussian at the larger scale kσ. K. Grauman, B. Leibe
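A DoG image is just the difference of two Gaussian blurs (a minimal sketch; the function name, the factor k = 1.6, and the toy blob image are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(image, sigma, k=1.6):
    """DoG = G(k*sigma) - G(sigma): an approximation to the
    Laplacian of Gaussian."""
    g1 = gaussian_filter(image.astype(float), sigma)
    g2 = gaussian_filter(image.astype(float), k * sigma)
    return g2 - g1

# A bright blob: the DoG responds strongly (negatively) at its center.
img = np.zeros((40, 40))
img[18:23, 18:23] = 1.0
dog = difference_of_gaussians(img, sigma=2.0)
```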

29 DoG example Take Gaussians at multiple spreads (σ = 1 and progressively larger values) and use their differences (DoGs).

30 Scale invariant interest points
Interest points are local maxima in both position and scale: (x, y, σ). Apply Gaussians with different σ's and look for extrema in the stack of difference-of-Gaussian images.

31 Scale In practice the image is downsampled for larger sigmas.
Lowe, 2004.

32 Lowe's Pyramid Scheme Within each octave, compute s+3 Gaussian-smoothed images (including the original) at scales σi = 2^(i/s)·σ0 for i = 0, …, s+2; differencing adjacent images gives s+2 difference images. The parameter s determines the number of images per octave.
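The sigma ladder for one octave can be written out directly (a sketch; σ0 = 1.6 and s = 3 are the values Lowe reports, used here as assumptions):

```python
# Sigma ladder for one octave: sigma_i = 2**(i/s) * sigma_0,
# for i = 0 .. s+2, giving s+3 smoothed images and s+2 DoG images.
s = 3          # images-per-octave parameter (Lowe uses s = 3)
sigma_0 = 1.6  # base scale (Lowe's reported choice)
sigmas = [2 ** (i / s) * sigma_0 for i in range(s + 3)]
print(sigmas)  # sigmas[s] is exactly 2 * sigma_0: the start of the next octave
```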

33 Key point localization
Detect maxima and minima of the difference-of-Gaussian images in scale space. Each point is compared to its 8 neighbors in the current image and 9 neighbors each in the scales above and below (26 neighbors in all). With s+2 difference images, the top and bottom planes are ignored, so s planes are searched. For each max or min found, the output is the location and the scale.
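The 26-neighbor comparison can be sketched on a stack of DoG planes (a minimal sketch; the function name and the toy stack are illustrative):

```python
import numpy as np

def is_scale_space_extremum(dogs, s, r, c):
    """Check whether pixel (r, c) in DoG plane s is a strict maximum or
    minimum over its 26 neighbors: 8 in the same plane plus 9 each in
    the planes above and below. dogs is a (num_scales, H, W) stack."""
    cube = dogs[s - 1:s + 2, r - 1:r + 2, c - 1:c + 2]  # 3x3x3 block
    center = dogs[s, r, c]
    others = np.delete(cube.ravel(), 13)                # drop the center
    return bool(center > others.max() or center < others.min())

# Toy stack: one bright value in the middle plane is an extremum.
dogs = np.zeros((3, 5, 5))
dogs[1, 2, 2] = 1.0
print(is_scale_space_extremum(dogs, 1, 2, 2))  # True
```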

34 Sampling in scale for efficiency
Scale-space extrema detection: experimental results over 32 images that were synthetically transformed, with noise added, measuring the percentage detected, the percentage correctly matched, and the average numbers detected and matched (stability vs. expense). How many scales should be used per octave (s = ?)? The more scales evaluated, the more keypoints are found, but the number of stable keypoints peaks at s = 3: for s < 3 it still increases with s, and for s > 3 it decreases.

35 Results: Difference-of-Gaussian
K. Grauman, B. Leibe

36 How can we find correspondences?
Similarity transform

37 Rotation invariance Rotate each patch according to its dominant gradient orientation. This puts the patches into a canonical orientation. Image from Matthew Brown

38 Orientation Normalization
Compute an orientation histogram over the patch, select the dominant orientation, and normalize: rotate the patch to a fixed orientation. [Lowe, SIFT, 1999] T. Tuytelaars, B. Leibe
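The orientation histogram can be sketched as follows (a minimal sketch; SIFT uses 36 bins of 10 degrees with Gaussian windowing and peak interpolation, which are omitted here; the function name and test patch are illustrative):

```python
import numpy as np

def dominant_orientation(patch, num_bins=36):
    """Histogram of gradient orientations over a patch, weighted by
    gradient magnitude; returns the center of the peak bin in radians."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)                     # gradient magnitude weights
    ang = np.arctan2(gy, gx) % (2 * np.pi)     # orientations in [0, 2*pi)
    hist, edges = np.histogram(ang, bins=num_bins,
                               range=(0, 2 * np.pi), weights=mag)
    peak = np.argmax(hist)
    return (edges[peak] + edges[peak + 1]) / 2  # center of the peak bin

# A left-to-right intensity ramp: every gradient points along +x,
# so the dominant orientation should be near 0 radians.
patch = np.tile(np.arange(16.0), (16, 1))
theta = dominant_orientation(patch)
```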

39 What's next? Once we have found the keypoints and a dominant orientation for each, we need to describe the (rotated and scaled) neighborhood about each as a 128-dimensional vector.

40 Important Point
People just say "SIFT", but there are TWO parts to SIFT: an interest point detector and a region descriptor. They are independent; many people use the region descriptor without ever looking for the points.

