Instructor: Mircea Nicolescu Lecture 15 CS 485 / 685 Computer Vision.


1 Instructor: Mircea Nicolescu Lecture 15 CS 485 / 685 Computer Vision

2 Matching SIFT Features
Given a feature in I1, how do we find the best match in I2?
1. Define a distance function that compares two descriptors
2. Test all the features in I2 and pick the one with minimum distance

3 Matching SIFT Features
What distance function should we use?
- Use SSD (sum of squared differences)
- SSD can give good scores to very ambiguous (bad) matches
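The SSD distance and the brute-force "test all features" search above can be sketched as follows (a minimal illustration; descriptors are treated as plain numeric vectors):

```python
import numpy as np

def ssd(f1, f2):
    """Sum of squared differences between two descriptor vectors."""
    d = np.asarray(f1, dtype=float) - np.asarray(f2, dtype=float)
    return float(np.dot(d, d))

def best_match(f1, features2):
    """Brute-force matching: compare f1 against every descriptor in
    features2 and return (index, distance) of the minimum-SSD one."""
    dists = [ssd(f1, f2) for f2 in features2]
    return int(np.argmin(dists)), min(dists)
```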

4 Matching SIFT Features
Accept a match if SSD(f1, f2) < t. How do we choose t?

5 Matching SIFT Features
A better distance measure is the ratio SSD(f1, f2) / SSD(f1, f2'), where:
- f2 is the best SSD match to f1 in I2
- f2' is the second-best SSD match to f1 in I2

6 Matching SIFT Features
Accept a match if SSD(f1, f2) / SSD(f1, f2') < t
- t = 0.8 has given good results in object recognition:
- 90% of false matches were eliminated
- Less than 5% of correct matches were discarded

7 Matching SIFT Features
How do we evaluate the performance of a feature matcher?

8 Matching SIFT Features
True positives (TP) = # of detected matches that are correct
False positives (FP) = # of detected matches that are incorrect
The threshold t affects the number of correct and false matches

9 Matching SIFT Features
ROC curve:
- Generated by computing (FP rate, TP rate) for different thresholds
- Maximize the area under the curve (AUC)
http://en.wikipedia.org/wiki/Receiver_operating_characteristic
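Generating the (FP rate, TP rate) points of an ROC curve from a set of candidate matches can be sketched as below, where `labels` marks which detected matches are actually correct (ground truth):

```python
import numpy as np

def roc_points(dists, labels, thresholds):
    """For each threshold, accept matches with distance below it and
    compute (FP rate, TP rate); these points trace the ROC curve."""
    dists = np.asarray(dists, float)
    labels = np.asarray(labels, bool)
    pts = []
    for t in thresholds:
        accepted = dists < t
        tp = np.sum(accepted & labels)        # accepted and correct
        fp = np.sum(accepted & ~labels)       # accepted but incorrect
        tpr = tp / max(labels.sum(), 1)       # TP / # correct matches
        fpr = fp / max((~labels).sum(), 1)    # FP / # incorrect matches
        pts.append((float(fpr), float(tpr)))
    return pts
```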

10 Applications of SIFT
- Object recognition
- Object categorization
- Location recognition
- Robot localization
- Image retrieval
- Image panoramas

11 Object Recognition: Object Models

12 Object Categorization

13 Location Recognition

14 Robot Localization

15 Map Continuously Built over Time

16 Image Retrieval – Example 1: query against a database of > 5000 images; change in viewing angle

17 Matches: 22 correct matches

18 Image Retrieval – Example 2: query against a database of > 5000 images; change in viewing angle and scale

19 Matches: 33 correct matches

20 Image Panoramas from Unordered Image Set

21 Variations of SIFT Features
- PCA-SIFT
- SURF
- GLOH

22 SIFT Steps – Review
(1) Scale-space extrema detection
- Extract scale- and rotation-invariant interest points (keypoints)
(2) Keypoint localization
- Determine location and scale for each interest point
- Eliminate "weak" keypoints
(3) Orientation assignment
- Assign one or more orientations to each keypoint
(4) Keypoint descriptor
- Use local image gradients at the selected scale
D. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints", International Journal of Computer Vision, 60(2):91-110, 2004. Cited 9589 times (as of 3/7/2011).

23 PCA-SIFT
Steps 1-3 are the same as in SIFT; step 4 is modified.
Take a 41 x 41 patch at the given scale, centered at the keypoint and normalized to a canonical direction.
Yan Ke and Rahul Sukthankar, "PCA-SIFT: A More Distinctive Representation for Local Image Descriptors", Computer Vision and Pattern Recognition, 2004.

24 PCA-SIFT
Instead of using weighted histograms, concatenate the horizontal and vertical gradients (39 x 39) into a long vector: 2 x 39 x 39 = 3042 elements.
Normalize the vector to unit length.

25 PCA-SIFT
Reduce the dimensionality of the vector using Principal Component Analysis (PCA)
- e.g., from N = 3042 down to K = 36
Sometimes less discriminatory than SIFT.
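The PCA projection step can be sketched with numpy's SVD (an illustrative implementation, not Ke and Sukthankar's exact pipeline; the test uses small dimensions in place of 3042 and 36):

```python
import numpy as np

def fit_pca(X, k):
    """Learn a k-dimensional PCA projection from a matrix of training
    descriptors, one descriptor per row."""
    mean = X.mean(axis=0)
    # Rows of Vt are the principal axes of the mean-centered data.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def project(x, mean, components):
    """Project one (already unit-normalized) descriptor to k dimensions."""
    return components @ (np.asarray(x, float) - mean)
```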

26 SURF: Speeded Up Robust Features
Speeds up computations by fast approximation of (i) the Hessian matrix and (ii) the descriptor, using "integral images".
What is an "integral image"?
Herbert Bay, Tinne Tuytelaars, and Luc Van Gool, "SURF: Speeded Up Robust Features", European Conference on Computer Vision (ECCV), 2006.

27 Integral Image
The integral image IΣ(x, y) of an image I(x, y) is the sum of all pixels of I in the rectangular region formed by (0, 0) and (x, y).
Using integral images, it takes only four array references to calculate the sum of pixels over a rectangular region of any size.
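The integral image and the four-reference box sum described above can be sketched as:

```python
import numpy as np

def integral_image(I):
    """S[x, y] = sum of all pixels of I in the rectangle (0,0)..(x,y)."""
    return I.cumsum(axis=0).cumsum(axis=1)

def box_sum(S, r0, c0, r1, c1):
    """Sum of the original image over rows r0..r1 and cols c0..c1
    (inclusive), using at most four references into integral image S."""
    total = S[r1, c1]
    if r0 > 0:
        total -= S[r0 - 1, c1]
    if c0 > 0:
        total -= S[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += S[r0 - 1, c0 - 1]
    return total
```

Note the cost of `box_sum` is constant regardless of the rectangle's size, which is what makes the box-filter approximations on the following slides fast.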

28 Integral Image
Approximate Lxx, Lyy, and Lxy using box filters; these can be computed very fast using integral images.
(The box filters shown are 9 x 9: good approximations of Gaussian second derivatives with σ = 1.2.)

29 SURF: Speeded Up Robust Features
In SIFT, images are repeatedly smoothed with a Gaussian and subsequently sub-sampled in order to achieve a higher level of the pyramid.

30 SURF: Speeded Up Robust Features
Alternatively, we can apply filters of larger size to the original image.
Due to the use of integral images, filters of any size can be applied at exactly the same speed (see Tuytelaars' paper for details).

31 SURF: Speeded Up Robust Features
Approximation of H: using DoG (as in SIFT) vs. using box filters (SURF).

32 SURF: Speeded Up Robust Features
Instead of using different measures for selecting the location and scale of interest points (e.g., Hessian and DoG in SIFT), SURF uses the determinant of the approximated Hessian matrix to find both.
The determinant's terms must be weighted to obtain a good approximation: det(H_approx) = Dxx Dyy - (0.9 Dxy)^2.
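The weighted determinant response is a one-liner; the relative weight of roughly 0.9 on the cross term comes from the SURF paper's balancing of the box-filter approximations:

```python
def surf_hessian_response(Dxx, Dyy, Dxy, w=0.9):
    """Approximated Hessian determinant used by SURF to select interest
    points in both space and scale; w ~ 0.9 weights the Dxy cross term."""
    return Dxx * Dyy - (w * Dxy) ** 2
```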

33 SURF: Speeded Up Robust Features
Once interest points have been localized both in space and scale, the next steps are:
- Orientation assignment
- Keypoint descriptor

34 SURF: Speeded Up Robust Features
Orientation assignment: compute x and y (Haar wavelet) responses in a circular neighborhood of radius 6σ around the interest point, where σ is the scale at which the point was detected.
The responses can be computed very fast using integral images; the dominant orientation is estimated with a sliding window of angle 60°.

35 SURF: Speeded Up Robust Features
Keypoint descriptor: take a square region of size 20σ around the keypoint and divide it into a 4 x 4 grid of sub-regions.
Sum the responses over each sub-region for dx and dy separately.
To bring in information about the polarity of the intensity changes, also extract the sums of the absolute values of the responses.
Feature vector size: 4 x 16 = 64.
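The 64-element descriptor construction can be sketched as below (illustrative: it assumes the wavelet responses dx, dy are already sampled on a 20 x 20 grid over the oriented 20σ region, and it omits the Gaussian weighting and final normalization of the full method):

```python
import numpy as np

def surf_descriptor(dx, dy):
    """Build a 64-element SURF-style descriptor from response grids
    dx, dy of shape (20, 20): a 4 x 4 grid of 5 x 5 sub-regions, each
    contributing (sum dx, sum dy, sum |dx|, sum |dy|)."""
    v = []
    for i in range(4):
        for j in range(4):
            sx = dx[5 * i:5 * i + 5, 5 * j:5 * j + 5]
            sy = dy[5 * i:5 * i + 5, 5 * j:5 * j + 5]
            v += [sx.sum(), sy.sum(), np.abs(sx).sum(), np.abs(sy).sum()]
    return np.array(v)
```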

36 SURF: Speeded Up Robust Features
SURF-128:
- The sums of dx and |dx| are computed separately for points where dy < 0 and where dy ≥ 0
- Similarly, the sums of dy and |dy| are split according to the sign of dx
- More discriminatory!

37 SURF: Speeded Up Robust Features
Has been reported to be 3 times faster than SIFT.
Less robust to illumination and viewpoint changes compared to SIFT.
K. Mikolajczyk and C. Schmid, "A Performance Evaluation of Local Descriptors", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 10, pp. 1615-1630, 2005.

38 Gradient Location-Orientation Histogram (GLOH)
Compute a SIFT-like descriptor on a log-polar location grid:
- 3 cells in the radial direction (radii 6, 11, and 15)
- 8 cells in the angular direction (except for the central cell)
Gradient orientation is quantized into 16 bins.
Total: (2 x 8 + 1) x 16 = 272 bins, then reduced with PCA.
K. Mikolajczyk and C. Schmid, "A Performance Evaluation of Local Descriptors", IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 27, no. 10, pp. 1615-1630, 2005.

39 Shape Context
A 3D histogram of edge point locations and orientations:
- Edges are extracted by the Canny edge detector
- Location is quantized into 9 bins (using a log-polar coordinate system)
- Orientation is quantized into 4 bins (horizontal, vertical, and two diagonals)
Total number of features: 4 x 9 = 36.
K. Mikolajczyk and C. Schmid, "A Performance Evaluation of Local Descriptors", IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 27, no. 10, pp. 1615-1630, 2005.

40 Spin Image
A histogram of quantized pixel locations and intensity values:
- A normalized histogram is computed for each of five rings centered on the region
- The intensity of a normalized patch is quantized into 10 bins
Total number of features: 5 x 10 = 50.
K. Mikolajczyk and C. Schmid, "A Performance Evaluation of Local Descriptors", IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 27, no. 10, pp. 1615-1630, 2005.
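A rough sketch of the 5 x 10 = 50-bin spin-image descriptor follows; the ring boundaries and intensity normalization used here are illustrative choices, not necessarily the exact ones from the evaluation paper:

```python
import numpy as np

def spin_image(patch, n_rings=5, n_bins=10):
    """For each of n_rings distance rings around the patch center,
    histogram the normalized intensities into n_bins bins; each ring's
    histogram is normalized to sum to 1, giving a 50-element vector."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    r = np.hypot(ys - (h - 1) / 2, xs - (w - 1) / 2)
    # Assign every pixel to one of n_rings equal-width distance rings.
    ring = np.minimum((r / (r.max() + 1e-9) * n_rings).astype(int),
                      n_rings - 1)
    # Normalize intensities to [0, 1) and quantize into n_bins bins.
    span = patch.max() - patch.min() + 1e-9
    ib = np.minimum(((patch - patch.min()) / span * n_bins).astype(int),
                    n_bins - 1)
    desc = np.zeros((n_rings, n_bins))
    for ri, bi in zip(ring.ravel(), ib.ravel()):
        desc[ri, bi] += 1
    return (desc / desc.sum(axis=1, keepdims=True)).ravel()
```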

41 Differential Invariants
"Local jets" of derivatives are obtained by convolving the image with Gaussian derivatives (e.g., Gaussian derivatives up to fourth order).
Derivatives are computed at different orientations by rotating the image patches; invariants are then computed from the jet.

42 Bank of Filters (e.g., Gabor filters)

43 Moment Invariants
Moments are computed for derivatives of an image patch, where p and q give the order, a the degree, and I_d the image gradient in direction d.
Derivatives are computed in the x and y directions.

44 Bank of Filters: Steerable Filters

45 Object Recognition
- Model-based object recognition
- Generic object recognition

