1
Image alignment

2
A look into the past

3
**A look into the past: Leningrad during the blockade**

4
**Bing streetside images**

5
**Image alignment: Applications**

Panorama stitching

6
**Image alignment: Applications**

Recognition of object instances

7
**Image alignment: Challenges**

- Small degree of overlap
- Intensity changes
- Occlusion, clutter

8
**Feature-based alignment**

Search for sets of feature matches that agree in terms of:
- Local appearance
- Geometric configuration

9
**Feature-based alignment: Overview**

- Alignment as fitting
  - Affine transformations
  - Homographies
- Robust alignment
  - Descriptor-based feature matching
  - RANSAC
- Large-scale alignment
  - Inverted indexing
  - Vocabulary trees
- Application: searching the night sky

11
**Alignment as fitting**

Previous lectures: fitting a model to features in one image, i.e. finding the model M that minimizes Σᵢ residual(xᵢ, M)

Alignment: fitting a model to a transformation between pairs of matched features (xᵢ, xᵢ′) in two images, i.e. finding the transformation T that minimizes Σᵢ residual(T(xᵢ), xᵢ′)

12
**2D transformation models**

- Similarity (translation, scale, rotation)
- Affine
- Projective (homography)

13
**Let’s start with affine transformations**

- Simple fitting procedure (linear least squares)
- Approximates viewpoint changes for roughly planar objects and roughly orthographic cameras
- Can be used to initialize fitting for more complex models

14
**Fitting an affine transformation**

Assume we know the correspondences; how do we get the transformation? We want to find M, t minimizing Σᵢ ‖M xᵢ + t − xᵢ′‖²

16
**Fitting an affine transformation**

Linear system with six unknowns. Each match gives us two linearly independent equations, so we need at least three matches to solve for the transformation parameters.
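
As a concrete sketch, the least-squares solve can be written in a few lines of NumPy. The function name `fit_affine` and the N×2 array format for the matches are illustrative assumptions, not from the slides:

```python
import numpy as np

def fit_affine(src, dst):
    """Fit M (2x2) and t (2,) so that dst ≈ src @ M.T + t.

    Each match (x, y) -> (x', y') gives two linear equations, so at
    least three non-collinear matches are needed for the six unknowns.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    n = len(src)
    A = np.zeros((2 * n, 6))
    b = dst.reshape(-1)          # [x'0, y'0, x'1, y'1, ...]
    A[0::2, 0:2] = src           # x' equation: [x, y, 0, 0, 1, 0]
    A[0::2, 4] = 1
    A[1::2, 2:4] = src           # y' equation: [0, 0, x, y, 0, 1]
    A[1::2, 5] = 1
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    M = np.array([[p[0], p[1]], [p[2], p[3]]])
    t = np.array([p[4], p[5]])
    return M, t
```

With exactly three non-collinear matches the system is square and the fit is exact; with more matches, `lstsq` returns the least-squares estimate.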

17
**Fitting a plane projective transformation**

Homography: plane projective transformation (transformation taking a quad to another arbitrary quad)

18
**Homography**

The transformation between two views of a planar surface, or between images from two cameras that share the same center

19
**Application: Panorama stitching**

Source: Hartley & Zisserman

21
**Fitting a homography**

Recall homogeneous coordinates: converting to homogeneous image coordinates takes (x, y) to (x, y, 1), and converting from homogeneous image coordinates takes (x, y, w) to (x/w, y/w). Equation for homography: x′ ≅ H x, an equality of homogeneous coordinates that holds up to scale.
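
The two conversions can be sketched directly (hypothetical helper names; points as NumPy arrays):

```python
import numpy as np

def to_homogeneous(p):
    # (x, y) -> (x, y, 1)
    return np.append(np.asarray(p, float), 1.0)

def from_homogeneous(ph):
    # (x, y, w) -> (x/w, y/w); defined only for w != 0
    return np.asarray(ph[:2], float) / ph[2]
```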

22
**Fitting a homography**

Equation for homography, written as a cross product: x′ × (H x) = 0. This gives 3 equations per match, only 2 of them linearly independent.

23
**Direct linear transform**

- H has 8 degrees of freedom (9 parameters, but scale is arbitrary)
- One match gives us two linearly independent equations
- Four matches are needed for a minimal solution
- Homogeneous least squares: find h minimizing ‖Ah‖² subject to ‖h‖ = 1
- Solution: the eigenvector of AᵀA corresponding to the smallest eigenvalue
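
A minimal DLT sketch under this setup, solving the homogeneous least-squares problem via SVD (the right singular vector with the smallest singular value is the eigenvector of AᵀA with the smallest eigenvalue). `fit_homography` is an illustrative name, and coordinate normalization (usually recommended for conditioning) is omitted:

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: estimate 3x3 H (up to scale) from >= 4 matches."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # u * (h3 . [x,y,1]) - (h1 . [x,y,1]) = 0, and similarly for v
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, float)
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)    # singular vector with smallest singular value
    return H / H[2, 2]          # fix the arbitrary scale for readability
```

Normalizing by H[2,2] is only for readability; any nonzero scaling of H represents the same homography.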

25
**Robust feature-based alignment**

So far, we’ve assumed that we are given a set of “ground-truth” correspondences between the two images we want to align. What if we don’t know the correspondences?

31
**Robust feature-based alignment**

1. Extract features
2. Compute putative matches
3. Loop:
   - Hypothesize transformation T
   - Verify transformation (search for other matches consistent with T)

33
**Generating putative correspondences**

Need to compare feature descriptors of local patches surrounding interest points

34
**Feature descriptors**

Recall: feature detection and description

35
**Feature descriptors**

Simplest descriptor: vector of raw intensity values. How to compare two such vectors?
- Sum of squared differences (SSD): not invariant to intensity change
- Normalized correlation: invariant to affine intensity change
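
The two comparisons might be sketched as follows (descriptors as flat NumPy vectors; the helper names are assumptions):

```python
import numpy as np

def ssd(d1, d2):
    # Sum of squared differences: sensitive to intensity changes
    return float(np.sum((np.asarray(d1, float) - np.asarray(d2, float)) ** 2))

def normalized_correlation(d1, d2):
    # Subtract the mean and divide by the norm: invariant to an
    # affine intensity change I -> a*I + b (with a > 0)
    a = np.asarray(d1, float)
    a = a - a.mean()
    a /= np.linalg.norm(a)
    b = np.asarray(d2, float)
    b = b - b.mean()
    b /= np.linalg.norm(b)
    return float(a @ b)
```

For w = a*v + b with a > 0, the SSD is nonzero while the normalized correlation stays at 1, illustrating the invariance claim.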

36
**Disadvantage of intensity vectors as descriptors**

Small deformations can affect the matching score a lot

37
**Review: Alignment**

- 2D alignment models
- Feature-based alignment outline
- Descriptor matching

39
**Feature descriptors: SIFT**

Descriptor computation:
- Divide the patch into 4x4 sub-patches
- Compute a histogram of gradient orientations (8 reference angles) inside each sub-patch
- Resulting descriptor: 4x4x8 = 128 dimensions

Advantages over raw vectors of pixel values:
- Gradients are less sensitive to illumination change
- Pooling of gradients over the sub-patches achieves robustness to small shifts, but still preserves some spatial information

David G. Lowe. "Distinctive image features from scale-invariant keypoints." IJCV 60 (2), pp. 91-110, 2004.
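
A toy sketch of the descriptor layout only, for a 16x16 patch. This is not Lowe's implementation: it omits Gaussian weighting, trilinear interpolation between bins, orientation normalization, and the clipping/renormalization steps; it just shows the 4x4x8 = 128-dimensional pooling structure:

```python
import numpy as np

def sift_like_descriptor(patch):
    """Toy SIFT-style descriptor for a 16x16 patch (illustrative only)."""
    patch = np.asarray(patch, float)
    gy, gx = np.gradient(patch)                     # image gradients
    mag = np.hypot(gx, gy)                          # gradient magnitude
    ang = np.arctan2(gy, gx) % (2 * np.pi)          # orientation in [0, 2*pi)
    bins = (ang / (2 * np.pi) * 8).astype(int) % 8  # quantize to 8 reference angles
    desc = np.zeros((4, 4, 8))
    for i in range(16):
        for j in range(16):
            # pool magnitude-weighted orientations over 4x4 sub-patches
            desc[i // 4, j // 4, bins[i, j]] += mag[i, j]
    desc = desc.ravel()
    n = np.linalg.norm(desc)
    return desc / n if n > 0 else desc
```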

40
**Feature matching**

Generating putative matches: for each patch in one image, find a short list of patches in the other image that could match it based solely on appearance.

41
**Problem: Ambiguous putative matches**

Source: Y. Furukawa

42
**Rejection of unreliable matches**

How can we tell which putative matches are more reliable?
- Heuristic: compare the distance of the nearest neighbor to that of the second nearest neighbor
- The ratio of closest to second-closest distance will be high for features that are not distinctive
- A threshold of 0.8 provides good separation

David G. Lowe. "Distinctive image features from scale-invariant keypoints." IJCV 60 (2), pp. 91-110, 2004.
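
Lowe's ratio test can be sketched as follows (brute-force nearest neighbors over NumPy arrays; the function name is illustrative):

```python
import numpy as np

def ratio_test_matches(desc1, desc2, ratio=0.8):
    """Keep a match only if the nearest neighbor is clearly closer
    than the second nearest (distance ratio below the threshold)."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```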

43
**RANSAC**

The set of putative matches contains a very high percentage of outliers.

RANSAC loop:
1. Randomly select a seed group of matches
2. Compute the transformation from the seed group
3. Find inliers to this transformation
4. If the number of inliers is sufficiently large, re-compute a least-squares estimate of the transformation on all of the inliers

Keep the transformation with the largest number of inliers.
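
Specialized to the pure-translation example on the following slides, where a single match fully determines the model, the loop might look like this (hypothetical helper; the threshold and iteration count are arbitrary choices):

```python
import numpy as np

def ransac_translation(src, dst, n_iters=100, inlier_thresh=2.0, seed=0):
    """RANSAC for a 2D translation: sample one match as the seed,
    count inliers, and refit the best model on all of its inliers."""
    rng = np.random.default_rng(seed)
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    best_inliers = np.zeros(len(src), bool)
    for _ in range(n_iters):
        k = rng.integers(len(src))
        t = dst[k] - src[k]                      # hypothesize from the seed match
        residuals = np.linalg.norm(dst - (src + t), axis=1)
        inliers = residuals < inlier_thresh      # verify: matches consistent with t
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # least-squares refit on the inliers: for a translation, the mean offset
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, best_inliers
```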

44
**RANSAC example: Translation**

Putative matches

45
**RANSAC example: Translation**

Select one match, count inliers

47
**RANSAC example: Translation**

Select translation with the most inliers

48
**Scalability: Alignment to large databases**

What if we need to align a test image with thousands or millions of images in a model database?
- Need efficient putative match generation
- Approximate descriptor similarity search, inverted indices

49
**Large-scale visual search**

Pipeline: inverted indexing to retrieve candidates, then reranking with geometric verification. Figure from: Kristen Grauman and Bastian Leibe, Visual Object Recognition, Synthesis Lectures on Artificial Intelligence and Machine Learning, April 2011, Vol. 5, No. 2, pp. 1-181

50
**Example indexing technique: Vocabulary trees**

A test image is matched against the database via a vocabulary tree with an inverted index. D. Nistér and H. Stewénius, Scalable Recognition with a Vocabulary Tree, CVPR 2006

51
Goal: find a set of representative prototypes or cluster centers in descriptor space to which descriptors can be quantized

52
**K-means clustering**

Want to minimize the sum of squared Euclidean distances between points xi and their nearest cluster centers mk.

Algorithm:
1. Randomly initialize K cluster centers
2. Iterate until convergence:
   - Assign each data point to the nearest center
   - Recompute each cluster center as the mean of all points assigned to it
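
The algorithm (Lloyd's iterations) can be sketched in NumPy as follows (illustrative helper; ties and empty clusters are handled crudely):

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Plain k-means minimizing the sum of squared distances between
    points and their nearest cluster centers."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, float)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # random init
    for _ in range(n_iters):
        # Assignment step: nearest center for each point
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: each center becomes the mean of its assigned points
        new_centers = np.array(
            [X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
             for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels
```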

53
**K-means demo**

Source: http://shabal.in/visuals/kmeans/1.html

54
**Recall: Visual codebook for generalized Hough transform**

Appearance codebook. Source: B. Leibe

55
**Hierarchical k-means clustering of descriptor space (vocabulary tree)**

We run k-means on the descriptor space; here k defines the branch factor of the tree, which controls how fast the tree branches (k is three in this illustration). We then run k-means again, recursively, on each of the resulting quantization cells. This defines the vocabulary tree, which is essentially a hierarchical set of cluster centers and their corresponding Voronoi regions. A typical choice is a branch factor of 10 and six levels, resulting in a million leaf nodes, lovingly called the "Mega-Voc".

Slide credit: D. Nistér
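
A toy sketch of hierarchical k-means and of dropping a descriptor down the resulting tree. The dictionary-based tree, the tiny branch factor and depth, and the helper names are all illustrative simplifications of the Nistér-Stewénius construction:

```python
import numpy as np

def _kmeans(X, k, rng, n_iters=50):
    # Minimal Lloyd's k-means used as the recursive building block
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels

def build_vocab_tree(X, branch_factor=3, depth=2, rng=None):
    """Run k-means with k = branch_factor, then recurse on each cell."""
    rng = rng or np.random.default_rng(0)
    if depth == 0 or len(X) < branch_factor:
        return {"centers": None, "children": None}          # leaf node
    centers, labels = _kmeans(X, branch_factor, rng)
    children = [build_vocab_tree(X[labels == j], branch_factor, depth - 1, rng)
                for j in range(branch_factor)]
    return {"centers": centers, "children": children}

def quantize(tree, d, path=()):
    """Drop a descriptor down the tree; the path of child indices is the word."""
    if tree["centers"] is None:
        return path
    j = int(np.linalg.norm(tree["centers"] - d, axis=1).argmin())
    return quantize(tree["children"][j], d, path + (j,))
```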

56
**Vocabulary tree/inverted index**

We now have the vocabulary tree, and the online phase can begin. To add an image to the database, we perform feature extraction. Each descriptor vector is dropped down from the root of the tree and quantized very efficiently into a path down the tree, encoded by a single integer. Each node in the vocabulary tree has an associated inverted file index, and indices pointing back to the new image are added to the relevant inverted files. This is a very efficient operation that is carried out whenever we want to add an image.

Slide credit: D. Nistér
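
A simplified stand-in for the inverted files: a plain dictionary keyed by quantization paths rather than the single-integer encoding the slide describes, and indexing every path prefix (so that coarser tree levels can also be scored) is an added assumption:

```python
from collections import defaultdict

def add_image(inverted_index, image_id, word_paths):
    """Add one database image to the inverted files.

    inverted_index: dict mapping a tree-node path (tuple of child indices)
                    to the list of image ids seen at that node.
    word_paths: one quantization path per descriptor of the image, e.g.
                produced by dropping each descriptor down the vocabulary tree.
    """
    for path in word_paths:
        # Index every prefix of the path so each node on the way down
        # records the image (an assumption, not from the slide).
        for depth in range(1, len(path) + 1):
            node = path[:depth]
            if image_id not in inverted_index[node]:
                inverted_index[node].append(image_id)

# usage: index = defaultdict(list); add_image(index, 7, [(0, 1), (2,)])
```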

57
**Populating the vocabulary tree/inverted index**

Model images are added to the database one by one. Slide credit: D. Nistér

61
**Looking up a test image**

To query with an input image, we quantize its descriptor vectors in the same way and accumulate scores for the database images using term frequency-inverse document frequency (tf-idf) weighting, effectively an entropy weighting of the information. The winner is the database image with the most information in common with the input image.

Slide credit: D. Nistér
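
A minimal sketch of tf-idf scoring over visual-word histograms (unnormalized dot-product scoring for brevity; real systems typically L1- or L2-normalize the weighted histograms):

```python
import numpy as np
from collections import Counter

def tfidf_scores(db_word_counts, query_words):
    """Score database images against a query by tf-idf over visual words.

    db_word_counts: {image_id: Counter of visual-word occurrences}
    query_words: list of visual words (one per query descriptor)
    """
    n_images = len(db_word_counts)
    # document frequency: in how many database images does each word occur?
    df = Counter()
    for counts in db_word_counts.values():
        df.update(counts.keys())
    idf = {w: np.log(n_images / df[w]) for w in df}
    q = Counter(query_words)
    scores = {}
    for image_id, counts in db_word_counts.items():
        # dot product of the two idf-weighted word histograms
        scores[image_id] = sum(q[w] * counts[w] * idf.get(w, 0.0) ** 2
                               for w in q if w in counts)
    return scores
```

Words that occur in every database image get idf = 0 and contribute nothing, which is the entropy-weighting intuition from the slide.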

62
**Cool application of large-scale alignment: searching the night sky**
