Presentation on theme: "Video Google: A Text Retrieval Approach to Object Matching in Videos Josef Sivic and Andrew Zisserman."— Presentation transcript:

1 Video Google: A Text Retrieval Approach to Object Matching in Videos Josef Sivic and Andrew Zisserman

2 Goal A Google-like search for videos. The query is a portion of a video frame selected by the user.

3 Google Text Search Web pages are parsed into words. Words are replaced by their stems (root forms). A stop list filters out common words. The remaining words represent the web page as a vector weighted by word frequency.

4 Text Retrieval Efficient retrieval is achieved with an index. A query is answered by computing its vector of word frequencies and returning the documents with the closest vectors. The final ranking also considers the order and location of the words.
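
As a rough illustration of this pipeline (not the engine's actual implementation), the Python sketch below builds word-frequency vectors and an inverted index for two toy documents; the stemmer and stop list are made-up stand-ins.

from collections import Counter, defaultdict

STOP_WORDS = {"the", "a", "an", "of", "and", "in", "on"}   # toy stop list

def stem(word):
    # crude stand-in for a real stemmer such as Porter's
    return word.lower().rstrip("s")

def to_vector(text):
    # word-frequency vector for one document, after stemming and stopping
    words = [stem(w) for w in text.split()]
    return Counter(w for w in words if w not in STOP_WORDS)

documents = {
    "doc1": "the cat sat on the mat",
    "doc2": "dogs and cats play in the garden",
}
vectors = {doc_id: to_vector(text) for doc_id, text in documents.items()}

# inverted index: word -> set of documents containing it
index = defaultdict(set)
for doc_id, vec in vectors.items():
    for word in vec:
        index[word].add(doc_id)

print(sorted(index["cat"]))   # ['doc1', 'doc2']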

5 Approach Apply text search properties to image search

6 Video Google: Descriptors Two types of covariant regions are computed: Shape Adapted (SA) and Maximally Stable (MS) regions, both computed on grayscale frames.

7 Descriptors

8 Each elliptical region is represented by a 128-dimensional SIFT descriptor. The descriptor is averaged over the frames in which the region is tracked. To reduce noise, regions that do not survive for more than 3 frames are discarded, and the 10% of regions whose descriptors vary the most along the track (largest diagonal covariance) are rejected.
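
A hedged sketch of that track-level cleanup, assuming region tracks and their per-frame 128-D SIFT descriptors are already given; the thresholds follow the slide, and the "total variance along the track" used below is a simplified stand-in for the diagonal-covariance criterion.

import numpy as np

def aggregate_tracks(tracks, min_frames=3, reject_fraction=0.10):
    # tracks: list of (num_frames_i, 128) arrays, one per tracked region
    kept = [t for t in tracks if len(t) >= min_frames]        # noise filter
    means = np.array([t.mean(axis=0) for t in kept])          # track-averaged SIFT
    # instability: total descriptor variance along the track (stand-in for
    # the "largest diagonal covariance" criterion on the slide)
    spread = np.array([t.var(axis=0).sum() for t in kept])
    cutoff = np.quantile(spread, 1.0 - reject_fraction)       # drop the worst 10%
    return means[spread <= cutoff]

rng = np.random.default_rng(0)
toy_tracks = [rng.normal(size=(rng.integers(2, 8), 128)) for _ in range(50)]
stable = aggregate_tracks(toy_tracks)
print(stable.shape)   # (number of surviving tracks, 128)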

9 Build “Visual Words” Quantize the descriptors into visual words so that text-retrieval methods can be applied. About 1000 regions per frame, each with a 128-dimensional descriptor. 48 scenes containing 10,000 frames are selected, giving roughly 200K descriptors.

10 Clustering descriptors K-means clustering, run several times with random initial assignments. Distances are measured with the Mahalanobis distance d(x1, x2) = sqrt((x1 − x2)^T Σ⁻¹ (x1 − x2)). MS and SA regions are clustered separately.
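
One way to realise Mahalanobis k-means, sketched below, relies on the fact that whitening the descriptors with Σ^(-1/2) and then running ordinary Euclidean k-means is equivalent; the data size, k, and iteration count are toy values, not the paper's 200K descriptors.

import numpy as np

def mahalanobis_kmeans(X, k, iters=20, seed=0):
    # whiten the data: Euclidean distance in W equals Mahalanobis distance in X
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    evals, evecs = np.linalg.eigh(cov)
    W = (X - X.mean(axis=0)) @ evecs / np.sqrt(evals)
    rng = np.random.default_rng(seed)
    centers = W[rng.choice(len(W), k, replace=False)]
    for _ in range(iters):                                    # Lloyd iterations
        labels = np.argmin(((W[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([W[labels == c].mean(axis=0) if (labels == c).any()
                            else centers[c] for c in range(k)])
    return labels

toy_descriptors = np.random.default_rng(1).normal(size=(500, 16))
labels = mahalanobis_kmeans(toy_descriptors, k=8)   # rerun with several seeds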

11 Indexing using text retrieval methods Term frequency–inverse document frequency (tf-idf) is used to weight the visual words of each frame. Retrieval: frames are ranked by the normalized scalar product (cosine similarity) between the query vector and each frame vector.
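
A minimal tf-idf and cosine-ranking sketch over toy visual-word counts; the counts, vocabulary size, and frame set below are made up purely for illustration.

import numpy as np

def tfidf(counts, idf):
    # counts: visual-word counts per frame; tf normalized by frame length
    tf = counts / np.maximum(counts.sum(axis=-1, keepdims=True), 1)
    return tf * idf

def rank(query_vec, doc_vecs):
    # normalized scalar product = cosine similarity, best match first
    q = query_vec / (np.linalg.norm(query_vec) + 1e-12)
    D = doc_vecs / (np.linalg.norm(doc_vecs, axis=1, keepdims=True) + 1e-12)
    return np.argsort(-(D @ q))

counts = np.random.default_rng(2).integers(0, 5, size=(100, 50)).astype(float)
df = (counts > 0).sum(axis=0)                       # document frequency per word
idf = np.log(len(counts) / np.maximum(df, 1))
frames = tfidf(counts, idf)
query = tfidf(counts[0], idf)                       # pretend frame 0 is the query
print(rank(query, frames)[:5])                      # frame 0 should come first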

12 Image Retrieval In Video Google, the visual words of the query are the visual words inside the user-selected portion of a frame. The index is searched with these words to find all frames containing the same words; the results are ranked and the most relevant frames are returned.
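
A toy sketch of the inverted-file lookup this slide implies: each visual word maps to the frames containing it, and the union over the query's words gives the candidate set to rank. The frame IDs and word IDs are illustrative, not from the paper.

from collections import defaultdict

inverted = defaultdict(set)            # visual word id -> frames containing it

def add_frame(frame_id, visual_words):
    for w in visual_words:
        inverted[w].add(frame_id)

def candidate_frames(query_words):
    # any frame sharing at least one visual word with the query region
    frames = set()
    for w in query_words:
        frames |= inverted[w]
    return frames

add_frame("frame_001", [3, 17, 42, 101])
add_frame("frame_002", [5, 7, 42])
print(sorted(candidate_frames([42, 101])))   # ['frame_001', 'frame_002']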

13 Stop List The most frequent 5% and the least frequent 10% of visual words are stopped (removed).

14 Spatial Consistency Google raises the ranking of documents in which the query words appear close together in the text. In video, the search area of each matched region is defined by its 15 nearest neighbours; other regions in this area that also match cast votes for the frame, and the results are re-ranked by the number of votes.
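
A simplified sketch of the voting idea, assuming matches are decided purely by shared visual words (the paper matches region to region, which is stricter); query_xy, query_words and result_words are hypothetical inputs for one query area and one candidate frame.

import numpy as np

def spatial_votes(query_xy, query_words, result_words, k=15):
    # query_xy: (N, 2) region centres in the query area; *_words: visual words
    result_set = set(result_words)
    votes = 0
    for i, w in enumerate(query_words):
        if w not in result_set:
            continue                                  # this region has no match
        d = np.linalg.norm(query_xy - query_xy[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]           # its 15 nearest regions
        votes += sum(query_words[j] in result_set for j in neighbours)
    return votes                                      # frames re-ranked by votes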

15

16 Evaluation Tested on feature-length movies of 100K–150K frames, using one frame per second. Ground truth is determined by hand. Retrieval performance is measured by the average rank of the relevant images.

17

18

19 Example

20 Questions?

21 Scalable Recognition with a Vocabulary Tree David Nister and Henrik Stewenius

22 Vocabulary Tree A continuation of Video Google, which uses 10,000 visual words in its database and an offline crawling stage that takes about 10 seconds per frame to index video.

23 Vocabulary Tree That is too slow for a large database. Larger vocabularies give better retrieval quality, and more words exploit the power of the index: fewer database images must be considered. The vocabulary tree also allows on-the-fly insertion of new objects into the database.

24 Training Training uses hierarchical k-means, which is more efficient than flat k-means: 35,000 training frames instead of the 400 used with Video Google.

25 Feature Extraction Only Maximally Stable regions are used; a SIFT descriptor is built from each region.

26 Building the Vocabulary Tree Hierarchical k-means, with k the branching factor (number of children per node). First run k-means to find k clusters, then recursively apply k-means to each cluster, up to L levels deep. The nodes of the tree become the visual words.
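
A minimal sketch of the tree construction, using a toy Lloyd-style kmeans() helper in place of a production clusterer; the branching factor k = 10 follows the paper, but the depth and data here are scaled down to keep the example small.

import numpy as np

def kmeans(X, k, iters=15, seed=0):
    # toy Lloyd-style k-means; stand-in for a production clusterer
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == c].mean(axis=0) if (labels == c).any()
                            else centers[c] for c in range(k)])
    return centers, labels

def build_tree(X, k=10, depth=6):
    # recursively split the descriptors into k clusters, depth levels deep
    if depth == 0 or len(X) < k:
        return None                                   # leaf: no further split
    centers, labels = kmeans(X, k)
    children = [build_tree(X[labels == c], k, depth - 1) for c in range(k)]
    return {"centers": centers, "children": children}

descriptors = np.random.default_rng(3).normal(size=(2000, 128))
tree = build_tree(descriptors, k=10, depth=3)         # shallow depth for toy data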

27

28 Performance The cost of using a larger vocabulary grows only logarithmically with its size: quantizing a descriptor takes k comparisons per level, so k·L in total. With k = 10 and L = 6 the tree has 10^6 = one million leaf nodes, yet each descriptor needs only 60 comparisons.

29 Retrieval Determine the visual words of the query by propagating each region descriptor down the tree, selecting the closest cluster centre at each level.
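
Continuing the previous sketch, quantization descends the tree choosing the closest centre at each level; the path of choices identifies the visual word (a leaf). build_tree, tree and descriptors are the hypothetical objects from that earlier sketch.

import numpy as np

def quantize(tree, descriptor):
    # descend the tree, picking the closest centre at each level;
    # the path of choices identifies the visual word (a leaf)
    path = []
    node = tree
    while node is not None:
        dists = np.linalg.norm(node["centers"] - descriptor, axis=1)
        best = int(np.argmin(dists))
        path.append(best)
        node = node["children"][best]
    return tuple(path)

word = quantize(tree, descriptors[0])   # e.g. (3, 7, 1)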

30 Scoring The relevance of a query image to a database image is determined by the similarity of their paths down the tree. TF-IDF is used to assign weights to the query and database image vectors.

31 Scoring TF-IDF weights are used for the descriptor vectors. Normalized relevance score: s(q, d) = ‖ q/‖q‖ − d/‖d‖ ‖, where L1-normalization is the most effective choice of norm.
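
A sketch of that scoring rule on made-up dense vectors (a real implementation exploits sparsity and per-node inverted files): both vectors are normalized and compared by the norm of their difference, so a smaller score means a more relevant database image.

import numpy as np

def relevance(q, d, ord=1):
    # q, d: tf-idf weighted vectors over tree nodes; lower score = better
    qn = q / np.linalg.norm(q, ord)
    dn = d / np.linalg.norm(d, ord)
    return np.linalg.norm(qn - dn, ord)

rng = np.random.default_rng(4)
q = rng.random(1000)                     # query vector over the tree's nodes
d_similar = q + 0.01 * rng.random(1000)  # near-duplicate database image
d_other = rng.random(1000)               # unrelated database image
print(relevance(q, d_similar) < relevance(q, d_other))   # True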

32 Results Tested on a ground-truth database of 6,376 images, arranged in groups of four images of the same object.

33 Results

34

35 Tested on a database of 1 million images of CD covers. Sub-second retrieval times for the million-image database. Performance increases with the number of leaf nodes.

36

37 Questions?

