Presentation on theme: "Three things everyone should know to improve object retrieval"— Presentation transcript:

1 Three things everyone should know to improve object retrieval
Relja Arandjelović, Andrew Zisserman. Department of Engineering Science, University of Oxford

2 Objective
Find all instances of an object in a large dataset
Do it instantly
Be robust to scale, viewpoint, lighting, partial occlusion

3 Three things everyone should know
RootSIFT
Discriminative query expansion
Database-side feature augmentation

4 Bag-of-visual-words particular object retrieval

5 First thing everyone should know
RootSIFT

6 Improving SIFT
Hellinger or χ² measures outperform Euclidean distance when comparing histograms, with examples in image categorization, object and texture classification, etc.

7 Hellinger distance
Hellinger kernel (Bhattacharyya’s coefficient) for L1-normalized histograms x and y: H(x, y) = Σᵢ √(xᵢ yᵢ)
Intuition: Euclidean distance can be dominated by large bin values, whereas the Hellinger distance is more sensitive to smaller bin values

8 Hellinger distance
Hellinger kernel (Bhattacharyya’s coefficient) for L1-normalized histograms x and y: H(x, y) = Σᵢ √(xᵢ yᵢ)
Explicit feature map of x into x′: L1-normalize x, then take the element-wise square root to give x′; x′ is then automatically L2-normalized, since Σᵢ x′ᵢ² = Σᵢ xᵢ = 1
Computing Euclidean distance in the feature-map space is equivalent to using the Hellinger kernel in the original space, since ‖x′ − y′‖² = ‖x′‖² + ‖y′‖² − 2 x′ᵀy′ = 2 − 2 H(x, y)
The square-rooted SIFT descriptor is RootSIFT (a minimal sketch follows below)
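As an illustration of the mapping above, here is a minimal NumPy sketch; the function name rootsift, the eps guard and the random test vectors are illustrative rather than from the slides. It converts a descriptor to RootSIFT and checks that the squared Euclidean distance between mapped vectors equals 2 − 2·H(x, y):

import numpy as np

def rootsift(sift, eps=1e-12):
    # L1-normalise the descriptor, then take the element-wise square root;
    # the result is automatically L2-normalised.
    sift = np.asarray(sift, dtype=np.float64)
    return np.sqrt(sift / (sift.sum() + eps))

# Check the relation between Euclidean distance in the mapped space
# and the Hellinger kernel in the original space.
x = np.random.rand(128)
y = np.random.rand(128)
xp, yp = rootsift(x), rootsift(y)
hellinger = np.sum(np.sqrt((x / x.sum()) * (y / y.sum())))
assert np.isclose(np.sum((xp - yp) ** 2), 2.0 - 2.0 * hellinger)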

9 RootSIFT: properties
Extremely simple to implement and use; one line of Matlab code converts SIFT to RootSIFT:
rootsift = sqrt( sift / sum(sift) );
Conversion from SIFT to RootSIFT can be done on-the-fly
No need to re-compute stored SIFT descriptors for large image datasets
No added storage requirements
Applications throughout computer vision: k-means, approximate nearest neighbour methods, soft-assignment to visual words, Fisher vector coding, PCA, descriptor learning, hashing methods, product quantization, etc.

10 Bag-of-visual-words particular object retrieval

11 Oxford buildings dataset

12 RootSIFT: results
[23]: bag of visual words with tf-idf ranking, or tf-idf ranking with spatial reranking
[23] J. Philbin, O. Chum, M. Isard, J. Sivic, and A. Zisserman. Object retrieval with large vocabularies and fast spatial matching. In Proc. CVPR, 2007.

13 Second thing everyone should know
Discriminative query expansion:
Train a linear SVM classifier
Use query-expanded BoW vectors as positive training data
Use low-ranked images as negative training data
Rank images by their signed distance from the decision boundary (a sketch follows below)
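A minimal sketch of the idea, assuming tf-idf BoW vectors are already available as NumPy arrays and using scikit-learn's LinearSVC as the linear SVM; the function name, the C parameter and the use of scikit-learn are assumptions, since the slides do not specify an implementation:

import numpy as np
from sklearn.svm import LinearSVC

def discriminative_query_expansion(pos_bow, neg_bow, db_bow, C=1.0):
    # pos_bow: BoW vectors of the query and its spatially verified results (positives)
    # neg_bow: BoW vectors of low-ranked images (negatives)
    # db_bow:  BoW vectors of all database images to be re-ranked
    X = np.vstack([pos_bow, neg_bow])
    y = np.concatenate([np.ones(len(pos_bow)), -np.ones(len(neg_bow))])
    clf = LinearSVC(C=C).fit(X, y)
    scores = clf.decision_function(db_bow)   # signed distance from the decision boundary
    return np.argsort(-scores)               # best-ranked images first

Scoring each database image is just a dot product between the learned weight vector and its BoW vector, so re-ranking can reuse the same scoring machinery as standard tf-idf retrieval.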

14 Query expansion
[6] O. Chum, J. Philbin, J. Sivic, M. Isard, and A. Zisserman. Total recall: Automatic query expansion with a generative feature model for object retrieval. In Proc. ICCV, 2007.

15 Average query expansion (AQE)
BoW vectors from spatially verified regions are used to build a richer model for the query
Average query expansion (AQE) [6]: use the mean of the BoW vectors to re-query
Other methods exist (e.g. transitive closure, multiple image resolutions), but their performance is similar to AQE while they are slower, as several queries are issued
Average QE is the de facto standard (a sketch follows below)
[6] O. Chum, J. Philbin, J. Sivic, M. Isard, and A. Zisserman. Total recall: Automatic query expansion with a generative feature model for object retrieval. In Proc. ICCV, 2007.
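A minimal NumPy sketch of average query expansion; the final L2 re-normalisation is an assumption for illustration, as the slides only specify taking the mean:

import numpy as np

def average_query_expansion(query_bow, verified_bow):
    # query_bow:    BoW vector of the original query
    # verified_bow: BoW vectors of the spatially verified top-ranked results
    stacked = np.vstack([query_bow[np.newaxis, :], verified_bow])
    avg = stacked.mean(axis=0)                     # mean of the BoW vectors
    return avg / (np.linalg.norm(avg) + 1e-12)     # re-query with this vector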

16 Discriminative query expansion

17 Discriminative Query Expansion: results
Significant boost in performance, at no added cost (mAP on Oxford 105k):

18 DQE: results, Oxford 105k (RootSIFT)

19 Third thing everyone should know
Database-side feature augmentation

20 Database-side feature augmentation
Query expansion improves retrieval performance by obtaining a better model for the query
A natural complement: obtain a better model for the database images [29]
Augment database images with features from other images of the same object

21 Image graph
Construct an image graph [26] (see the sketch below)
Nodes: images
Edges: connect images containing the same object
[26] J. Philbin and A. Zisserman. Object mining using a matching graph on very large image collections. In Proc. ICVGIP, 2008.
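A schematic sketch of the construction; the match_fn callable and the min_inliers threshold are placeholders for the spatial verification pipeline, and in practice candidate pairs would come from retrieval rather than exhaustive pairwise matching:

def build_image_graph(num_images, match_fn, min_inliers=20):
    # Nodes are image indices; an undirected edge links two images whose
    # spatially verified match has at least `min_inliers` inlier correspondences.
    graph = {i: set() for i in range(num_images)}
    for i in range(num_images):
        for j in range(i + 1, num_images):
            if match_fn(i, j) >= min_inliers:
                graph[i].add(j)
                graph[j].add(i)
    return graph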

22 Database-side feature augmentation (AUG)
Turcot and Lowe 2009 [29]: obtain a better model for database images
Each image is augmented with all visual words from its neighbouring images (a sketch follows below)
[29] T. Turcot and D. G. Lowe. Better matching with fewer features: The selection of useful features in large database recognition problems. In ICCV, 2009.
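A minimal sketch of the augmentation step, assuming the image graph from the previous slide and per-image BoW count vectors; summing neighbour counts into each image's vector is one simple way to realise "augment with all visual words", the slides do not fix the exact accumulation rule:

import numpy as np

def augment_database_bow(bow, graph):
    # bow:   dict image -> BoW count vector (NumPy array)
    # graph: dict image -> set of neighbouring images (same object)
    augmented = {}
    for img, neighbours in graph.items():
        vec = bow[img].astype(np.float64)
        for n in neighbours:
            vec += bow[n]                # add every neighbour's visual words
        augmented[img] = vec
    return augmented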

23 Spatial database-side feature aug. (SPAUG)
AUG: augment with all visual words from neighbouring images; improves recall, but precision is sacrificed
Spatial AUG: only augment with visual words that are actually visible in the image (a sketch follows below)
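A sketch of the spatial variant, assuming each image stores its features as (visual_word, x, y) tuples and that spatial verification has produced a homography mapping each neighbour into the image; the homographies dictionary and the width/height arguments are assumptions for illustration:

import numpy as np

def spatially_augment(features, graph, homographies, width, height):
    # features[i]:          list of (visual_word, x, y) for image i
    # homographies[(i, n)]: 3x3 homography mapping points of neighbour n into image i
    augmented = {}
    for img, neighbours in graph.items():
        words = [w for (w, _, _) in features[img]]
        for n in neighbours:
            H = homographies[(img, n)]
            for w, x, y in features[n]:
                p = H @ np.array([x, y, 1.0])
                u, v = p[:2] / p[2]
                # keep only visual words that project inside the image, i.e. are visible
                if 0.0 <= u < width and 0.0 <= v < height:
                    words.append(w)
        augmented[img] = words
    return augmented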

24 Spatial db-side feature aug. (SPAUG): results
28% fewer features are augmented than in the original method
The original approach introduces a large number of irrelevant and detrimental visual words
Using RootSIFT:

25 Final retrieval system
Combine all the improvements into one system:
RootSIFT
Discriminative query expansion
Spatial database-side feature augmentation

26 Final results New state of the art on all three datasets

27 Conclusions
RootSIFT
Improves performance in every single experiment
Every system which uses SIFT is ready to use RootSIFT
Easy to implement, no added computational or storage cost
Discriminative query expansion
Consistently outperforms average query expansion
At least as efficient as average QE, so there is no reason not to use it
Database-side feature augmentation
Useful for increasing recall
Our extension improves precision but increases storage cost

