Improving SIFT
- Hellinger or χ² measures outperform Euclidean distance when comparing histograms, e.g. in image categorization, and object and texture classification.
Hellinger distance
- Hellinger kernel (Bhattacharyya's coefficient) for L1-normalized histograms x and y:
      H(x, y) = Σᵢ √(xᵢ yᵢ)
- Intuition: Euclidean distance can be dominated by large bin values; the Hellinger distance is more sensitive to smaller bin values.
- Explicit feature map of x into x′:
  1. L1-normalize x
  2. take the element-wise square root of x to give x′
  3. x′ is then automatically L2-normalized, since Σᵢ x′ᵢ² = Σᵢ xᵢ = 1
- Computing Euclidean distance in the feature-map space is equivalent to computing the Hellinger distance in the original space (see the numerical check below), since:
      ‖x′ − y′‖² = ‖x′‖² + ‖y′‖² − 2 x′ᵀy′ = 2 − 2 H(x, y)
- A SIFT descriptor with this mapping applied is called RootSIFT.
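This equivalence is easy to verify numerically. A minimal Matlab/Octave sketch (illustrative, not from the paper; all variable names are made up) comparing the two quantities on random histograms:

    % two random histograms, L1-normalized
    x = rand(128, 1); x = x / sum(x);
    y = rand(128, 1); y = y / sum(y);

    % Hellinger kernel in the original space
    H = sum(sqrt(x .* y));

    % explicit feature map: element-wise square root
    xp = sqrt(x);   % automatically L2-normalized: sum(xp.^2) == 1
    yp = sqrt(y);

    % squared Euclidean distance in the mapped space equals 2 - 2*H
    d2 = sum((xp - yp).^2);
    fprintf('d2 = %.6f, 2 - 2*H = %.6f\n', d2, 2 - 2*H);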
RootSIFT: properties
- Extremely simple to implement and use
- One line of Matlab code converts SIFT to RootSIFT:
      rootsift = sqrt( sift / sum(sift) );
- Conversion from SIFT to RootSIFT can be done on-the-fly (a vectorized version follows below)
- No need to re-compute stored SIFT descriptors for large image datasets
- No added storage requirements
- Applications throughout computer vision: k-means, approximate nearest neighbour methods, soft-assignment to visual words, Fisher vector coding, PCA, descriptor learning, hashing methods, product quantization, etc.
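The same one-liner vectorizes over a whole descriptor matrix. A minimal sketch, assuming descs (a hypothetical name) holds one 128-D SIFT descriptor per column:

    % descs: 128 x n matrix of SIFT descriptors, one per column
    descs = rand(128, 1000);                        % stand-in for real SIFT output
    descs = descs ./ repmat(sum(descs, 1), 128, 1); % L1-normalize each column
    rootsift = sqrt(descs);                         % element-wise square root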
RootSIFT: results
- Bag-of-visual-words retrieval with tf-idf ranking, or tf-idf ranking with spatial reranking
J. Philbin, O. Chum, M. Isard, J. Sivic, and A. Zisserman. Object retrieval with large vocabularies and fast spatial matching. In Proc. CVPR, 2007.
Second thing everyone should know: discriminative query expansion
- Train a linear SVM classifier:
  - use query-expanded BoW vectors as positive training data
  - use low-ranked images as negative training data
- Rank images on their signed distance from the decision boundary (a sketch follows below)
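One way to realize this is sketched below in Matlab/Octave, using a Pegasos-style SGD to train the linear SVM on toy data; the solver choice and all variable names are assumptions for illustration, not the paper's implementation:

    % X: d x n tf-idf BoW vectors (columns), y: labels (+1 positive, -1 negative)
    d = 1000; npos = 50; nneg = 200; n = npos + nneg;
    X = [abs(randn(d, npos)) + 0.5, abs(randn(d, nneg))];  % toy training data
    y = [ones(1, npos), -ones(1, nneg)];

    % Pegasos-style SGD for a linear SVM (no bias term)
    lambda = 1e-4; T = 5000; w = zeros(d, 1);
    for t = 1:T
        i = randi(n); eta = 1 / (lambda * t);
        if y(i) * (w' * X(:, i)) < 1
            w = (1 - eta * lambda) * w + eta * y(i) * X(:, i);
        else
            w = (1 - eta * lambda) * w;
        end
    end

    % rank database images by signed distance from the decision boundary
    scores = w' * X;
    [~, ranking] = sort(scores, 'descend');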
Query expansion
O. Chum, J. Philbin, J. Sivic, M. Isard, and A. Zisserman. Total recall: Automatic query expansion with a generative feature model for object retrieval. In Proc. ICCV, 2007.
Average query expansion (AQE)
- BoW vectors from spatially verified regions are used to build a richer model for the query
- AQE: use the mean of the BoW vectors to re-query (a sketch follows below)
- Other methods exist (e.g. transitive closure, multiple image resolutions), but their performance is similar to AQE while they are slower, since several queries are issued
- Average QE is the de facto standard
O. Chum, J. Philbin, J. Sivic, M. Isard, and A. Zisserman. Total recall: Automatic query expansion with a generative feature model for object retrieval. In Proc. ICCV, 2007.
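In code, AQE is only a few lines. A minimal sketch, where q (the query BoW vector) and V (BoW vectors of the top spatially verified results) are hypothetical names:

    % q: d x 1 query BoW vector; V: d x m BoW vectors of verified results
    d = 1000; m = 10;
    q = rand(d, 1); V = rand(d, m);  % stand-ins for real tf-idf vectors
    q_aqe = mean([q, V], 2);         % average the query with verified results
    q_aqe = q_aqe / norm(q_aqe);     % re-normalize (here L2) before re-querying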
Third thing everyone should know: database-side feature augmentation
Database-side feature augmentation
- Query expansion improves retrieval performance by obtaining a better model for the query
- Natural complement: obtain a better model for the database images
- Augment database images with features from other images of the same object
Image graph
- Construct an image graph:
  - nodes: images
  - edges: connect images containing the same object
J. Philbin and A. Zisserman. Object mining using a matching graph on very large image collections. In Proc. ICVGIP, 2008.
Database-side feature augmentation (AUG)
- Turcot and Lowe 2009: obtain a better model for database images
- Each image is augmented with all visual words from neighbouring images in the graph (a sketch follows below)
P. Turcot and D. G. Lowe. Better matching with fewer features: The selection of useful features in large database recognition problems. In ICCV, 2009.
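The augmentation can be written compactly with matrices. A sketch, assuming a BoW matrix B and an image-graph adjacency matrix A (both hypothetical names):

    % B: d x n database BoW vectors (columns); A: n x n adjacency matrix of
    % the image graph, A(i,j) = 1 if images i and j show the same object
    d = 1000; n = 5;
    B = rand(d, n);                                  % stand-in BoW vectors
    A = triu(double(rand(n) > 0.5), 1); A = A + A';  % toy symmetric graph
    B_aug = B + B * A;  % column j gains the visual words of all its neighbours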
Spatial database-side feature aug. (SPAUG)
- AUG: augment with all visual words from neighbouring images
  - improves recall, but precision is sacrificed
- Spatial AUG: only augment with visual words estimated to be visible in the augmented image (a sketch follows below)
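The visibility test can be sketched as follows, under the assumption that spatial verification yields a homography H mapping the neighbour's feature locations P into the database image (all names and the test itself are illustrative):

    % keep only the neighbour's features that project inside the database image
    H = eye(3); w = 640; h = 480; k = 100;      % toy homography and image size
    P = [rand(1, k) * 2 * w - w / 2; ...
         rand(1, k) * 2 * h - h / 2];           % toy feature locations
    Ph = H * [P; ones(1, k)];                   % project (homogeneous coordinates)
    Pm = Ph(1:2, :) ./ repmat(Ph(3, :), 2, 1);  % de-homogenize
    visible = Pm(1, :) >= 1 & Pm(1, :) <= w & Pm(2, :) >= 1 & Pm(2, :) <= h;
    % augment only with the visual words of the visible features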
Spatial db-side feature aug. (SPAUG): results
- 28% fewer features are augmented than with the original method
- The original approach introduces a large number of irrelevant and detrimental visual words
- Using RootSIFT: [results figure omitted]
Final retrieval system
- Combine all the improvements into one system:
  - RootSIFT
  - discriminative query expansion
  - spatial database-side feature augmentation
Final results
- New state of the art on all three datasets
Conclusions
- RootSIFT
  - improves performance in every single experiment
  - every system which uses SIFT is ready to use RootSIFT
  - easy to implement, with no added computational or storage cost
- Discriminative query expansion
  - consistently outperforms average query expansion
  - at least as efficient as average QE; there is no reason not to use it
- Database-side feature augmentation
  - useful for increasing recall
  - our extension improves precision, but increases storage cost