
1 Review: Intro to recognition
Recognition tasks
Machine learning approach: training, testing, generalization
Example classifiers: nearest neighbor, linear classifiers

2 Image features
Spatial support: pixel or local patch, segmentation region, bounding box, whole image

3 Image features
We will focus mainly on global image features for whole-image classification tasks: GIST descriptors, bags of features, spatial pyramids

4 GIST descriptors Oliva & Torralba (2001) http://people.csail.mit.edu/torralba/code/spatialenvelope/
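The GIST descriptor summarizes a scene by pooling the responses of an oriented filter bank over a coarse spatial grid. Below is a minimal numpy sketch of that filter-then-pool structure, not Oliva & Torralba's exact implementation: the kernel size, the two frequencies, and the 4×4 pooling grid are illustrative assumptions, and convolution is done circularly via the FFT for brevity.

```python
import numpy as np

def gabor_kernel(size, theta, freq):
    """Real-valued Gabor kernel at orientation theta and spatial frequency freq."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    xr = xx * np.cos(theta) + yy * np.sin(theta)       # rotated coordinate
    envelope = np.exp(-(xx**2 + yy**2) / (2 * (size / 4) ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def gist_like(image, n_orient=4, freqs=(0.1, 0.25), grid=4, ksize=15):
    """Filter with a small Gabor bank, then average response energy
    inside each cell of a grid x grid mesh (GIST-like, simplified)."""
    h, w = image.shape
    feats = []
    for f in freqs:
        for i in range(n_orient):
            k = gabor_kernel(ksize, np.pi * i / n_orient, f)
            # circular convolution via FFT (boundary handling kept simple)
            resp = np.abs(np.fft.ifft2(np.fft.fft2(image) *
                                       np.fft.fft2(k, image.shape)))
            for by in range(grid):
                for bx in range(grid):
                    block = resp[by*h//grid:(by+1)*h//grid,
                                 bx*w//grid:(bx+1)*w//grid]
                    feats.append(block.mean())
    return np.array(feats)   # 2 freqs * 4 orientations * 16 cells = 128 dims
```

The reference implementation at the URL above uses a larger multiscale Gabor bank and windowed pooling; this sketch only conveys the overall structure.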

5 Bags of features

6 Origin 1: Texture recognition
Texture is characterized by the repetition of basic elements, or textons. For stochastic textures, it is the identity of the textons, not their spatial arrangement, that matters.
Julesz, 1981; Cula & Dana, 2001; Leung & Malik, 2001; Mori, Belongie & Malik, 2001; Schmid, 2001; Varma & Zisserman, 2002, 2003; Lazebnik, Schmid & Ponce, 2003

7 Origin 1: Texture recognition
Each texture image is represented as a histogram over a universal texton dictionary.
Julesz, 1981; Cula & Dana, 2001; Leung & Malik, 2001; Mori, Belongie & Malik, 2001; Schmid, 2001; Varma & Zisserman, 2002, 2003; Lazebnik, Schmid & Ponce, 2003

8 Origin 2: Bag-of-words models Orderless document representation: frequencies of words from a dictionary Salton & McGill (1983)

9 Origin 2: Bag-of-words models US Presidential Speeches Tag Cloud http://chir.ag/projects/preztags/ Orderless document representation: frequencies of words from a dictionary Salton & McGill (1983)

12 Bag-of-features steps
1. Extract local features
2. Learn "visual vocabulary"
3. Quantize local features using visual vocabulary
4. Represent images by frequencies of "visual words"

13 1. Local feature extraction Regular grid or interest regions

14 1. Local feature extraction: detect patches, normalize each patch, compute a descriptor. Slide credit: Josef Sivic
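As a rough stand-in for the detect/normalize/describe pipeline, the sketch below cuts patches on a regular grid and normalizes each one to zero mean and unit L2 norm, using raw pixels as the descriptor. A real system would use an interest-region detector and a SIFT-like descriptor; the function name and parameters here are illustrative.

```python
import numpy as np

def extract_patch_descriptors(image, patch=8, stride=8):
    """Dense grid: cut patch x patch windows, normalize each
    (zero mean, unit L2 norm), flatten into raw-pixel descriptors."""
    h, w = image.shape
    descs = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            p = image[y:y+patch, x:x+patch].astype(float).ravel()
            p -= p.mean()                 # photometric normalization
            n = np.linalg.norm(p)
            if n > 1e-8:                  # guard against flat patches
                p /= n
            descs.append(p)
    return np.array(descs)                # shape: (num_patches, patch*patch)
```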

15 … Slide credit: Josef Sivic

16 2. Learning the visual vocabulary … Slide credit: Josef Sivic

17 2. Learning the visual vocabulary Clustering … Slide credit: Josef Sivic

18 2. Learning the visual vocabulary Clustering … Slide credit: Josef Sivic Visual vocabulary

19 Review: K-means clustering
Goal: minimize the sum of squared Euclidean distances between features x_i and their nearest cluster centers m_k
Algorithm:
Randomly initialize K cluster centers
Iterate until convergence:
Assign each feature to the nearest center
Recompute each cluster center as the mean of all features assigned to it
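The algorithm on the slide translates directly into numpy. This is a plain Lloyd-style sketch; initializing centers by sampling K data points is one common choice (smarter initializations such as k-means++ exist).

```python
import numpy as np

def kmeans(X, K, n_iter=50, seed=0):
    """K-means as on the slide: random init, then alternate
    nearest-center assignment and recomputing centers as means."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), K, replace=False)]
    for _ in range(n_iter):
        # squared Euclidean distance from every point to every center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # recompute each center; keep old center if its cluster is empty
        new = np.array([X[labels == k].mean(0) if np.any(labels == k)
                        else centers[k] for k in range(K)])
        if np.allclose(new, centers):     # converged
            break
        centers = new
    return centers, labels
```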

20 Example codebook (appearance codebook) Source: B. Leibe

21 Another codebook Appearance codebook … Source: B. Leibe

22 Bag-of-features steps
1. Extract local features
2. Learn "visual vocabulary"
3. Quantize local features using visual vocabulary
4. Represent images by frequencies of "visual words"
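Steps 3 and 4 reduce to a nearest-center assignment followed by counting. A minimal sketch, assuming the descriptors and the learned vocabulary are numpy arrays of matching dimensionality:

```python
import numpy as np

def bow_histogram(descriptors, vocabulary):
    """Steps 3-4: assign each descriptor to its nearest visual word,
    then represent the image as a normalized word-frequency histogram."""
    # squared Euclidean distance from each descriptor to each word
    d = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(-1)
    words = d.argmin(1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()    # L1-normalized bag-of-words vector
```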

23 Visual vocabularies: Details
How to choose vocabulary size? Too small: visual words are not representative of all patches. Too large: quantization artifacts, overfitting. The right size is application-dependent.
Improving efficiency of quantization: vocabulary trees (Nistér & Stewénius, 2006)
Improving vocabulary quality: discriminative/supervised training of codebooks; sparse coding, non-exclusive assignment to codewords
More discriminative bag-of-words representations: Fisher vectors (Perronnin et al., 2007), VLAD (Jégou et al., 2010); incorporating spatial information

24 Bags of features for action recognition: space-time interest points
Juan Carlos Niebles, Hongcheng Wang and Li Fei-Fei, "Unsupervised Learning of Human Action Categories Using Spatial-Temporal Words," IJCV 2008

25 Bags of features for action recognition
Juan Carlos Niebles, Hongcheng Wang and Li Fei-Fei, "Unsupervised Learning of Human Action Categories Using Spatial-Temporal Words," IJCV 2008

26 Spatial pyramids level 0 Lazebnik, Schmid & Ponce (CVPR 2006)

27 Spatial pyramids level 0 level 1 Lazebnik, Schmid & Ponce (CVPR 2006)

28 Spatial pyramids level 0 level 1 level 2 Lazebnik, Schmid & Ponce (CVPR 2006)
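A spatial pyramid concatenates per-cell bag-of-words histograms over increasingly fine grids, with finer levels weighted more heavily. The sketch below follows the weighting scheme of Lazebnik et al. (level l gets weight 1/2^(L-l) for l > 0, and level 0 shares the coarsest weight 1/2^L); the function signature and the assumption that keypoint coordinates are normalized to [0,1) are illustrative choices.

```python
import numpy as np

def spatial_pyramid(points, words, K, levels=2):
    """Concatenate weighted per-cell word histograms over a pyramid of
    grids: level l has 2^l x 2^l cells. points: (N, 2) in [0, 1)^2;
    words: (N,) visual-word indices in [0, K)."""
    feats = []
    for l in range(levels + 1):
        cells = 2 ** l
        weight = 1.0 / 2 ** (levels - l) if l > 0 else 1.0 / 2 ** levels
        # which grid cell each keypoint falls in at this level
        cx = np.minimum((points[:, 0] * cells).astype(int), cells - 1)
        cy = np.minimum((points[:, 1] * cells).astype(int), cells - 1)
        for yy in range(cells):
            for xx in range(cells):
                m = (cx == xx) & (cy == yy)
                feats.append(weight * np.bincount(words[m], minlength=K))
    return np.concatenate(feats)   # K * (1 + 4 + 16) dims for levels=2
```

For levels = 2 and a K-word vocabulary this yields a 21K-dimensional vector, matching the 1 + 4 + 16 cells shown on the slides.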

29 Results: Scene category dataset Multi-class classification results (100 training images per class)

30 Results: Caltech101 dataset Multi-class classification results (30 training images per class)

