# Tamara Berg – Object Recognition – BoF models (790-133: Recognizing People, Objects, & Actions)


Topic Presentations. Hopefully you have met your topic-presentation group members by now. Group 1 – see me to run through your slides this week, or Monday at the latest (I'm traveling Thursday/Friday). Send me links to 2–3 papers for the class to read. Sign up for the class Google group (790-133): go to groups.google.com and search for 790-133 (sorted by date). Use it to post and answer questions related to the class.

Bag-of-features models: an object is represented as a bag of "features". Source: Svetlana Lazebnik

Exchangeability. De Finetti's theorem of exchangeability (the "bag of words" theorem): the joint probability distribution underlying the data is invariant to permutation of the observations.

Origin 2: Bag-of-words models US Presidential Speeches Tag Cloud http://chir.ag/phernalia/preztags/ Orderless document representation: frequencies of words from a dictionary Salton & McGill (1983) source: Svetlana Lazebnik 5

Bag of words for text: represent documents as "bags of words".

Example Doc1 = “the quick brown fox jumped” Doc2 = “brown quick jumped fox the” Would a bag of words model represent these two documents differently? 7
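
The answer is no: a bag-of-words model keeps only word frequencies and discards order, so the two documents map to the same representation. A minimal sketch in Python, using the standard `collections.Counter` as the "bag":

```python
from collections import Counter

doc1 = "the quick brown fox jumped"
doc2 = "brown quick jumped fox the"

# A bag of words keeps only word frequencies, discarding word order.
bow1 = Counter(doc1.split())
bow2 = Counter(doc2.split())

print(bow1 == bow2)  # True: identical bags, so the model cannot tell them apart
```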

Bag of words for images  Represent images as a “bag of features” 8

Bag of features: outline.
1. Extract features.
2. Learn a "visual vocabulary".
3. Represent images by frequencies of "visual words".
Source: Svetlana Lazebnik

2. Learning the visual vocabulary: clustering the feature descriptors produces the visual vocabulary (the cluster centers). Slide credit: Josef Sivic

K-means clustering (reminder). We want to minimize the sum of squared Euclidean distances between points x_i and their nearest cluster centers m_k. Algorithm: randomly initialize K cluster centers, then iterate until convergence: (1) assign each data point to the nearest center; (2) recompute each cluster center as the mean of all points assigned to it. Source: Svetlana Lazebnik
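
The two-step loop above can be sketched in a few lines of plain Python (a toy, unoptimized illustration; the function name and arguments are hypothetical):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: minimize sum of squared distances to nearest centers."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # randomly initialize K cluster centers
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((pi - ci) ** 2
                                      for pi, ci in zip(p, centers[c])))
            clusters[j].append(p)
        # Update step: recompute each center as the mean of its points
        for j, cl in enumerate(clusters):
            if cl:
                centers[j] = tuple(sum(dim) / len(cl) for dim in zip(*cl))
    return centers

pts = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
centers = kmeans(pts, k=2)  # converges to the means of the two point groups
```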

Example visual vocabulary Fei-Fei et al. 2005 15

Image Representation. For a query image: (1) extract features; (2) associate each feature with the nearest cluster center (visual word); (3) accumulate visual-word frequencies over the image. [Figure: features quantized against the visual vocabulary.]
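
These steps can be sketched as follows (a toy illustration: `quantize` and `bag_of_features` are hypothetical helper names, and the two-word vocabulary is made up):

```python
def quantize(feature, vocabulary):
    """Map a feature vector to the index of its nearest cluster center (visual word)."""
    return min(range(len(vocabulary)),
               key=lambda i: sum((f - v) ** 2
                                 for f, v in zip(feature, vocabulary[i])))

def bag_of_features(features, vocabulary):
    """Accumulate visual-word frequencies over an image's features."""
    hist = [0] * len(vocabulary)
    for f in features:
        hist[quantize(f, vocabulary)] += 1
    return hist

vocab = [(0.0, 0.0), (1.0, 1.0)]                 # toy 2-word vocabulary
feats = [(0.1, 0.0), (0.9, 1.1), (1.0, 0.8)]     # toy image features
print(bag_of_features(feats, vocab))             # [1, 2]
```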

3. Image representation: a histogram of codeword (visual word) frequencies over the image. Source: Svetlana Lazebnik

4. Image classification. Given the bag-of-features representations (codeword frequency histograms) of images from different classes (e.g. CAR), how do we learn a model for distinguishing them? Source: Svetlana Lazebnik

Image Categorization Choose from many categories What is this? helicopter

Image Categorization. Choose from many categories – what is this? Approaches include:
- SVM / Naïve Bayes – Csurka et al. (Caltech 4/7)
- Nearest Neighbor – Berg et al. (Caltech 101)
- Kernel + SVM – Grauman et al. (Caltech 101)
- Multiple Kernel Learning + SVMs – Varma et al. (Caltech 101)
- …

Visual Categorization with Bags of Keypoints Gabriella Csurka, Christopher R. Dance, Lixin Fan, Jutta Willamowski, Cédric Bray 21

Data. Images in 7 classes: faces, buildings, trees, cars, phones, bikes, books. Caltech 4 dataset: faces, airplanes, cars (rear and side), motorbikes, background.

Method steps:
- Detect and describe image patches.
- Assign patch descriptors to a set of predetermined clusters (a visual vocabulary).
- Construct a bag of keypoints, which counts the number of patches assigned to each cluster.
- Apply a classifier (SVM or Naïve Bayes), treating the bag of keypoints as the feature vector.
- Determine which category or categories to assign to the image.

Bag-of-Keypoints Approach: Interesting Point Detection → Key Patch Extraction → Feature Descriptors → Bag of Keypoints → Multi-class Classifier. Slide credit: Yun-hsueh Liu

SIFT Descriptors (the feature-descriptor stage of the pipeline). Slide credit: Yun-hsueh Liu

Bag of Keypoints (1). Construction of a vocabulary: k-means clustering finds "centroids" (over all the descriptors from all the training images); define a "vocabulary" as the set of centroids, where every centroid represents a "word". Slide credit: Yun-hsueh Liu

Bag of Keypoints (2). Histogram: count the number of occurrences of each visual word in the image. Slide credit: Yun-hsueh Liu

Multi-class Classifier. In this paper, classification is based on conventional machine-learning approaches: Support Vector Machine (SVM) or Naïve Bayes. Slide credit: Yun-hsueh Liu

SVM 29

Reminder: Linear SVM. [Figure: decision boundary w^T x + b = 0 with margin hyperplanes w^T x + b = ±1; the support vectors are the points lying on the margins.] The margin is maximized by minimizing ½‖w‖² s.t. y_i (w^T x_i + b) ≥ 1. Slide credit: Jinwei Gu

Nonlinear SVMs: The Kernel Trick. With this mapping φ, the discriminant function becomes g(x) = Σ_i α_i y_i φ(x_i)^T φ(x) + b. There is no need to know the mapping explicitly, because we only use dot products of feature vectors in both training and testing. A kernel function is defined as a function that corresponds to a dot product of two feature vectors in some expanded feature space: K(x_i, x_j) = φ(x_i)^T φ(x_j). Slide credit: Jinwei Gu

Nonlinear SVMs: The Kernel Trick. Examples of commonly used kernel functions:
- Linear: K(x_i, x_j) = x_i^T x_j
- Polynomial: K(x_i, x_j) = (1 + x_i^T x_j)^p
- Gaussian (Radial Basis Function, RBF): K(x_i, x_j) = exp(−‖x_i − x_j‖² / (2σ²))
- Sigmoid: K(x_i, x_j) = tanh(β₀ x_i^T x_j + β₁)
Slide credit: Jinwei Gu
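
Under common parameterizations, these kernels can be written directly (a sketch; parameter names `p`, `sigma`, `beta0`, `beta1` follow the usual conventions and are assumptions, not anything specific to these slides):

```python
import math

def linear(x, y):
    """K(x, y) = x^T y."""
    return sum(a * b for a, b in zip(x, y))

def polynomial(x, y, p=2):
    """K(x, y) = (1 + x^T y)^p."""
    return (1 + linear(x, y)) ** p

def rbf(x, y, sigma=1.0):
    """K(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq / (2 * sigma ** 2))

def sigmoid(x, y, beta0=1.0, beta1=0.0):
    """K(x, y) = tanh(beta0 * x^T y + beta1)."""
    return math.tanh(beta0 * linear(x, y) + beta1)

x, y = (1.0, 2.0), (3.0, 0.5)
print(linear(x, y))      # 4.0
print(polynomial(x, y))  # 25.0
```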

Reminder: Support Vector Machine.
1. Choose a kernel function.
2. Choose a value for C and any other parameters (e.g. σ).
3. Solve the quadratic programming problem (many software packages are available).
4. Classify held-out validation instances using the learned model.
5. Select the best learned model based on validation accuracy.
6. Classify test instances using the final selected model.

SVM for image classification Train k binary 1-vs-all SVMs (one per class) For a test instance, evaluate with each classifier Assign the instance to the class with the largest SVM output 34
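
For linear SVMs, the 1-vs-all decision reduces to taking the largest value of w^T x + b over the per-class models. A sketch (the weight vectors below are made-up toy values, not learned ones):

```python
def predict_one_vs_all(x, classifiers):
    """classifiers maps class name -> (w, b) of a binary 1-vs-all linear SVM.
    Evaluate every classifier on x and return the class with the largest output."""
    def score(wb):
        w, b = wb
        return sum(wi * xi for wi, xi in zip(w, x)) + b
    return max(classifiers, key=lambda c: score(classifiers[c]))

# Hypothetical "learned" models, for illustration only
models = {
    "face": ((1.0, 0.0), -0.5),
    "car":  ((0.0, 1.0), -0.5),
}
print(predict_one_vs_all((0.9, 0.1), models))  # face
```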

Naïve Bayes 35

Naïve Bayes Model. C – class; F – features. We specify only two kinds of parameters: P(C), the prior over class labels, and P(F_i | C), how each feature depends on the class.

Example (slide from Dan Klein):

Percentage of documents in training set labeled as spam/ham Slide from Dan Klein 40

In the documents labeled as spam, occurrence percentage of each word (e.g. # times “the” occurred/# total words). Slide from Dan Klein 41

In the documents labeled as ham, occurrence percentage of each word (e.g. # times “the” occurred/# total words). Slide from Dan Klein 42

Classification. Choose the class that maximizes the posterior (up to a constant): P(c) · Π_i P(f_i | c).

Classification in practice:
- Multiplying lots of small probabilities can result in floating-point underflow.
- Since log(xy) = log(x) + log(y), we can sum log probabilities instead of multiplying probabilities.
- Since log is a monotonic function, the class with the highest score does not change.
- So, what we usually compute in practice is: argmax_c [ log P(c) + Σ_i log P(f_i | c) ].
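
A minimal sketch of the log-space computation (the spam/ham priors and word likelihoods below are made-up toy numbers, not estimates from real data):

```python
import math

def nb_log_score(class_prior, word_probs, word_counts):
    """Log-space Naive Bayes score: log P(c) + sum_w n_w * log P(w|c)."""
    s = math.log(class_prior)
    for w, n in word_counts.items():
        s += n * math.log(word_probs[w])
    return s

priors = {"spam": 0.3, "ham": 0.7}
likelihoods = {
    "spam": {"free": 0.05,  "meeting": 0.001},
    "ham":  {"free": 0.005, "meeting": 0.02},
}
counts = {"free": 3, "meeting": 0}  # word counts of the document to classify

best = max(priors, key=lambda c: nb_log_score(priors[c], likelihoods[c], counts))
print(best)  # spam
```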

Naïve Bayes on images 49

Naïve Bayes. C – class; F – features. As before, we specify only P(C), the prior over class labels, and P(F_i | C), how each feature depends on the class.

Naïve Bayes Parameters. Problem: categorize images as one of k object classes using a Naïve Bayes classifier. Classes are object categories (face, car, bicycle, etc.); features are visual words – each image is represented as a histogram of visual words. The prior over class labels is treated as uniform; the probability of a visual word given an image category is learned from training data (images labeled with their category).

Multi-class classifier – Naïve Bayes (1). Let V = {v_t}, t = 1, …, N, be a visual vocabulary, in which each v_t represents a visual word (cluster center) from the feature space. Let I = {I_i} be a set of labeled images, and let C_j, j = 1, …, M, denote the classes. Let N(t, i) be the number of times v_t occurs in image I_i. Compute P(C_j | I_i) ∝ P(C_j) Π_t P(v_t | C_j)^{N(t, i)}. Slide credit: Yun-hsueh Liu

Multi-class classifier – Naïve Bayes (2). Goal: find the maximum-probability class C_j. To avoid zero probabilities, use Laplace smoothing: P(v_t | C_j) = (1 + Σ_{i ∈ C_j} N(t, i)) / (N + Σ_{s=1}^{N} Σ_{i ∈ C_j} N(s, i)). Slide credit: Yun-hsueh Liu
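
The smoothed estimate can be sketched as follows, assuming each training image is represented as a histogram over an N-word vocabulary (`laplace_estimates` is a hypothetical helper name):

```python
def laplace_estimates(histograms_for_class, vocab_size):
    """Laplace-smoothed P(v_t | C_j):
    (1 + sum_i N(t, i)) / (vocab_size + sum_s sum_i N(s, i)),
    computed over the visual-word histograms of training images in class C_j."""
    totals = [0] * vocab_size
    for hist in histograms_for_class:
        for t, n in enumerate(hist):
            totals[t] += n
    denom = vocab_size + sum(totals)
    return [(1 + totals[t]) / denom for t in range(vocab_size)]

# Two toy training histograms over a 3-word vocabulary: word 1 never occurs,
# yet smoothing still gives it nonzero probability.
probs = laplace_estimates([[2, 0, 1], [1, 0, 0]], vocab_size=3)
print(probs)  # [4/7, 1/7, 2/7]
```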

Results [figures not transcribed; includes results on Dataset 2]

Thoughts? Pros? Cons?

Related BoF models pLSA, LDA, … 62

pLSA [graphical model: document → topic → word]

Joint probability: P(w, d) = P(w | d) P(d). Marginalizing over topics z determines the conditional probability: P(w | d) = Σ_z P(w | z) P(z | d).

Fitting the model. Need to: determine the topic vectors P(w | z) common to all documents, and determine the mixture components P(z | d) specific to each document. Goal: a model that gives high probability to the words that appear in the corpus. Maximum-likelihood estimation of the parameters is obtained by maximizing the objective function L = Π_d Π_w P(w | d)^{n(w, d)}, where n(w, d) is the number of occurrences of word w in document d.
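
The standard way to maximize this objective is EM, alternating responsibilities P(z | d, w) ∝ P(w | z) P(z | d) with re-normalized expected counts. A compact, unoptimized sketch in plain Python (the interface and toy data are assumptions; real implementations vectorize this):

```python
import random

def plsa(counts, n_topics, iters=100, seed=0):
    """EM for pLSA on counts[d][w] = n(w, d). Model: P(w|d) = sum_z P(w|z) P(z|d)."""
    rng = random.Random(seed)
    n_docs, n_words = len(counts), len(counts[0])

    def normalize(v):
        s = sum(v)
        return [x / s for x in v]

    # Random positive initialization of both parameter sets
    p_w_z = [normalize([rng.random() + 0.1 for _ in range(n_words)])
             for _ in range(n_topics)]
    p_z_d = [normalize([rng.random() + 0.1 for _ in range(n_topics)])
             for _ in range(n_docs)]

    for _ in range(iters):
        exp_w_z = [[1e-12] * n_words for _ in range(n_topics)]
        exp_z_d = [[1e-12] * n_topics for _ in range(n_docs)]
        for d in range(n_docs):
            for w in range(n_words):
                if counts[d][w] == 0:
                    continue
                # E-step: responsibilities P(z|d,w) ∝ P(w|z) P(z|d)
                post = normalize([p_w_z[z][w] * p_z_d[d][z]
                                  for z in range(n_topics)])
                for z in range(n_topics):
                    exp_w_z[z][w] += counts[d][w] * post[z]
                    exp_z_d[d][z] += counts[d][w] * post[z]
        # M-step: renormalize the expected counts
        p_w_z = [normalize(row) for row in exp_w_z]
        p_z_d = [normalize(row) for row in exp_z_d]
    return p_w_z, p_z_d

# Two toy "documents" with disjoint vocabularies should separate into two topics
counts = [[5, 5, 0, 0], [0, 0, 5, 5]]
p_w_z, p_z_d = plsa(counts, n_topics=2)
```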

pLSA on images 67

Discovering objects and their location in images Josef Sivic, Bryan C. Russell, Alexei A. Efros, Andrew Zisserman, William T. Freeman Documents – Images Words – visual words (vector quantized SIFT descriptors) Topics – object categories Images are modeled as a mixture of topics (objects). 68

Goals. They investigate three areas:
- (i) topic discovery, where categories are discovered by pLSA clustering on all available images;
- (ii) classification of unseen images, where topics corresponding to object categories are learnt on one set of images and then used to determine the object categories present in another set;
- (iii) object detection, where the goal is to determine the location and approximate segmentation of the object(s) in each image.

(i) Topic Discovery Most likely words for 4 learnt topics (face, motorbike, airplane, car) 70

(ii) Image Classification Confusion table for unseen test images against pLSA trained on images containing four object categories, but no background images. 71

(ii) Image Classification Confusion table for unseen test images against pLSA trained on images containing four object categories, and background images. Performance is not quite as good. 72

(iii) Topic Segmentation 73

