
1 Categorization by Learning and Combining Object Parts B. Heisele, T. Serre, M. Pontil, T. Vetter, T. Poggio. Presented by Manish Jethwa

2 Overview Learn discriminatory components of objects with Support Vector Machine (SVM) classifiers, then combine the component classifiers with a second-level SVM.

3 Background
Global approach
– Attempts to classify the entire object.
– Successful when applied to problems in which the object pose is fixed.
Component-based techniques
– Individual components vary less under pose changes than the whole object does.
– Usable even when some of the components are occluded.

4 Linear Support Vector Machines Linear SVMs discriminate between two classes by determining the separating hyperplane that maximizes the margin; the training points lying on the margin are the support vectors.
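
As an illustration, here is a minimal sketch of a linear SVM in scikit-learn (an assumption of convenience; the paper used its own SVM implementation). The toy data, the labels in {-1, +1}, and the fitted `svm` are reused by the later snippets:

```python
# Minimal linear SVM sketch using scikit-learn (not the authors' implementation).
import numpy as np
from sklearn.svm import SVC

# Two toy 2-D classes, labels in {-1, +1} as in the slides.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(+2, 1, (50, 2))])
y = np.array([-1] * 50 + [+1] * 50)

# A large C approximates the hard-margin (separable) case.
svm = SVC(kernel="linear", C=1e6).fit(X, y)

# The support vectors are the training points that define the hyperplane.
print("number of support vectors:", len(svm.support_vectors_))
```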

5 Decision function The decision function of the SVM has the form:

f(x) = ∑_{i=1}^{ℓ} α_i y_i (x · x_i) + b

where ℓ is the number of training data points, the x_i are the training data points, y_i ∈ {-1, +1} are the class labels, and b is the bias. The adjustable coefficients α_i are the solution of a quadratic programming problem: positive weights for the support vectors, zero for all other data points. f(x) = 0 defines a hyperplane dividing the data; for a new data point x, the sign of f(x) indicates its class.
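
A sketch of evaluating this decision function by hand, reusing the fitted `svm` from the previous snippet; in scikit-learn, `dual_coef_` stores the products α_i·y_i for the support vectors and `intercept_` stores the bias b:

```python
# Evaluate f(x) = sum_i alpha_i * y_i * <x, x_i> + b by hand and
# check it against the library's decision_function.
import numpy as np

x_new = np.array([[0.5, -0.3]])  # a new data point x

# dual_coef_[0, j] = alpha_j * y_j for support vector j; intercept_ is b.
f_manual = (svm.dual_coef_[0] @ (svm.support_vectors_ @ x_new.T).ravel()
            + svm.intercept_[0])
f_library = svm.decision_function(x_new)[0]

print(f_manual, f_library)                    # should agree
print("predicted class:", np.sign(f_manual))  # the sign of f(x) gives the class
```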

6 Significance of α_i The α_i correspond to the weights of the support vectors and are learned from the training data set. They are used to compute the margin M of the support vectors to the hyperplane:

M = (√(∑_{i=1}^{ℓ} α_i))^{-1}
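
As a numeric check of this formula (a sketch, reusing the fitted `svm` above): in the separable case M = 1/‖w‖, and at the optimum ‖w‖² = ∑ α_i, so the two routes should nearly agree:

```python
import numpy as np

# alpha_i = |alpha_i * y_i| because alpha_i >= 0 and y_i is in {-1, +1}.
alphas = np.abs(svm.dual_coef_[0])

M_from_alphas = 1.0 / np.sqrt(alphas.sum())    # M = (sqrt(sum_i alpha_i))^-1
M_from_w = 1.0 / np.linalg.norm(svm.coef_[0])  # M = 1 / ||w||

print(M_from_alphas, M_from_w)  # nearly equal on (close to) separable data
```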

7 Non-separable Data The notion of a margin extends to non-separable data as well. Misclassified points result in errors, and the hyperplane is now defined by maximizing the margin while minimizing the summed error. The expected error probability of the SVM satisfies the bound

E[P_err] ≤ ℓ^{-1} E[D²/M²],

where D is the diameter of the sphere containing all the training data.
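
A sketch of estimating this bound on the toy data from above, approximating D by the largest pairwise distance between training points (a single-sample stand-in for the expectation in the bound):

```python
import numpy as np
from scipy.spatial.distance import pdist

# Approximate D, the diameter of a sphere containing all training data,
# by the largest pairwise distance between training points.
D = pdist(X).max()
M = 1.0 / np.sqrt(np.abs(svm.dual_coef_[0]).sum())
ell = len(X)

bound = (D ** 2 / M ** 2) / ell  # single-sample estimate of l^-1 E[D^2 / M^2]
print("error bound estimate:", bound)
```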

8 Measuring Error The probability of error is proportional to the ratio ρ = D²/M². Because D and M scale together with the data (D₁/M₁ = D₂/M₂ for the same data at two scales), ρ, and therefore the probability of error, is invariant to scale: ρ₁ = ρ₂.
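
A quick check of this scale invariance on the toy data (the `rho` helper is a hypothetical name introduced here and reused by the next snippet); rescaling all points by a constant should leave ρ essentially unchanged:

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.svm import SVC

def rho(X, y, C=1e6):
    """Train a linear SVM and return rho = D^2 / M^2."""
    svm = SVC(kernel="linear", C=C).fit(X, y)
    D = pdist(X).max()
    M = 1.0 / np.sqrt(np.abs(svm.dual_coef_[0]).sum())
    return D ** 2 / M ** 2

# Rescaling the data by a constant leaves rho, and hence the
# bound on the error probability, (nearly) unchanged.
print(rho(X, y), rho(10.0 * X, y))
```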

9 Learning Components Components are grown from small seed regions: at each step the region is expanded by one pixel in one direction (e.g. to the left), an SVM is retrained on the expanded component, and the expansion that yields the smallest error bound ρ is kept.
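
A sketch of this growing loop under stated assumptions: `grow_component` and its arguments are hypothetical names, and `rho` is the helper defined in the previous snippet:

```python
import numpy as np

def grow_component(images, labels, seed, steps=10, size=58):
    """Grow one component from a 5x5 seed rectangle (a sketch of the
    slide's idea; `images` are 58x58 arrays, labels are in {-1, +1})."""
    top, left, bottom, right = seed

    def bound(rect):
        t, l, b, r = rect
        # Crop the candidate region out of every image, retrain an SVM,
        # and return its error bound rho = D^2 / M^2.
        X = np.array([img[t:b, l:r].ravel() for img in images])
        return rho(X, labels)

    for _ in range(steps):
        # Try expanding the region by one pixel in each of the four
        # directions, clamped to the image border.
        candidates = [
            (max(top - 1, 0), left, bottom, right),
            (top, max(left - 1, 0), bottom, right),
            (top, left, min(bottom + 1, size), right),
            (top, left, bottom, min(right + 1, size)),
        ]
        # Keep the expansion with the smallest error bound rho.
        top, left, bottom, right = min(candidates, key=bound)
    return top, left, bottom, right
```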

10 Learning Facial Components Extracting face components by hand is time-consuming: it requires manually extracting each component from all training images. Textured head models are used instead, automatically producing a large number of faces under differing illumination and poses. Seven textured head models were used to generate 2,457 face images of size 58x58.

11 Negative Training Set Extract 58x58 patches from 502 non-face images to give 10,209 negative training points. Train the SVM classifier on this data, then add its false positives to the negative training set. This enlarges the negative training set with exactly those images that look most like faces.
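
A sketch of this bootstrapping loop; `bootstrap_negatives` and its arguments are hypothetical names, and the patch arrays are assumed to be precomputed:

```python
import numpy as np
from sklearn.svm import SVC

def bootstrap_negatives(pos, neg, nonface_patches, rounds=3, C=10.0):
    """Iteratively add false positives to the negative set (a sketch;
    each row is a flattened 58x58 patch)."""
    for _ in range(rounds):
        X = np.vstack([pos, neg])
        y = np.array([1] * len(pos) + [-1] * len(neg))
        svm = SVC(kernel="linear", C=C).fit(X, y)
        # Patches from the non-face images that the SVM calls "face" are
        # false positives: the negatives that look most like faces.
        fp = nonface_patches[svm.predict(nonface_patches) == 1]
        if len(fp) == 0:
            break
        neg = np.vstack([neg, fp])
    return svm, neg
```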

12 Learned Components Start with fourteen manually selected 5x5 seed regions, which grow into the final components:
– the eyes (17x17 pixels)
– the nose (15x20 pixels)
– the mouth (31x15 pixels)
– the cheeks (21x20 pixels)
– the lip (13x16 pixels)
– the nostrils (22x12 pixels)
– the corners of the mouth (15x20 pixels)
– the eyebrows (15x20 pixels)
– the bridge of the nose (15x20 pixels)

13 Combining Components A two-level architecture: a 58x58 window is shifted over the input image. Within each window, component experts (linear SVMs for the left eye, the nose, the mouth, and the other components) are shifted over the 58x58 window; the maximum output of each expert and its location are determined and passed to a combining classifier (a linear SVM), which makes the final face/background decision.
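
A sketch of the two-level architecture for one window; `classify_window`, `experts`, and `combiner` are hypothetical names standing in for the trained component SVMs and the trained combining SVM:

```python
import numpy as np

def classify_window(window, experts, combiner, size=58):
    """Two-level classification of one 58x58 window (a sketch; `experts`
    maps a component name to (linear SVM, (height, width)) and `combiner`
    is the combining linear SVM)."""
    features = []
    for svm, (h, w) in experts.values():
        outputs, positions = [], []
        # Shift the component expert over the 58x58 window ...
        for top in range(size - h + 1):
            for left in range(size - w + 1):
                patch = window[top:top + h, left:left + w].ravel()
                outputs.append(svm.decision_function([patch])[0])
                positions.append((top, left))
        # ... and keep its maximum output together with where it occurred.
        best = int(np.argmax(outputs))
        features.extend([outputs[best], *positions[best]])
    # The combining SVM sees all maxima and locations and makes the
    # final face / background decision.
    return combiner.predict([features])[0]
```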

14 Experiments

