
1 CS 1699: Intro to Computer Vision Support Vector Machines Prof. Adriana Kovashka University of Pittsburgh October 29, 2015

2 Homework HW3: due Nov. 3 – Note: You need to perform k-means on features from multiple frames joined together, not one k-means per frame! HW4: due Nov. 24 HW5: half-length, still 10% of grade, out Nov. 24, due Dec. 10

3 Plan for today Support vector machines Bias-variance tradeoff Scene recognition: Spatial pyramid matching

4 The machine learning framework y = f(x), where y is the output, f the prediction function, and x the image feature. Training: given a training set of labeled examples {(x_1, y_1), …, (x_N, y_N)}, estimate the prediction function f by minimizing the prediction error on the training set. Testing: apply f to a never-before-seen test example x and output the predicted value y = f(x). Slide credit: L. Lazebnik

5 Nearest neighbor classifier f(x) = label of the training example nearest to x All we need is a distance function for our inputs No training required! Test example Training examples from class 1 Training examples from class 2 Slide credit: L. Lazebnik

6 K-Nearest Neighbors classification (k = 5) For a new point, find the k closest points from the training data; labels of the k points "vote" to classify. If the query lands here, the 5 NN consist of 3 negatives and 2 positives, so we classify it as negative. Black = negative, red = positive. Slide credit: D. Lowe
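A minimal sketch of the nearest-neighbor voting rule from slides 5-6, written in Python with NumPy; the Euclidean distance and the toy data below are illustrative assumptions, not part of the lecture:

```python
import numpy as np
from collections import Counter

def knn_classify(query, train_X, train_y, k=5):
    """Classify `query` by majority vote among its k nearest training points."""
    # Distance from the query to every training example (any distance function works)
    dists = np.linalg.norm(train_X - query, axis=1)
    # Indices of the k closest training points
    nearest = np.argsort(dists)[:k]
    # Labels of the k neighbors "vote"
    return Counter(train_y[nearest]).most_common(1)[0][0]

# Toy example mirroring the slide: 3 negatives vs. 2 positives among the 5 NN
train_X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1], [1.0, 1.0], [1.1, 0.9]])
train_y = np.array([-1, -1, -1, +1, +1])
print(knn_classify(np.array([0.3, 0.3]), train_X, train_y, k=5))  # -1 (negative)
```

With k = 1 this reduces to the plain nearest-neighbor classifier of the previous slide; note there is no training step, only storage of the data.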

7 Scene Matches [Hays and Efros. im2gps: Estimating Geographic Information from a Single Image. CVPR 2008.] Slides: James Hays

8 Nearest Neighbors: Pros and Cons NN pros: +Simple to implement +Decision boundaries not necessarily linear +Works for any number of classes +Nonparametric method NN cons: –Need good distance function –Slow at test time (large search problem to find neighbors) –Storage of data Adapted from L. Lazebnik

9 Discriminative classifiers Learn a simple function of the input features that correctly predicts the true labels on the training set Training Goals 1.Accurate classification of training data 2.Correct classifications are confident 3.Classification function is simple Slide credit: D. Hoiem

10 Linear classifier Find a linear function to separate the classes f(x) = sgn(w_1 x_1 + w_2 x_2 + … + w_D x_D) = sgn(w · x) Svetlana Lazebnik
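The sign-of-a-dot-product rule on slide 10 as a short Python sketch; the weights, bias, and example points are made up for illustration (the slide itself omits the bias b, which later slides add):

```python
import numpy as np

def linear_classify(x, w, b=0.0):
    """f(x) = sgn(w · x + b): +1 on one side of the line, -1 on the other."""
    return 1 if np.dot(w, x) + b >= 0 else -1

# Hypothetical 2-D example: the line x_1 + x_2 - 1 = 0 separates the classes
w = np.array([1.0, 1.0])
print(linear_classify(np.array([0.9, 0.8]), w, b=-1.0))  # +1
print(linear_classify(np.array([0.1, 0.2]), w, b=-1.0))  # -1
```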

11 Lines in R² Let Kristen Grauman

12 Lines in R² Let Kristen Grauman

13 Lines in R² Let Kristen Grauman

14 Lines in R² Let distance from point to line Kristen Grauman

15 Lines in R² Let distance from point to line Kristen Grauman
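The equations on slides 11-15 are images that did not survive the transcript; the standard facts they build on, in the notation used by the later slides, are the line equation w·x + b = 0 and the distance from a point to that line:

```latex
% A line in R^2 with normal w = (w_1, w_2) and offset b:
%   w_1 x_1 + w_2 x_2 + b = 0,  i.e.  w . x + b = 0
% Distance from a point x_0 = (x_{0,1}, x_{0,2}) to this line:
\[
D(x_0) = \frac{|\,w \cdot x_0 + b\,|}{\|w\|}
       = \frac{|\,w_1 x_{0,1} + w_2 x_{0,2} + b\,|}{\sqrt{w_1^2 + w_2^2}}
\]
```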

16 Linear classifiers Find linear function to separate positive and negative examples Which line is best? C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998

17 Support vector machines Discriminative classifier based on optimal separating line (for the 2-D case) Maximize the margin between the positive and negative training examples C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998

18 Support vector machines Want line that maximizes the margin. Margin Support vectors For support vectors, w·x + b = -1 or w·x + b = +1; the separating line is w·x + b = 0. C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998

19 Support vector machines Want line that maximizes the margin. Support vectors For support vectors, w·x + b = -1 or w·x + b = +1; the separating line is w·x + b = 0. Distance between a point x and the line: |w·x + b| / ||w||. For support vectors, this distance is 1 / ||w||. C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998 Margin

20 Support vector machines Want line that maximizes the margin. Margin Support vectors For support vectors, w·x + b = ±1; the separating line is w·x + b = 0. Distance between a point and the line: |w·x + b| / ||w||. Therefore, the margin is 2 / ||w||. C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998
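Pulling slides 18-20 together (a reconstruction of the displayed equations, which are images in the transcript): support vectors satisfy w·x + b = ±1, so their distance to the boundary w·x + b = 0 is 1/||w||, and the margin is twice that:

```latex
\[
D(x_i) = \frac{|\,w \cdot x_i + b\,|}{\|w\|} = \frac{1}{\|w\|}
\quad \text{for a support vector } x_i,
\qquad
\text{margin} = 2\,D(x_i) = \frac{2}{\|w\|}
\]
```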

21 Finding the maximum margin line 1. Maximize margin 2/||w|| 2. Correctly classify all training data points: y_i(w·x_i + b) ≥ 1 Quadratic optimization problem: minimize (1/2)||w||², subject to y_i(w·x_i + b) ≥ 1, with one constraint for each training point. Note the sign trick: multiplying by the label y_i ∈ {-1, +1} folds the separate constraints for positive and negative examples into a single inequality. C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998

22 Finding the maximum margin line Solution: w = Σ_i α_i y_i x_i, where each x_i with α_i > 0 is a support vector and α_i is its learned weight. C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998

23 Finding the maximum margin line Solution: b = y_i – w·x_i (for any support vector) Classification function: f(x) = sgn(w·x + b) = sgn(Σ_i α_i y_i (x_i·x) + b) Notice that it relies on an inner product between the test point x and the support vectors x_i. (Solving the optimization problem also involves computing the inner products x_i · x_j between all pairs of training points.) If f(x) < 0, classify as negative; otherwise classify as positive. C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998
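The solution equations on slides 22-23 are likewise images; in the standard form given in the Burges tutorial, the learned weights α_i (nonzero only for the support vectors) determine both w and the classifier:

```latex
\[
w = \sum_i \alpha_i y_i x_i,
\qquad
b = y_i - w \cdot x_i \quad \text{for any support vector } x_i
\]
\[
f(x) = \mathrm{sgn}(w \cdot x + b)
     = \mathrm{sgn}\Big(\sum_i \alpha_i y_i \,(x_i \cdot x) + b\Big)
\]
```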

24 Datasets that are linearly separable work out great. But what if the dataset is just too hard? We can map it to a higher-dimensional space, e.g. x → (x, x²). Nonlinear SVMs Andrew Moore

25 Nonlinear SVMs General idea: the original input space can always be mapped to some higher-dimensional feature space where the training set is separable: Φ: x → φ(x). Andrew Moore

26 Nonlinear kernel: Example Consider the mapping φ(x) = (x, x²). Svetlana Lazebnik
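A worked version of this example, assuming the mapping on the slide is φ(x) = (x, x²) for scalar x: the dot product in the lifted space is already a simple function of the original inputs, which is exactly what a kernel function captures:

```latex
\[
\varphi(x) = (x,\; x^2),
\qquad
\varphi(x) \cdot \varphi(y) = xy + x^2 y^2 = K(x, y)
\]
```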

27 The “Kernel Trick” The linear classifier relies on the dot product between vectors: K(x_i, x_j) = x_i · x_j If every data point is mapped into a high-dimensional space via some transformation Φ: x_i → φ(x_i), the dot product becomes: K(x_i, x_j) = φ(x_i) · φ(x_j) A kernel function is a similarity function that corresponds to an inner product in some expanded feature space. The kernel trick: instead of explicitly computing the lifting transformation φ(x), define a kernel function K such that: K(x_i, x_j) = φ(x_i) · φ(x_j) Andrew Moore

28 Examples of kernel functions Linear: K(x_i, x_j) = x_i · x_j Gaussian RBF: K(x_i, x_j) = exp(−||x_i − x_j||² / (2σ²)) Histogram intersection: K(h_i, h_j) = Σ_k min(h_i(k), h_j(k)) Andrew Moore
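A small Python sketch of the three kernels named on slide 28, using their standard definitions (the RBF bandwidth σ is a free parameter, and the histogram-intersection kernel assumes nonnegative histogram entries):

```python
import numpy as np

def linear_kernel(xi, xj):
    """K(xi, xj) = xi · xj"""
    return np.dot(xi, xj)

def gaussian_rbf_kernel(xi, xj, sigma=1.0):
    """K(xi, xj) = exp(-||xi - xj||^2 / (2 sigma^2))"""
    return np.exp(-np.sum((xi - xj) ** 2) / (2.0 * sigma ** 2))

def histogram_intersection_kernel(hi, hj):
    """K(hi, hj) = sum_k min(hi[k], hj[k])"""
    return np.sum(np.minimum(hi, hj))

h1 = np.array([0.2, 0.5, 0.3])
h2 = np.array([0.4, 0.4, 0.2])
print(linear_kernel(h1, h2), gaussian_rbf_kernel(h1, h2), histogram_intersection_kernel(h1, h2))
```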

29 Allowing misclassifications Maximize margin vs. minimize misclassification: find the w that minimizes (1/2)||w||² + C Σ_i ξ_i, where ξ_i is the slack variable for example i, C is the misclassification cost, and the sum runs over the N data samples.
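Written out, the objective this slide labels is the standard soft-margin formulation (a reconstruction, not a transcription of the missing image), with slack variable ξ_i, misclassification cost C, and N data samples:

```latex
\[
\min_{w,\,b,\,\xi}\;\; \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{N} \xi_i
\quad \text{subject to} \quad
y_i (w \cdot x_i + b) \ge 1 - \xi_i,\;\; \xi_i \ge 0
\]
```

Large C penalizes misclassifications heavily (narrower margin, fewer training errors); small C favors a wide margin at the cost of more slack.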

30 What about multi-class SVMs? Unfortunately, there is no “definitive” multi-class SVM formulation In practice, we have to obtain a multi-class SVM by combining multiple two-class SVMs One vs. others – Training: learn an SVM for each class vs. the others – Testing: apply each SVM to the test example, and assign it to the class of the SVM that returns the highest decision value One vs. one – Training: learn an SVM for each pair of classes – Testing: each learned SVM “votes” for a class to assign to the test example Svetlana Lazebnik
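A minimal scikit-learn sketch of the two strategies on slide 30; the library, dataset, and parameter values here are illustrative assumptions, not something prescribed by the lecture:

```python
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier, OneVsOneClassifier
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)  # a 3-class toy problem

# One vs. others: one SVM per class; assign the class whose SVM gives the highest decision value
ovr = OneVsRestClassifier(SVC(kernel="rbf", C=1.0)).fit(X, y)

# One vs. one: one SVM per pair of classes; each pairwise SVM "votes" for a class
ovo = OneVsOneClassifier(SVC(kernel="rbf", C=1.0)).fit(X, y)

print(ovr.predict(X[:5]), ovo.predict(X[:5]))
```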

31 SVMs for recognition 1.Define your representation for each example. 2.Select a kernel function. 3.Compute pairwise kernel values between labeled examples 4.Use this “kernel matrix” to solve for SVM support vectors & weights. 5.To classify a new example: compute kernel values between new input and support vectors, apply weights, check sign of output. Kristen Grauman
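The five steps on slide 31 map directly onto an SVM trained with a precomputed kernel matrix; a hedged scikit-learn sketch (the random "histogram" features and the histogram-intersection kernel are stand-ins for whatever representation and kernel you choose in steps 1-2):

```python
import numpy as np
from sklearn.svm import SVC

def hist_intersection(A, B):
    """Pairwise histogram-intersection kernel values between rows of A and rows of B."""
    return np.array([[np.sum(np.minimum(a, b)) for b in B] for a in A])

# Step 1: define a representation for each example (random nonnegative vectors as stand-ins)
rng = np.random.default_rng(0)
train_X = rng.random((20, 50))
train_y = np.array([0, 1] * 10)
test_X = rng.random((5, 50))

# Steps 2-4: select a kernel, compute pairwise kernel values, solve for support vectors & weights
K_train = hist_intersection(train_X, train_X)
svm = SVC(kernel="precomputed").fit(K_train, train_y)

# Step 5: kernel values between the new inputs and the training examples, then classify
K_test = hist_intersection(test_X, train_X)
print(svm.predict(K_test))
```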

32 Example: learning gender with SVMs Moghaddam and Yang, Learning Gender with Support Faces, TPAMI 2002. Moghaddam and Yang, Face & Gesture 2000. Kristen Grauman

33 Learning gender with SVMs Training examples: 1044 males, 713 females. Experiment with various kernels; select Gaussian RBF. Kristen Grauman

34 Support Faces Moghaddam and Yang, Learning Gender with Support Faces, TPAMI 2002.

35

36 Gender perception experiment: How well can humans do? Subjects: –30 people (22 male, 8 female) –Ages mid-20’s to mid-40’s Test data: –254 face images (6 males, 4 females) –Low res and high res versions Task: –Classify as male or female, forced choice –No time limit Moghaddam and Yang, Face & Gesture 2000.

37 Gender perception experiment: How well can humans do? (Error rates shown as a chart.)

38 Human vs. Machine SVMs performed better than any single human test subject, at either resolution Kristen Grauman

39 Hardest examples for humans Moghaddam and Yang, Face & Gesture 2000.

40 SVMs: Pros and cons Pros Many publicly available SVM packages: http://www.csie.ntu.edu.tw/~cjlin/libsvm/ or use the built-in Matlab version (but slower) Kernel-based framework is very powerful, flexible Often a sparse set of support vectors – compact at test time Work very well in practice, even with very small training sample sizes Cons No "direct" multi-class SVM, must combine two-class SVMs Can be tricky to select best kernel function for a problem Computation, memory – During training time, must compute matrix of kernel values for every pair of examples – Learning can take a very long time for large-scale problems Adapted from Lana Lazebnik

