
1 Classification Tamara Berg CSE 595 Words & Pictures

2 HW2 Online after class – due Oct 10, 11:59pm. Use web text descriptions as a proxy for class labels. Train color attribute classifiers on web shopping images. Classify test images according to whether they display the attributes.

3 Topic Presentations First group starts on Tuesday Audience – please read papers!

4 Example: Image classification Input: an image; desired output: its class label (apple, pear, tomato, cow, dog, horse). Slide credit: Svetlana Lazebnik

5 Slide from Dan Klein – MNIST handwritten digit database: http://yann.lecun.com/exdb/mnist/index.html

6 Slide from Dan Klein

7

8

9

10 Example: Seismic data Scatter of body wave magnitude vs. surface wave magnitude for nuclear explosions and earthquakes. Slide credit: Svetlana Lazebnik

11 Slide from Dan Klein

12 The basic classification framework y = f(x), where x is the input, f is the classification function, and y is the output. Learning: given a training set of labeled examples {(x_1, y_1), …, (x_N, y_N)}, estimate the parameters of the prediction function f. Inference: apply f to a never-before-seen test example x and output the predicted value y = f(x). Slide credit: Svetlana Lazebnik
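
A minimal sketch of this learn-then-infer loop, assuming scikit-learn is available; the classifier choice (logistic regression) and the toy data are placeholders, not part of the lecture.

```python
# Minimal sketch of the y = f(x) framework (assumes scikit-learn).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training set {(x_i, y_i)}: 2-D feature vectors with binary labels.
X_train = np.array([[0.2, 1.1], [1.5, 0.3], [0.4, 0.9], [1.8, 0.1]])
y_train = np.array([0, 1, 0, 1])

f = LogisticRegression()
f.fit(X_train, y_train)           # Learning: estimate the parameters of f

x_test = np.array([[1.2, 0.5]])   # a never-before-seen test example
print(f.predict(x_test))          # Inference: y = f(x)
```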

13 Some classification methods Nearest neighbor (10^6 examples): Shakhnarovich, Viola, Darrell 2003; Berg, Berg, Malik 2005; … Neural networks: LeCun, Bottou, Bengio, Haffner 1998; Rowley, Baluja, Kanade 1998; … Support Vector Machines and Kernels: Guyon, Vapnik; Heisele, Serre, Poggio 2001; … Conditional Random Fields: McCallum, Freitag, Pereira 2000; Kumar, Hebert 2003; … Slide credit: Antonio Torralba

14 Example: Training and testing Key challenge: generalization to unseen examples. Training set (labels known); test set (labels unknown). Slide credit: Svetlana Lazebnik

15 Slide credit: Dan Klein

16 Slide from Min-Yen Kan Classification by Nearest Neighbor Word vector document classification – here the vector space is illustrated as having 2 dimensions. How many dimensions would the data actually live in?

17 Slide from Min-Yen Kan Classification by Nearest Neighbor

18 Classify the test document as the class of the document “nearest” to the query document (use vector similarity to find most similar doc) Slide from Min-Yen Kan

19 Classification by kNN Classify the test document as the majority class of the k documents “nearest” to the query document. Slide from Min-Yen Kan

20 Classification by kNN What are the features? What's the training data? Testing data? Parameters?
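
A minimal kNN sketch in the spirit of these slides, assuming documents are already represented as word-count vectors; the vectors, labels, and the choice of cosine similarity are illustrative assumptions.

```python
# k-nearest-neighbor document classification with cosine similarity (illustrative).
import numpy as np

def knn_predict(X_train, y_train, x_query, k=3):
    # Cosine similarity between the query vector and every training document.
    sims = (X_train @ x_query) / (
        np.linalg.norm(X_train, axis=1) * np.linalg.norm(x_query) + 1e-12)
    nearest = np.argsort(-sims)[:k]                 # indices of the k most similar docs
    return np.bincount(y_train[nearest]).argmax()   # majority class among the neighbors

X_train = np.array([[3, 0, 1], [2, 1, 0], [0, 4, 2], [0, 3, 3]])  # word counts
y_train = np.array([0, 0, 1, 1])                                  # class labels
print(knn_predict(X_train, y_train, np.array([1, 3, 2]), k=3))    # -> 1
```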

21 Decision tree classifier Example problem: decide whether to wait for a table at a restaurant, based on the following attributes:
1. Alternate: is there an alternative restaurant nearby?
2. Bar: is there a comfortable bar area to wait in?
3. Fri/Sat: is today Friday or Saturday?
4. Hungry: are we hungry?
5. Patrons: number of people in the restaurant (None, Some, Full)
6. Price: price range ($, $$, $$$)
7. Raining: is it raining outside?
8. Reservation: have we made a reservation?
9. Type: kind of restaurant (French, Italian, Thai, Burger)
10. WaitEstimate: estimated waiting time (0-10, 10-30, 30-60, >60)
Slide credit: Svetlana Lazebnik
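
For concreteness, a sketch of a decision tree on a few of these attributes using scikit-learn; the tiny dataset and the attribute encoding are invented for illustration, not taken from the lecture.

```python
# Decision tree on a subset of the restaurant attributes (invented toy data).
from sklearn.tree import DecisionTreeClassifier, export_text

# Features: [Patrons (0=None,1=Some,2=Full), Hungry (0/1), WaitEstimate bucket (0..3)]
X = [[1, 1, 0], [2, 1, 2], [0, 0, 0], [2, 0, 3], [1, 0, 0], [2, 1, 1]]
y = [1, 0, 0, 0, 1, 1]   # 1 = wait for a table, 0 = leave

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["Patrons", "Hungry", "WaitEstimate"]))
print(tree.predict([[2, 1, 0]]))   # e.g. Full, hungry, short estimated wait
```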

22 Decision tree classifier Slide credit: Svetlana Lazebnik

23 Decision tree classifier Slide credit: Svetlana Lazebnik

24 Linear classifier Find a linear function to separate the classes: f(x) = sgn(w_1 x_1 + w_2 x_2 + … + w_D x_D) = sgn(w · x) Slide credit: Svetlana Lazebnik
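
One classic way to find such a w (an aside, not covered on this slide) is the perceptron update; a minimal sketch on made-up separable data, with the bias folded into the weight vector.

```python
# Perceptron: iteratively nudge w toward misclassified points (illustrative).
import numpy as np

def perceptron(X, y, epochs=20):
    w = np.zeros(X.shape[1])                 # weights; bias folded in as last feature
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):           # labels y_i in {-1, +1}
            if y_i * np.dot(w, x_i) <= 0:    # misclassified (or on the boundary)
                w += y_i * x_i               # move w toward the correct side
    return w

X = np.array([[1, 2, 1], [2, 1, 1], [-1, -2, 1], [-2, -1, 1]])  # last column = bias term
y = np.array([1, 1, -1, -1])
w = perceptron(X, y)
print(np.sign(X @ w))   # f(x) = sgn(w . x)
```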

25 Discriminant Function It can be an arbitrary function of x, such as: nearest neighbor, a decision tree, or a linear function. Slide credit: Jinwei Gu

26 Linear Discriminant Function g(x) is a linear function: g(x) = w^T x + b. The decision boundary w^T x + b = 0 is a hyperplane in the feature space; points with w^T x + b > 0 are classified as +1 and points with w^T x + b < 0 as -1. Slide credit: Jinwei Gu

27 Linear Discriminant Function How would you classify these points (labeled +1 and -1) using a linear discriminant function in order to minimize the error rate? Infinite number of answers! Slide credit: Jinwei Gu

28 Linear Discriminant Function How would you classify these points using a linear discriminant function in order to minimize the error rate? Infinite number of answers! Slide credit: Jinwei Gu

29 Linear Discriminant Function How would you classify these points using a linear discriminant function in order to minimize the error rate? Infinite number of answers! Slide credit: Jinwei Gu

30 Linear Discriminant Function How would you classify these points using a linear discriminant function in order to minimize the error rate? Infinite number of answers! Which one is the best? Slide credit: Jinwei Gu

31 Large Margin Linear Classifier ("safe zone") The linear discriminant function (classifier) with the maximum margin is the best. The margin is defined as the width by which the boundary could be increased before hitting a data point. Why is it the best? Strong generalization ability. This is the linear SVM. Slide credit: Jinwei Gu

32 Large Margin Linear Classifier The decision boundary is w^T x + b = 0; the margin is bounded by the parallel hyperplanes w^T x + b = 1 and w^T x + b = -1. The training points x+ and x- lying on these hyperplanes are the support vectors. Slide credit: Jinwei Gu

33 Large Margin Linear Classifier We know that w^T x+ + b = 1 and w^T x- + b = -1 for the support vectors on the margin boundaries. The margin width is the projection of (x+ - x-) onto the unit normal w/||w||: M = 2 / ||w||. Slide credit: Jinwei Gu
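
The same derivation written out in LaTeX, as a reference restatement of the standard margin algebra (not copied verbatim from the slides):

```latex
% Margin between the hyperplanes w^T x + b = +1 and w^T x + b = -1
\begin{aligned}
  w^\top x^+ + b &= +1, \qquad w^\top x^- + b = -1 \\
  w^\top (x^+ - x^-) &= 2 \\
  M &= \frac{w^\top}{\|w\|}\,(x^+ - x^-) = \frac{2}{\|w\|}
\end{aligned}
```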

34 Large Margin Linear Classifier Formulation: maximize the margin 2/||w||, such that for y_i = +1, w^T x_i + b >= 1, and for y_i = -1, w^T x_i + b <= -1. Slide credit: Jinwei Gu

35 Large Margin Linear Classifier Formulation: equivalently, minimize (1/2)||w||^2, such that for y_i = +1, w^T x_i + b >= 1, and for y_i = -1, w^T x_i + b <= -1. Slide credit: Jinwei Gu

36 Large Margin Linear Classifier Formulation: minimize (1/2)||w||^2, such that y_i (w^T x_i + b) >= 1 for all i. Slide credit: Jinwei Gu

37 Solving the Optimization Problem Minimize (1/2)||w||^2 s.t. y_i (w^T x_i + b) >= 1. This is quadratic programming with linear constraints. Introducing a Lagrange multiplier α_i >= 0 for each constraint gives the Lagrangian Function: L(w, b, α) = (1/2)||w||^2 - Σ_i α_i [ y_i (w^T x_i + b) - 1 ]. Slide credit: Jinwei Gu

38 Solving the Optimization Problem Setting the derivatives of L with respect to w and b to zero gives w = Σ_i α_i y_i x_i and Σ_i α_i y_i = 0. Substituting back yields the Lagrangian Dual Problem: maximize Σ_i α_i - (1/2) Σ_i Σ_j α_i α_j y_i y_j x_i^T x_j, s.t. α_i >= 0 and Σ_i α_i y_i = 0. Slide credit: Jinwei Gu

39 Solving the Optimization Problem The solution has the form: w = Σ_i α_i y_i x_i, with b recovered from any support vector via y_i (w^T x_i + b) = 1. From the KKT condition, we know: α_i [ y_i (w^T x_i + b) - 1 ] = 0. Thus, only support vectors (the points on w^T x + b = ±1) have α_i ≠ 0. Slide credit: Jinwei Gu

40 Solving the Optimization Problem The linear discriminant function is: g(x) = w^T x + b = Σ_i α_i y_i x_i^T x + b. Notice it relies on a dot product between the test point x and the support vectors x_i. Slide credit: Jinwei Gu
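
These quantities can be inspected directly with scikit-learn's SVC (assumed available); the toy points below are invented, and sklearn's dual_coef_ stores α_i y_i for the support vectors.

```python
# Linear SVM on toy data: inspect support vectors, w, b, and g(x) (illustrative).
import numpy as np
from sklearn.svm import SVC

X = np.array([[2, 2], [3, 3], [3, 1], [-2, -2], [-3, -1], [-1, -3]])
y = np.array([1, 1, 1, -1, -1, -1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)    # very large C ~ hard margin
print(clf.support_vectors_)                    # the x_i with alpha_i != 0
print(clf.dual_coef_)                          # alpha_i * y_i for those points
print(clf.coef_, clf.intercept_)               # w = sum_i alpha_i y_i x_i, and b
print(clf.decision_function([[1.0, 0.5]]))     # g(x) = w^T x + b
```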

41 Linear separability Slide credit: Svetlana Lazebnik

42 Non-linear SVMs: Feature Space General idea: the original input space can be mapped to some higher-dimensional feature space where the training set is separable: Φ: x → φ(x) This slide is courtesy of www.iro.umontreal.ca/~pift6080/documents/papers/svm_tutorial.ppt

43 Nonlinear SVMs: The Kernel Trick With this mapping, our discriminant function is now: g(x) = Σ_i α_i y_i φ(x_i)^T φ(x) + b. No need to know this mapping explicitly, because we only use the dot product of feature vectors in both the training and the test. A kernel function is defined as a function that corresponds to a dot product of two feature vectors in some expanded feature space: K(x_i, x_j) = φ(x_i)^T φ(x_j). Slide credit: Jinwei Gu
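
A small numeric check of this idea, using the common quadratic feature map φ([x_1, x_2]) = [x_1^2, √2 x_1 x_2, x_2^2] (an illustrative choice, not a mapping given in the slides): the dot product of the mapped vectors equals the polynomial kernel (x^T z)^2 applied to the originals.

```python
# Verify phi(x) . phi(z) == (x . z)^2 for the quadratic feature map (illustrative).
import numpy as np

def phi(v):
    x1, x2 = v
    return np.array([x1**2, np.sqrt(2) * x1 * x2, x2**2])

x = np.array([1.0, 2.0])
z = np.array([3.0, 0.5])
print(phi(x) @ phi(z))   # dot product in the expanded feature space -> 16.0
print((x @ z) ** 2)      # same value via the kernel K(x, z) = (x . z)^2 -> 16.0
```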

44 Nonlinear SVM: Optimization Formulation (Lagrangian Dual Problem): maximize Σ_i α_i - (1/2) Σ_i Σ_j α_i α_j y_i y_j K(x_i, x_j), such that 0 <= α_i <= C and Σ_i α_i y_i = 0. The solution of the discriminant function is g(x) = Σ_i α_i y_i K(x_i, x) + b. The optimization technique is the same. Slide credit: Jinwei Gu

45 Nonlinear SVMs: The Kernel Trick Examples of commonly-used kernel functions: Linear kernel: K(x_i, x_j) = x_i^T x_j. Polynomial kernel: K(x_i, x_j) = (1 + x_i^T x_j)^p. Gaussian (Radial-Basis Function, RBF) kernel: K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2σ^2)). Sigmoid kernel: K(x_i, x_j) = tanh(β_0 x_i^T x_j + β_1). Slide credit: Jinwei Gu
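
These kernels translate directly into code; a short NumPy sketch, where the parameter values (p, σ, β_0, β_1) are arbitrary illustrative defaults.

```python
# Direct implementations of the kernels listed above (parameter values are arbitrary).
import numpy as np

def linear_kernel(x, z):
    return x @ z

def polynomial_kernel(x, z, p=2):
    return (1.0 + x @ z) ** p

def rbf_kernel(x, z, sigma=1.0):
    return np.exp(-np.linalg.norm(x - z) ** 2 / (2 * sigma ** 2))

def sigmoid_kernel(x, z, beta0=1.0, beta1=0.0):
    return np.tanh(beta0 * (x @ z) + beta1)

x, z = np.array([1.0, 2.0]), np.array([0.5, -1.0])
print(linear_kernel(x, z), polynomial_kernel(x, z), rbf_kernel(x, z), sigmoid_kernel(x, z))
```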

46 Support Vector Machine: Algorithm 1. Choose a kernel function 2. Choose a value for C 3. Solve the quadratic programming problem (many software packages available) 4. Construct the discriminant function from the support vectors Slide credit: Jinwei Gu

47 Some Issues Choice of kernel: a Gaussian or polynomial kernel is the default; if ineffective, more elaborate kernels are needed; domain experts can give assistance in formulating appropriate similarity measures. Choice of kernel parameters: e.g. σ in the Gaussian kernel; σ is the distance between the closest points with different classifications; in the absence of reliable criteria, applications rely on a validation set or cross-validation to set such parameters. This slide is courtesy of www.iro.umontreal.ca/~pift6080/documents/papers/svm_tutorial.ppt Slide credit: Jinwei Gu
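
A sketch of the cross-validation approach mentioned above, using scikit-learn's GridSearchCV (assumed available); the parameter grid and the toy data are placeholders. Note that sklearn parameterizes the RBF kernel by gamma, which plays the role of 1/(2σ^2).

```python
# Choose C and the RBF width by cross-validation (illustrative grid and data).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)   # a simple XOR-like labeling

grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10], "gamma": [0.1, 1, 10]},  # gamma ~ 1/(2 sigma^2)
    cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```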

48 Summary: Support Vector Machine 1. Large Margin Classifier – Better generalization ability & less over-fitting 2. The Kernel Trick – Map data points to higher dimensional space in order to make them linearly separable. – Since only dot product is used, we do not need to represent the mapping explicitly. Slide credit: Jinwei Gu

49 Boosting A simple algorithm for learning robust classifiers (Freund & Schapire, 1995; Friedman, Hastie, Tibshirani, 1998). Provides an efficient algorithm for sparse visual feature selection (Tieu & Viola, 2000; Viola & Jones, 2003). Easy to implement, doesn't require external optimization tools. Slide credit: Antonio Torralba

50 Boosting Defines a classifier using an additive model: H(x) = Σ_t α_t h_t(x), where H is the strong classifier, each h_t is a weak classifier applied to the feature vector x, and α_t is its weight. Slide credit: Antonio Torralba

51 Boosting Defines a classifier using an additive model: H(x) = Σ_t α_t h_t(x). We need to define a family of weak classifiers from which the h_t are chosen. Slide credit: Antonio Torralba

52 Adaboost Slide credit: Antonio Torralba

53 Boosting It is a sequential procedure. Each data point x_t has a class label y_t ∈ {+1, -1} and a weight, initialized to w_t = 1. Slide credit: Antonio Torralba

54 Toy example Weak learners come from the family of lines. Each data point has a class label y_t ∈ {+1, -1} and a weight w_t = 1. A line h with p(error) = 0.5 is at chance. Slide credit: Antonio Torralba

55 Toy example Each data point has a class label y_t ∈ {+1, -1} and a weight w_t = 1. This line seems to be the best. This is a 'weak classifier': it performs slightly better than chance. Slide credit: Antonio Torralba

56 Toy example We update the weights: w_t ← w_t exp{-y_t H_t}. We set a new problem for which the previous weak classifier performs at chance again. Slide credit: Antonio Torralba

57 Toy example We update the weights: w_t ← w_t exp{-y_t H_t}. We set a new problem for which the previous weak classifier performs at chance again. Slide credit: Antonio Torralba

58 Toy example We update the weights: w_t ← w_t exp{-y_t H_t}. We set a new problem for which the previous weak classifier performs at chance again. Slide credit: Antonio Torralba

59 Toy example We update the weights: w_t ← w_t exp{-y_t H_t}. We set a new problem for which the previous weak classifier performs at chance again. Slide credit: Antonio Torralba

60 Toy example The strong (non-linear) classifier is built as the combination of all the weak (linear) classifiers f_1, f_2, f_3, f_4. Slide credit: Antonio Torralba
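
A compact AdaBoost sketch with axis-aligned decision stumps as weak learners, following the reweighting rule above; the stump search, toy data, and parameter choices are illustrative, not code from the lecture.

```python
# AdaBoost with decision stumps (illustrative implementation).
import numpy as np

def fit_stump(X, y, w):
    """Pick the (feature, threshold, sign) stump with the lowest weighted error."""
    best = (np.inf, None)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for s in (1, -1):
                pred = s * np.where(X[:, j] > thr, 1, -1)
                err = np.sum(w * (pred != y))
                if err < best[0]:
                    best = (err, (j, thr, s))
    return best

def adaboost(X, y, T=10):
    w = np.ones(len(y)) / len(y)                # start with uniform weights
    stumps = []
    for _ in range(T):
        err, (j, thr, s) = fit_stump(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # weight of this weak classifier
        pred = s * np.where(X[:, j] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)          # w_t <- w_t exp{-y_t * alpha h_t(x_t)}
        w /= w.sum()                            # renormalize: this is the "new problem"
        stumps.append((alpha, j, thr, s))
    return stumps

def predict(stumps, X):
    H = sum(a * s * np.where(X[:, j] > thr, 1, -1) for a, j, thr, s in stumps)
    return np.sign(H)                           # strong classifier: sign of the weighted sum

X = np.array([[1, 1], [2, 1], [1, 2], [4, 4], [5, 4], [4, 5]], dtype=float)
y = np.array([-1, -1, -1, 1, 1, 1])
print(predict(adaboost(X, y, T=5), X))          # should recover y on this toy set
```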

61 Adaboost Slide credit: Antonio Torralba

