
1 Classification II Tamara Berg CS 560 Artificial Intelligence Many slides throughout the course adapted from Svetlana Lazebnik, Dan Klein, Stuart Russell, Andrew Moore, Percy Liang, Luke Zettlemoyer, Rob Pless, Kilian Weinberger, Deva Ramanan

2 Announcements
HW3 due tomorrow, 11:59pm
Midterm 2 next Wednesday, Nov 4
– Bring a simple calculator
– You may bring one 3x5 notecard of notes (both sides)
Monday, Nov 2: in-class practice questions

3 Midterm Topic List
Probability
– Random variables
– Axioms of probability
– Joint, marginal, conditional probability distributions
– Independence and conditional independence
– Product rule, chain rule, Bayes rule
Bayesian Networks: General
– Structure and parameters
– Calculating joint and conditional probabilities
– Independence in Bayes Nets (Bayes Ball)
Bayesian Inference
– Exact inference (inference by enumeration, variable elimination)
– Approximate inference (forward sampling, rejection sampling, likelihood weighting)
– Networks for which efficient inference is possible

4 Midterm Topic List
Naïve Bayes
– Parameter learning, including Laplace smoothing
– Likelihood, prior, posterior
– Maximum likelihood (ML), maximum a posteriori (MAP) inference
– Application to spam/ham classification and image classification
HMMs
– Markov property
– Markov chains
– Hidden Markov Model (initial distribution, transitions, emissions)
– Filtering (forward algorithm)
– Application to speech recognition and robot localization

5 Midterm Topic List
Machine Learning
– Unsupervised/supervised/semi-supervised learning
– K-means clustering
– Hierarchical clustering (agglomerative, divisive)
– Training, tuning, testing, generalization
– Nearest neighbor
– Decision trees
– Boosting
– Application of algorithms to research problems (e.g. visual word discovery, pose estimation, im2gps, scene completion, face detection)

6 The basic classification framework: y = f(x), where y is the output, f is the classification function, and x is the input. Learning: given a training set of labeled examples {(x_1, y_1), …, (x_N, y_N)}, estimate the parameters of the prediction function f. Inference: apply f to a never-before-seen test example x and output the predicted value y = f(x).
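To make the learning/inference split concrete, here is a minimal sketch of the framework as a Python interface; the class and method names are placeholders chosen for this note, not anything from the course code.

```python
class Classifier:
    """Abstract view of the framework y = f(x)."""

    def fit(self, examples, labels):
        """Learning: estimate the parameters of f from {(x_1, y_1), ..., (x_N, y_N)}."""
        raise NotImplementedError

    def predict(self, x):
        """Inference: apply f to a never-before-seen example x and return y = f(x)."""
        raise NotImplementedError
```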

7 Classification by Nearest Neighbor Word vector document classification – here the vector space is illustrated as having 2 dimensions. How many dimensions would the data actually live in? 7

8 Classification by Nearest Neighbor 8

9 Classify the test document as the class of the document “nearest” to the query document (use vector similarity, e.g. Euclidean distance, to find most similar doc) 9

10 Classification by kNN Classify the test document as the majority class of the k documents “nearest” to the query document. 10
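A minimal sketch of nearest-neighbor / kNN document classification, assuming documents are already represented as equal-length word-count vectors; the toy data and function names are invented for illustration.

```python
import math
from collections import Counter

def euclidean(a, b):
    """Vector similarity via Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def knn_classify(query, train_vectors, train_labels, k=3):
    """Label the query with the majority class of its k nearest training documents."""
    order = sorted(range(len(train_vectors)),
                   key=lambda i: euclidean(query, train_vectors[i]))
    top_k_labels = [train_labels[i] for i in order[:k]]
    return Counter(top_k_labels).most_common(1)[0][0]

# Toy word-count vectors (counts of two hypothetical terms per document).
train_vectors = [[9, 1], [8, 2], [1, 7], [2, 9]]
train_labels = ["sports", "sports", "politics", "politics"]
print(knn_classify([7, 3], train_vectors, train_labels, k=3))  # -> "sports"
```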

11 Decision tree classifier
Example problem: decide whether to wait for a table at a restaurant, based on the following attributes:
1. Alternate: is there an alternative restaurant nearby?
2. Bar: is there a comfortable bar area to wait in?
3. Fri/Sat: is today Friday or Saturday?
4. Hungry: are we hungry?
5. Patrons: number of people in the restaurant (None, Some, Full)
6. Price: price range ($, $$, $$$)
7. Raining: is it raining outside?
8. Reservation: have we made a reservation?
9. Type: kind of restaurant (French, Italian, Thai, Burger)
10. WaitEstimate: estimated waiting time (0-10, 10-30, 30-60, >60)

12 Decision tree classifier 12

13 Decision tree classifier 13

14 Shall I play tennis today?

15

16 How do we choose the best attribute? (Figure: a partially built tree with leaf nodes; we must choose the next attribute for splitting.)

17 Criterion for attribute selection. Which is the best attribute?
– The one which will result in the smallest tree
– Heuristic: choose the attribute that produces the "purest" nodes
We need a good measure of purity!

18 Information Gain. Which test is more informative: splitting on Humidity (<=75% vs. >75%) or on Wind (<=20 vs. >20)?

19 Information Gain. Impurity/Entropy (informal): measures the level of impurity in a group of examples.

20 Impurity. (Figure: three groups illustrating a very impure group, a less impure group, and minimum impurity.)

21 Entropy: a common way to measure impurity. Entropy = −Σ_i p_i log2(p_i), where p_i is the probability of class i, computed as the proportion of class i in the set.

22 2-class cases:
– What is the entropy of a group in which all examples belong to the same class? entropy = −1 log2(1) = 0 (minimum impurity)
– What is the entropy of a group with 50% in either class? entropy = −0.5 log2(0.5) − 0.5 log2(0.5) = 1 (maximum impurity)
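A small sketch of the entropy computation, assuming class probabilities are given as proportions; it reproduces the two 2-class cases above.

```python
import math

def entropy(probs):
    """Entropy = -sum_i p_i * log2(p_i); terms with p_i = 0 contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([1.0]))       # all examples in one class -> 0.0 (minimum impurity)
print(entropy([0.5, 0.5]))  # 50/50 split               -> 1.0 (maximum impurity)
```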

23 Information Gain. We want to determine which attribute in a given set of training feature vectors is most useful for discriminating between the classes to be learned. Information gain tells us how useful a given attribute of the feature vectors is, and we can use it to decide the ordering of attributes in the nodes of a decision tree.

24 Calculating Information Gain. Information Gain = entropy(parent) − [weighted average entropy(children)]. Example (from the figure): the entire population of 30 instances is split into children of 17 and 13 instances; parent entropy = 0.996, weighted average entropy of the children = 0.615, so Information Gain = 0.996 − 0.615 = 0.38.
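A sketch of that calculation in code. The per-class counts below are assumptions chosen so that the numbers match the slide's figure (they are not stated in the transcript), and the helper names are made up for this example.

```python
import math

def entropy_from_counts(counts):
    """Entropy of a group given counts per class: -sum_i p_i log2 p_i."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

def information_gain(parent_counts, children_counts):
    """entropy(parent) minus the size-weighted average entropy of the children."""
    n = sum(parent_counts)
    weighted_children = sum(
        (sum(child) / n) * entropy_from_counts(child) for child in children_counts
    )
    return entropy_from_counts(parent_counts) - weighted_children

# Assumed class breakdown: parent 14 vs. 16, children (13 vs. 4) and (1 vs. 12);
# these reproduce the slide's numbers up to rounding.
parent = [14, 16]
children = [[13, 4], [1, 12]]
print(round(entropy_from_counts(parent), 3))         # ~0.997 (slide rounds to 0.996)
print(round(information_gain(parent, children), 2))  # ~0.38
```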

25 (Figure: the decision tree is grown by repeatedly choosing the next attribute to split on, e.g. based on information gain.)
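To make "split on the attribute with the highest information gain" concrete, here is a rough ID3-style sketch. It reuses the entropy_from_counts/information_gain helpers from the previous sketch, and the data layout (a list of attribute dicts plus a label list) is an assumption for this example, not the course's code.

```python
from collections import Counter

def best_attribute(rows, labels, attributes):
    """Pick the attribute whose split yields the highest information gain."""
    parent_counts = list(Counter(labels).values())
    best, best_gain = None, -1.0
    for attr in attributes:
        # Partition the labels by the attribute's value.
        groups = {}
        for row, lab in zip(rows, labels):
            groups.setdefault(row[attr], []).append(lab)
        children = [list(Counter(g).values()) for g in groups.values()]
        gain = information_gain(parent_counts, children)
        if gain > best_gain:
            best, best_gain = attr, gain
    return best

def build_tree(rows, labels, attributes):
    """Greedy tree construction: stop at pure nodes or when attributes run out."""
    if len(set(labels)) == 1 or not attributes:
        return Counter(labels).most_common(1)[0][0]   # leaf: majority class
    attr = best_attribute(rows, labels, attributes)
    tree = {attr: {}}
    remaining = [a for a in attributes if a != attr]
    for value in {row[attr] for row in rows}:
        subset = [(r, l) for r, l in zip(rows, labels) if r[attr] == value]
        sub_rows, sub_labels = zip(*subset)
        tree[attr][value] = build_tree(list(sub_rows), list(sub_labels), remaining)
    return tree
```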

26 Model Ensembles

27

28

29

30 Random Forests. A variant of bagging proposed by Breiman. The classifier consists of a collection of decision-tree-structured classifiers; each tree casts a vote for the class of input x.
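A minimal usage sketch with scikit-learn, assuming it is installed; the toy data is made up for illustration.

```python
from sklearn.ensemble import RandomForestClassifier

# Toy data: 2-D feature vectors with binary labels (invented for this example).
X = [[0, 0], [0, 1], [1, 0], [1, 1], [2, 2], [2, 3], [3, 2], [3, 3]]
y = [0, 0, 0, 0, 1, 1, 1, 1]

# Each of the 50 trees is trained on a bootstrap sample of the data (bagging)
# and considers a random subset of features at each split; the forest's
# prediction is the majority vote of the trees.
forest = RandomForestClassifier(n_estimators=50, random_state=0)
forest.fit(X, y)
print(forest.predict([[0, 1], [3, 3]]))  # expected: [0, 1]
```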

31 Boosting: a simple algorithm for learning robust classifiers
– Freund & Schapire, 1995
– Friedman, Hastie, Tibshirani, 1998
Provides an efficient algorithm for sparse visual feature selection
– Tieu & Viola, 2000
– Viola & Jones, 2003
Easy to implement, doesn't require external optimization tools. Used for many real problems in AI.

32 Boosting defines a classifier using an additive model: F(x) = α_1 f_1(x) + α_2 f_2(x) + α_3 f_3(x) + …, where F(x) is the strong classifier, each f_i(x) is a weak classifier, each α_i is a weight, and x is the input feature vector.

33 Boosting defines a classifier using an additive model: F(x) = α_1 f_1(x) + α_2 f_2(x) + α_3 f_3(x) + …, where each weak classifier f_i(x) is selected from a family of weak classifiers. We need to define that family of weak classifiers.

34 Adaboost
Input: training samples
Initialize weights on samples
For T iterations:
– Select the best weak classifier based on weighted error
– Update sample weights
Output: final strong classifier (combination of the selected weak classifiers' predictions)
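A rough sketch of that loop under simple assumptions: binary labels in {−1, +1}, a caller-supplied pool of candidate weak classifiers (functions mapping x to ±1), and exponential re-weighting as in the toy example on the following slides; all names are made up for illustration. In that toy example, weak_pool would be the family of lines.

```python
import math

def adaboost(X, y, weak_pool, T):
    """Return (chosen weak classifiers, their weights) after T boosting rounds."""
    n = len(X)
    w = [1.0 / n] * n                      # initialize weights on samples
    chosen, alphas = [], []
    for _ in range(T):
        # Select the best weak classifier based on weighted error.
        errors = [sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)
                  for h in weak_pool]
        best = min(range(len(weak_pool)), key=lambda i: errors[i])
        h, err = weak_pool[best], max(errors[best], 1e-12)
        alpha = 0.5 * math.log((1 - err) / err)
        # Update sample weights: misclassified points get heavier.
        w = [wi * math.exp(-alpha * yi * h(xi)) for wi, xi, yi in zip(w, X, y)]
        total = sum(w)
        w = [wi / total for wi in w]
        chosen.append(h)
        alphas.append(alpha)
    return chosen, alphas

def strong_classify(x, chosen, alphas):
    """Final strong classifier: sign of the weighted sum of weak predictions."""
    score = sum(a * h(x) for h, a in zip(chosen, alphas))
    return 1 if score >= 0 else -1
```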

35 Boosting is a sequential procedure. Each data point x_t has a class label y_t ∈ {+1, −1} and a weight w_t = 1.

36 Toy example: the weak learners are the family of lines; a line h with p(error) = 0.5 is at chance. Each data point has a class label y_t ∈ {+1, −1} and a weight w_t = 1.

37 Toy example: this one seems to be the best. It is a "weak classifier": it performs slightly better than chance. Each data point has a class label y_t ∈ {+1, −1} and a weight w_t = 1.

38–41 Toy example (animation): after each round we update the weights, w_t ← w_t exp{−y_t H_t}, so that misclassified points receive larger weights, and then select the next best weak classifier on the re-weighted data. Each data point has a class label y_t ∈ {+1, −1}.

42 Toy example: the strong (non-linear) classifier is built as the combination of all the weak (linear) classifiers f_1, f_2, f_3, f_4.

43 Adaboost (recap): input training samples; initialize weights on samples; for T iterations, select the best weak classifier based on weighted error and update sample weights; output the final strong classifier (a combination of the selected weak classifiers' predictions).

44 Boosting for Face Detection 44

45 Face detection: we slide a window over the image, extract a feature vector x for each window, and classify each window with F(x) into y = +1 (face) or y = −1 (not face).
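A rough sliding-window sketch, assuming a grayscale image as a 2-D array and that a feature extractor and trained classifier are already available; the window size, stride, and function names are placeholders.

```python
def detect_faces(image, extract_features, classify, win=24, stride=4):
    """Slide a win x win window over the image; keep windows classified as +1 (face)."""
    height, width = len(image), len(image[0])
    detections = []
    for top in range(0, height - win + 1, stride):
        for left in range(0, width - win + 1, stride):
            window = [row[left:left + win] for row in image[top:top + win]]
            x = extract_features(window)   # feature vector for this window
            if classify(x) == 1:           # +1 = face, -1 = not face
                detections.append((top, left, win))
    return detections
```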

46 What is a face? Eyes are dark (eyebrows+shadows) Cheeks and forehead are bright. Nose is bright Paul Viola, Michael Jones, Robust Real-time Object Detection, IJCV 04

47 Basic feature extraction
Information type: intensity
Sum over: gray and white rectangles
Output: gray − white
Separate output value for each type, each scale, and each position in the window
FEX(im) = x = [x_1, x_2, …, x_n] (the figure highlights example features x_120, x_357, x_629, x_834)
Paul Viola, Michael Jones, Robust Real-time Object Detection, IJCV 04
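A sketch of one such rectangle feature under simple assumptions: the window is a 2-D list of pixel intensities and a feature is the sum over a "gray" rectangle minus the sum over a "white" rectangle. The rectangle coordinates below are hypothetical, and the paper's fast integral-image trick is omitted for brevity.

```python
def rect_sum(window, top, left, height, width):
    """Sum of pixel intensities inside one rectangle of the window."""
    return sum(window[r][c]
               for r in range(top, top + height)
               for c in range(left, left + width))

def two_rect_feature(window, gray_rect, white_rect):
    """Output = sum over the gray rectangle minus sum over the white rectangle."""
    return rect_sum(window, *gray_rect) - rect_sum(window, *white_rect)

# Hypothetical "eyes darker than cheeks" feature on a 24x24 window:
# gray rectangle over the eye region, white rectangle just below it.
eye_feature = lambda win: two_rect_feature(win, (6, 4, 4, 16), (10, 4, 4, 16))
```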

48 Decision trees. A stump has 1 root and 2 leaves: if x_i > a then positive, else negative. A very simple weak classifier. Paul Viola, Michael Jones, Robust Real-time Object Detection, IJCV 04
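A sketch of a decision stump as a weak classifier, including the weighted-error threshold search that boosting's "select the best weak classifier" step would use; the names are made up, and the search simply tries each observed feature value as the threshold a.

```python
def stump_predict(x, feature_index, threshold):
    """If x_i > a then positive (+1), else negative (-1)."""
    return 1 if x[feature_index] > threshold else -1

def train_stump(X, y, weights, feature_index):
    """Pick the threshold for one feature that minimizes weighted error."""
    best_threshold, best_error = None, float("inf")
    for threshold in sorted({x[feature_index] for x in X}):
        error = sum(w for x, label, w in zip(X, y, weights)
                    if stump_predict(x, feature_index, threshold) != label)
        if error < best_error:
            best_threshold, best_error = threshold, error
    return best_threshold, best_error
```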

49 Summary: face detection. Use decision stumps as weak classifiers, use boosting to build a strong classifier, and use a sliding window to detect faces. (Figure: an example stump, e.g. if x_234 > 1.3 then face (+1), else non-face.)

50 Discriminant Function. The discriminant function can be an arbitrary function of x, such as: nearest neighbor, decision trees, linear functions.

