Machine Learning – Lecture 7: Model Combination & Boosting
12.05.2009
Bastian Leibe, RWTH Aachen
Many slides adapted from B. Schiele

Course Outline
Fundamentals (2 weeks)
 Bayes Decision Theory
 Probability Density Estimation
Discriminative Approaches (5 weeks)
 Linear Discriminant Functions
 Statistical Learning Theory
 Support Vector Machines
 Boosting, Decision Trees
Generative Models (5 weeks)
 Bayesian Networks
 Markov Random Fields
Regression Problems (2 weeks)
 Gaussian Processes

Recap: SVM for Non-Separable Data
Slack variables
 One slack variable ξ_n ≥ 0 for each training data point.
Interpretation
 ξ_n = 0 for points that are on the correct side of the margin.
 ξ_n = |t_n − y(x_n)| for all other points.
 Point on the decision boundary: ξ_n = 1; misclassified point: ξ_n > 1.
 We do not have to set the slack variables ourselves!
 They are jointly optimized together with w.

Recap: SVM – New Dual Formulation
New SVM Dual: Maximize
  L(a) = Σ_n a_n − ½ Σ_n Σ_m a_n a_m t_n t_m (x_n^T x_m)
under the conditions
  0 ≤ a_n ≤ C   (this is all that changed! see Exercise 2.5)
  Σ_n a_n t_n = 0
This is again a quadratic programming problem
 Solve as before…
Slide adapted from Bernt Schiele

Recap: Nonlinear SVMs
General idea: The original input space can be mapped to some higher-dimensional feature space where the training set is separable:
  Φ: x → φ(x)
Slide credit: Raymond Mooney

Recap: The Kernel Trick
Important observation
 φ(x) only appears in the form of dot products φ(x)^T φ(y).
 Define a so-called kernel function k(x, y) = φ(x)^T φ(y).
 Now, in place of the dot product, use the kernel instead.
 The kernel function implicitly maps the data to the higher-dimensional space (without having to compute φ(x) explicitly)!

Recap: Kernels Fulfilling Mercer’s Condition
 Polynomial kernel: k(x, y) = (x^T y + 1)^p
 Radial Basis Function kernel (e.g. Gaussian): k(x, y) = exp( −||x − y||² / (2σ²) )
 Hyperbolic tangent kernel (e.g. sigmoid): k(x, y) = tanh( κ x^T y + δ )
 (and many, many more…)
Slide credit: Bernt Schiele
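A minimal sketch of these three kernels in Python/NumPy (the parameter names and default values are illustrative assumptions, not values from the lecture):

```python
import numpy as np

def polynomial_kernel(x, y, p=2):
    """Polynomial kernel k(x, y) = (x^T y + 1)^p."""
    return (np.dot(x, y) + 1.0) ** p

def rbf_kernel(x, y, sigma=1.0):
    """Gaussian RBF kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def tanh_kernel(x, y, kappa=1.0, delta=0.0):
    """Hyperbolic tangent (sigmoid) kernel k(x, y) = tanh(kappa x^T y + delta)."""
    return np.tanh(kappa * np.dot(x, y) + delta)

x, y = np.array([1.0, 2.0]), np.array([0.5, -1.0])
print(polynomial_kernel(x, y), rbf_kernel(x, y), tanh_kernel(x, y))
```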

Recap: Nonlinear SVM – Dual Formulation
SVM Dual: Maximize
  L(a) = Σ_n a_n − ½ Σ_n Σ_m a_n a_m t_n t_m k(x_n, x_m)
under the conditions
  0 ≤ a_n ≤ C,   Σ_n a_n t_n = 0
Classify new data points using
  y(x) = sign( Σ_n a_n t_n k(x_n, x) + b )

So Far…
We have already seen a variety of different classifiers
 k-NN
 Bayes classifiers
 Fisher’s Linear Discriminant
 SVMs
Each of them has its strengths and weaknesses…
 Can we improve performance by combining them?

Topics of This Lecture
Ensembles of Classifiers
Constructing Ensembles (methods for obtaining a set of classifiers)
 Cross-validation
 Bagging
Combining Classifiers (methods for combining different classifiers)
 Stacking
 Bayesian model averaging
 Boosting
AdaBoost
 Intuition
 Algorithm
 Analysis
 Extensions
Applications

Ensembles of Classifiers
Intuition
 Assume we have K classifiers.
 They are independent (i.e. their errors are uncorrelated).
 Each of them has an error probability p < 0.5 on the training data.
 –Why can we assume that p won’t be larger than 0.5?
 Then a simple majority vote of all classifiers should have a lower error than each individual classifier…
Slide adapted from Bernt Schiele

Ensembles of Classifiers
Example
 K = 21 classifiers, each with error probability p = 0.3.
 Probability that exactly L of the K classifiers make an error:
   P(L errors) = (K choose L) p^L (1 − p)^(K−L)
 The probability that 11 or more classifiers (i.e. a majority) make an error is only about 0.026 – much lower than the individual error of 0.3.
Slide credit: Bernt Schiele
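A quick check of this number, assuming K = 21 independent classifiers with error 0.3 (the committee size is implied by "11 or more" being the smallest majority):

```python
from math import comb

K, p = 21, 0.3  # assumed committee size and per-classifier error
majority_error = sum(
    comb(K, L) * p**L * (1 - p)**(K - L)   # P(exactly L classifiers wrong)
    for L in range((K // 2) + 1, K + 1)    # majority = 11 or more errors
)
print(f"P(majority vote is wrong) = {majority_error:.4f}")  # ~0.0264
```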


Constructing Ensembles
How do we get different classifiers?
 Simplest case: train the same classifier on different data.
 But… where shall we get this additional data from?
 –Recall: training data is very expensive!
Idea: Subsample the training data
 Reuse the same training algorithm several times on different subsets of the training data.
Well-suited for “unstable” learning algorithms
 Unstable: small differences in the training data can produce very different classifiers.
 –E.g. decision trees, neural networks, rule learning algorithms, …
 Stable learning algorithms benefit less from this.
 –E.g. nearest neighbor, linear regression, SVMs, …

Constructing Ensembles
Cross-Validation
 Split the available data into N disjoint subsets.
 In each run, train a classifier on N−1 of the subsets.
 Estimate the generalization error on the held-out validation subset.
 E.g. 5-fold cross-validation (each fold serves as the held-out test set exactly once).
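A minimal sketch of obtaining one classifier per cross-validation split (train_fn is a placeholder for any training routine that returns a model with a predict() method; it is not from the lecture):

```python
import numpy as np

def cross_validation_ensemble(X, t, train_fn, n_folds=5, seed=0):
    """Train one classifier per fold, each on the other n_folds-1 subsets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, n_folds)
    models = []
    for k in range(n_folds):
        train_idx = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        models.append(train_fn(X[train_idx], t[train_idx]))
    return models  # combine later, e.g. by majority vote
```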

Constructing Ensembles
Bagging = “Bootstrap aggregation” (Breiman 1996)
 In each run of the training algorithm, randomly draw M samples with replacement from the full set of N training data points.
 If M = N, then on average 63.2% of the training points will be represented; the rest are duplicates.
Injecting randomness
 Many (iterative) learning algorithms need a random initialization (e.g. k-means, EM).
 Perform multiple runs of the learning algorithm with different random initializations.
Slide adapted from Bernt Schiele
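A minimal sketch of bagging with majority voting, assuming sklearn-style base classifiers whose fit() returns the fitted model and whose labels are −1/+1 (the helper names are placeholders, not part of the lecture):

```python
import numpy as np

def bagging_fit(X, t, make_classifier, n_models=10, seed=0):
    """Train n_models classifiers, each on a bootstrap sample of size N."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))  # sample N points with replacement
        models.append(make_classifier().fit(X[idx], t[idx]))
    return models

def bagging_predict(models, X):
    """Majority vote over the ensemble (labels assumed to be -1/+1)."""
    votes = np.sum([m.predict(X) for m in models], axis=0)
    return np.sign(votes)
```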


Stacking
Idea
 Learn L classifiers (based on the training data).
 Find a meta-classifier that takes as input the outputs of the L first-level classifiers.
Example
 Learn the L classifiers with leave-one-out cross-validation.
 Interpret the predictions of the L classifiers as an L-dimensional feature vector.
 Learn a “level-2” classifier based on the examples generated this way.
Slide credit: Bernt Schiele
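A minimal sketch of the stacking idea, assuming the same fit/predict interface as above; to keep it short, it uses a single hold-out split instead of leave-one-out when generating the level-2 training examples (both the names and the split are illustrative assumptions):

```python
import numpy as np

def stacking_fit(X, t, base_makers, make_meta, seed=0):
    """Level 1: train base classifiers. Level 2: train a meta-classifier
    on their predictions, produced on data the base models did not see."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    split = len(X) // 2
    a, b = idx[:split], idx[split:]
    level1 = [make().fit(X[a], t[a]) for make in base_makers]
    # L-dimensional "feature vector" of level-1 predictions for each held-out point
    Z = np.column_stack([m.predict(X[b]) for m in level1])
    level2 = make_meta().fit(Z, t[b])
    return level1, level2

def stacking_predict(level1, level2, X):
    Z = np.column_stack([m.predict(X) for m in level1])
    return level2.predict(Z)
```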

Recap: Model Combination
E.g. Mixture of Gaussians
 Several components are combined probabilistically.
 Interpretation: different data points can be generated by different components.
 We model the uncertainty about which mixture component is responsible for generating the corresponding data point:
   p(x) = Σ_k π_k N(x | μ_k, Σ_k)
 For i.i.d. data, we write the marginal probability of a data set X = {x_1, …, x_N} in the form:
   p(X) = Π_n p(x_n) = Π_n Σ_k π_k N(x_n | μ_k, Σ_k)

Bayesian Model Averaging
Model Averaging
 Suppose we have H different models h = 1, …, H with prior probabilities p(h).
 Construct the marginal distribution over the data set:
   p(X) = Σ_h p(X | h) p(h)
Interpretation
 Just one model is responsible for generating the entire data set.
 The probability distribution over h merely reflects our uncertainty about which model that is.
 As the size of the data set increases, this uncertainty reduces, and the posterior p(h | X) becomes increasingly focused on just one of the models.

Note the Different Interpretations!
Model Combination
 Different data points can be generated by different model components.
 Uncertainty is about which component created which data point.
 One latent variable z_n for each data point:
   p(X) = Π_n Σ_{z_n} p(x_n, z_n)
Bayesian Model Averaging
 The whole data set is generated by a single model.
 Uncertainty is about which model was responsible.
 One latent variable z for the entire data set:
   p(X) = Σ_z p(X, z)

Model Averaging: Expected Error
Combine M predictors y_m(x) for the target output h(x).
 E.g. each trained on a different bootstrap data set by bagging.
 The committee prediction is given by
   y_COM(x) = (1/M) Σ_{m=1}^M y_m(x)
 The output can be written as the true value plus an error:
   y_m(x) = h(x) + ε_m(x)
 Thus, the average sum-of-squares error of an individual model takes the form
   E_x[ (y_m(x) − h(x))² ] = E_x[ ε_m(x)² ]

Model Averaging: Expected Error
 Average error of the individual models:
   E_AV = (1/M) Σ_m E_x[ ε_m(x)² ]
 Average error of the committee:
   E_COM = E_x[ ( (1/M) Σ_m ε_m(x) )² ]
Assumptions
 Errors have zero mean: E_x[ ε_m(x) ] = 0
 Errors are uncorrelated: E_x[ ε_m(x) ε_l(x) ] = 0 for m ≠ l
Then:
   E_COM = (1/M) E_AV
Isn’t this spectacular?
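The factor 1/M follows directly from the two assumptions; spelled out in the notation above:

```latex
E_{COM}
  = \mathbb{E}_x\!\left[\Big(\tfrac{1}{M}\sum_{m=1}^{M}\epsilon_m(x)\Big)^{2}\right]
  = \frac{1}{M^{2}}\sum_{m=1}^{M}\sum_{l=1}^{M}\mathbb{E}_x\big[\epsilon_m(x)\,\epsilon_l(x)\big]
  = \frac{1}{M^{2}}\sum_{m=1}^{M}\mathbb{E}_x\big[\epsilon_m(x)^{2}\big]
  = \frac{1}{M}\,E_{AV}
```

The cross terms vanish because the errors are assumed uncorrelated, leaving only the M diagonal terms.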

Model Averaging: Expected Error
Average error of the committee: E_COM = (1/M) E_AV
 This suggests that the average error of a model can be reduced by a factor of M simply by averaging M versions of the model!
 Spectacular indeed… This sounds almost too good to be true. And it is… Can you see where the problem is?
 Unfortunately, this result depends on the assumption that the errors are all uncorrelated.
 In practice, they will typically be highly correlated.
 Still, it can be shown that
   E_COM ≤ E_AV

Boosting
A simple technique with very interesting properties
 Combination of multiple classifiers with the goal of improving classification accuracy.
 Can be used with many different types of classifiers.
 None of them needs to be too good on its own.
 –In fact, they only have to be slightly better than chance.
 –Extreme case: decision stumps.
Main idea
 Train successive component classifiers on the subset of the training data that is most informative given the current set of classifiers.
 Sequential classifier selection.

Boosting (Schapire 1989)
Algorithm (3-component classifier):
1. Sample N_1 < N training examples (without replacement) from the training set D to get set D_1.
 –Train weak classifier C_1 on D_1.
2. Sample N_2 < N training examples (without replacement), half of which were misclassified by C_1, to get set D_2.
 –Train weak classifier C_2 on D_2.
3. Choose all data in D on which C_1 and C_2 disagree to get set D_3.
 –Train weak classifier C_3 on D_3.
4. Get the final classifier output by majority voting of C_1, C_2, and C_3.
Image source: Duda, Hart, Stork, 2001
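A minimal sketch of this three-component procedure, assuming weak learners with a fit/predict interface and ±1 labels (the function and parameter names are illustrative; for brevity, D_2 is built from equal numbers of misclassified and correctly classified points rather than an exact N_2):

```python
import numpy as np

def schapire_boost(X, t, make_weak, n1, seed=0):
    """Sketch of Schapire-style boosting with three component classifiers.
    Assumes make_weak() returns a classifier with fit/predict and labels in {-1, +1},
    and that C1 misclassifies at least one point and classifies at least one correctly."""
    rng = np.random.default_rng(seed)
    N = len(X)
    # 1. Train C1 on N1 points sampled without replacement
    d1 = rng.choice(N, size=n1, replace=False)
    c1 = make_weak().fit(X[d1], t[d1])
    # 2. Train C2 on a set that is half misclassified / half correctly classified by C1
    pred1 = c1.predict(X)
    wrong, right = np.where(pred1 != t)[0], np.where(pred1 == t)[0]
    k = min(len(wrong), len(right))
    d2 = np.concatenate([rng.choice(wrong, size=k, replace=False),
                         rng.choice(right, size=k, replace=False)])
    c2 = make_weak().fit(X[d2], t[d2])
    # 3. Train C3 on all points where C1 and C2 disagree
    d3 = np.where(pred1 != c2.predict(X))[0]
    c3 = make_weak().fit(X[d3], t[d3])
    # 4. Final output: majority vote of the three classifiers
    return lambda Xnew: np.sign(c1.predict(Xnew) + c2.predict(Xnew) + c3.predict(Xnew))
```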

Applying Boosting
How should we choose the number of samples N_1?
 Ideally, the number of samples should be roughly equal for all 3 component classifiers; a reasonable first guess is therefore N_1 ≈ N/3.
 However, if the problem is very simple, C_1 will explain most of the data.
 –Then N_2 and N_3 will be very small, and not all of the data will be used effectively.
 Similarly, if the problem is extremely hard, C_1 will explain only a small part of the data.
 –Then N_2 may be unacceptably large.
 In practice, we may need to run the boosting procedure a few times and adjust N_1 in order to use the full training set.
 Also, we can recursively apply the procedure to C_1, …, C_3.

Discussion: Ensembles of Classifiers
A set of simple methods for improving classification
 Often effective in practice.
Apparent contradiction
 We have stressed before that a classifier should be trained on samples from the distribution on which it will be tested.
 Resampling seems to violate this recommendation.
 Why can a classifier trained on a weighted data distribution do better than one trained on the i.i.d. sample?
Explanation
 We do not attempt to model the full category distribution here.
 Instead, we try to find the decision boundary more directly.
 Also, increasing the number of component classifiers broadens the class of implementable decision functions.


AdaBoost – “Adaptive Boosting”
Main idea [Freund & Schapire, 1996]
 Instead of resampling, reweight misclassified training examples.
 –Increase their chance of being selected in a sampled training set.
 –Or increase the misclassification cost when training on the full set.
Components
 h_m(x): “weak” or base classifier
 –Condition: <50% training error over any distribution
 H(x): “strong” or final classifier
AdaBoost
 Construct a strong classifier as a thresholded linear combination of the weighted weak classifiers:
   H(x) = sign( Σ_{m=1}^M α_m h_m(x) )

AdaBoost: Intuition
Consider a 2D feature space with positive and negative examples.
Each weak classifier splits the training examples with at least 50% accuracy.
Examples misclassified by a previous weak learner are given more emphasis in future rounds.
Slide credit: Kristen Grauman; figure adapted from Freund & Schapire

AdaBoost: Intuition (continued; figure only)
Slide credit: Kristen Grauman; figure adapted from Freund & Schapire

AdaBoost: Intuition
The final classifier is a combination of the weak classifiers.
Slide credit: Kristen Grauman; figure adapted from Freund & Schapire

AdaBoost – Formalization
2-class classification problem
 Given: training set X = {x_1, …, x_N} with target values T = {t_1, …, t_N}, t_n ∈ {−1, 1}.
 Associated weights W = {w_1, …, w_N} for each training point.
Basic steps
 In each iteration, AdaBoost trains a new weak classifier h_m(x) based on the current weighting coefficients W^(m).
 We then adapt the weighting coefficients for each point:
 –Increase w_n if x_n was misclassified by h_m(x).
 –Decrease w_n if x_n was classified correctly by h_m(x).
 Make predictions using the final combined model
   H(x) = sign( Σ_{m=1}^M α_m h_m(x) )

AdaBoost – Algorithm
1. Initialization: Set w_n^(1) = 1/N for n = 1, …, N.
2. For m = 1, …, M iterations:
 a) Train a new weak classifier h_m(x) using the current weighting coefficients W^(m) by minimizing the weighted error function
    J_m = Σ_n w_n^(m) I(h_m(x_n) ≠ t_n)
 b) Estimate the weighted error of this classifier on X:
    ε_m = Σ_n w_n^(m) I(h_m(x_n) ≠ t_n) / Σ_n w_n^(m)
 c) Calculate a weighting coefficient for h_m(x):
    α_m = ln( (1 − ε_m) / ε_m )
 d) Update the weighting coefficients:
    w_n^(m+1) = w_n^(m) exp{ α_m I(h_m(x_n) ≠ t_n) }
How should we do this exactly?

AdaBoost – Historical Development
Originally motivated by Statistical Learning Theory
 AdaBoost was introduced in 1996 by Freund & Schapire.
 It was empirically observed that AdaBoost often tends not to overfit (Breiman 96, Cortes & Drucker 97, etc.).
 As a result, the margin theory (Schapire et al. 98) developed, which is based on loose generalization bounds.
 –Note: the margin for boosting is not the same as the margin for SVMs.
 –A bit like retrofitting the theory…
 However, those bounds are too loose to be of practical value.
A different explanation (Friedman, Hastie, Tibshirani, 2000)
 Interpretation as sequential minimization of an exponential error function (“Forward Stagewise Additive Modeling”).
 Explains why boosting works well.
 Improvements are possible by altering the error function.

AdaBoost – Minimizing Exponential Error
Exponential error function
   E = Σ_{n=1}^N exp{ −t_n f_m(x_n) }
 where f_m(x) is a classifier defined as a linear combination of base classifiers h_l(x):
   f_m(x) = ½ Σ_{l=1}^m α_l h_l(x)
Goal
 Minimize E with respect to both the weighting coefficients α_l and the parameters of the base classifiers h_l(x).

AdaBoost – Minimizing Exponential Error
Sequential Minimization
 Suppose that the base classifiers h_1(x), …, h_{m−1}(x) and their coefficients α_1, …, α_{m−1} are fixed.
 Only minimize with respect to α_m and h_m(x):
   E = Σ_n w_n^(m) exp{ −½ t_n α_m h_m(x_n) }
 with w_n^(m) = exp{ −t_n f_{m−1}(x_n) } = const. (does not depend on α_m or h_m).

AdaBoost – Minimizing Exponential Error
 Observation:
 –Correctly classified points: t_n h_m(x_n) = +1  (collect in T_m)
 –Misclassified points: t_n h_m(x_n) = −1  (collect in F_m)
 Rewrite the error function as
   E = e^{−α_m/2} Σ_{n∈T_m} w_n^(m) + e^{α_m/2} Σ_{n∈F_m} w_n^(m)
     = (e^{α_m/2} − e^{−α_m/2}) Σ_n w_n^(m) I(h_m(x_n) ≠ t_n) + e^{−α_m/2} Σ_n w_n^(m)


AdaBoost – Minimizing Exponential Error
Minimize with respect to h_m(x):
 The second term above is constant w.r.t. h_m(x), so this is equivalent to minimizing
   J_m = Σ_n w_n^(m) I(h_m(x_n) ≠ t_n)
 (our weighted error function from step 2a) of the algorithm)
 We’re on the right track. Let’s continue…

AdaBoost – Minimizing Exponential Error
Minimize with respect to α_m:
   ∂E/∂α_m = 0  ⇒  α_m = ln( (1 − ε_m) / ε_m )
 where ε_m is the weighted error
   ε_m = Σ_n w_n^(m) I(h_m(x_n) ≠ t_n) / Σ_n w_n^(m)
 This is the update for the α coefficients.

AdaBoost – Minimizing Exponential Error
Remaining step: update the weights
 Recall that
   w_n^(m+1) = exp{ −t_n f_m(x_n) } = w_n^(m) exp{ −½ t_n α_m h_m(x_n) }
 Therefore (dropping the factor that is the same for all points)
   w_n^(m+1) = w_n^(m) exp{ α_m I(h_m(x_n) ≠ t_n) }
 This is the update for the weight coefficients; it becomes W^(m+1) in the next iteration.

AdaBoost – Final Algorithm
1. Initialization: Set w_n^(1) = 1/N for n = 1, …, N.
2. For m = 1, …, M iterations:
 a) Train a new weak classifier h_m(x) using the current weighting coefficients W^(m) by minimizing the weighted error function
    J_m = Σ_n w_n^(m) I(h_m(x_n) ≠ t_n)
 b) Estimate the weighted error of this classifier on X:
    ε_m = Σ_n w_n^(m) I(h_m(x_n) ≠ t_n) / Σ_n w_n^(m)
 c) Calculate a weighting coefficient for h_m(x):
    α_m = ln( (1 − ε_m) / ε_m )
 d) Update the weighting coefficients:
    w_n^(m+1) = w_n^(m) exp{ α_m I(h_m(x_n) ≠ t_n) }
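A minimal sketch of this algorithm with decision stumps as weak learners (decision stumps were mentioned above as the extreme case; the brute-force stump search and all names here are illustrative, not the lecture's implementation):

```python
import numpy as np

def train_stump(X, t, w):
    """Find the decision stump (feature, threshold, parity) with minimal weighted error."""
    best = (np.inf, None)
    for d in range(X.shape[1]):
        for thr in np.unique(X[:, d]):
            for parity in (+1, -1):
                pred = parity * np.sign(X[:, d] - thr + 1e-12)
                err = np.sum(w * (pred != t)) / np.sum(w)
                if err < best[0]:
                    best = (err, (d, thr, parity))
    return best  # (weighted error, stump parameters)

def adaboost(X, t, M=10):
    """AdaBoost with decision stumps; t must contain labels -1/+1."""
    N = len(X)
    w = np.full(N, 1.0 / N)                                # 1. initialize weights
    ensemble = []
    for m in range(M):                                     # 2. for M iterations
        eps, (d, thr, parity) = train_stump(X, t, w)       #    a), b)
        alpha = np.log((1 - eps) / max(eps, 1e-12))        #    c)
        pred = parity * np.sign(X[:, d] - thr + 1e-12)
        w *= np.exp(alpha * (pred != t))                   #    d)
        ensemble.append((alpha, d, thr, parity))
    return ensemble

def adaboost_predict(ensemble, X):
    """H(x) = sign( sum_m alpha_m h_m(x) )."""
    scores = sum(a * p * np.sign(X[:, d] - thr + 1e-12)
                 for a, d, thr, p in ensemble)
    return np.sign(scores)
```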

AdaBoost – Analysis
Result of this derivation
 We now know that AdaBoost minimizes an exponential error function in a sequential fashion.
 This allows us to analyze AdaBoost’s behavior in more detail.
 In particular, we can see how robust it is to outlier data points.

Comparing Error Functions
 Ideal misclassification error function (shown in black)
 –This is what we want to approximate.
 –Unfortunately, it is not differentiable, so we cannot minimize it by gradient descent.
Image source: Bishop, 2006

Comparing Error Functions
 Ideal misclassification error function
 “Hinge error” used in SVMs
 –Zero error for points outside the margin (z ≥ 1).
 –Linearly increasing error for margin violations and misclassified points (z < 1).
Image source: Bishop, 2006

Comparing Error Functions
 Ideal misclassification error function
 “Hinge error” used in SVMs
 Exponential error function
 –Continuous approximation to the ideal misclassification function.
 –Sequential minimization leads to the simple AdaBoost scheme.
 –Disadvantage: exponential penalty for large negative values of z!
 –Therefore less robust to outliers or misclassified data points!
Image source: Bishop, 2006

Comparing Error Functions
 Ideal misclassification error function
 “Hinge error” used in SVMs
 Exponential error function
 “Cross-entropy error”
 –Similar to the exponential error for z > 0.
 –Only grows linearly with large negative values of z.
 Make AdaBoost more robust by switching to such a more robust error function → “GentleBoost”
Image source: Bishop, 2006

Summary: AdaBoost
Properties
 Simple combination of multiple classifiers.
 Easy to implement.
 Can be used with many different types of classifiers.
 –None of them needs to be too good on its own.
 –In fact, they only have to be slightly better than chance.
 Commonly used in many areas.
 Empirically good generalization capabilities.
Limitations
 Original AdaBoost is sensitive to misclassified training data points.
 –Because of the exponential error function.
 –Improvement: GentleBoost.
 Single-class (binary) classifier.
 –Multiclass extensions available.


Example Application: Face Detection
Frontal faces are a good example of a class where global appearance models plus a sliding-window detection approach fit well:
 Regular 2D structure
 Center of face almost shaped like a “patch”/window
Now we’ll take AdaBoost and see how the Viola-Jones face detector works.
Slide credit: Kristen Grauman

Feature Extraction
“Rectangular” filters
 Feature output is the difference between sums of adjacent regions.
 Efficiently computable with the integral image: any rectangular sum can be computed in constant time.
 Integral image: the value at (x, y) is the sum of all pixels above and to the left of (x, y).
 Avoid scaling images → scale the features directly, for the same cost.
[Viola & Jones, CVPR 2001]
Slide credit: Kristen Grauman
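A minimal sketch of the integral image and the constant-time rectangle sums it enables (the indexing conventions and example values are assumptions of this sketch):

```python
import numpy as np

def integral_image(img):
    """ii[y, x] = sum of img over all pixels above and to the left of (y, x), inclusive."""
    return np.cumsum(np.cumsum(img, axis=0), axis=1)

def rect_sum(ii, top, left, height, width):
    """Sum of the image over the given rectangle, using at most 4 lookups."""
    b, r = top + height - 1, left + width - 1
    total = ii[b, r]
    if top > 0:  total -= ii[top - 1, r]
    if left > 0: total -= ii[b, left - 1]
    if top > 0 and left > 0: total += ii[top - 1, left - 1]
    return total

# Example: a two-rectangle feature = difference between adjacent regions
img = np.arange(36, dtype=float).reshape(6, 6)
ii = integral_image(img)
feature = rect_sum(ii, 0, 0, 6, 3) - rect_sum(ii, 0, 3, 6, 3)
print(feature)
```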

Large Library of Filters
Considering all possible filter parameters (position, scale, and type), there are 180,000+ possible features associated with each 24 x 24 window.
Use AdaBoost both to select the informative features and to form the classifier.
[Viola & Jones, CVPR 2001]
Slide credit: Kristen Grauman

AdaBoost for Feature+Classifier Selection
Want to select the single rectangle feature and threshold that best separates positive (faces) and negative (non-faces) training examples, in terms of weighted error.
 The outputs of a possible rectangle feature on faces and non-faces are shown as distributions (figure).
 Resulting weak classifier: threshold the filter response, i.e. predict “face” if the feature value lies on the face side of the chosen threshold and “non-face” otherwise.
 For the next round, reweight the examples according to errors, then choose another filter/threshold combination.
[Viola & Jones, CVPR 2001]
Slide credit: Kristen Grauman

AdaBoost for Efficient Feature Selection
Image features = weak classifiers. For each round of boosting:
 Evaluate each rectangle filter on each example.
 Sort the examples by filter value.
 Select the best threshold for each filter (minimum weighted error).
 –The sorted list can be quickly scanned for the optimal threshold.
 Select the best filter/threshold combination.
 The weight on this feature is a simple function of the error rate.
 Reweight the examples.
P. Viola, M. Jones, Robust Real-Time Face Detection, IJCV, Vol. 57(2), 2004 (first version appeared at CVPR 2001).
Slide credit: Kristen Grauman
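A minimal sketch of the single-scan threshold search described above for one filter, given its responses on all examples and the current AdaBoost weights (the polarity handling follows the Viola-Jones idea, but the exact form here is an assumption of this sketch):

```python
import numpy as np

def best_threshold(values, labels, weights):
    """Scan the sorted filter responses once to find the threshold (and polarity)
    with minimum weighted classification error. labels must be -1/+1."""
    order = np.argsort(values)
    v, y, w = values[order], labels[order], weights[order]
    t_pos, t_neg = w[y == +1].sum(), w[y == -1].sum()  # total weight per class
    s_pos = s_neg = 0.0                                # class weight below the threshold so far
    best_err, best_thr, best_parity = np.inf, None, +1
    for i in range(len(v)):
        # error if everything above the threshold is called positive (+1)...
        err_pos_above = s_pos + (t_neg - s_neg)
        # ...or if everything above is called negative (-1)
        err_neg_above = s_neg + (t_pos - s_pos)
        err, parity = min((err_pos_above, +1), (err_neg_above, -1))
        if err < best_err:
            best_err, best_thr, best_parity = err, v[i], parity
        if y[i] == +1: s_pos += w[i]
        else:          s_neg += w[i]
    return best_thr, best_parity, best_err
```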

Viola-Jones Face Detector: Results
(Example detection results; figures omitted.)
Slide credit: Kristen Grauman


References and Further Reading
More information on classifier combination and boosting can be found in Chapter 14 of Bishop’s book:
 Christopher M. Bishop, Pattern Recognition and Machine Learning, Springer, 2006.
A more in-depth discussion of the statistical interpretation of AdaBoost is available in the following paper:
 J. Friedman, T. Hastie, R. Tibshirani, Additive Logistic Regression: a Statistical View of Boosting, The Annals of Statistics, Vol. 28(2), pp. 337-407, 2000.