Machine Learning Overview Tamara Berg Language and Vision

Reminders: HW1 is due Feb 16. Discussion leaders for Feb 17/24 should schedule a meeting with me soon.

Types of ML algorithms: Unsupervised – algorithms operate on unlabeled examples. Supervised – algorithms operate on labeled examples. Semi/Partially-supervised – algorithms combine both labeled and unlabeled examples.

Unsupervised Learning

K-means clustering. Goal: minimize the sum of squared Euclidean distances between points x_i and their nearest cluster centers m_k. Algorithm: randomly initialize K cluster centers, then iterate until convergence: (1) assign each data point to the nearest center; (2) recompute each cluster center as the mean of all points assigned to it. Source: Svetlana Lazebnik
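
A minimal Python/NumPy sketch of the K-means loop described above; the convergence test and random-initialization details are illustrative assumptions, not prescribed by the slide.

import numpy as np

def kmeans(X, K, n_iters=100, seed=0):
    """Minimize the sum of squared distances between points and their nearest centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=K, replace=False)]   # random initialization
    for _ in range(n_iters):
        # Assignment step: each data point goes to its nearest center
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each center becomes the mean of the points assigned to it
        new_centers = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                                else centers[k] for k in range(K)])
        if np.allclose(new_centers, centers):                 # stop when centers stop moving
            break
        centers = new_centers
    return centers, labels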

Supervised Learning

Slides from Dan Klein

Example: Image classification. Input: an image; desired output: its class label (apple, pear, tomato, cow, dog, horse). Slide credit: Svetlana Lazebnik

Slide from Dan Klein

Example: Seismic data. A scatter plot of body wave magnitude vs. surface wave magnitude separates nuclear explosions from earthquakes. Slide credit: Svetlana Lazebnik

Slide from Dan Klein

The basic classification framework: y = f(x), where x is the input, f is the classification function, and y is the output. Learning: given a training set of labeled examples {(x_1, y_1), …, (x_N, y_N)}, estimate the parameters of the prediction function f. Inference: apply f to a never-before-seen test example x and output the predicted value y = f(x). Slide credit: Svetlana Lazebnik
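
A hypothetical minimal Python example of this framework: "learning" (fit) estimates parameters from the labeled training set, and "inference" (predict) applies f to new inputs. The nearest-class-mean classifier here is just an illustrative choice of f, not something from the slides.

import numpy as np

class NearestClassMean:
    def fit(self, X, y):
        # Learning: estimate the parameters of f (one mean per class) from {(x_i, y_i)}
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # Inference: y = f(x), the class whose mean is closest to x
        dists = np.linalg.norm(X[:, None, :] - self.means_[None, :, :], axis=2)
        return self.classes_[dists.argmin(axis=1)]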

Some ML classification methods. Nearest neighbor (10^6 examples): Shakhnarovich, Viola, Darrell 2003; Berg, Berg, Malik 2005; … Neural networks: LeCun, Bottou, Bengio, Haffner 1998; Rowley, Baluja, Kanade 1998; … Support Vector Machines and Kernels: Guyon, Vapnik; Heisele, Serre, Poggio 2001; … Conditional Random Fields: McCallum, Freitag, Pereira 2000; Kumar, Hebert 2003; … Slide credit: Antonio Torralba

Example: Training and testing. Key challenge: generalization to unseen examples. Training set (labels known); test set (labels unknown). Slide credit: Svetlana Lazebnik

Slide credit: Dan Klein

Classification by Nearest Neighbor. Word vector document classification – here the vector space is illustrated as having 2 dimensions. How many dimensions would the data actually live in? Slide from Min-Yen Kan

Slide from Min-Yen Kan: Classification by Nearest Neighbor

Classification by Nearest Neighbor: classify the test document as the class of the document “nearest” to the query document (use vector similarity to find the most similar doc). Slide from Min-Yen Kan

Classification by kNN: classify the test document as the majority class of the k documents “nearest” to the query document. Slide from Min-Yen Kan
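
A minimal Python sketch of kNN on word-count document vectors, using cosine similarity as the "vector similarity"; the feature representation, k, and NumPy-array interface are assumptions for illustration.

import numpy as np
from collections import Counter

def knn_classify(query_vec, train_vecs, train_labels, k=3):
    # Cosine similarity between the query document and every training document
    sims = train_vecs @ query_vec / (
        np.linalg.norm(train_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-12)
    nearest = np.argsort(-sims)[:k]                   # indices of the k most similar docs
    votes = Counter(train_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]                 # majority class among the k neighbors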

Classification by kNN: What are the features? What’s the training data? Testing data? Parameters? Slide from Min-Yen Kan

Slides from Min-Yen Kan

Classification by kNN: What are the features? What’s the training data? Testing data? Parameters? Slide from Min-Yen Kan

NN for vision: Fast Pose Estimation with Parameter Sensitive Hashing. Shakhnarovich, Viola, Darrell

NN for vision: J. Hays and A. Efros, IM2GPS: estimating geographic information from a single image, CVPR 2008

Decision tree classifier. Example problem: decide whether to wait for a table at a restaurant, based on the following attributes:
1. Alternate: is there an alternative restaurant nearby?
2. Bar: is there a comfortable bar area to wait in?
3. Fri/Sat: is today Friday or Saturday?
4. Hungry: are we hungry?
5. Patrons: number of people in the restaurant (None, Some, Full)
6. Price: price range ($, $$, $$$)
7. Raining: is it raining outside?
8. Reservation: have we made a reservation?
9. Type: kind of restaurant (French, Italian, Thai, Burger)
10. WaitEstimate: estimated waiting time (0-10, 10-30, 30-60, >60)
Slide credit: Svetlana Lazebnik
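
A hedged scikit-learn sketch of training a decision tree on a toy numeric encoding of a few of these attributes; the encoding, data, and labels below are illustrative assumptions, not the textbook restaurant dataset.

from sklearn.tree import DecisionTreeClassifier

# Toy encoding: Patrons (0=None, 1=Some, 2=Full), Hungry (0/1), WaitEstimate in minutes
X = [[1, 1, 5], [2, 1, 40], [0, 0, 0], [2, 0, 70], [1, 0, 10], [2, 1, 20]]
y = [1, 0, 0, 0, 1, 1]   # 1 = wait for a table, 0 = leave

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(tree.predict([[2, 1, 15]]))   # decide whether to wait in a new situation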

Decision tree classifier (example tree figures). Slide credit: Svetlana Lazebnik

Linear classifier: find a linear function to separate the classes, f(x) = sgn(w_1 x_1 + w_2 x_2 + … + w_D x_D) = sgn(w · x). Slide credit: Svetlana Lazebnik
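
As a concrete Python sketch, the perceptron rule is one simple way to learn such a w; the slide names only the form of the classifier, not a training algorithm, so this choice is an assumption.

import numpy as np

def train_perceptron(X, y, n_epochs=20):
    """X: (N, D) feature array, y: labels in {-1, +1}. Returns w with the bias folded in."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append a constant feature for the bias
    w = np.zeros(Xb.shape[1])
    for _ in range(n_epochs):
        for xi, yi in zip(Xb, y):
            if np.sign(w @ xi) != yi:           # misclassified: nudge w toward yi * xi
                w += yi * xi
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.sign(Xb @ w)                      # f(x) = sgn(w . x)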

Discriminant Function. It can be an arbitrary function of x, such as: nearest neighbor, a decision tree, or a linear function. Slide credit: Jinwei Gu

Linear Discriminant Function: g(x) = w^T x + b is a linear function, defining a hyperplane w^T x + b = 0 in the feature space that separates the region where w^T x + b > 0 (class +1) from the region where w^T x + b < 0 (class -1). Slide credit: Jinwei Gu

Linear Discriminant Function: how would you classify these points (labeled +1 and -1) using a linear discriminant function in order to minimize the error rate? There are an infinite number of answers! Slide credit: Jinwei Gu

Linear Discriminant Function: an infinite number of answers minimize the error rate – which one is the best? Slide credit: Jinwei Gu

Large Margin Linear Classifier (Linear SVM). The linear discriminant function (classifier) with the maximum margin is the best. The margin is defined as the width by which the boundary could be increased before hitting a data point (a “safe zone” around the boundary). Why is it the best? Strong generalization ability. Slide credit: Jinwei Gu

Large Margin Linear Classifier. The decision boundary is the hyperplane w^T x + b = 0; the margin is bounded by the parallel hyperplanes w^T x + b = 1 and w^T x + b = -1, and the positive (x+) and negative (x-) training points lying on these hyperplanes are the support vectors. Slide credit: Jinwei Gu
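
A hedged scikit-learn sketch of a maximum-margin linear classifier on toy 2-D data; the data and the large C value (to approximate a hard margin) are illustrative assumptions. The fitted support_vectors_ are the training points on (or nearest) the margin hyperplanes w^T x + b = ±1.

import numpy as np
from sklearn.svm import SVC

# Toy separable 2-D data with labels in {+1, -1} (illustrative only)
X = np.array([[2.0, 2.0], [2.5, 3.0], [3.0, 2.5], [0.0, 0.5], [0.5, 0.0], [1.0, 0.2]])
y = np.array([1, 1, 1, -1, -1, -1])

clf = SVC(kernel="linear", C=1e3).fit(X, y)      # large C ~ hard margin
w, b = clf.coef_[0], clf.intercept_[0]           # decision boundary: w.T x + b = 0
print("support vectors:", clf.support_vectors_)
print("prediction:", clf.predict([[2.2, 2.4]]))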

Boosting. A simple algorithm for learning robust classifiers (Freund & Schapire, 1995; Friedman, Hastie, Tibshirani, 1998). Provides an efficient algorithm for sparse visual feature selection (Tieu & Viola, 2000; Viola & Jones, 2003). Easy to implement, doesn’t require external optimization tools. Slide credit: Antonio Torralba

Boosting defines a classifier using an additive model: a strong classifier is built as a weighted sum of weak classifiers applied to the feature vector. Slide credit: Antonio Torralba

Boosting defines a classifier using an additive model, and we need to define the family from which the weak classifiers are drawn. Slide credit: Antonio Torralba
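
The formula itself appears only as an image in the original slides; in the standard notation for this additive model (an assumption that it matches the slide), the strong classifier is

H(x) = \mathrm{sign}\!\left( \sum_{t=1}^{T} \alpha_t \, h_t(x) \right),

where each h_t is a weak classifier drawn from the chosen family, \alpha_t is its weight, and x is the feature vector.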

AdaBoost. Slide credit: Antonio Torralba

Boosting is a sequential procedure operating on data points x_{t=1}, x_{t=2}, …, x_t. Each data point has a class label y_t ∈ {+1, -1} and a weight, initially w_t = 1. Slide credit: Antonio Torralba

Toy example. Weak learners come from the family of lines h; with p(error) = 0.5 a weak learner is at chance. Each data point has a class label y_t ∈ {+1, -1} and a weight w_t = 1. Slide credit: Antonio Torralba

Toy example. This line seems to be the best: it is a ‘weak classifier’ that performs slightly better than chance. Each data point has a class label y_t ∈ {+1, -1} and a weight w_t = 1. Slide credit: Antonio Torralba

Toy example (successive boosting rounds). After each round we update the weights: w_t ← w_t exp{-y_t H_t}, where each data point has class label y_t ∈ {+1, -1}. Slide credit: Antonio Torralba

Toy example. The strong (non-linear) classifier is built as the combination of all the weak (linear) classifiers f_1, f_2, f_3, f_4. Slide credit: Antonio Torralba
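
A compact Python sketch of discrete AdaBoost matching the toy example, with axis-aligned decision stumps standing in for the family of "lines"; the stump search and the small constants are illustrative choices, not from the slides.

import numpy as np

def adaboost(X, y, n_rounds=10):
    """Discrete AdaBoost with decision stumps. X: (N, D) array, y: labels in {-1, +1}."""
    N = len(X)
    w = np.ones(N) / N                           # uniform initial weights
    stumps = []                                  # (feature, threshold, polarity, alpha)
    for _ in range(n_rounds):
        best = None
        # Choose the weak classifier (stump) with the lowest weighted error
        for f in range(X.shape[1]):
            for thr in np.unique(X[:, f]):
                for pol in (1, -1):
                    pred = pol * np.sign(X[:, f] - thr + 1e-12)
                    err = np.sum(w[pred != y])
                    if best is None or err < best[0]:
                        best = (err, f, thr, pol, pred)
        err, f, thr, pol, pred = best
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))   # weight of this weak classifier
        w = w * np.exp(-alpha * y * pred)        # reweight: w_t <- w_t * exp(-alpha y_t h_t(x_t))
        w /= w.sum()
        stumps.append((f, thr, pol, alpha))
    return stumps

def adaboost_predict(stumps, X):
    # The strong classifier is the sign of the weighted sum of the weak classifiers
    H = sum(alpha * pol * np.sign(X[:, f] - thr + 1e-12) for f, thr, pol, alpha in stumps)
    return np.sign(H)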

AdaBoost. Slide credit: Antonio Torralba

Semi-Supervised Learning

Supervised learning has many successes: recognizing speech, steering a car, classifying documents, classifying proteins, recognizing faces and objects in images, … Slide credit: Avrim Blum

However, for many problems (speech, images, medical outcomes, customer modeling, protein sequences, web pages), labeled data can be rare or expensive – you need to pay someone to label it, or it requires special testing – while unlabeled data is much cheaper. Can we make use of cheap unlabeled data? [Example from Jerry Zhu] Slide credit: Avrim Blum

Semi-Supervised Learning: can we use unlabeled data to augment a small labeled sample to improve learning? But unlabeled data is missing the most important info (the labels)! Still, it may have useful regularities that we can exploit. Slide credit: Avrim Blum

Method 1: EM

How to use unlabeled data: one way is the EM (Expectation Maximization) algorithm. EM is a popular iterative algorithm for maximum likelihood estimation in problems with missing data. It consists of two steps: the Expectation step, i.e., filling in the missing data, and the Maximization step, which calculates a new maximum a posteriori estimate for the parameters.

Example Algorithm:
1. Train a classifier with only the labeled documents.
2. Use it to probabilistically classify the unlabeled documents.
3. Use ALL the documents to train a new classifier.
4. Iterate steps 2 and 3 to convergence.
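
A hedged Python sketch of this self-training loop; the slides do not name a base classifier, so Naive Bayes on word-count features is assumed here, and unlabeled documents are weighted by the classifier's confidence (a hard-EM-style approximation).

import numpy as np
from sklearn.naive_bayes import MultinomialNB

def self_train(X_lab, y_lab, X_unlab, n_iters=10):
    clf = MultinomialNB().fit(X_lab, y_lab)               # 1. train on labeled docs only
    for _ in range(n_iters):                              # 4. iterate steps 2 and 3
        probs = clf.predict_proba(X_unlab)                # 2. probabilistically classify unlabeled docs
        pseudo = clf.classes_[probs.argmax(axis=1)]
        X_all = np.vstack([X_lab, X_unlab])               # 3. retrain on ALL documents
        y_all = np.concatenate([y_lab, pseudo])
        weights = np.concatenate([np.ones(len(y_lab)), probs.max(axis=1)])
        clf = MultinomialNB().fit(X_all, y_all, sample_weight=weights)
    return clf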

Method 2: Co-Training

Co-training [Blum & Mitchell ’98]. Many problems have two different sources of info (“features/views”) you can use to determine the label. E.g., classifying faculty webpages: you can use words on the page or words on links pointing to the page (example: a link “My Advisor” pointing to the page “Prof. Avrim Blum”). So x = (x_1, x_2), where x_1 is the link info and x_2 is the text info. Slide credit: Avrim Blum

Co-training Idea: use a small labeled sample to learn initial rules. E.g., “my advisor” pointing to a page is a good indicator that it is a faculty home page; “I am teaching” on a page is a good indicator that it is a faculty home page. Slide credit: Avrim Blum

Co-training Idea: use a small labeled sample to learn initial rules (e.g., “my advisor” pointing to a page, or “I am teaching” on a page, is a good indicator that it is a faculty home page). Then look for unlabeled examples ⟨x_1, x_2⟩ where one view is confident and the other is not, and have the confident view label the example for the other: train 2 classifiers, one on each type of info, and use each to help train the other. Slide credit: Avrim Blum
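
A hedged Python sketch of the co-training loop: one classifier per view, each adding its most confident unlabeled examples to the shared labeled pool for the other. The base classifier, number of rounds, and examples-per-round are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

def co_train(X1_lab, X2_lab, y_lab, X1_unlab, X2_unlab, rounds=10, per_round=5):
    """X1_*: link-view features, X2_*: text-view features (lists of numeric vectors)."""
    L1, L2, y = list(X1_lab), list(X2_lab), list(y_lab)
    U = list(range(len(X1_unlab)))                        # indices of still-unlabeled examples
    h1 = LogisticRegression(max_iter=1000).fit(L1, y)     # initial rule from link info
    h2 = LogisticRegression(max_iter=1000).fit(L2, y)     # initial rule from text info
    for _ in range(rounds):
        for h, X_view in ((h1, X1_unlab), (h2, X2_unlab)):
            if not U:
                break
            probs = h.predict_proba([X_view[i] for i in U])
            picks = np.argsort(-probs.max(axis=1))[:per_round]   # most confident examples
            for p in sorted(picks, reverse=True):
                i = U.pop(p)                              # this view labels the example ...
                L1.append(X1_unlab[i]); L2.append(X2_unlab[i])
                y.append(h.classes_[probs[p].argmax()])   # ... for the other view to train on
        h1 = LogisticRegression(max_iter=1000).fit(L1, y) # retrain both on the grown pool
        h2 = LogisticRegression(max_iter=1000).fit(L2, y)
    return h1, h2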

Co-training vs. EM: Co-training splits the features; EM does not. Co-training incrementally uses the unlabeled data; EM probabilistically labels all the data at each round and iteratively uses the unlabeled data.

Generative vs Discriminative. Discriminative version – build a classifier to discriminate between monkeys and non-monkeys, i.e., model P(monkey|image).

Generative vs Discriminative. Generative version – build a model of the joint distribution P(image, monkey).

We can use Bayes rule to compute p(monkey|image) if we know p(image, monkey).

Generative vs Discriminative: we can use Bayes rule to compute p(monkey|image) if we know p(image, monkey).
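
Written out (this is just the standard Bayes rule, not a formula taken from the slides):

p(\text{monkey} \mid \text{image})
  = \frac{p(\text{image}, \text{monkey})}{p(\text{image})}
  = \frac{p(\text{image} \mid \text{monkey})\, p(\text{monkey})}
         {\sum_{c \in \{\text{monkey},\ \text{not monkey}\}} p(\text{image} \mid c)\, p(c)}

so a generative model of the joint distribution is enough to recover the discriminative posterior p(monkey|image).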