An Introduction to Support Vector Machines


An Introduction to Support Vector Machines
Jinwei Gu, 2008/10/16

Review: What We've Learned So Far
- Bayesian Decision Theory
- Maximum-Likelihood & Bayesian Parameter Estimation
- Nonparametric Density Estimation: Parzen Window, kn-Nearest-Neighbor
- K-Nearest-Neighbor Classifier
- Decision Tree Classifier

Today: Support Vector Machine (SVM)
A classifier derived from statistical learning theory by Vapnik et al. in 1992. SVM became famous when, using images as input, it gave accuracy comparable to neural networks with hand-designed features in a handwriting recognition task. Currently, SVM is widely used in object detection and recognition, content-based image retrieval, text recognition, biometrics, speech recognition, etc. It is also used for regression (not covered today).
Reading: Chapter 5.1, 5.2, 5.3, 5.11 (5.4*) in the textbook.

Outline
- Linear Discriminant Function
- Large Margin Linear Classifier
- Nonlinear SVM: The Kernel Trick
- Demo of SVM

Discriminant Function
Recall Chapter 2.4: the classifier is said to assign a feature vector x to class wi if gi(x) > gj(x) for all j ≠ i. For the two-category case, we use a single discriminant g(x) = g1(x) − g2(x) and assign x to w1 if g(x) > 0, otherwise to w2. An example we have seen before: the Minimum-Error-Rate Classifier, where gi(x) = P(wi | x).

Discriminant Function
The discriminant can be an arbitrary function of x, for example:
- Nearest Neighbor
- Decision Tree
- Linear Functions: g(x) = wT x + b
- Nonlinear Functions: g(x) = wT φ(x) + b

Linear Discriminant Function
g(x) is a linear function of x:
g(x) = wT x + b
The decision boundary wT x + b = 0 is a hyperplane in the feature space, with (unit-length) normal vector n = w / ||w||. Points with wT x + b > 0 fall on one side of the hyperplane, and points with wT x + b < 0 on the other.
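As a minimal sketch in NumPy (not part of the original slides; the weight vector and bias below are illustrative values, not learned ones):

```python
import numpy as np

# Illustrative parameters (hypothetical values, for demonstration only)
w = np.array([2.0, -1.0])  # weight vector, normal to the hyperplane
b = 0.5                    # bias term

def g(x):
    """Linear discriminant function g(x) = w^T x + b."""
    return w @ x + b

def classify(x):
    """Assign +1 if g(x) > 0, otherwise -1."""
    return 1 if g(x) > 0 else -1

x = np.array([1.0, 1.0])
print(g(x), classify(x))      # signed score and predicted label
print(w / np.linalg.norm(w))  # the (unit-length) normal vector n
```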

Linear Discriminant Function
Given labeled points from two classes (denoted +1 and -1), how would you classify them using a linear discriminant function in order to minimize the error rate? There are infinitely many answers. Which one is the best?

Large Margin Linear Classifier
The linear discriminant function (classifier) with the maximum margin is the best. The margin is defined as the width by which the boundary could be increased before hitting a data point; it acts as a "safe zone" around the decision boundary. Why is it the best? Because it is robust to outliers and thus has strong generalization ability.

Large Margin Linear Classifier
Given a set of data points {(xi, yi)}, i = 1, 2, …, n, where yi ∈ {+1, -1}:
wT xi + b > 0 for yi = +1
wT xi + b < 0 for yi = -1
With a scale transformation on both w and b, the above is equivalent to:
wT xi + b ≥ 1 for yi = +1
wT xi + b ≤ -1 for yi = -1

Large Margin Linear Classifier
We know that the positive and negative support vectors x+ and x- lie on the margin boundaries:
wT x+ + b = 1
wT x- + b = -1
The margin width is:
M = (x+ − x-) · n = (x+ − x-) · w / ||w|| = 2 / ||w||
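Written out as a worked derivation (standard, and consistent with the slide's x+, x- and n notation):

```latex
% The two support vectors satisfy the margin-boundary equations:
\[
\mathbf{w}^{T}\mathbf{x}^{+} + b = 1, \qquad \mathbf{w}^{T}\mathbf{x}^{-} + b = -1
\]
% Subtracting gives w^T (x^+ - x^-) = 2; projecting x^+ - x^- onto the
% unit normal n = w / ||w|| yields the margin width:
\[
M = (\mathbf{x}^{+} - \mathbf{x}^{-}) \cdot \mathbf{n}
  = \frac{\mathbf{w}^{T}(\mathbf{x}^{+} - \mathbf{x}^{-})}{\lVert \mathbf{w} \rVert}
  = \frac{2}{\lVert \mathbf{w} \rVert}
\]
```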

Large Margin Linear Classifier
Formulation: maximize the margin 2/||w||, which is equivalent to
minimize (1/2) ||w||^2
such that
yi (wT xi + b) ≥ 1, for all i

Solving the Optimization Problem
The problem
minimize (1/2) ||w||^2 s.t. yi (wT xi + b) ≥ 1, for all i
is a quadratic programming problem with linear constraints. Introducing a Lagrange multiplier αi ≥ 0 for each constraint gives the Lagrangian function:
L(w, b, α) = (1/2) ||w||^2 − Σi αi [yi (wT xi + b) − 1]

Solving the Optimization Problem
Setting the derivatives of the Lagrangian with respect to w and b to zero:
∂L/∂w = 0  ⇒  w = Σi αi yi xi
∂L/∂b = 0  ⇒  Σi αi yi = 0

Solving the Optimization Problem
Substituting back into the Lagrangian gives the Lagrangian dual problem:
maximize Σi αi − (1/2) Σi Σj αi αj yi yj xiT xj
s.t. αi ≥ 0 for all i, and Σi αi yi = 0

Solving the Optimization Problem
From the KKT condition, we know:
αi [yi (wT xi + b) − 1] = 0
Thus, only the support vectors, i.e. the points lying exactly on the margin boundaries wT x + b = ±1, have αi > 0; all other points have αi = 0. The solution has the form:
w = Σi αi yi xi = Σ_{i ∈ SV} αi yi xi
and b can be recovered from yk (wT xk + b) = 1 for any support vector xk.

Solving the Optimization Problem
The linear discriminant function is:
g(x) = wT x + b = Σ_{i ∈ SV} αi yi xiT x + b
Notice that it relies on a dot product between the test point x and the support vectors xi. Also keep in mind that solving the optimization problem involved computing the dot products xiT xj between all pairs of training points.
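As a small sketch (not from the original slides): scikit-learn's SVC solves this same quadratic program internally and exposes the quantities above; the toy dataset is made up:

```python
import numpy as np
from sklearn.svm import SVC

# Made-up, linearly separable toy data
X = np.array([[1, 1], [2, 2], [2, 0], [0, 0], [1, 0], [0, 1]], dtype=float)
y = np.array([1, 1, 1, -1, -1, -1])

clf = SVC(kernel="linear", C=1e6)  # very large C approximates the hard margin
clf.fit(X, y)

print(clf.support_vectors_)        # the xi with alpha_i > 0
print(clf.dual_coef_)              # alpha_i * yi for each support vector
print(clf.coef_, clf.intercept_)   # w = sum_i alpha_i yi xi, and b
```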

Large Margin Linear Classifier
What if the data is not linearly separable (noisy data, outliers, etc.)? Slack variables ξi ≥ 0 can be added to allow misclassification of difficult or noisy data points, relaxing the constraints to yi (wT xi + b) ≥ 1 − ξi.

Large Margin Linear Classifier
Formulation (soft margin):
minimize (1/2) ||w||^2 + C Σi ξi
such that
yi (wT xi + b) ≥ 1 − ξi and ξi ≥ 0, for all i
Parameter C can be viewed as a way to control over-fitting: a large C penalizes margin violations heavily (approaching the hard margin), while a small C tolerates more violations in exchange for a wider margin.
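A small sketch (not from the original slides) of how C trades margin width against training errors, on a made-up overlapping dataset:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)),   # class -1 cloud
               rng.normal(1.5, 1.0, (50, 2))])  # class +1 cloud, overlapping
y = np.array([-1] * 50 + [1] * 50)

for C in (0.01, 1.0, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    # Small C: wide margin, many support vectors, more slack tolerated.
    # Large C: narrow margin, fewer support vectors, tighter fit to the data.
    print(f"C={C:<6} #SV={len(clf.support_vectors_)} "
          f"train accuracy={clf.score(X, y):.2f}")
```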

Large Margin Linear Classifier
Formulation (Lagrangian dual problem):
maximize Σi αi − (1/2) Σi Σj αi αj yi yj xiT xj
such that
0 ≤ αi ≤ C for all i, and Σi αi yi = 0
The only change from the separable case is the upper bound C on the multipliers αi.

Non-linear SVMs
Datasets that are linearly separable (with some noise) work out great. But what are we going to do if the dataset is just too hard? How about mapping the data to a higher-dimensional space, e.g. x → (x, x^2)?
(This slide is courtesy of www.iro.umontreal.ca/~pift6080/documents/papers/svm_tutorial.ppt)

Non-linear SVMs: Feature Space
General idea: the original input space can be mapped to some higher-dimensional feature space where the training set is separable:
Φ: x → φ(x)
(This slide is courtesy of www.iro.umontreal.ca/~pift6080/documents/papers/svm_tutorial.ppt)

Nonlinear SVMs: The Kernel Trick
With this mapping, our discriminant function is now:
g(x) = Σ_{i ∈ SV} αi yi φ(xi)T φ(x) + b
There is no need to know this mapping explicitly, because we only use the dot product of feature vectors, in both training and testing. A kernel function is defined as a function that corresponds to a dot product of two feature vectors in some expanded feature space:
K(xi, xj) = φ(xi)T φ(xj)

Nonlinear SVMs: The Kernel Trick
An example: for 2-dimensional vectors x = [x1, x2], let K(xi, xj) = (1 + xiT xj)^2.
We need to show that K(xi, xj) = φ(xi)T φ(xj):
K(xi, xj) = (1 + xiT xj)^2
= 1 + xi1^2 xj1^2 + 2 xi1 xj1 xi2 xj2 + xi2^2 xj2^2 + 2 xi1 xj1 + 2 xi2 xj2
= [1, xi1^2, √2 xi1 xi2, xi2^2, √2 xi1, √2 xi2]T [1, xj1^2, √2 xj1 xj2, xj2^2, √2 xj1, √2 xj2]
= φ(xi)T φ(xj), where φ(x) = [1, x1^2, √2 x1 x2, x2^2, √2 x1, √2 x2]
(This slide is courtesy of www.iro.umontreal.ca/~pift6080/documents/papers/svm_tutorial.ppt)
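A quick numerical check of this identity (a sketch, not from the original slides; the test vectors are arbitrary):

```python
import numpy as np

def phi(x):
    """Explicit feature map for K(x, z) = (1 + x^T z)^2, as on the slide."""
    x1, x2 = x
    return np.array([1.0, x1**2, np.sqrt(2) * x1 * x2, x2**2,
                     np.sqrt(2) * x1, np.sqrt(2) * x2])

x = np.array([1.0, 2.0])
z = np.array([3.0, -1.0])

lhs = (1 + x @ z) ** 2  # kernel evaluated directly in the input space
rhs = phi(x) @ phi(z)   # dot product in the 6-dimensional feature space
print(lhs, rhs)         # both print 4.0, confirming K(x, z) = phi(x)^T phi(z)
```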

Nonlinear SVMs: The Kernel Trick
Examples of commonly used kernel functions:
- Linear kernel: K(xi, xj) = xiT xj
- Polynomial kernel: K(xi, xj) = (1 + xiT xj)^p
- Gaussian (Radial Basis Function, RBF) kernel: K(xi, xj) = exp(−||xi − xj||^2 / (2σ^2))
- Sigmoid kernel: K(xi, xj) = tanh(β0 xiT xj + β1)
In general, functions that satisfy Mercer's condition can be kernel functions.
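These four kernels are easy to write down directly; a sketch in NumPy (not from the original slides; the parameter names p, sigma, beta0, beta1 follow the formulas above):

```python
import numpy as np

def linear_kernel(x, z):
    return x @ z

def polynomial_kernel(x, z, p=2):
    return (1 + x @ z) ** p

def rbf_kernel(x, z, sigma=1.0):
    return np.exp(-np.sum((x - z) ** 2) / (2 * sigma ** 2))

def sigmoid_kernel(x, z, beta0=1.0, beta1=0.0):
    return np.tanh(beta0 * (x @ z) + beta1)

x, z = np.array([1.0, 2.0]), np.array([0.5, -1.0])
for kernel in (linear_kernel, polynomial_kernel, rbf_kernel, sigmoid_kernel):
    print(kernel.__name__, kernel(x, z))
```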

Nonlinear SVM: Optimization
Formulation (Lagrangian dual problem):
maximize Σi αi − (1/2) Σi Σj αi αj yi yj K(xi, xj)
such that
0 ≤ αi ≤ C for all i, and Σi αi yi = 0
The solution of the discriminant function is
g(x) = Σ_{i ∈ SV} αi yi K(xi, x) + b
The optimization technique is the same.

Support Vector Machine: Algorithm
1. Choose a kernel function.
2. Choose a value for C.
3. Solve the quadratic programming problem (many software packages are available).
4. Construct the discriminant function from the support vectors.
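The four steps in code (a sketch, not from the original slides), using scikit-learn on a made-up XOR-style problem that no linear classifier can solve:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 2))
y = np.sign(X[:, 0] * X[:, 1])  # XOR-style labels: not linearly separable

clf = SVC(kernel="rbf",   # step 1: choose a kernel function
          C=1.0)          # step 2: choose a value for C
clf.fit(X, y)             # step 3: solve the QP (handled internally)

# Step 4: the discriminant g(x) = sum_{i in SV} alpha_i yi K(xi, x) + b is
# what decision_function evaluates, using only the support vectors.
print(len(clf.support_vectors_))
print(clf.decision_function(X[:3]), clf.predict(X[:3]))
```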

Some Issues
Choice of kernel:
- a Gaussian or polynomial kernel is the default
- if these prove ineffective, more elaborate kernels are needed
- domain experts can give assistance in formulating appropriate similarity measures
Choice of kernel parameters:
- e.g. σ in the Gaussian kernel; one heuristic sets σ to the distance between the closest points with different classifications
- in the absence of reliable criteria, applications rely on a validation set or cross-validation to set such parameters (see the sketch below)
Optimization criterion (hard margin vs. soft margin):
- typically a lengthy series of experiments in which various parameters are tested
(This slide is courtesy of www.iro.umontreal.ca/~pift6080/documents/papers/svm_tutorial.ppt)
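A sketch of the cross-validation route (not from the original slides; the parameter grid and toy data are illustrative):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(0, 1, (150, 2))
y = np.where(np.linalg.norm(X, axis=1) > 1.0, 1, -1)  # made-up ring problem

param_grid = {
    "C": [0.1, 1, 10, 100],       # soft-margin trade-off
    "gamma": [0.01, 0.1, 1, 10],  # RBF width; gamma = 1 / (2 sigma^2)
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, f"CV accuracy={search.best_score_:.2f}")
```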

Summary: Support Vector Machine
1. Large margin classifier: better generalization ability and less over-fitting.
2. The kernel trick: map data points to a higher-dimensional space to make them linearly separable. Since only the dot product is used, we do not need to represent the mapping explicitly.

Additional Resource http://www.kernel-machines.org/

Demo of LibSVM http://www.csie.ntu.edu.tw/~cjlin/libsvm/
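As a note (not from the original slides): scikit-learn's SVC is implemented on top of LIBSVM, so the demo can also be reproduced in a few lines of Python:

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC  # built on LIBSVM internally

# Made-up two-moons problem; nonlinear, so the RBF kernel is a natural fit
X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```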