Binary Classification Problem Learn a Classifier from the Training Set


1 Binary Classification Problem Learn a Classifier from the Training Set
Given a training dataset, the main goal is to predict the unseen class label for new data, i.e., to find a function by learning from the data. The simplest such function is linear:

2 Binary Classification Problem Linearly Separable Case
(Figure: two classes of data points, labeled Benign and Malignant.)

3 Support Vector Machines Maximizing the Margin between Bounding Planes

4 Why Do We Maximize the Margin? (Based on Statistical Learning Theory)
Structural Risk Minimization (SRM): the expected risk is less than or equal to the empirical risk (training error) plus a VC bound.

5 Summary of the Notations
Let  be a training dataset, represented equivalently by matrices, where

6 Support Vector Classification
(Linearly Separable Case, Primal) The hyperplane is determined by solving the minimization problem: It realizes the maximal margin hyperplane with geometric margin

7 Support Vector Classification
(Linearly Separable Case, Dual Form) The dual problem of the previous mathematical program, subject to its dual constraints. Applying the KKT optimality conditions relates the primal solution to the dual variables; the bias term is not given directly, so don't forget to recover it.

8 Dual Representation of SVM
(The Key to Kernel Methods) The hypothesis is determined by:

9 Soft Margin SVM (Nonseparable Case)
If the data are not linearly separable, the primal problem is infeasible and the dual problem is unbounded above. Introduce a slack variable for each training point; the resulting inequality system is always feasible.
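As a concrete illustration of the slack variables, here is a minimal NumPy sketch (not from the original slides) that computes the slack for each training point, assuming a linear classifier with weight vector w, threshold gamma, and labels in {-1, +1}:

```python
import numpy as np

def slack_variables(X, y, w, gamma):
    """Slack xi_i = max(0, 1 - y_i * (w . x_i - gamma)).

    xi_i = 0  -> the point satisfies its margin constraint;
    xi_i > 0  -> the point violates it, but the constraint system
                 stays feasible because xi_i absorbs the violation."""
    margins = y * (X @ w - gamma)
    return np.maximum(0.0, 1.0 - margins)

# Toy example: the second point lands inside the margin, so it gets a positive slack.
X = np.array([[2.0, 1.0], [0.1, 0.1]])
y = np.array([1.0, -1.0])
w = np.array([1.0, 1.0])
print(slack_variables(X, y, w, gamma=0.5))   # -> [0.  0.7]
```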

10

11 Robust Linear Programming
A Preliminary Approach to SVM, s.t. (LP), where  is a nonnegative slack (error) vector. The 1-norm measure of the error vector is called the training error. For the linearly separable case, the error vector is zero at the solution of (LP).

12 Support Vector Machine Formulations
(Two Different Measures of Training Error) 2-Norm Soft Margin: 1-Norm Soft Margin (Conventional SVM):

13 Tuning Procedure: How to Determine C?
(Figure: correctness as the parameter C varies; too large a C leads to overfitting.) The final value of the parameter C is the one with the maximum testing-set correctness.
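One common way to carry out this tuning procedure is a grid search over C with cross-validation; the sketch below is this write-up's assumption (it uses scikit-learn and synthetic data, neither of which is prescribed by the lecture) and keeps the C with the highest held-out correctness:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Candidate values of C on a log scale.
C_grid = [0.01, 0.1, 1, 10, 100, 1000]

# Synthetic data just to make the sketch runnable.
rng = np.random.RandomState(0)
X = rng.randn(200, 5)
y = np.sign(X[:, 0] + 0.5 * rng.randn(200))

# Keep the C with the maximum cross-validated (testing-set) correctness.
scores = {C: cross_val_score(SVC(kernel="linear", C=C), X, y, cv=5).mean() for C in C_grid}
best_C = max(scores, key=scores.get)
print(best_C, round(scores[best_C], 3))
```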

14 Lagrangian Dual Problem

15 1-Norm Soft Margin SVM Dual Formulation
The Lagrangian for the 1-norm soft margin is formed with nonnegative multipliers. Setting its partial derivatives with respect to the primal variables equal to zero gives:

16 Substitute these relations into the Lagrangian to eliminate the primal variables and obtain the dual objective.

17 Dual Maximization Problem
for 1-Norm Soft Margin Dual: The corresponding KKT complementarity:

18 Slack Variables for 1-Norm Soft Margin SVM
Non-zero slack can only occur when the corresponding multiplier is at its upper bound C. The contribution of an outlier to the decision rule is therefore at most C. The trade-off between accuracy and regularization is controlled directly by C. The points whose multipliers are strictly between 0 and C lie on the bounding planes; this will help us find the bias term.

19 Two-spiral Dataset (94 White Dots & 94 Red Dots)

20 Learning in Feature Space
(Could Simplify the Classification Task) Learning in a high-dimensional space could degrade generalization performance; this phenomenon is called the curse of dimensionality. By using a kernel function that represents the inner product of training examples in feature space, we never need to know the nonlinear map explicitly, nor even the dimensionality of the feature space. There is no free lunch: we have to deal with a huge, dense kernel matrix. A reduced kernel can avoid this difficulty.

21

22 Linear Machine in Feature Space
Let  be a nonlinear map from the input space to some feature space. The classifier will be of the form (primal): . Rewriting it in dual form:

23 Kernel: Represent Inner Product
in Feature Space. Definition: a kernel is a function whose value equals the inner product, in feature space, of its two mapped arguments. The classifier will become:

24 A Simple Example of Kernel
Polynomial Kernel of Degree 2, and a nonlinear map that realizes it: the inner product of the mapped points equals the kernel value. There are many other nonlinear maps that satisfy the same relation.
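The slide's formulas are images, but one standard map that realizes the degree-2 polynomial kernel on R^2 (an illustrative choice, not necessarily the one on the slide) is Φ(x) = (x1², x2², √2·x1·x2), for which the feature-space inner product equals (x·z)²:

```python
import numpy as np

def phi(x):
    # One nonlinear map from R^2 into R^3 realizing the degree-2 polynomial kernel.
    return np.array([x[0] ** 2, x[1] ** 2, np.sqrt(2) * x[0] * x[1]])

def poly2_kernel(x, z):
    # The same quantity computed directly in input space, without the map.
    return (x @ z) ** 2

x = np.array([1.0, 2.0])
z = np.array([3.0, -1.0])
print(phi(x) @ phi(z))     # inner product in feature space -> 1.0
print(poly2_kernel(x, z))  # kernel value in input space    -> 1.0
```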

25 Power of the Kernel Technique
Consider a nonlinear map whose features are all the distinct monomials of degree d; the dimension of the feature space then grows combinatorially with d. Is it necessary to compute this map explicitly? No: we only need to know the kernel value, and that can be computed directly in the input space.

26 Kernel Technique Based on Mercer’s Condition (1909)
The value of the kernel function represents the inner product of two training points in feature space. Kernel functions merge two steps: 1. map the input data from input space to feature space (which might be infinite-dimensional); 2. compute the inner product in the feature space.

27 More Examples of Kernel
Polynomial Kernel (with an integer degree; the Linear Kernel is the degree-1 special case). Gaussian (Radial Basis) Kernel. The ij-entry of the kernel matrix represents the "similarity" of data points i and j.
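As a runnable illustration of these kernels (parameter names such as degree, coef0, and gamma are this write-up's choices, not the slide's notation), the sketch below builds kernel matrices whose ij-entries measure the similarity of data points i and j:

```python
import numpy as np

def polynomial_kernel(A, B, degree=3, coef0=1.0):
    """K_ij = (A_i . B_j + coef0) ** degree; degree=1, coef0=0 gives the linear kernel."""
    return (A @ B.T + coef0) ** degree

def gaussian_kernel(A, B, gamma=0.5):
    """K_ij = exp(-gamma * ||A_i - B_j||^2)  (radial basis function)."""
    sq_dists = (np.sum(A ** 2, axis=1)[:, None]
                + np.sum(B ** 2, axis=1)[None, :]
                - 2.0 * A @ B.T)
    return np.exp(-gamma * sq_dists)

A = np.random.randn(5, 3)
K = gaussian_kernel(A, A)
print(K.shape, np.allclose(np.diag(K), 1.0))  # every point is maximally similar to itself
```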

28 Nonlinear 1-Norm Soft Margin SVM
In Dual Form Linear SVM: Nonlinear SVM:

29 1-norm Support Vector Machines Good for Feature Selection
Solve the quadratic program for some parameter value: min s.t. , , where the label denotes membership in one class or the other. This is equivalent to solving a linear program, as follows:

30 SVM as an Unconstrained Minimization Problem
(QP) At the solution of (QP), the slack variables equal the plus function of the constraint violations. Hence (QP) is equivalent to the nonsmooth SVM: min of an unconstrained objective. This changes (QP) into an unconstrained mathematical program and reduces (n+1+m) variables to (n+1) variables.

31 Smooth the Plus Function: Integrate
Integrating the step function gives the plus function; integrating the sigmoid function gives the smooth p-function, which approximates the plus function.
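A common form of this p-function in the SSVM literature (the slide's exact expression is an image, so this reconstruction is an assumption) is p(x, α) = x + (1/α)·log(1 + exp(-αx)), which approaches the plus function as α grows:

```python
import numpy as np

def plus(x):
    # Nonsmooth plus function (x)_+ = max(x, 0), the integral of the step function.
    return np.maximum(x, 0.0)

def p_function(x, alpha=5.0):
    """Smooth p-function p(x, alpha) = x + (1/alpha) * log(1 + exp(-alpha*x)),
    the integral of the sigmoid.  Written via logaddexp for numerical stability."""
    return np.logaddexp(0.0, alpha * x) / alpha

x = np.linspace(-2.0, 2.0, 9)
print(np.max(np.abs(p_function(x, alpha=50.0) - plus(x))))  # gap shrinks as alpha grows
```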

32 SSVM: Smooth Support Vector Machine
Replacing the plus function in the nonsmooth SVM by its smooth approximation gives our SSVM: min of a smooth unconstrained objective. The solution of SSVM converges to the solution of the nonsmooth SVM as the smoothing parameter goes to infinity; in practice a moderate fixed value is typically used.
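Putting the pieces together, here is a sketch of the resulting smooth unconstrained objective; the particular weighting (C/2 on the smoothed errors, 1/2 on the regularizer including the bias) is one common SSVM form and is assumed here, since the slide's formula is an image:

```python
import numpy as np

def ssvm_objective(w, gamma, A, y, C=1.0, alpha=5.0):
    """One common SSVM objective (assumed form):
        (C/2) * || p(e - y .* (A w - gamma), alpha) ||^2 + (1/2) * (w'w + gamma^2)
    where p is the smooth plus function and y holds +-1 labels."""
    e = np.ones(A.shape[0])
    residual = e - y * (A @ w - gamma)
    smoothed = np.logaddexp(0.0, alpha * residual) / alpha   # smooth plus function
    return 0.5 * C * smoothed @ smoothed + 0.5 * (w @ w + gamma ** 2)

# Tiny usage example on synthetic data.
A = np.array([[2.0, 1.0], [-1.0, -2.0]])
y = np.array([1.0, -1.0])
print(ssvm_objective(np.array([0.5, 0.5]), 0.0, A, y))
```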

33 Newton-Armijo Method: Quadratic Approximation of SSVM
The sequence generated by solving a quadratic approximation of SSVM converges to the unique solution of SSVM at a quadratic rate. It typically converges in 6 to 8 iterations. At each iteration we solve a linear system of n+1 equations in n+1 variables, so the complexity depends on the dimension of the input space. A stepsize might need to be selected.

34 Newton-Armijo Algorithm
Start with any initial point. At each iterate, stop if the gradient vanishes; otherwise compute: (i) the Newton direction (the iterates converge globally and quadratically to the unique solution in a finite number of steps); (ii) the Armijo stepsize, chosen so that Armijo's rule is satisfied.
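The slide's precise update formulas are images; below is a generic Newton direction plus Armijo backtracking loop (a sketch assuming a twice-differentiable, strictly convex objective, with illustrative parameter names), which matches the structure described above:

```python
import numpy as np

def newton_armijo(f, grad, hess, z0, tol=1e-8, max_iter=100, delta=1e-4, beta=0.5):
    """Newton direction + Armijo backtracking line search (a generic sketch).

    f, grad, hess evaluate the smooth objective, its gradient and Hessian."""
    z = np.asarray(z0, dtype=float)
    for _ in range(max_iter):
        g = grad(z)
        if np.linalg.norm(g) < tol:            # stop if the gradient is (numerically) zero
            break
        d = np.linalg.solve(hess(z), -g)       # (i) Newton direction
        step = 1.0
        # (ii) Armijo stepsize: shrink until the sufficient-decrease condition holds
        while f(z + step * d) > f(z) + delta * step * (g @ d):
            step *= beta
        z = z + step * d
    return z

# Usage on a simple strictly convex quadratic with minimizer at [1, -2].
Q = np.array([[3.0, 0.5], [0.5, 2.0]])
b = np.array([1.0, -2.0])
f = lambda z: 0.5 * (z - b) @ Q @ (z - b)
grad = lambda z: Q @ (z - b)
hess = lambda z: Q
print(newton_armijo(f, grad, hess, np.zeros(2)))
```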

35 Nonlinear Smooth SVM
Nonlinear classifier: replace the linear term by a nonlinear kernel and minimize the resulting smooth objective. Use the Newton-Armijo algorithm to solve the problem; each iteration solves m+1 linear equations in m+1 variables. The nonlinear classifier depends on the data points with nonzero coefficients:

36 Conclusion An overview of SVMs for classification
SSVM: a new formulation of the support vector machine as a smooth unconstrained minimization problem. It can be solved by a fast Newton-Armijo algorithm; no optimization (LP, QP) package is needed. There are many important issues this lecture did not address, such as: How do we solve the conventional SVM? How do we select the parameters? How do we deal with massive datasets?

37 Perceptron
Linear threshold unit (LTU): inputs x1, ..., xn together with a fixed input x0 = 1 and weights w0, w1, ..., wn; the unit computes the weighted sum Σi=0..n wi xi and passes it through g: o(x) = 1 if Σi=0..n wi xi > 0, and -1 otherwise.

38 Possibilities for function g
Sign function: sign(x) = +1 if x > 0, -1 if x ≤ 0. Step function: step(x) = 1 if x > threshold, 0 if x ≤ threshold (in the picture above, threshold = 0). Sigmoid (logistic) function: sigmoid(x) = 1/(1+e^(-x)). Adding an extra input with activation x0 = 1 and weight wi,0 = -T (called the bias weight) is equivalent to having a threshold at T. This way we can always assume a zero threshold.
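For concreteness, the three candidate g functions written out as a small sketch (the threshold and the sign convention at zero follow the slide's definitions):

```python
import numpy as np

def sign_fn(x):
    # +1 if x > 0, otherwise -1
    return np.where(x > 0, 1.0, -1.0)

def step_fn(x, threshold=0.0):
    # 1 if x > threshold, otherwise 0 (threshold = 0 in the picture above)
    return np.where(x > threshold, 1.0, 0.0)

def sigmoid(x):
    # 1 / (1 + e^{-x})
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-1.5, 0.0, 2.0])
print(sign_fn(x), step_fn(x), sigmoid(x))
```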

39 Using a Bias Weight to Standardize the Threshold
(Figure: a unit with inputs x1 and x2, weights w1 and w2, and a constant input 1 with weight -T.) The condition w1x1 + w2x2 < T is equivalent to w1x1 + w2x2 - T < 0.

40 Perceptron Learning Rule
(Figure: a worked example of the perceptron learning rule. Each panel shows a training pair (x, t), the perceptron output o = sgn(·), the updated weight vector w, and the resulting decision boundary in the (x1, x2) plane.)

41 The Perceptron Algorithm Rosenblatt, 1956
Given a linearly separable training set, a learning rate, and the initial weight vector and bias; let R denote the largest norm among the training points.

42 The Perceptron Algorithm (Primal Form)
Repeat the for loop over the training points, updating on each mistake, until no mistakes are made within a full pass; then return the final weight vector and bias. What is the total number of updates?
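A runnable sketch of the primal perceptron follows. The mistake-driven updates w ← w + η·y·x and b ← b + η·y·R² follow one common textbook presentation (Cristianini & Shawe-Taylor); the data and parameter names are illustrative, not the slide's:

```python
import numpy as np

def perceptron_primal(X, y, eta=1.0, max_epochs=100):
    """Primal perceptron (Rosenblatt).  Assumes labels y in {-1, +1}.
    Returns the weight vector w, bias b, and the number of updates k."""
    w = np.zeros(X.shape[1])
    b = 0.0
    R = np.max(np.linalg.norm(X, axis=1))   # radius of the data, used to scale the bias update
    k = 0                                    # total number of mistakes (updates)
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi + b) <= 0:       # mistake: point on the wrong side (or on the boundary)
                w += eta * yi * xi
                b += eta * yi * R ** 2
                k += 1
                mistakes += 1
        if mistakes == 0:                    # stop when a full pass makes no mistakes
            break
    return w, b, k

# Linearly separable toy data.
X = np.array([[2.0, 2.0], [1.0, 3.0], [-1.0, -1.0], [-2.0, -1.5]])
y = np.array([1, 1, -1, -1])
w, b, k = perceptron_primal(X, y)
print(w, b, k, np.all(np.sign(X @ w + b) == y))
```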

43

44 The Perceptron Algorithm
( STOP in Finite Steps ) Theorem (Novikoff): Let  be a non-trivial training set, and suppose that there exists a vector separating it with some positive margin. Then the number of mistakes made by the on-line perceptron algorithm on this training set is at most a finite bound depending on the data radius and that margin.

45 Proof of Finite Termination
Proof: The algorithm starts with an augmented weight vector and updates it at each mistake. Let  denote the augmented weight vector prior to the t-th mistake. The t-th update is performed when a training point is incorrectly classified by that weight vector.

46 Update Rule of Perceptron
Similarly,

47 Update Rule of Perceptron

48 The Perceptron Algorithm (Dual Form)
Given a linearly separable training set and initial values of the dual variables and bias, repeat the for loop until no mistakes are made within it; then return the dual variables and bias.
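A matching sketch of the dual form: the counters alpha[i] record how often each point caused an update, and the data enter only through the Gram matrix (the update details are assumed from the same textbook presentation as the primal sketch above):

```python
import numpy as np

def perceptron_dual(X, y, max_epochs=100):
    """Dual-form perceptron.  The data enter only through the Gram matrix G,
    and alpha[i] counts how many times point i was misclassified."""
    m = X.shape[0]
    G = X @ X.T                               # Gram matrix G_ij = <x_i, x_j>
    R2 = np.max(np.einsum('ij,ij->i', X, X))  # R^2 = max ||x_i||^2
    alpha = np.zeros(m)
    b = 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for i in range(m):
            if y[i] * (np.sum(alpha * y * G[:, i]) + b) <= 0:
                alpha[i] += 1.0               # record one more mistake on point i
                b += y[i] * R2
                mistakes += 1
            # alpha[i] == 0 at the end means point i never affected the classifier
        if mistakes == 0:
            break
    return alpha, b

X = np.array([[2.0, 2.0], [1.0, 3.0], [-1.0, -1.0], [-2.0, -1.5]])
y = np.array([1, 1, -1, -1])
alpha, b = perceptron_dual(X, y)
print(alpha, b)
```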

49 What Do We Get from the Dual Form of the Perceptron Algorithm?
The number of updates equals the sum of the dual counters. A positive counter implies that the corresponding training point was misclassified at least once during training; a zero counter implies that removing that training point would not affect the final result. The training data only appear in the algorithm through the entries of the Gram matrix, the matrix of pairwise inner products of the training points.

