
Machine Learning – Lecture 5: Linear Discriminant Functions
04.11.2013, Bastian Leibe


Slide 1: Machine Learning – Lecture 5: Linear Discriminant Functions
04.11.2013, Bastian Leibe, RWTH Aachen
http://www.vision.rwth-aachen.de, leibe@vision.rwth-aachen.de
Many slides adapted from B. Schiele

Slide 2: Announcements
L2P is working. Yay!

Slide 3: Course Outline
Fundamentals (2 weeks): Bayes Decision Theory; Probability Density Estimation.
Discriminative Approaches (5 weeks): Linear Discriminant Functions; Support Vector Machines; Ensemble Methods & Boosting; Randomized Trees, Forests & Ferns.
Generative Models (4 weeks): Bayesian Networks; Markov Random Fields.

Slide 4: Recap – Mixture of Gaussians (MoG)
"Generative model": the mixture density is a weighted sum of mixture components, p(x) = Σ_j p(j) p(x|θ_j), where the prior p(j) = π_j is the "weight" of mixture component j and p(x|θ_j) is the j-th mixture component. (Slide credit: Bernt Schiele)

Slide 5: Recap – Estimating MoGs, Iterative Strategy
Assuming we knew the values of the hidden variable (the assignment of each sample to a mixture component, assumed known), we could simply run ML estimation separately for Gaussian #1 and Gaussian #2. [Figure: 1-D samples with their component labels.] (Slide credit: Bernt Schiele)

Slide 6: Recap – Estimating MoGs, Iterative Strategy
Conversely, assuming we knew the mixture components (assumed known), we could assign each sample with the Bayes decision rule: decide j = 1 if p(j = 1 | x_n) > p(j = 2 | x_n). [Figure: samples assigned to components 1 and 2.] (Slide credit: Bernt Schiele)

Slide 7: Recap – K-Means Clustering
Iterative procedure:
1. Initialization: pick K arbitrary centroids (cluster means).
2. Assign each sample to the closest centroid.
3. Adjust the centroids to be the means of the samples assigned to them.
4. Go to step 2 (until no change).
The algorithm is guaranteed to converge after a finite number of iterations, but only to a local optimum; the final result depends on the initialization. (A minimal sketch follows below.) (Slide credit: Bernt Schiele)
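A minimal NumPy sketch of the four steps above (function and variable names are illustrative, not from the lecture):

```python
import numpy as np

def kmeans(X, K, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # 1. Initialization: pick K arbitrary samples as centroids.
    centroids = X[rng.choice(len(X), size=K, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # 2. Assign each sample to the closest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # 3. Adjust the centroids to be the means of the samples assigned to them.
        new_centroids = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                                  else centroids[k] for k in range(K)])
        # 4. Repeat until the centroids no longer change (local optimum).
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels

# Toy usage: two well-separated 2-D blobs.
X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5.0])
centroids, labels = kmeans(X, K=2)
```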

Slide 8: Recap – EM Algorithm
Expectation-Maximization (EM) algorithm:
E-step: softly assign samples to mixture components.
M-step: re-estimate the parameters (separately for each mixture component) based on the soft assignments, using N_j = Σ_n γ_nj, the "soft number of samples labeled j".
(A compact sketch follows below. Slide adapted from Bernt Schiele)
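The following is a compact EM sketch for a Gaussian mixture, written to mirror the E-step/M-step description above; it is an illustrative implementation, not the lecture's code, and the small ridge added to the covariances anticipates the regularization advice on the next slide.

```python
import numpy as np

def gauss_pdf(X, mu, Sigma):
    # Multivariate normal density evaluated at every row of X.
    D = len(mu)
    diff = X - mu
    inv = np.linalg.inv(Sigma)
    norm = np.sqrt((2.0 * np.pi) ** D * np.linalg.det(Sigma))
    return np.exp(-0.5 * np.einsum('nd,de,ne->n', diff, inv, diff)) / norm

def em_mog(X, K, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    N, D = X.shape
    pi = np.full(K, 1.0 / K)                      # mixture weights p(j)
    mu = X[rng.choice(N, K, replace=False)]       # component means
    Sigma = np.array([np.cov(X.T) + 1e-6 * np.eye(D) for _ in range(K)])
    for _ in range(n_iter):
        # E-step: responsibilities gamma[n, j] = p(j | x_n).
        dens = np.column_stack([pi[j] * gauss_pdf(X, mu[j], Sigma[j]) for j in range(K)])
        gamma = dens / dens.sum(axis=1, keepdims=True)
        # M-step: N_j is the "soft number of samples labeled j".
        Nj = gamma.sum(axis=0)
        pi = Nj / N
        mu = (gamma.T @ X) / Nj[:, None]
        for j in range(K):
            diff = X - mu[j]
            Sigma[j] = (gamma[:, j, None] * diff).T @ diff / Nj[j] + 1e-6 * np.eye(D)
    return pi, mu, Sigma, gamma
```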

Slide 9: EM – Technical Advice
When implementing EM, we need to take care to avoid singularities in the estimation: mixture components may collapse onto single data points. We therefore need to introduce regularization, i.e. enforce a minimum width for the Gaussians, e.g. by enforcing that none of the covariance eigenvalues gets too small: if λ_i < ε, then set λ_i = ε. (Image source: C.M. Bishop, 2006)
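One way to realize this eigenvalue floor is sketched below (eps is a hypothetical threshold value chosen for illustration):

```python
import numpy as np

def regularize_covariance(Sigma, eps=1e-3):
    # Eigen-decompose the symmetric covariance and clamp small eigenvalues:
    # if lambda_i < eps, set lambda_i = eps (enforces a minimum Gaussian width).
    eigvals, eigvecs = np.linalg.eigh(Sigma)
    eigvals = np.maximum(eigvals, eps)
    return (eigvecs * eigvals) @ eigvecs.T        # Sigma = V diag(lambda) V^T

# Example: a nearly collapsed Gaussian is pushed back to the minimum width.
Sigma = np.array([[1.0, 0.0], [0.0, 1e-9]])
print(regularize_covariance(Sigma))
```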

Slide 10: Topics of This Lecture
Linear discriminant functions: definition; extension to multiple classes.
Least-squares classification: derivation; shortcomings.
Generalized linear models: connection to neural networks; generalized linear discriminants & gradient descent.

Slide 11: Discriminant Functions
Bayesian Decision Theory: model the conditional probability densities p(x|C_k) and the priors p(C_k), compute the posteriors p(C_k|x) using Bayes' rule, and minimize the probability of misclassification by maximizing the posterior.
New approach: directly encode the decision boundary, without explicit modeling of the probability densities, and minimize the misclassification probability directly. (Slide credit: Bernt Schiele)

Slide 12: Recap – Discriminant Functions
Formulate classification in terms of comparisons between discriminant functions y_1(x), ..., y_K(x): classify x as class C_k if y_k(x) > y_j(x) for all j ≠ k. Examples from Bayes Decision Theory: y_k(x) = p(C_k|x), or equivalently y_k(x) = p(x|C_k) p(C_k), or y_k(x) = log p(x|C_k) + log p(C_k). (Slide credit: Bernt Schiele)

Slide 13: Discriminant Functions
Example with 2 classes: a single decision function suffices, e.g. y(x) = y_1(x) − y_2(x). From Bayes Decision Theory one can take y(x) = p(C_1|x) − p(C_2|x) or y(x) = log [p(x|C_1)/p(x|C_2)] + log [p(C_1)/p(C_2)], and decide for C_1 if y(x) > 0. (Slide credit: Bernt Schiele)

Slide 14: Learning Discriminant Functions
General classification problem. Goal: take a new input x and assign it to one of K classes C_k. Given: a training set X = {x_1, ..., x_N} with target values T = {t_1, ..., t_N}. Learn a discriminant function y(x) to perform the classification.
2-class problem: binary target values, e.g. t_n ∈ {0, 1}.
K-class problem: 1-of-K coding scheme, e.g. t_n = (0, 1, 0, ..., 0)^T.

Slide 15: Linear Discriminant Functions
2-class problem: y(x) > 0 means decide for class C_1, else decide for class C_2.
In the following, we focus on linear discriminant functions y(x) = w^T x + w_0, with weight vector w and "bias" w_0 (= threshold). If a data set can be perfectly classified by a linear discriminant, then we call it linearly separable. (Slide credit: Bernt Schiele)

Slide 16: Linear Discriminant Functions
The decision boundary y(x) = 0 defines a hyperplane. Its normal vector is the weight vector w, and its offset from the origin is −w_0 / ||w||. (Slide credit: Bernt Schiele)

Slide 17: Linear Discriminant Functions
Notation (D = number of dimensions): y(x) = w^T x + w_0 = Σ_{i=1..D} w_i x_i + w_0. With a constant x_0 = 1 this becomes y(x) = Σ_{i=0..D} w_i x_i = w~^T x~, where x~ = (1, x_1, ..., x_D)^T and w~ = (w_0, w_1, ..., w_D)^T. (Slide credit: Bernt Schiele)
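A tiny sketch of this 2-class linear discriminant in the augmented notation (the weights below are hypothetical, chosen only to make the example concrete):

```python
import numpy as np

def linear_discriminant(x, w_tilde):
    # Augment the input with a constant x_0 = 1 so that the bias is absorbed:
    # y(x) = sum_{i=0..D} w_i x_i = w~^T x~.
    x_tilde = np.concatenate(([1.0], x))
    return w_tilde @ x_tilde

def classify(x, w_tilde):
    # Decide for class C1 if y(x) > 0, else for class C2.
    return 1 if linear_discriminant(x, w_tilde) > 0 else 2

w_tilde = np.array([-1.0, 2.0, 1.0])              # (w_0, w_1, w_2), hypothetical values
print(classify(np.array([1.0, 0.5]), w_tilde))    # y = -1 + 2.0 + 0.5 > 0, so class 1
```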

Slide 18: Extension to Multiple Classes
Two simple strategies: one-vs-all classifiers and one-vs-one classifiers. How many classifiers do we need in each case? What difficulties do you see with these strategies? (Image source: C.M. Bishop, 2006)

Slide 19: Extension to Multiple Classes
Problem: both strategies result in regions for which the pure classification result (y_k > 0) is ambiguous. In the one-vs-all case, it is still possible to classify those inputs based on the continuous classifier outputs, i.e. y_k > y_j for all j ≠ k.
Solution: we can avoid those difficulties by taking K linear functions of the form y_k(x) = w_k^T x + w_k0 and defining the decision boundaries directly by deciding for C_k iff y_k(x) > y_j(x) for all j ≠ k. This corresponds to a 1-of-K coding scheme. (Image source: C.M. Bishop, 2006)

Slide 20: Extension to Multiple Classes
K-class discriminant: a combination of K linear functions y_k(x) = w_k^T x + w_k0. The resulting decision hyperplanes between classes C_k and C_j are given by y_k(x) = y_j(x), i.e. (w_k − w_j)^T x + (w_k0 − w_j0) = 0. It can be shown that the decision regions of such a discriminant are always singly connected and convex. This makes linear discriminant models particularly suitable for problems whose class-conditional densities p(x|C_k) are unimodal. (Image source: C.M. Bishop, 2006)

Slide 21: Topics of This Lecture
Linear discriminant functions: definition; extension to multiple classes.
Least-squares classification: derivation; shortcomings.
Generalized linear models: connection to neural networks; generalized linear discriminants & gradient descent.

Slide 22: General Classification Problem
Let's consider K classes described by linear models y_k(x) = w_k^T x + w_k0, k = 1, ..., K. We can group those together using vector notation: y(x) = W~^T x~, where x~ = (1, x^T)^T and the k-th column of W~ is the augmented weight vector w~_k = (w_k0, w_k^T)^T. The output will again be in 1-of-K notation, so we can directly compare it to the target value t.

Slide 23: General Classification Problem
For the entire dataset, we can write Y(X~) = X~ W~ and compare this to the target matrix T, where the rows of X~ are the augmented inputs x~_n^T and the rows of T are the targets t_n^T. The result of the comparison is the error matrix X~ W~ − T. Goal: choose W~ such that this error is minimal!

Slide 24: Least-Squares Classification
Simplest approach: directly try to minimize the sum-of-squares error between the outputs and the targets. We could write this as an explicit double sum over data points and outputs, E(W~) = Σ_n Σ_k (y_k(x_n) − t_nk)^2, but let's stick with the matrix notation for now. (The result will be simpler to express, and we'll learn some nice matrix algebra rules along the way.)

Slide 25: Least-Squares Classification
Simplest approach: directly try to minimize the sum-of-squares error. In matrix notation we can write this as E_D(W~) = ½ Tr{ (X~W~ − T)^T (X~W~ − T) }. Taking the gradient with respect to W~, we get ∇ E_D = X~^T (X~W~ − T).

Slide 26: Least-Squares Classification
Setting the gradient to zero, X~^T (X~W~ − T) = 0, gives the normal equations X~^T X~ W~ = X~^T T, and therefore W~ = (X~^T X~)^{-1} X~^T T.

Slide 27: Least-Squares Classification
Multi-class case: formulating the sum-of-squares error in matrix notation, E_D(W~) = ½ Tr{ (X~W~ − T)^T (X~W~ − T) }, and taking the derivative (using the chain rule together with the standard identities for derivatives of traces) again yields ∇ E_D = X~^T (X~W~ − T).

Slide 28: Least-Squares Classification
Minimizing the sum-of-squares error, we then obtain the discriminant function as y(x) = W~^T x~ with W~ = (X~^T X~)^{-1} X~^T T = X~^† T, where X~^† is the "pseudo-inverse" of X~. This is an exact, closed-form solution for the discriminant function parameters.
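A minimal least-squares classifier along these lines (illustrative names; np.linalg.pinv computes the pseudo-inverse):

```python
import numpy as np

def fit_least_squares(X, T):
    # Augmented design matrix with a leading column of ones, then W~ = pinv(X~) T.
    X_tilde = np.hstack([np.ones((len(X), 1)), X])
    return np.linalg.pinv(X_tilde) @ T

def predict(X, W_tilde):
    # Decide for the class C_k with the largest output y_k(x).
    X_tilde = np.hstack([np.ones((len(X), 1)), X])
    return np.argmax(X_tilde @ W_tilde, axis=1)

# Toy data: two Gaussian blobs, targets in 1-of-K coding.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(4.0, 1.0, (50, 2))])
labels = np.array([0] * 50 + [1] * 50)
T = np.eye(2)[labels]
W_tilde = fit_least_squares(X, T)
print((predict(X, W_tilde) == labels).mean())     # training accuracy on the toy data
```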

Slide 29: Problems with Least Squares
Least-squares is very sensitive to outliers! The error function penalizes predictions that are "too correct", i.e. that lie far on the correct side of the decision boundary. (Image source: C.M. Bishop, 2006)

Slide 30: Problems with Least Squares
Another example: 3 classes (red, green, blue), a linearly separable problem. In the least-squares solution, most green points are misclassified!
Deeper reason for the failure: least-squares corresponds to Maximum Likelihood under the assumption of a Gaussian conditional distribution. However, our binary target vectors have a distribution that is clearly non-Gaussian! Least-squares is simply the wrong probabilistic tool in this case. (Image source: C.M. Bishop, 2006)

Slide 31: Topics of This Lecture
Linear discriminant functions: definition; extension to multiple classes.
Least-squares classification: derivation; shortcomings.
Generalized linear models: connection to neural networks; generalized linear discriminants & gradient descent.

Slide 32: Generalized Linear Models
Linear model: y(x) = w^T x + w_0.
Generalized linear model: y(x) = g(w^T x + w_0), where g(·) is called an activation function and may be nonlinear. The decision surfaces correspond to y(x) = const., i.e. w^T x + w_0 = const. If g is monotonic (which is typically the case), the resulting decision boundaries are still linear functions of x.

Slide 33: Generalized Linear Models
Consider 2 classes: p(C_1|x) = p(x|C_1) p(C_1) / (p(x|C_1) p(C_1) + p(x|C_2) p(C_2)) = 1 / (1 + exp(−a)) = g(a), with a = ln [ p(x|C_1) p(C_1) / (p(x|C_2) p(C_2)) ]. (Slide credit: Bernt Schiele)

Slide 34: Logistic Sigmoid Activation Function
g(a) = 1 / (1 + exp(−a)). Example: for normal distributions with identical covariance, the posterior p(C_1|x) is exactly a logistic sigmoid applied to a linear function of x. (Slide credit: Bernt Schiele)

Slide 35: Normalized Exponential
General case of K > 2 classes: p(C_k|x) = p(x|C_k) p(C_k) / Σ_j p(x|C_j) p(C_j) = exp(a_k) / Σ_j exp(a_j), with a_k = ln [ p(x|C_k) p(C_k) ]. This is known as the normalized exponential or softmax function and can be regarded as a multiclass generalization of the logistic sigmoid. (Slide credit: Bernt Schiele)
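Both activation functions are easy to state in code; this sketch (not from the lecture) also shows that for K = 2 the softmax reduces to the logistic sigmoid:

```python
import numpy as np

def sigmoid(a):
    # Logistic sigmoid g(a) = 1 / (1 + exp(-a)).
    return 1.0 / (1.0 + np.exp(-a))

def softmax(a):
    # Normalized exponential; subtracting the max is a standard numerical-stability trick.
    e = np.exp(a - np.max(a))
    return e / e.sum()

# For K = 2, softmax([a, 0])[0] = exp(a) / (exp(a) + 1) = sigmoid(a):
a = 1.3
print(sigmoid(a), softmax(np.array([a, 0.0]))[0])
```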

Slide 36: Relationship to Neural Networks
2-class case: y(x) = g( Σ_{i=0..D} w_i x_i ) with a constant x_0 = 1. This is exactly a neural network "single-layer perceptron" with one output node. (Slide credit: Bernt Schiele)

Slide 37: Relationship to Neural Networks
Multi-class case: y_k(x) = g( Σ_{i=0..D} w_ki x_i ) with a constant x_0 = 1, i.e. one output node per class: a multi-class perceptron. (Slide credit: Bernt Schiele)

Slide 38: Logistic Discrimination
If we use the logistic sigmoid activation function, y(x) = g(w^T x + w_0) with g the sigmoid, then we can interpret the outputs y(x) as posterior probabilities. (Slide adapted from Bernt Schiele)

Slide 39: Other Motivation for Nonlinearity
Recall least-squares classification: one of the problems was that data points which are "too correct" have a strong influence on the decision surface under a squared-error criterion. Reason: the output of y(x_n; w) can grow arbitrarily large for some x_n, since it is linear in x_n. By choosing a suitable nonlinearity (e.g. a sigmoid) that saturates for large arguments, we can limit those influences.

Slide 40: Discussion – Generalized Linear Models
Advantages: the nonlinearity gives us more flexibility; it can be used to limit the effect of outliers; the choice of a sigmoid leads to a nice probabilistic interpretation.
Disadvantage: least-squares minimization in general no longer leads to a closed-form analytical solution, so we need to apply iterative methods such as gradient descent.

Slide 41: Linear Separability
Up to now we have made a restrictive assumption: we only considered linear decision boundaries. The classical counterexample is XOR, which cannot be separated by any linear decision boundary. (Slide credit: Bernt Schiele)

Slide 42: Linear Separability
Even if the data are not linearly separable, a linear decision boundary may still be "optimal" in terms of generalization, e.g. for normally distributed data with equal covariance matrices. The choice of the right discriminant function is important and should be based on prior knowledge (of the general functional form) and on empirical comparison of alternative models; linear discriminants are often used as a benchmark. (Slide credit: Bernt Schiele)

Slide 43: Generalized Linear Discriminants
Generalization: transform the vector x with M nonlinear basis functions φ_j(x): y_k(x) = Σ_{j=1..M} w_kj φ_j(x) + w_k0. Purpose of the φ_j(x): they allow non-linear decision boundaries, and by choosing the right φ_j, every continuous function can (in principle) be approximated with arbitrary accuracy. Notation: y_k(x) = Σ_{j=0..M} w_kj φ_j(x) with a constant φ_0(x) = 1. (Slide credit: Bernt Schiele)
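A sketch of such a generalized linear discriminant with a simple, hypothetical polynomial basis (the lecture does not prescribe a particular choice of φ_j):

```python
import numpy as np

def phi(x):
    # Hypothetical basis functions: phi_0 = 1, x_1, x_2, x_1^2, x_2^2, x_1*x_2.
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x1, x2 * x2, x1 * x2])

def y(x, w):
    # Generalized linear discriminant: y(x; w) = sum_j w_j * phi_j(x).
    return w @ phi(x)

# With these (hypothetical) weights the decision boundary y(x) = 0 is the circle
# x_1^2 + x_2^2 = 1, i.e. a non-linear boundary in the original input space.
w = np.array([-1.0, 0.0, 0.0, 1.0, 1.0, 0.0])
print(y(np.array([0.2, 0.3]), w) > 0, y(np.array([1.5, 0.0]), w) > 0)   # False, True
```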

Slide 44: Generalized Linear Discriminants
Model: K functions (outputs) y_k(x; w).
Learning in neural networks: in single-layer networks the φ_j are fixed and only the weights w are learned; in multi-layer networks both the w and the φ_j are learned. In the following, we will not go into details about neural networks in particular, but consider generalized linear discriminants in general. (Slide credit: Bernt Schiele)

Slide 45: Gradient Descent
Learning the weights w. Given: N training data points X = {x_1, ..., x_N}, the K outputs of the decision functions y_k(x_n; w), and a target vector for each data point, T = {t_1, ..., t_N}. Error function (least-squares error) of the linear model: E(w) = ½ Σ_n Σ_k ( y_k(x_n; w) − t_kn )^2 = ½ Σ_n Σ_k ( Σ_j w_kj φ_j(x_n) − t_kn )^2. (Slide credit: Bernt Schiele)

Slide 46: Gradient Descent
Problem: the error function can in general no longer be minimized in closed form.
Idea (gradient descent): iterative minimization. Start with an initial guess for the parameter values, then move towards a (local) minimum by following the negative gradient: w^(τ+1) = w^(τ) − η ∇E(w)|_{w^(τ)}, where η is the learning rate. This simple scheme corresponds to a 1st-order Taylor expansion (more complex procedures are available).

Slide 47: Gradient Descent – Basic Strategies
"Batch learning": compute the gradient based on all training data, w_kj^(τ+1) = w_kj^(τ) − η ∂E(w)/∂w_kj |_{w^(τ)}, with E(w) = Σ_n E_n(w) summed over the whole training set; η is the learning rate. (Slide credit: Bernt Schiele)

Slide 48: Gradient Descent – Basic Strategies
"Sequential updating": compute the gradient based on a single data point at a time, w_kj^(τ+1) = w_kj^(τ) − η ∂E_n(w)/∂w_kj |_{w^(τ)}; η is the learning rate. (Slide credit: Bernt Schiele)
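The two strategies differ only in how much data enters each gradient evaluation; here is a sketch for the plain linear least-squares case (illustrative, single output):

```python
import numpy as np

def batch_gradient_step(w, X, t, eta):
    # "Batch learning": gradient of E(w) = 1/2 sum_n (w^T x_n - t_n)^2 over all data.
    grad = ((X @ w - t)[:, None] * X).sum(axis=0)
    return w - eta * grad

def sequential_gradient_step(w, x_n, t_n, eta):
    # "Sequential updating": gradient of E_n(w) from a single data point.
    return w - eta * (w @ x_n - t_n) * x_n

# One pass of sequential updates over a toy augmented data set (eta = learning rate).
rng = np.random.default_rng(1)
X = np.hstack([np.ones((20, 1)), rng.normal(size=(20, 2))])
t = (X[:, 1] + X[:, 2] > 0).astype(float)
w, eta = np.zeros(3), 0.1
for x_n, t_n in zip(X, t):
    w = sequential_gradient_step(w, x_n, t_n, eta)
print(w)
```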

Slide 49: Gradient Descent
Error function for a single data point: E_n(w) = ½ Σ_k ( Σ_j w_kj φ_j(x_n) − t_kn )^2, so that ∂E_n(w)/∂w_kj = ( y_k(x_n; w) − t_kn ) φ_j(x_n). (Slide credit: Bernt Schiele)

Slide 50: Gradient Descent
Delta rule (= LMS rule): w_kj^(τ+1) = w_kj^(τ) − η δ_kn φ_j(x_n), where δ_kn = y_k(x_n; w) − t_kn. In other words, simply feed back the input data point, weighted by the classification error. (Slide credit: Bernt Schiele)
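In code, the delta rule is a one-liner per update; the example values below are hypothetical:

```python
import numpy as np

def delta_rule_step(W, phi_n, t_n, eta):
    # W has shape (K, M+1); phi_n are the basis-function values for x_n,
    # t_n is the 1-of-K target. delta = y(x_n) - t_n is the classification error.
    y_n = W @ phi_n
    delta = y_n - t_n
    return W - eta * np.outer(delta, phi_n)     # feed back the input, weighted by the error

W = np.zeros((2, 3))
phi_n = np.array([1.0, 0.5, -1.0])              # (phi_0 = 1, phi_1, phi_2)
t_n = np.array([1.0, 0.0])                      # target: class 1
print(delta_rule_step(W, phi_n, t_n, eta=0.1))
```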

Slide 51: Gradient Descent
Case of a differentiable, non-linear activation function g, i.e. y_k(x; w) = g( Σ_j w_kj φ_j(x) ): gradient descent then gives ∂E_n(w)/∂w_kj = ( y_k(x_n; w) − t_kn ) g'(a_k) φ_j(x_n) with a_k = Σ_j w_kj φ_j(x_n), i.e. the delta rule with an additional factor from the derivative of the activation function. (Slide credit: Bernt Schiele)
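With a logistic sigmoid as g, the extra factor is g'(a) = g(a)(1 − g(a)); a sketch of the corresponding single-sample update (illustrative, single output):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gradient_step_nonlinear(w, phi_n, t_n, eta):
    # y = g(w^T phi(x_n)); chain rule: dE_n/dw = (y - t_n) * g'(a) * phi(x_n).
    a = w @ phi_n
    y = sigmoid(a)
    dg = y * (1.0 - y)                          # derivative of the logistic sigmoid
    return w - eta * (y - t_n) * dg * phi_n

w = np.zeros(3)
print(gradient_step_nonlinear(w, np.array([1.0, 0.5, -1.0]), 1.0, eta=0.5))
```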

Slide 52: Summary – Generalized Linear Discriminants
Properties: a general class of decision functions; the nonlinearity g(·) and the basis functions φ_j allow us to address linearly non-separable problems; we have shown a simple sequential learning approach for parameter estimation using gradient descent; better 2nd-order gradient descent approaches are available (e.g. Newton-Raphson).
Limitations / caveats: the flexibility of the model is limited by the curse of dimensionality, since g(·) and the φ_j often introduce additional parameters; models are therefore either limited to lower-dimensional input spaces or need to share parameters. The linearly separable case often leads to overfitting, since several possible parameter choices minimize the training error.

Slide 53: What Does It Mean?
What does it mean to apply a linear classifier? Classifier interpretation: the weight vector has the same dimensionality as the input vector x. A dimension contributes positively where sign(x_i) = sign(w_i). The weight vector thus identifies which input dimensions are important for positive or negative classification (large |w_i|) and which ones are irrelevant (near-zero w_i). If the inputs x are normalized, we can interpret w as a "template" vector that the classifier tries to match.
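A tiny illustration of this "template" reading (weights and inputs are made up for the example):

```python
import numpy as np

# w cares about dimensions 1 and 3 (large |w_i|) and ignores dimension 2 (w_i near 0).
w = np.array([0.9, 0.0, -0.8])
x_match = np.array([1.0, 5.0, -1.0])            # matches the sign pattern of w
x_clash = np.array([-1.0, 5.0, 1.0])            # opposite sign pattern
for x in (x_match, x_clash):
    x = x / np.linalg.norm(x)                   # normalized input
    print(np.sign(w @ x))                       # +1 for the match, -1 for the clash
```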

Slide 54: References and Further Reading
More information on linear discriminant functions can be found in Chapter 4 of Bishop's book (in particular Section 4.1).
Christopher M. Bishop, Pattern Recognition and Machine Learning, Springer, 2006.

