
1 Fundamentals of machine learning: types of machine learning, in-sample and out-of-sample errors, the version space, and the VC dimension

2 Unsupervised learning: input only, no labels. Coins in a vending machine cluster by size and weight. How many clusters are there? Would different attributes make the clusters more distinct?
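As a toy illustration (not from the original slides), the sketch below clusters invented coin measurements with k-means and tries several cluster counts; the feature values and the choice of scikit-learn's KMeans are assumptions made for this example.

```python
# Illustrative sketch: cluster unlabeled coins by size and weight with k-means.
# All measurement values below are invented for the example.
import numpy as np
from sklearn.cluster import KMeans

# Each row: [diameter_mm, weight_g] for one unlabeled coin
coins = np.array([
    [19.0, 2.5], [19.1, 2.4], [21.2, 5.0], [21.3, 5.1],
    [24.2, 5.7], [24.3, 5.6], [26.5, 8.1], [26.4, 8.0],
])

# Try a few cluster counts; the within-cluster scatter (inertia) hints at
# how many clusters are really present in these attributes.
for k in (2, 3, 4):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(coins)
    print(k, km.inertia_)
```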

3 Supervised learning: every example has a label. The labels enable a model based on linear discriminants that lets the vending machine guess a coin's value without facial recognition.
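A hypothetical sketch of the same idea: with labels, a linear discriminant model can map a coin's measurements to its value. The data values, the cent denominations, and the use of scikit-learn's LinearDiscriminantAnalysis are assumptions for illustration.

```python
# Illustrative sketch: supervised learning with a linear discriminant model.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Features: [diameter_mm, weight_g]; labels: coin value in cents (invented)
X = np.array([[19.0, 2.5], [21.2, 5.0], [24.2, 5.7], [26.5, 8.1],
              [19.1, 2.4], [21.3, 5.1], [24.3, 5.6], [26.4, 8.0]])
y = np.array([1, 5, 25, 50, 1, 5, 25, 50])

clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.predict([[24.0, 5.65]]))   # guess the value of a new coin
```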

4 Reinforcement learning: there is no single correct output. Data: input plus a graded output. The goal is to find the relationship between inputs and high-grade outputs.

5 In-sample error, E_in: how well do the boundaries match the training data? Out-of-sample error, E_out: how often will this system fail if implemented in the field?
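A minimal sketch of the distinction, assuming synthetic data and a scikit-learn model (none of this comes from the slides): E_in is measured on the training sample, while E_out is estimated on held-out data the model never saw.

```python
# Illustrative sketch: estimating in-sample vs. out-of-sample error.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic, linearly separable labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)

E_in = 1 - clf.score(X_tr, y_tr)    # error on the training sample
E_out = 1 - clf.score(X_te, y_te)   # estimate of the error "in the field"
print(E_in, E_out)
```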

6 The quality of the data largely determines the success of machine learning. How many data points? How much uncertainty? We assume each datum is labeled correctly; the uncertainty is in the values of the attributes.

7 Choosing the right model: a good model has small in-sample error and generalizes well. Often a tradeoff between these characteristics is required.

8 A type of model defines a hypothesis set. A particular member of the set is selected by minimizing some in-sample error. The error definition varies with the problem but is usually local, i.e. accumulated from the error at each data point. Example: linear discriminants.

9 Supervised learning is the focus of this course. Example: a dichotomy based on 2 attributes, with positive examples of family cars. (Lecture Notes for E. Alpaydın 2010, Introduction to Machine Learning 2e, © The MIT Press (V1.0))

10 Assume the family car (class C) is uniquely defined by a range of price and engine power. Assume the blue rectangle is the true boundary of class C; in a real problem, of course, we don't know this. (Lecture Notes for E. Alpaydın 2010, Introduction to Machine Learning 2e, © The MIT Press (V1.0))

11 Hypothesis class H: axis-aligned rectangles. h = the yellow rectangle is a particular member of H. The in-sample error of h counts its misclassifications on the training set: E_in(h) = Σ_{t=1..N} 1(h(x_t) ≠ r_t). (Lecture Notes for E. Alpaydın 2010, Introduction to Machine Learning 2e, © The MIT Press (V1.0))
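A sketch of "count misclassifications" for this hypothesis class, with an invented (price, engine power) dataset; the helper names and threshold values are assumptions for illustration.

```python
# Illustrative sketch: in-sample error of one axis-aligned rectangle hypothesis.
import numpy as np

def h(x, p1, p2, e1, e2):
    # predict family car (1) iff p1 <= price <= p2 and e1 <= power <= e2
    return int(p1 <= x[0] <= p2 and e1 <= x[1] <= e2)

def E_in(X, r, p1, p2, e1, e2):
    # number of training points where h's prediction differs from the label
    return sum(h(x, p1, p2, e1, e2) != label for x, label in zip(X, r))

X = np.array([[15, 150], [18, 180], [25, 120], [10, 300]])  # (price, power)
r = np.array([1, 1, 0, 0])                                   # 1 = family car
print(E_in(X, r, 12, 20, 140, 200))   # this rectangle misclassifies 0 points
```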

12 Hypothesis class H: axis-aligned rectangles. For the dataset shown, the in-sample error of h (the yellow rectangle) is zero, but we expect the out-of-sample error to be nonzero: h leaves room for false positives and false negatives. (Lecture Notes for E. Alpaydın 2010, Introduction to Machine Learning 2e, © The MIT Press (V1.0))

13 Should we expect the negative examples to cluster? (Figure: the family-car example.)

14 S, G, and the version space. S is the most specific hypothesis with zero E_in; G is the most general. Any h ∈ H between S and G is consistent (zero error); together these hypotheses make up the version space. (Lecture Notes for E. Alpaydın 2010, Introduction to Machine Learning 2e, © The MIT Press (V1.0))
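A minimal sketch (an assumption-laden illustration, not the slide's figure): for axis-aligned rectangles, S is the tightest rectangle around the positive examples, and any rectangle that contains S while excluding the negatives lies in the version space.

```python
# Illustrative sketch: the most specific hypothesis S and a consistency check.
import numpy as np

def most_specific(X, r):
    # S = tightest axis-aligned rectangle containing all positive examples
    pos = X[r == 1]
    return pos.min(axis=0), pos.max(axis=0)   # lower-left, upper-right corners

def consistent(X, r, lo, hi):
    # zero in-sample error iff the rectangle classifies every point correctly
    inside = np.all((X >= lo) & (X <= hi), axis=1).astype(int)
    return np.array_equal(inside, r)

X = np.array([[15, 150], [18, 180], [25, 120], [10, 300]])
r = np.array([1, 1, 0, 0])
lo, hi = most_specific(X, r)
print(lo, hi, consistent(X, r, lo, hi))   # S itself is in the version space
```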

15 A dichotomizer has been trained on N examples. Results are poor due to limited data. An expert will label any additional attribute vector that I specify. Where should those attribute vectors be chosen to make the most effective use of the expert? (Figure: the G and S boundaries.)

16 Margin: the distance between the boundary and the closest instance of a specified class. The S and G hypotheses have narrow margins and are not expected to "generalize" well: even though E_in is zero, we expect E_out to be large. Why? (Lecture Notes for E. Alpaydın 2010, Introduction to Machine Learning 2e, © The MIT Press (V1.0))

17 Choose the h in the version space with the largest margin to maximize generalization, i.e. the h at the greatest distance between S and G. The data points that determine S and G are shaded; they "support" the h with the largest margin. This is the logic behind "support vector machines". (Lecture Notes for E. Alpaydın 2010, Introduction to Machine Learning 2e, © The MIT Press (V1.0))
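A sketch of the same idea with a standard library (the toy data and the hard-margin setting are assumptions): a linear SVM picks the separating boundary with the largest margin, and the training points that touch the margin are its support vectors.

```python
# Illustrative sketch: maximum-margin boundary and its support vectors.
import numpy as np
from sklearn.svm import SVC

X = np.array([[1, 1], [2, 1], [1, 2], [4, 4], [5, 4], [4, 5]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)   # very large C ~ hard margin
print(clf.support_vectors_)                   # the points that "support" h
```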

18 Vapnik–Chervonenkis dimension, d_VC. H is a hypothesis set for a dichotomizer. H(X) is the set of dichotomies created by applying H to a dataset X with N points. N points can be labeled ±1 in 2^N ways, so regardless of the size of H, |H(X)| is bounded by 2^N. H "shatters" the N points if every possible labeling of the points is consistent with some member of H. d_VC(H) = k if k is the largest number of points that can be shattered by H; d_VC(H) is called the "capacity" of H. (Lecture Notes for E. Alpaydın 2010, Introduction to Machine Learning 2e, © The MIT Press (V1.0))
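A brute-force sketch of the shattering test (the helper is an assumption, not from the slides): enumerate all 2^N labelings of a point set and check whether the 2D linear dichotomizer can realize each one.

```python
# Illustrative sketch: does the 2D linear dichotomizer shatter a point set?
import itertools
import numpy as np
from sklearn.svm import SVC

def shatters(points):
    N = len(points)
    for labels in itertools.product([0, 1], repeat=N):
        if len(set(labels)) < 2:
            continue  # single-class labelings are trivially realizable
        clf = SVC(kernel="linear", C=1e9).fit(points, labels)
        if clf.score(points, labels) < 1.0:
            return False   # this labeling is not linearly separable
    return True

triangle = np.array([[0, 0], [1, 0], [0, 1]])        # 3 non-collinear points
square = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])  # 4 points: XOR labeling fails
print(shatters(triangle), shatters(square))          # True False
```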

19 Vapnik–Chervonenkis dimension, d_VC. To prove that d_VC = k we get to choose the k points. To prove that d_VC = 3 for the 2D linear dichotomizer, it is better to choose three non-collinear points (the black points). The fact that 3 collinear points cannot be shattered does not prove d_VC < 3. (Lecture Notes for E. Alpaydın 2010, Introduction to Machine Learning 2e, © The MIT Press (V1.0))

20 Break points. Every set of 4 points in the plane has at least 2 labelings that are not linearly separable, so k = 4 is the break point for the 2D linear dichotomizer. d_VC(H) + 1 is always a break point. For the d-dimensional linear dichotomizer, d_VC(H) = d + 1.

21 What is the VC dimension of the hypothesis class defined by the union of all axis-aligned rectangles?

22 The VC dimension is conservative: it is based on all possible ways to label the examples and ignores the probability distribution from which the dataset was drawn. In the real world, examples with small differences in attributes usually belong to the same class; this is the basis of "similarity" classification methods. (Lecture Notes for E. Alpaydın 2010, Introduction to Machine Learning 2e, © The MIT Press (V1.0))
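A sketch of a "similarity" classifier in that spirit (invented data, scikit-learn assumed): k-nearest neighbours predicts that attribute vectors close to each other share a class, which is exactly the regularity the worst-case VC argument ignores.

```python
# Illustrative sketch: nearest-neighbour ("similarity") classification.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Features: (price, engine power); 1 = family car (values are invented)
X = np.array([[15, 150], [16, 160], [18, 180], [25, 120], [30, 110], [10, 300]])
y = np.array([1, 1, 1, 0, 0, 0])

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(knn.predict([[17, 170]]))   # close to the family-car examples -> 1
```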

