Published by Ashleigh Wiseman. Modified about 1 year ago.

1
Machine Learning Week 3 Lecture 1

2
Programming Competition

3
Hand In

4
Student Comparison. After 5 minutes: Train: MCC: [10×10 matrix] Test: MCC: [10×10 matrix]

5
Today: recap; learning with infinite hypothesis sets; bias-variance; the end of learning theory…

6
Recap

7
The Test Set: a look at E_out. Fix a hypothesis h, N independent data points, and any ε > 0. Split your data into two parts, D_train and D_test. Train on D_train and select hypothesis h. Test h on D_test to get the test error E_test(h). Since h is fixed before D_test is used, the Hoeffding bound applies: P(|E_test(h) − E_out(h)| > ε) ≤ 2e^(−2ε²N). The test set cannot be used for anything else.
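The resulting confidence interval can be computed directly; a minimal Python sketch (the function name and the example numbers are illustrative, not from the slides):

```python
import math

def hoeffding_test_bound(n_test, delta):
    """Width eps such that |E_test - E_out| <= eps with probability >= 1 - delta,
    from P(|E_test - E_out| > eps) <= 2*exp(-2*eps^2*N) for a FIXED hypothesis."""
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n_test))

eps = hoeffding_test_bound(1000, 0.05)  # 1000 held-out points, 95% confidence
print(eps)                              # about 0.043
```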

8
Generalization Bound. Goal: extend the analysis to infinite hypothesis spaces.

9
Dichotomies. A dichotomy is a bit string of length N: fix a set of N points X = (x_1, …, x_N); each hypothesis h ∈ H gives a dichotomy (h(x_1), …, h(x_N)).

10
Growth, Shattering, Break Points. The growth function m_H(N) is the maximum number of distinct dichotomies H can generate on any N points. If m_H(N) = 2^N we say H shatters (x_1, …, x_N). If no data set of size k can be shattered by H, then k is a break point for H.

11
Revisit Examples. Positive rays (threshold a), intervals (endpoints a_1, a_2), convex sets. [Figure: example impossible dichotomies for each hypothesis set.]
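The dichotomy counts behind these examples can be checked by brute force; a small Python sketch (helper names are mine) that recovers m_H(N) = N + 1 for positive rays and N(N+1)/2 + 1 for intervals:

```python
def ray_dichotomies(points):
    """Dichotomies realizable on 1D points by h(x) = +1 iff x > a."""
    xs = sorted(points)
    cuts = [xs[0] - 1] + [(xs[i] + xs[i + 1]) / 2 for i in range(len(xs) - 1)] + [xs[-1] + 1]
    return {tuple(+1 if x > a else -1 for x in xs) for a in cuts}

def interval_dichotomies(points):
    """Dichotomies realizable by h(x) = +1 iff a1 < x < a2."""
    xs = sorted(points)
    cuts = [xs[0] - 1] + [(xs[i] + xs[i + 1]) / 2 for i in range(len(xs) - 1)] + [xs[-1] + 1]
    return {tuple(+1 if a1 < x < a2 else -1 for x in xs)
            for a1 in cuts for a2 in cuts if a1 <= a2}

pts = [0.1, 0.4, 0.5, 0.8, 0.9]           # N = 5 points
print(len(ray_dichotomies(pts)))          # 6  = N + 1
print(len(interval_dichotomies(pts)))     # 16 = N(N+1)/2 + 1
```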

12
2D Hyperplanes. 3 is not a break point: some point set of size 3 (three points in general position) can be shattered.

13
2D Hyperplanes. 4 is a break point: every set of 4 points admits an impossible dichotomy. Cases: 3 points on a line; a triangle with a point inside; else (4 points in convex position, where the alternating labeling of opposite corners cannot be separated).

14
Very Important Theorem. If H has a break point k, then the growth function m_H(N) is polynomial in N, of degree at most k − 1.
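The theorem can be made concrete via Sauer's bound, m_H(N) ≤ Σ_{i=0}^{k−1} C(N, i), a polynomial of degree k − 1; a minimal sketch (the function name is mine):

```python
from math import comb

def growth_bound(N, k):
    """Sauer bound: with break point k, m_H(N) <= sum_{i=0}^{k-1} C(N, i),
    a polynomial in N of degree k - 1."""
    return sum(comb(N, i) for i in range(k))

# 2D hyperplanes have break point k = 4:
print(growth_bound(3, 4))   # 8 = 2^3, so 3 points may still be shattered
print(growth_bound(4, 4))   # 15 < 2^4 = 16, so some dichotomy is always missing
```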

15
Growth Function Generalization Bound. If the growth function is polynomial, we can learn with infinite hypothesis sets! (VC Theorem; proof in the book's appendix, intuition in the book.)

16
VC Dimension. The VC dimension of a hypothesis set H is the maximal number of points it can shatter, i.e. the largest N such that m_H(N) = 2^N. It is denoted d_VC, and equals the smallest break point minus one.

17
Revisit Examples. Positive rays (threshold a): d_VC = 1. Intervals (endpoints a_1, a_2): d_VC = 2. Convex sets: d_VC = ∞ (no break point; unbreakable). 2D hyperplanes: d_VC = 3.

18
VC Dimension for Linear Classification. For d-dimensional inputs (d + 1 parameters including the bias), the VC dimension is d + 1. Proof coming up: show d_VC ≥ d + 1, then show d_VC < d + 2.

19
d_VC ≥ d + 1. We must find a set of d + 1 points we can shatter. Idea: make the points (vectors) linearly independent by using one dimension per point. Let X be the (d+1)×(d+1) matrix whose rows are the points in homogeneous coordinates; choose the points so X is invertible.

20
d_VC ≥ d + 1. Consider any dichotomy y ∈ {−1, +1}^(d+1). Find θ such that sign(Xθ) = y: solve Xθ = y, which gives θ = X⁻¹y since X is invertible.
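The construction can be checked directly: with a triangular (hence invertible) point matrix, every dichotomy y is realized by θ = X⁻¹y. A small NumPy sketch for d = 2 (the specific points are an illustrative choice):

```python
import numpy as np
from itertools import product

# d = 2 inputs -> d + 1 = 3 parameters. Rows are points in homogeneous
# coordinates (leading 1 is the bias); "one dimension per point" makes X
# lower triangular with nonzero diagonal, hence invertible.
X = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])

for y in product([-1.0, 1.0], repeat=3):       # every dichotomy of the 3 points
    theta = np.linalg.solve(X, np.array(y))    # solve X theta = y
    assert np.array_equal(np.sign(X @ theta), np.array(y))
print("all 8 dichotomies realized")
```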

21
d_VC < d + 2. For any d + 2 points we must prove there is a dichotomy hyperplanes cannot capture. Consider the d + 2 points as (d+1)-dimensional vectors (homogeneous coordinates). They must be linearly dependent (more vectors than dimensions): some x_j = Σ_{i≠j} a_i x_i, where we may ignore the zero terms.

22
d_VC < d + 2. Dichotomy: y_i = sign(a_i) for each i with a_i ≠ 0, and y_j = −1. Claim: this dichotomy is impossible to capture with hyperplanes. (If sign(wᵀx_i) = sign(a_i) for all such i, then wᵀx_j = Σ a_i wᵀx_i > 0, contradicting y_j = −1.)

23
d_VC = d + 1 for hyperplanes: the VC dimension equals the number of free parameters (d weights plus the bias).

24
Vapnik-Chervonenkis (VC) Generalization Bound. For any tolerance δ > 0, with probability at least 1 − δ: E_out(g) ≤ E_in(g) + sqrt((8/N) ln(4 m_H(2N)/δ)). Quote from the book: "The VC generalization bound is the most important mathematical result in the theory of learning."

25
VC Generalization Bound. It is independent of the learning algorithm, the target function, the input distribution P(x), and the out-of-sample error; general indeed. The price: we need to bound the growth function for each of our hypothesis spaces. For hyperplanes we showed it is governed by the number of free parameters.

26
Exercise 2.5. With N = 100, what is the probability that E_in is within 0.1 of E_out? Plugging into the bound gives 1 − δ < 0, i.e. the bound is vacuous at this sample size. Ridiculous!
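For concreteness, here is the arithmetic, assuming the exercise's simple model with growth function m_H(N) = N + 1 and the bound in the form δ = 4·m_H(2N)·exp(−ε²N/8):

```python
import math

N, eps = 100, 0.1
m_H = lambda n: n + 1           # assumed simple model with growth function N + 1
delta = 4 * m_H(2 * N) * math.exp(-eps ** 2 * N / 8)
print(delta)                    # about 709.5, so "1 - delta" is meaningless here
```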

27
Cost of Generality. The growth function bound was truly worst case: independent of P(x), the target, and the out-of-sample error.

28
Sample Complexity. Fix the tolerance δ (success probability ≥ 1 − δ) and require generalization error at most ε. How big must N be? N ≥ (8/ε²) ln(4 m_H(2N)/δ); upper bound m_H(2N) further by the VC-dimension polynomial (2N)^d_VC + 1.
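Since N appears on both sides, the requirement is implicit; it can be solved by fixed-point iteration. A Python sketch with ε = δ = 0.1 (the function name and starting guess are mine):

```python
import math

def sample_complexity(d_vc, eps=0.1, delta=0.1):
    """Fixed-point iteration of N >= (8/eps^2) * ln(4*((2N)^d_vc + 1)/delta),
    using the polynomial bound m_H(2N) <= (2N)^d_vc + 1."""
    n = 1000.0                      # arbitrary starting guess
    for _ in range(100):
        n = (8.0 / eps ** 2) * math.log(4.0 * ((2.0 * n) ** d_vc + 1.0) / delta)
    return n

print(round(sample_complexity(3)))  # roughly 30,000 examples for d_vc = 3
```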

29
Sampling Complexity. Plug in ε = δ = 0.1 and plot the required N:

function sc_vcdim()
  dat = 5000:2500:100000;                   % candidate sample sizes N
  hold off;
  plot(dat, dat, 'r-', 'linewidth', 3);     % reference line y = N
  hold on;
  for i = 3:9                               % VC dimensions d_vc = 3, ..., 9
    % (8/eps^2)*log(4*((2N)^d_vc + 1)/delta) = 800*log(40*((2N)^i + 1))
    tmp = 800 * log(40 * ((2 .* dat) .^ i + 1));
    plot(dat, tmp, 'b-', 'linewidth', 3);
  end
end

30
Sampling Complexity. Book statement: the theoretical bound demands N on the order of 10,000·d_VC. In practice, a rule of thumb of roughly N ≥ 10·d_VC is often enough.

31
VC Interpretation. We can learn with infinite hypothesis sets. The VC dimension captures the effective number of parameters / degrees of freedom. Out-of-sample error ≈ in-sample error + model complexity.

32
As a figure. [Figure: as the VC dimension grows, the in-sample error decreases while the model complexity term increases; the out-of-sample error, their sum, is U-shaped. Balance these.]

33
Model Selection. Given t models m_1, …, m_t, which is better? Compute E_in(m_i) + Ω(m_i) for each i and pick the minimum. Problem: slack in Ω.

34
Vapnik-Chervonenkis (VC) Theorem vs. the Test Set. The test-set estimate (M = 1) is a lot tighter: for any δ > 0, with probability at least 1 − δ, E_out(h) ≤ E_test(h) + sqrt((1/(2N)) ln(2/δ)).
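A quick numeric comparison of the two bounds at the same N and δ (the values N = 1000, δ = 0.05, d_VC = 3 are illustrative), using the polynomial growth bound m_H(2N) ≤ (2N)^d_VC + 1:

```python
import math

N, delta, d_vc = 1000, 0.05, 3                             # illustrative values
eps_test = math.sqrt(math.log(2 / delta) / (2 * N))        # test-set (Hoeffding, M = 1)
m_H = (2 * N) ** d_vc + 1                                  # polynomial growth bound
eps_vc = math.sqrt(8 / N * math.log(4 * m_H / delta))      # VC bound
print(eps_test, eps_vc)   # roughly 0.04 vs 0.47: the test-set bound is far tighter
```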

35
Learning Summary. There is theory for regression as well; we will not go there. We move on to bias-variance, the last piece of learning theory in this course.

36
Bias-Variance Decomposition. Consider the least-squares error measure again and see if we can understand the out-of-sample error. For simplicity, assume the target function is noiseless, i.e. it is an exact function of the input.

37
Experiments. Take two model classes, ax + b and cx² + dx + e, and a target function defined on 1/4 < x < 3/4. Repeatedly pick 3 random data points (x, target(x)) and fit both models. Plot the fits and see.
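A runnable version of the experiment, in Python rather than the course's MATLAB, with sin(πx) as a stand-in target since the slide's formula is not preserved in this transcript (plotting omitted; the bias/variance summary is printed instead):

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(np.pi * x)    # assumed stand-in target
xs = np.linspace(0.25, 0.75, 101)  # evaluation grid on the slide's interval

line_preds, para_preds = [], []
for _ in range(1000):
    x = rng.uniform(0.25, 0.75, size=3)                        # 3 random data points
    line_preds.append(np.polyval(np.polyfit(x, f(x), 1), xs))  # fit ax + b
    para_preds.append(np.polyval(np.polyfit(x, f(x), 2), xs))  # fit cx^2 + dx + e
line_preds, para_preds = np.array(line_preds), np.array(para_preds)

for name, P in [("line", line_preds), ("parabola", para_preds)]:
    bias = np.mean((P.mean(axis=0) - f(xs)) ** 2)  # how far the AVERAGE fit is from f
    var = np.mean(P.var(axis=0))                   # how much individual fits scatter
    print(name, bias, var)
```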

38
Bias Variance. The out-of-sample error we get depends on the hypothesis; the hypothesis is the result of the learning algorithm, which is a function of the training data. So the training data affects the out-of-sample error. Think of the data set as a random variable and analyze what happens if we repeatedly sample data and run our learning algorithm.

39
Notation. g^(D) = the hypothesis learned on data set D; ḡ(x) = E_D[g^(D)(x)] is the average hypothesis.

40
Bias Variance. E_D[E_out(g^(D))] = E_x[bias(x) + var(x)], where bias(x) = (ḡ(x) − f(x))² and var(x) = E_D[(g^(D)(x) − ḡ(x))²].
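The decomposition can be verified numerically for the linear model from the experiment; a Python sketch, again with sin(πx) as an assumed stand-in target (when ḡ is the empirical average, the identity holds exactly up to floating point):

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sin(np.pi * x)    # assumed stand-in target (noiseless)
xs = np.linspace(0.25, 0.75, 101)

G = []
for _ in range(2000):
    x = rng.uniform(0.25, 0.75, size=3)               # a fresh data set D
    G.append(np.polyval(np.polyfit(x, f(x), 1), xs))  # g^(D): line learned on D
G = np.array(G)

gbar = G.mean(axis=0)                  # average hypothesis g-bar
bias = np.mean((gbar - f(xs)) ** 2)    # E_x[(gbar(x) - f(x))^2]
var = np.mean(G.var(axis=0))           # E_x[E_D[(g^(D)(x) - gbar(x))^2]]
eout = np.mean((G - f(xs)) ** 2)       # E_D[E_out(g^(D))]
print(eout, bias + var)                # the two agree
```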

41
Bias Variance

42
Learning Curves. Plots of the in-sample and out-of-sample error as a function of the training set size N.
