Today
Recap, learning with infinite hypothesis sets, bias-variance, the end of learning theory...
The Test Set - take a look at $E_{out}$
Fixed hypothesis $h$, $N$ independent data points, and any $\epsilon > 0$.
Split your data into two parts, $D_{train}$ and $D_{test}$.
Train on $D_{train}$ and select the hypothesis $h$; test $h$ on $D_{test}$ to get the error $E_{test}(h)$.
Apply the Hoeffding bound to $E_{test}(h)$ (valid because $h$ is fixed before $D_{test}$ is touched):
  $P[\,|E_{test}(h) - E_{out}(h)| > \epsilon\,] \le 2 e^{-2\epsilon^2 N}$.
$D_{test}$ cannot be used for anything else, otherwise $h$ is no longer a fixed hypothesis and the bound no longer applies.
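As a worked example (with illustrative numbers, not from the slides): if the test set has $N = 1000$ points and we ask for $\epsilon = 0.05$, the Hoeffding bound gives
  $P[\,|E_{test}(h) - E_{out}(h)| > 0.05\,] \le 2 e^{-2 \cdot 0.05^2 \cdot 1000} = 2 e^{-5} \approx 0.013$,
so the test error is within 0.05 of the out-of-sample error with probability at least about 98.7%.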
Generalization Bound
Recap: with a finite hypothesis set of size $M$, the union bound gives
  $P[\,|E_{in}(g) - E_{out}(g)| > \epsilon\,] \le 2 M e^{-2\epsilon^2 N}$.
Goal: extend this to infinite hypothesis spaces, i.e. replace $M$ by a finite, effective count of hypotheses.
Dichotomies
A dichotomy is a bit string of length $N$.
Fix a set of $N$ points $X = (x_1, \dots, x_N)$.
Each $h$ in the hypothesis set $\mathcal{H}$ gives a dichotomy $(h(x_1), \dots, h(x_N)) \in \{-1,+1\}^N$; the set of dichotomies on these points is $\mathcal{H}(x_1, \dots, x_N)$.
Growth, Shattering, Breaks
Growth function: $m_{\mathcal{H}}(N) = \max_{x_1,\dots,x_N} |\mathcal{H}(x_1,\dots,x_N)|$.
If $|\mathcal{H}(x_1,\dots,x_N)| = 2^N$ then we say that $\mathcal{H}$ shatters $(x_1,\dots,x_N)$.
If no data set of size $k$ can be shattered by $\mathcal{H}$, then $k$ is a break point for $\mathcal{H}$.
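A standard example, added here for concreteness: for the 2D perceptron, three non-collinear points can be shattered, so $m_{\mathcal{H}}(3) = 8 = 2^3$, but no four points can be shattered ($m_{\mathcal{H}}(4) = 14 < 16$), so $k = 4$ is a break point.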
VC Dimension for Linear Classification
The VC dimension (the largest $N$ for which $m_{\mathcal{H}}(N) = 2^N$) for $d$-dimensional inputs ($d+1$ parameters with the bias) is $d+1$.
Proof coming up:
Show $d_{vc} \ge d+1$.
Show $d_{vc} < d+2$.
$d_{vc} \ge d+1$
Find a set of $d+1$ points we can shatter.
Idea: make the points (as vectors, with the constant bias coordinate $x_0 = 1$) independent by using one dimension for each point.
Stack the points as the rows of a $(d+1) \times (d+1)$ matrix $X$; by construction $X$ is invertible.
$d_{vc} \ge d+1$
Consider any dichotomy $y = (y_1, \dots, y_{d+1})$ with $y_i \in \{-1,+1\}$.
Find $\theta$ such that $\mathrm{sign}(X\theta) = y$.
It is enough to solve $X\theta = y$, i.e. $\theta = X^{-1} y$, which exists because $X$ is invertible.
Every dichotomy is realized, so these $d+1$ points are shattered and $d_{vc} \ge d+1$ (a numerical check follows).
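A minimal numerical sketch of this argument (assuming Octave/MATLAB; the specific matrix X and the choice d = 3 are illustrative, not from the slides):
d = 3;
X = [ones(d+1,1), [zeros(1,d); eye(d)]];   % rows: (1,0,...,0), (1,1,0,...), ...; invertible by construction
ok = true;
for k = 0:2^(d+1)-1
  y = 2*(dec2bin(k, d+1) - '0')' - 1;      % one of the 2^(d+1) dichotomies, entries +-1
  theta = X \ y;                           % solve X*theta = y exactly
  ok = ok && all(sign(X*theta) == y);      % the hyperplane theta realizes this dichotomy
end
disp(ok)                                   % prints 1: every dichotomy is captured, the points are shattered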
$d_{vc} < d+2$
For any $d+2$ points we must prove there is a dichotomy hyperplanes cannot capture.
Consider the points as $(d+1)$-dimensional vectors $x_1, \dots, x_{d+2}$ (with the bias coordinate).
They must be linearly dependent (more vectors than dimensions): some $x_j = \sum_{i \ne j} a_i x_i$ with not all $a_i$ zero. Ignore the zero terms.
$d_{vc} < d+2$
Dichotomy: $y_i = \mathrm{sign}(a_i)$ for every $i$ with $a_i \ne 0$, and $y_j = -1$.
Claim: this dichotomy is impossible to capture with hyperplanes.
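Filling in the contradiction step (the standard argument, notation as above): suppose a hyperplane $\theta$ realizes the dichotomy. Then $\mathrm{sign}(\theta^T x_i) = y_i = \mathrm{sign}(a_i)$ for every $i$ with $a_i \ne 0$, so each term $a_i \theta^T x_i > 0$. But then
  $\theta^T x_j = \sum_{i \ne j} a_i \theta^T x_i > 0$,
which forces $y_j = \mathrm{sign}(\theta^T x_j) = +1$, contradicting $y_j = -1$. Hence no set of $d+2$ points is shattered and $d_{vc} < d+2$.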
$d_{vc} = d+1$
For hyperplanes, $d_{vc}$ = the number of free parameters.
Vapnik-Chervonenkis (VC) Generalization Bound
For any tolerance $\delta > 0$, with probability at least $1-\delta$,
  $E_{out}(g) \le E_{in}(g) + \sqrt{\tfrac{8}{N} \ln \tfrac{4\, m_{\mathcal{H}}(2N)}{\delta}}$.
Quote from the book: "The VC generalization bound is the most important mathematical result in the theory of learning."
VC Generalization Bound
Independent of the learning algorithm, the target function, $P(x)$, and the out-of-sample error - very general indeed.
To use it, we need to bound the growth function for each of our hypothesis spaces; for hyperplanes we showed $d_{vc}$ = number of free parameters = $d+1$.
Exercise 2.5
With $N = 100$ (and the simple model of the exercise, growth function $m_{\mathcal{H}}(N) = N+1$): what is the probability that $E_{in}$ is within 0.1 of $E_{out}$?
The bound says it happens with probability at least $1-\delta$, where $\delta = 4\, m_{\mathcal{H}}(2N)\, e^{-\epsilon^2 N/8}$; here $1-\delta < 0$, which is ridiculous. The bound says nothing at this sample size.
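A quick check of the numbers (a minimal Octave/MATLAB sketch; the growth function $N+1$ is the one assumed above):
N = 100; ep = 0.1;
mH = @(n) n + 1;                        % growth function of this simple model
delta = 4 * mH(2*N) * exp(-ep^2 * N / 8)
% delta comes out around 709, so "probability at least 1 - delta" is vacuous here.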
Cost of Generality
The growth function is really a worst-case quantity, and the bound is independent of $P(x)$, the target, and the out-of-sample error.
That generality is exactly what makes the bound loose.
Sample Complexity
Fix a tolerance $\delta$ (success probability $\ge 1-\delta$) and ask for generalization error at most $\epsilon$. How big must $N$ be?
From the VC bound we need
  $N \ge \tfrac{8}{\epsilon^2} \ln \tfrac{4\, m_{\mathcal{H}}(2N)}{\delta}$,
and we upper bound it further using the VC-dimension polynomial $m_{\mathcal{H}}(2N) \le (2N)^{d_{vc}} + 1$.
Note that $N$ appears on both sides, so we solve iteratively (see the sketch below).
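A minimal sketch of the iterative solve (Octave/MATLAB; $d_{vc} = 3$ and the starting point are illustrative choices):
ep = 0.1; delta = 0.1; dvc = 3;
N = 1000;                                          % initial guess
for it = 1:100
  N = (8/ep^2) * log((4/delta) * ((2*N)^dvc + 1)); % N >= (8/eps^2) ln(4 m_H(2N)/delta)
end
N                                                  % converges to roughly 30,000 examples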
Sample Complexity
Plug in $\epsilon = \delta = 0.1$ and plot:
function sc_vcdim()
  % Plot the line y = N (red) against the bound N >= 800*log(40*((2N)^dvc + 1)),
  % i.e. eps = delta = 0.1, for dvc = 3..9 (blue curves).
  dat = 5000:2500:100000;
  hold off;
  plot(dat, dat, 'r-', 'linewidth', 3);
  hold on;
  for i = 3:9                                % i plays the role of dvc
    tmp = 800*log(40*((2.*dat).^i + 1));     % required N according to the bound
    plot(dat, tmp, 'b-', 'linewidth', 3);
  end
end
Sample Complexity
Book statement: the theory requires $N$ proportional to $d_{vc}$, with a very large constant of proportionality (on the order of 10,000 $d_{vc}$ for $\epsilon = \delta = 0.1$).
In practice, a common rule of thumb is $N \ge 10\, d_{vc}$.
VC Interpretation
We can learn with infinite hypothesis sets.
The VC dimension captures the effective number of parameters / degrees of freedom.
The bound reads: out-of-sample error $\le$ in-sample error + model complexity,
  $E_{out}(g) \le E_{in}(g) + \Omega(N, \mathcal{H}, \delta)$, with $\Omega = \sqrt{\tfrac{8}{N} \ln \tfrac{4\, m_{\mathcal{H}}(2N)}{\delta}}$.
As a figure
[Figure: in-sample error decreases and model complexity $\Omega$ increases with VC dimension; their sum bounds the out-of-sample error and is minimized at an intermediate $d_{vc}$. Balance these.]
Model Selection
$t$ models $m_1, \dots, m_t$: which is better?
Compare $E_{in}(m_1) + \Omega(m_1)$, $E_{in}(m_2) + \Omega(m_2)$, ..., $E_{in}(m_t) + \Omega(m_t)$ and pick the minimum one.
Problem: the slack in $\Omega$.
Vapnik-Chervonenkis (VC) Theorem
Compared to this, the test set estimate ($M = 1$) is a lot tighter: for any $\delta > 0$, with probability at least $1-\delta$,
  $E_{out}(h) \le E_{test}(h) + \sqrt{\tfrac{1}{2N} \ln \tfrac{2}{\delta}}$.
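For a feel of the gap (illustrative numbers, not from the slides): with $N = 1000$ and $\delta = 0.05$, the test set error bar is $\sqrt{\tfrac{1}{2000}\ln 40} \approx 0.04$, whereas the VC bound with $d_{vc} = 3$ gives $\sqrt{\tfrac{8}{1000}\ln\tfrac{4((2000)^3+1)}{0.05}} \approx 0.47$, roughly ten times larger.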
Learning Summary
There is a corresponding theory for regression as well; we will not go there.
We move on to bias-variance, the last piece of learning theory in this course.
Bias-Variance Decomposition
Consider the least squares (squared) error measure again and see if we can understand the out-of-sample error.
For simplicity, assume our target function is noiseless, i.e. the label is an exact function $f(x)$ of the input.
Experiments
Take two model forms, $h(x) = ax+b$ and $h(x) = cx^2 + dx + e$, and a target function on $\tfrac14 < x < \tfrac34$.
Repeatedly pick 3 random data points $(x, \mathrm{target}(x))$, fit both models, plot them, and see what happens (a sketch follows).
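A minimal sketch of this experiment (assuming Octave/MATLAB; the target $f(x) = \sin(2\pi x)$ is only a stand-in, since the slide's exact target on $\tfrac14 < x < \tfrac34$ is not fully specified):
f = @(x) sin(2*pi*x);                    % stand-in target (assumption), noiseless
xs = linspace(0, 1, 200);
hold off;
for trial = 1:20
  x = rand(3, 1);                        % 3 random inputs
  y = f(x);                              % noiseless labels (x, f(x))
  p1 = polyfit(x, y, 1);                 % least squares fit of a*x + b
  p2 = polyfit(x, y, 2);                 % fit of c*x^2 + d*x + e (passes through the 3 points)
  plot(xs, polyval(p1, xs), 'b-'); hold on;
  plot(xs, polyval(p2, xs), 'g-');
end
plot(xs, f(xs), 'r-', 'linewidth', 3);   % target for reference
From trial to trial the quadratic fits swing around far more than the lines: more flexible models vary more with the particular data set they were fit on.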
Bias-Variance
The out-of-sample error we get depends on the hypothesis; the hypothesis is the result of the learning algorithm, which is a function of the training data.
So the training data affects the out-of-sample error.
Think of the data set as a random variable and analyze what happens if we repeatedly sample data and run our learning algorithm.
Notation
$g^{(D)}$ = the hypothesis learned on data set $D$.
Average hypothesis: $\bar{g}(x) = \mathbb{E}_D[\, g^{(D)}(x) \,]$.
Quantity of interest: the expected out-of-sample error $\mathbb{E}_D[\, E_{out}(g^{(D)}) \,]$, where $E_{out}(g^{(D)}) = \mathbb{E}_x[\, (g^{(D)}(x) - f(x))^2 \,]$.
Bias-Variance
  $\mathbb{E}_D[\, E_{out}(g^{(D)}) \,] = \mathbb{E}_x[\, \mathrm{bias}(x) + \mathrm{var}(x) \,]$
Bias: $\mathrm{bias}(x) = (\bar{g}(x) - f(x))^2$.
Variance: $\mathrm{var}(x) = \mathbb{E}_D[\, (g^{(D)}(x) - \bar{g}(x))^2 \,]$.
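The short derivation behind the decomposition (standard; add and subtract $\bar{g}(x)$):
  $\mathbb{E}_D[(g^{(D)}(x) - f(x))^2] = \mathbb{E}_D[(g^{(D)}(x) - \bar{g}(x) + \bar{g}(x) - f(x))^2] = \mathbb{E}_D[(g^{(D)}(x) - \bar{g}(x))^2] + (\bar{g}(x) - f(x))^2 = \mathrm{var}(x) + \mathrm{bias}(x)$,
where the cross term vanishes because $\mathbb{E}_D[\, g^{(D)}(x) - \bar{g}(x) \,] = 0$. Taking $\mathbb{E}_x$ of both sides gives the decomposition above.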
Learning Curves
Plot of the in-sample and out-of-sample error as a function of the training set size $N$ (a sketch of an empirical version follows).
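A minimal sketch of an empirical learning curve (assumptions: Octave/MATLAB, linear regression on noisy data from a stand-in target $f(x) = \sin(2\pi x)$; none of these choices are from the slides):
f = @(x) sin(2*pi*x);
Ns = 5:5:100; trials = 200;
Ein = zeros(size(Ns)); Eout = zeros(size(Ns));
xtest = rand(10000, 1); ytest = f(xtest) + 0.1*randn(10000, 1);  % large fresh sample for E_out
for j = 1:numel(Ns)
  N = Ns(j);
  for t = 1:trials
    x = rand(N, 1); y = f(x) + 0.1*randn(N, 1);       % training set of size N
    p = polyfit(x, y, 1);                             % fit a line by least squares
    Ein(j)  = Ein(j)  + mean((polyval(p, x) - y).^2);
    Eout(j) = Eout(j) + mean((polyval(p, xtest) - ytest).^2);
  end
end
Ein = Ein / trials; Eout = Eout / trials;
plot(Ns, Ein, 'b-', Ns, Eout, 'r-', 'linewidth', 2);  % E_in rises, E_out falls, they converge
xlabel('N'); ylabel('error');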