Lecture 1: Introduction to Machine Learning Methods


1 Lecture 1: Introduction to Machine Learning Methods
Stephen P. Ryan Olin Business School Washington University in St. Louis

2 Structure of the Course
Goals:
- Basic machine learning algorithms
- Heterogeneous treatment effects
- Some recent econometric theory advances
- How to use machine learning in your research projects
Sequence:
- Introduction to various ML techniques
- Causal inference and ML
- Random projection with large choice sets
- Double machine learning
- Using text data
- Moment trees for heterogeneous models

3 Shout Out to Elements of Statistical Learning
- Free PDF online
- Convenient summary of many ML techniques by some of the leaders of the field: Trevor Hastie, Robert Tibshirani, and Jerome Friedman
- Many examples, figures, and algorithms in these notes are drawn from this book
- Other web resources:
  - Christopher Manning and Richard Socher: Natural Language Processing with Deep Learning
  - Oxford Deep NLP course: /lectures
  - Deep learning:

4 Machine Learning
- What is machine learning? Key idea: prediction
- Econometric theory has provided a general treatment of nonparametric estimation in the last twenty years
- Highest level: nothing new under the sun
- Twist: combine model selection with estimation
- Fit in the middle ground between fully pre-specified parametric models and completely nonparametric approaches
- Scalability on large data sets (e.g., so-called big data)

5 Two Broad Types of Machine Learning Methods
Frequency domain:
- Pre-specify the right-hand-side variables
- Select among those
- Many methods select a finite number of elements from a potentially high-dimensional set
Characteristic space:
- Search over which variables (and their interactions) belong as explanatory variables
- Don't need to take a stand on the functional form of elements
Both approaches require some restrictions on function complexity.

6 The General Problem
Consider: $y = f(x, \beta, \epsilon)$
- Even when x is univariate, this can be a complex relationship
- What are some approaches for estimating this relationship?
  - Nonparametric: kernel regression
  - Semi-nonparametric: series estimation, e.g., b-splines with increasing knot density
  - Parametric: assume a functional form, e.g., $y = x'\beta + \epsilon$
- Each approach has pros and cons (see the sketch below)
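A minimal sketch of the parametric and nonparametric ends of this spectrum. The simulated data, the bandwidth, and the hand-rolled Nadaraya-Watson estimator are illustrative assumptions, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(4 * x) + 0.3 * rng.normal(size=200)   # true f is nonlinear

# Parametric: assume y = x'beta + eps and fit by OLS
X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
y_ols = X @ beta

# Nonparametric: Nadaraya-Watson kernel regression with a Gaussian kernel
def kernel_reg(x0, x, y, h=0.05):
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

y_kern = kernel_reg(x, x, y, h=0.05)

print("OLS in-sample MSE:   ", np.mean((y - y_ols) ** 2))
print("Kernel in-sample MSE:", np.mean((y - y_kern) ** 2))
```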

7 ML Approaches in Characteristic Space
- Alternatively, consider growing the set of RHS variables
- Classification and regression trees do this
- Recursive partitioning of the characteristic space
- One starts with just the data and lets the tree decide what interactions matter
- Nodes split on a decision rule
- Final nodes ("leaves") of the tree are a classification vote or the mean value of the function
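A small illustration of recursive partitioning, assuming scikit-learn's DecisionTreeRegressor and simulated data (neither is specified in the slides); the printed rules show which variable the tree chose to split on:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 3))            # three candidate regressors
y = np.where(X[:, 0] > 0.2, 2.0, -1.0) + 0.1 * rng.normal(size=500)

# The tree decides which variables (and interactions) matter via recursive splits
tree = DecisionTreeRegressor(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["x1", "x2", "x3"]))
```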

8 ML Approaches in Frequency Domain
- Many different ML algorithms approach this problem by saturating the RHS of the model, then assigning most of the variables zero influence
- Variants of penalized regression: $\min_\beta \sum_i (y_i - x_i'\beta)^2 + G(\beta)$, where G is some penalty function
  - Ridge regression: penalize the sum of squared coefficients (with normalized X)
  - LASSO: penalize the sum of absolute values of the coefficients
  - Square-root LASSO: same absolute-value penalty, applied to the square root of the sum of squared residuals
- Support vector machines have a slightly different objective function
- There is also a series of incremental approaches, moving from simple to complex
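A rough sketch of ridge versus LASSO on a sparse design, assuming scikit-learn's Ridge and Lasso (the library, penalty levels, and simulated data are illustrative choices); it shows the L1 penalty setting most coefficients exactly to zero:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]                  # only 3 of 50 regressors matter
y = X @ beta_true + rng.normal(size=n)

ridge = Ridge(alpha=1.0).fit(X, y)                # L2 penalty: shrinks all coefficients
lasso = Lasso(alpha=0.1).fit(X, y)                # L1 penalty: zeroes out most coefficients

print("nonzero ridge coefficients:", np.sum(ridge.coef_ != 0))
print("nonzero lasso coefficients:", np.sum(lasso.coef_ != 0))
```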

9 Comparing LASSO and Ridge Regression

10 Incremental Forward Stagewise Regression

11 Coefficient Paths

12 Basis Functions

13 Increasing Smoothness

14 Splines in General
- There are many varieties of splines
- The smoothing spline is the solution to the following problem: $\min_f \sum_i (y_i - f(x_i))^2 + \lambda \int f''(t)^2 \, dt$
- Amazingly, there is a cubic spline that minimizes this objective function
- All splines can also be written in terms of basis splines, or b-splines
- B-splines are great and you should think about them when you need to approximate an unknown function
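One way to fit a smoothing spline in practice is SciPy's UnivariateSpline, shown in this rough sketch; the smoothing level s plays the role of the penalty and is chosen arbitrarily here, and the simulated data are an assumption:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 100))
y = np.sin(6 * x) + 0.2 * rng.normal(size=100)

# s controls the total smoothing (larger s -> smoother fit); k=3 gives a cubic spline
spl = UnivariateSpline(x, y, k=3, s=4.0)
print("fitted values at a few points:", spl(np.array([0.1, 0.5, 0.9])))
```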

15 B-Spline Construction
- Define a set of knots for a b-spline of degree M
- We place two knots at the endpoints of our data; call them knot 0 and knot K
- We define K-1 knots on the interior between those two endpoints
- We also add M knots to the left and to the right of the outside endpoints
- We then can compute the b-splines recursively (see the book for details)
- Key point: b-splines are defined locally
- Upshot: numerically stable
- Approximate a function by least squares: $\hat{f}(x) = \sum_i \beta_i \, bs_i(x)$
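A sketch of the least-squares b-spline fit described above, assuming SciPy's make_lsq_spline and an illustrative knot layout (boundary knots repeated, a handful of interior knots); the details differ slightly from the construction in the slide but the idea is the same:

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(6 * x) + 0.2 * rng.normal(size=200)

k = 3                                    # cubic b-splines (degree M = 3)
interior = np.linspace(0.1, 0.9, 9)      # interior knots between the endpoints
# Full knot vector: boundary knots repeated k+1 times, interior knots in between
t = np.r_[[x[0]] * (k + 1), interior, [x[-1]] * (k + 1)]

spl = make_lsq_spline(x, y, t, k)        # least-squares fit of the basis coefficients
print("estimated basis coefficients:", spl.c.round(2))
```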

16 Visual Representation of Basis Functions

17 Bias vs Variance: Smoothing Spline Example

18 Many Techniques Have Tuning Parameters
- In general, the question is how to determine those tuning parameters
- One approach is to use cross-validation
- Basic idea: estimate on one sample, predict on another
- Common approach: leave-one-out (LOO)
  - Across all subsamples where one observation is removed, estimate the model, predict the error for the omitted observation, and sum up
- Balances too much variance (overfitting) against too much bias (oversmoothing)
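A sketch of leave-one-out cross-validation for a tuning parameter, assuming scikit-learn's LeaveOneOut and a ridge penalty as the parameter being tuned (both are illustrative choices):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
y = X @ np.array([1.0, 0.5, 0.0, 0.0, 0.0]) + rng.normal(size=60)

# Leave-one-out CV: for each candidate penalty, fit on n-1 points and
# predict the held-out point; pick the penalty with the smallest average error
for alpha in [0.01, 0.1, 1.0, 10.0]:
    mse = -cross_val_score(Ridge(alpha=alpha), X, y,
                           cv=LeaveOneOut(),
                           scoring="neg_mean_squared_error").mean()
    print(f"alpha={alpha:5.2f}  LOO MSE={mse:.3f}")
```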

19 Example: Smoothing Splines

20 General Problem: Model Selection and Fit

21 How to Think About This Problem
- In general, need to assess fit in-sample
- Need to assess fit across models
- Bias-variance decomposition: $E[(y - \hat{f}(x))^2] = \sigma_\epsilon^2 + \mathrm{Bias}^2(\hat{f}(x)) + \mathrm{Var}(\hat{f}(x))$
- Optimal solution is a three-way partitioning of the data set: training data (fit the model), validation data (choose among models), and test data (report test error)
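A sketch of the three-way partition, with assumed proportions (60/20/20) and ridge penalties standing in for the competing models:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(size=300)

# Three-way partition: 60% train, 20% validation, 20% test
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Fit candidate models on the training data, choose among them on the validation data
candidates = {a: Ridge(alpha=a).fit(X_train, y_train) for a in [0.01, 1.0, 100.0]}
best_alpha = min(candidates, key=lambda a: mean_squared_error(y_val, candidates[a].predict(X_val)))

# Report the error of the chosen model on the untouched test data
print("chosen alpha:", best_alpha)
print("test MSE:", mean_squared_error(y_test, candidates[best_alpha].predict(X_test)))
```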

22 Related to ML Methods
- Many ML methods seek to balance the variance-bias tradeoff in some fashion
- We will see honest trees later on, building on this principle through sample splitting
- There are also some stochastic methods, such as bagging and random forests

23 Bootstrap and Bagging
- The bootstrap refers to the process of resampling your observations to (hopefully) learn something about the population, typically standard errors or confidence intervals
- Resample with replacement many times and compute the statistic of interest on each sample; this produces a distribution
- Bagging (bootstrap aggregation) is a similar idea: replace the single-sample estimate with an average over bootstrap samples: $\hat{f}_{bag}(x) = \frac{1}{B}\sum_{b=1}^{B} \hat{f}^{*b}(x)$
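A minimal bootstrap sketch in NumPy; the lognormal sample and B = 2000 replications are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.lognormal(size=200)        # one observed sample

# Bootstrap: resample with replacement B times and recompute the statistic
B = 2000
boot_means = np.array([rng.choice(sample, size=sample.size, replace=True).mean()
                       for _ in range(B)])

print("sample mean:        ", sample.mean().round(3))
print("bootstrap std error:", boot_means.std(ddof=1).round(3))
print("bootstrap 95% CI:   ", np.percentile(boot_means, [2.5, 97.5]).round(3))
```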

24 Trees
- One of the settings where bagging can really reduce variance is in trees
- Trees recursively partition the feature space into rectangles
- At each node, the tree splits the sample on the basis of some rule (e.g., x3 > 0.7, or x4 in {BMW, Mercedes-Benz})
- Splits are chosen to optimize some criterion (e.g., minimizing mean-squared prediction error); see the split-search sketch below
- The tree is grown until some stopping criterion is met
- A leaf returns a type (classification tree) or an average value (regression tree)
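A toy sketch of the split search at a single node (the helper best_split is hypothetical, not from the slides): it scans thresholds on one variable and keeps the one minimizing the summed squared error of the two leaf means:

```python
import numpy as np

def best_split(x, y):
    """Find the threshold on x that minimizes total within-leaf squared error."""
    best_t, best_sse = None, np.inf
    for t in np.unique(x)[1:]:                      # candidate thresholds
        left, right = y[x < t], y[x >= t]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_t, best_sse = t, sse
    return best_t, best_sse

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 300)
y = np.where(x > 0.7, 1.0, 0.0) + 0.1 * rng.normal(size=300)
print(best_split(x, y))                             # recovers a threshold near 0.7
```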

25 Example of a Tree

26 Bagging a Tree
- Bagging a tree can often lead to great reductions in the variance of the estimate
- Why? Bagging replaces a single estimate using all the data with an ensemble of estimates using data resampled with replacement
- Let $\phi(x)$ be the predictor from a given sample and let $\mu(x) = E(\phi(x))$ be its average over samples. Then: $E[(Y - \phi(x))^2] = E[(Y - \mu(x))^2] + E[(\phi(x) - \mu(x))^2] \ge E[(Y - \mu(x))^2]$, so the aggregated predictor never does worse in mean-squared error
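A sketch comparing a single deep tree with a bagged ensemble, assuming scikit-learn's BaggingRegressor; the data-generating process and number of bootstrap replications are illustrative:

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 5))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.3 * rng.normal(size=400)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

single = DecisionTreeRegressor(random_state=0).fit(X_tr, y_tr)
bagged = BaggingRegressor(DecisionTreeRegressor(), n_estimators=200,
                          random_state=0).fit(X_tr, y_tr)

print(f"single tree test MSE:  {mean_squared_error(y_te, single.predict(X_te)):.3f}")
print(f"bagged trees test MSE: {mean_squared_error(y_te, bagged.predict(X_te)):.3f}")
```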

27 Random Forest
- The idea of resampling can be extended to subsampling
- The random forest is an ensemble version of the regression tree
- Key difference: estimate trees on bootstrap samples, but restrict the set of variables considered at each split to a random subset
- Why? This helps break the correlation across trees
- That's useful since the variance of the mean of B identically distributed RVs with pairwise correlation $\rho$ is: $\rho \sigma^2 + \frac{1-\rho}{B} \sigma^2$
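A sketch of the decorrelation effect, assuming scikit-learn's RandomForestRegressor and simulated data; max_features limits how many variables each split may consider, which is the mechanism the slide describes:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = X[:, 0] + 2 * X[:, 1] * X[:, 2] + 0.5 * rng.normal(size=500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Smaller max_features decorrelates the trees (the rho in the variance formula above)
for m in [10, 3, 1]:
    rf = RandomForestRegressor(n_estimators=300, max_features=m,
                               random_state=0).fit(X_tr, y_tr)
    print(f"max_features={m:2d}  test MSE={mean_squared_error(y_te, rf.predict(X_te)):.3f}")
```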

28 Random Forest Algorithm

29 Support Vector Machines (linear kernel)
- Support vector machines are a modified penalized regression method: $\min_\beta \sum_i V(y_i - f(x_i)) + \frac{\lambda}{2} \|\beta\|^2$
- where $V_\epsilon(r) = |r| - \epsilon$ if $|r| \ge \epsilon$, and 0 otherwise
- Basically, we are going to ignore "small" errors
- This is a nonlinear optimization problem; it results in only a subset ("support vectors") of coefficients being non-zero when $f(x_i)$ is linear
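A sketch of a linear-kernel support vector regression, assuming scikit-learn's SVR (the values of C and epsilon are illustrative); with the linear kernel, coef_ gives the fitted coefficients and support_ indexes the observations with non-zero dual weights:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = X @ np.array([1.0, -2.0, 0.0, 0.0]) + 0.3 * rng.normal(size=200)

# epsilon sets the size of the "small" errors that the loss ignores
svr = SVR(kernel="linear", C=1.0, epsilon=0.5).fit(X, y)

print("estimated coefficients:", svr.coef_.round(2))
print("number of support vectors:", len(svr.support_), "of", len(y))
```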

30 So How Is Any of This Useful?
- Think about machine learning as a combination of model selection and estimation
- Econometric theory has given us high-level tools for thinking about completely nonparametric estimation
- These techniques fit between fully parametric and fully nonparametric estimation
- Key point: we are approximating conditional expectations
- The economics literature is now considering the problem of how to take model selection seriously

31 Counterpoint

