1
Regression: “A new perspective on freedom”

2
Classification

3
[Figure: an unlabeled example. Cat or dog?]

4
[Scatter plot: Cleanliness vs. Size, cats and dogs as two classes]

5
[Figure: an unlabeled example. What price ($)?]

6
Regression

7
[Scatter plot: price ($) on the y-axis vs. top speed on the x-axis]

8
Regression
Data: (x_1, y_1), …, (x_n, y_n). Goal: given a new x, predict y, i.e. find a prediction function y = f(x).

9
Nearest neighbor [scatter plot of the data]

10
Nearest neighbor
To predict y at a query point x:
– Find the data point x_i closest to x
– Choose y = y_i
+ No training
– Finding the closest point can be expensive
– Overfitting
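The two-step rule above can be sketched in a few lines (a minimal NumPy sketch; the function name and toy data are illustrative, not from the lecture):

```python
import numpy as np

def nearest_neighbor_predict(X_train, y_train, x):
    """Predict y for a query x as the label of the closest training point."""
    # Squared Euclidean distance from x to every training point.
    dists = np.sum((X_train - x) ** 2, axis=1)
    return y_train[np.argmin(dists)]

# Toy 1-D data lying on y = 2x.
X_train = np.array([[0.0], [1.0], [2.0], [3.0]])
y_train = np.array([0.0, 2.0, 4.0, 6.0])

print(nearest_neighbor_predict(X_train, y_train, np.array([1.2])))  # closest point is x=1, so 2.0
```

The brute-force distance scan is what makes each prediction expensive on large datasets, which is the “finding closest point can be expensive” con on the slide.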

11
Kernel Regression
To predict y at a query point x:
– Give data point x_i weight w_i = K(x, x_i)
– Normalize the weights so they sum to 1
– Let y = Σ_i w_i y_i
e.g. the Gaussian kernel K(x, x_i) = exp(−‖x − x_i‖² / 2σ²)

12
Kernel Regression [scatter plot; matlab demo]

13
Kernel Regression
+ No training
+ Smooth prediction
– Slower than nearest neighbor
– Must choose the kernel width σ
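The weighted-average prediction can be sketched as follows (a minimal NumPy sketch of kernel regression with a Gaussian kernel; names and data are illustrative):

```python
import numpy as np

def kernel_regression_predict(X_train, y_train, x, sigma=1.0):
    """Kernel regression: kernel-weighted average of the training labels."""
    sq_dists = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-sq_dists / (2.0 * sigma ** 2))  # Gaussian kernel weights
    w = w / w.sum()                             # normalize weights to sum to 1
    return np.dot(w, y_train)

X_train = np.array([[0.0], [1.0], [2.0], [3.0]])
y_train = np.array([0.0, 2.0, 4.0, 6.0])
print(kernel_regression_predict(X_train, y_train, np.array([1.5]), sigma=0.5))  # 3.0 by symmetry
```

Note the kernel width σ appears as an explicit parameter: that is the quantity the slide warns you must choose.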

14
Linear regression

15
Given examples (x_i, y_i), i = 1 … n. Predict y given a new point x. [Temperature scatter plots; start Matlab demo lecture2.m]

16
Linear regression [plot: temperature data with the linear prediction line]

17
Linear Regression
Error or “residual”: r_i = y_i − f(x_i), the difference between the observation y_i and the prediction f(x_i).
Sum squared error: E(w) = Σ_i (y_i − w·x_i)²

18
Linear Regression
With X the n × d matrix of inputs and y the vector of targets, solve the system XᵀX w = Xᵀy (it’s better not to invert the matrix).

19
Minimize the sum squared error
Sum squared error: E(w) = Σ_i (y_i − w·x_i)²
Setting the gradient to zero, −2 Σ_i (y_i − w·x_i) x_i = 0, is a linear equation in w, i.e. the linear system XᵀX w = Xᵀy.
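Solving the linear system directly, rather than forming a matrix inverse, looks like this (a minimal NumPy sketch; the toy data are illustrative):

```python
import numpy as np

# Design matrix X (n x d, with a constant column for the intercept) and targets y.
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])  # exactly y = 1 + 2x

# Normal equations X^T X w = X^T y, solved directly instead of inverting X^T X.
w = np.linalg.solve(X.T @ X, X.T @ y)
print(w)  # [1. 2.]
```

`np.linalg.solve` uses a factorization of XᵀX, which is cheaper and numerically better behaved than computing (XᵀX)⁻¹ explicitly, matching the slide’s advice not to invert the matrix.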

20
LMS Algorithm (Least Mean Squares)
w ← w + η (y_i − w·x_i) x_i, where η is the step size.
Online algorithm: update after each example.
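The online update can be sketched as follows (a minimal NumPy sketch; the step size, epoch count, and toy data are illustrative choices, not from the lecture):

```python
import numpy as np

def lms(X, y, eta=0.05, epochs=500):
    """Least Mean Squares: online updates w <- w + eta * (y_i - w.x_i) * x_i."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            w += eta * (y_i - w @ x_i) * x_i  # one gradient step per example
    return w

X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])  # exactly y = 1 + 2x
print(lms(X, y))  # converges toward [1., 2.]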

21
Beyond lines and planes
Everything is the same with features φ(x), e.g. φ(x) = (1, x, x²): the model f(x) = Σ_j w_j φ_j(x) is still linear in w. [Plot: polynomial fit]
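“Still linear in w” means the same normal-equations machinery fits a curve once the inputs are expanded into basis features (a minimal NumPy sketch; the quadratic toy data are illustrative):

```python
import numpy as np

# Fit y = w0 + w1*x + w2*x^2 with ordinary linear regression on phi(x) = [1, x, x^2].
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = x ** 2 - 3.0 * x + 2.0                 # exactly a quadratic

Phi = np.stack([np.ones_like(x), x, x ** 2], axis=1)  # n x 3 feature matrix
w = np.linalg.solve(Phi.T @ Phi, Phi.T @ y)
print(w)  # [ 2. -3.  1.]
```

Only the feature map changed; the solver and the sum-squared-error objective are identical to the plain linear case.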

22
Linear Regression [summary]
Given examples (x_i, y_i), i = 1 … n. Let X be the n × d matrix with rows x_iᵀ (for example, rows φ(x_i)ᵀ for basis functions φ) and y the vector of targets. Minimize ‖Xw − y‖² by solving XᵀX w = Xᵀy. Predict y = w·x.

23
Probabilistic interpretation
Likelihood: assume y_i = w·x_i + ε_i with Gaussian noise ε_i ~ N(0, σ²). Maximizing the likelihood of the data is equivalent to minimizing the sum squared error.

24
Overfitting [plot: degree 15 polynomial oscillating through the data; Matlab demo]

25
Ridge Regression (Regularization)
Minimize ‖Xw − y‖² + λ‖w‖². Solve (XᵀX + λI) w = Xᵀy.
[Plot: effect of regularization (degree 19) with “small” λ]
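The regularized system differs from ordinary least squares only by the λI term on the diagonal (a minimal NumPy sketch; data and λ values are illustrative):

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Ridge regression: solve (X^T X + lam*I) w = X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])
print(ridge_fit(X, y, lam=0.0))   # [1. 2.], ordinary least squares
print(ridge_fit(X, y, lam=10.0))  # coefficients shrunk toward zero
```

Larger λ shrinks the weight vector toward zero, which is what tames the wild high-degree polynomial in the plot.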

26
Probabilistic interpretation
Posterior ∝ Likelihood × Prior: with a Gaussian prior on w, maximizing the posterior (MAP estimation) gives exactly ridge regression.

27
Locally Linear Regression

28
Global temperature increase [plot, 1840–2020; source: http://www.cru.uea.ac.uk/cru/data/temperature]

29
Locally Linear Regression
To predict y at a query point x:
– Give data point x_i weight w_i = K(x, x_i), e.g. a Gaussian kernel
– Fit a weighted linear regression around x

30
Locally Linear Regression
To minimize Σ_i w_i (y_i − β·x_i)², solve the weighted system XᵀWX β = XᵀWy, where W = diag(w_1, …, w_n). Predict y = β·x.
+ Good even at the boundary (more important in high dimension)
– Solve a linear system for each new prediction
– Must choose the kernel width σ
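Per-query weighted least squares can be sketched like this (a minimal NumPy sketch; names and the linear toy data are illustrative):

```python
import numpy as np

def locally_linear_predict(X_train, y_train, x, sigma=1.0):
    """Fit a kernel-weighted least-squares line around the query x, evaluate it at x."""
    sq_dists = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-sq_dists / (2.0 * sigma ** 2))  # Gaussian kernel weights
    W = np.diag(w)
    # Weighted normal equations, solved anew for each query point.
    beta = np.linalg.solve(X_train.T @ W @ X_train, X_train.T @ W @ y_train)
    return x @ beta

# Include a constant feature so the local fit has an intercept.
X_train = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y_train = np.array([1.0, 3.0, 5.0, 7.0])  # exactly y = 1 + 2x
print(locally_linear_predict(X_train, y_train, np.array([1.0, 1.5])))  # 4.0
```

The linear solve inside the function runs once per prediction, which is the “solve a linear system for each new prediction” cost noted on the slide.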

31
Locally Linear Regression, Gaussian kernel [temperature plot; source: http://www.cru.uea.ac.uk/cru/data/temperature]

32
Locally Linear Regression, Laplacian kernel [temperature plot; source: http://www.cru.uea.ac.uk/cru/data/temperature]

33
L1 Regression

34
Sensitivity to outliers
Squared error gives high weight to outliers: the influence function (the derivative of the loss) grows linearly with the residual.

35
L1 Regression
Minimize Σ_i |y_i − w·x_i|. This can be written as a linear program, and the influence function is bounded, so outliers have limited effect.
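One standard way to write the L1 objective as a linear program introduces a slack t_i ≥ |y_i − x_i·w| per example and minimizes Σ t_i (a sketch using SciPy’s `linprog`; the formulation is standard but the function name and outlier data are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def l1_regression(X, y):
    """L1 regression min_w sum_i |y_i - x_i.w| as a linear program.

    Variables are [w, t]; t_i >= |y_i - x_i.w| is enforced by the two
    inequalities X w - t <= y and -X w - t <= -y.
    """
    n, d = X.shape
    c = np.concatenate([np.zeros(d), np.ones(n)])      # minimize sum of slacks
    A_ub = np.block([[X, -np.eye(n)],
                     [-X, -np.eye(n)]])
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * d + [(0, None)] * n      # w free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:d]

# One gross outlier: the L1 fit still recovers the line through the other points.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
y = np.array([1.0, 3.0, 5.0, 7.0, 100.0])  # last point is far off y = 1 + 2x
print(l1_regression(X, y))  # ~ [1., 2.]
```

A least-squares fit on the same data would be dragged toward the outlier; the bounded influence of the absolute-value loss is what keeps the L1 line in place.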

36
Spline Regression: regression on each interval [piecewise fit plot]

37
Spline Regression: with equality constraints at the interval boundaries [plot]

38
Spline Regression: with L1 cost [plot]
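A continuous piecewise-linear spline fit can be obtained with ordinary least squares by using hinge basis functions max(0, x − k), which build the continuity constraint into the basis (a minimal NumPy sketch; the knot locations and piecewise target are illustrative):

```python
import numpy as np

# Linear spline basis [1, x, max(0, x-k1), max(0, x-k2)]: continuous piecewise-linear
# fits with knots k1, k2, fitted by the usual least-squares solve.
x = np.linspace(0.0, 6.0, 61)
y = np.where(x < 2.0, x, np.where(x < 4.0, 2.0, 2.0 + 3.0 * (x - 4.0)))  # piecewise linear

knots = [2.0, 4.0]
Phi = np.stack([np.ones_like(x), x] + [np.maximum(0.0, x - k) for k in knots], axis=1)
w = np.linalg.solve(Phi.T @ Phi, Phi.T @ y)
pred = Phi @ w
print(np.max(np.abs(pred - y)))  # ~ 0: the basis represents this target exactly
```

Swapping the squared loss here for the absolute loss (as in the L1 slides above) would give the outlier-robust spline variant.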

39
To learn more: The Elements of Statistical Learning, Hastie, Tibshirani & Friedman, Springer.
