Pattern Recognition and Machine Learning Chapter 3: Linear models for regression

Linear Basis Function Models (1) Example: Polynomial Curve Fitting

Sum-of-Squares Error Function: E(w) = ½ Σ_{n=1..N} { y(x_n, w) − t_n }²

0th Order Polynomial

1st Order Polynomial

3rd Order Polynomial

9th Order Polynomial

Linear Basis Function Models (2) Generally, y(x, w) = Σ_{j=0..M−1} w_j φ_j(x) = w^T φ(x), where the φ_j(x) are known as basis functions. Typically, φ_0(x) = 1, so that w_0 acts as a bias. In the simplest case, we use linear basis functions: φ_d(x) = x_d.

Linear Basis Function Models (3) Polynomial basis functions: φ_j(x) = x^j. These are global; a small change in x affects all basis functions.

Linear Basis Function Models (4) Gaussian basis functions: φ_j(x) = exp(−(x − μ_j)² / (2s²)). These are local; a small change in x only affects nearby basis functions. μ_j and s control location and scale (width).

Linear Basis Function Models (5) Sigmoidal basis functions: φ_j(x) = σ((x − μ_j)/s), where σ(a) = 1/(1 + exp(−a)). These are also local; a small change in x only affects nearby basis functions. μ_j and s control location and scale (slope).
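The three families above are straightforward to write down in NumPy. The sketch below is illustrative only: the grid of centres μ_j, the width s, and the input range are hypothetical choices, not values taken from the slides.

```python
import numpy as np

def polynomial_basis(x, M):
    """Global basis: phi_j(x) = x**j for j = 0..M-1 (phi_0 = 1 gives the bias)."""
    return np.vstack([x**j for j in range(M)]).T            # shape (N, M)

def gaussian_basis(x, mu, s):
    """Local basis: phi_j(x) = exp(-(x - mu_j)**2 / (2 s**2))."""
    return np.exp(-(x[:, None] - mu[None, :])**2 / (2 * s**2))

def sigmoidal_basis(x, mu, s):
    """Local basis: phi_j(x) = sigma((x - mu_j) / s), with the logistic sigmoid."""
    return 1.0 / (1.0 + np.exp(-(x[:, None] - mu[None, :]) / s))

# Hypothetical design matrix: bias column plus 9 Gaussian basis functions on [0, 1]
x = np.linspace(0, 1, 50)
Phi = np.hstack([np.ones((len(x), 1)),
                 gaussian_basis(x, np.linspace(0, 1, 9), s=0.1)])
```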

Maximum Likelihood and Least Squares (1) Assume observations from a deterministic function with added Gaussian noise: t = y(x, w) + ε, where p(ε | β) = N(ε | 0, β⁻¹), which is the same as saying p(t | x, w, β) = N(t | y(x, w), β⁻¹). Given observed inputs X = {x_1, …, x_N} and targets t = (t_1, …, t_N)^T, we obtain the likelihood function p(t | X, w, β) = Π_{n=1..N} N(t_n | w^T φ(x_n), β⁻¹).

Maximum Likelihood and Least Squares (2) Taking the logarithm, we get ln p(t | w, β) = (N/2) ln β − (N/2) ln(2π) − β E_D(w), where E_D(w) = ½ Σ_{n=1..N} { t_n − w^T φ(x_n) }² is the sum-of-squares error.

Maximum Likelihood and Least Squares (3) Computing the gradient and setting it to zero yields ∇_w ln p(t | w, β) = β Σ_{n=1..N} { t_n − w^T φ(x_n) } φ(x_n)^T = 0. Solving for w, we get w_ML = (Φ^T Φ)⁻¹ Φ^T t = Φ† t, where Φ is the N × M design matrix with elements Φ_nj = φ_j(x_n), and Φ† = (Φ^T Φ)⁻¹ Φ^T is the Moore-Penrose pseudo-inverse.
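As a sketch, this closed-form solution translates directly into NumPy; np.linalg.pinv computes the Moore-Penrose pseudo-inverse, and the β estimate anticipates the result on the next slide.

```python
import numpy as np

def fit_ml(Phi, t):
    """w_ML = (Phi^T Phi)^(-1) Phi^T t, computed via the Moore-Penrose pseudo-inverse."""
    w_ml = np.linalg.pinv(Phi) @ t
    # ML estimate of the noise precision beta (next slide): 1/beta_ML is the
    # mean squared residual of the fitted model.
    beta_ml = 1.0 / np.mean((t - Phi @ w_ml) ** 2)
    return w_ml, beta_ml
```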

Maximum Likelihood and Least Squares (4) Maximizing with respect to the bias, w_0, alone, we see that w_0 = t̄ − Σ_{j=1..M−1} w_j φ̄_j, where t̄ = (1/N) Σ_n t_n and φ̄_j = (1/N) Σ_n φ_j(x_n): the bias compensates for the difference between the average target value and the weighted sum of the averaged basis functions. We can also maximize with respect to β, giving 1/β_ML = (1/N) Σ_{n=1..N} { t_n − w_ML^T φ(x_n) }².

Geometry of Least Squares Consider y = Φ w_ML as a vector in the N-dimensional space whose axes are given by the target values t_1, …, t_N. The M-dimensional subspace S is spanned by the columns φ_1, …, φ_M of Φ. w_ML minimizes the distance between t and its orthogonal projection onto S, i.e. y.

Sequential Learning Data items are considered one at a time (a.k.a. online learning); use stochastic (sequential) gradient descent: w^(τ+1) = w^(τ) + η { t_n − w^(τ)T φ(x_n) } φ(x_n). This is known as the least-mean-squares (LMS) algorithm. Issue: how to choose η?
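A minimal sketch of the LMS update; the learning rate η and the single pass over the data below are illustrative choices, not recommendations from the slides.

```python
import numpy as np

def lms(Phi, t, eta=0.1, n_passes=1):
    """Least-mean-squares: one stochastic gradient step per data point,
    w <- w + eta * (t_n - w^T phi_n) * phi_n."""
    w = np.zeros(Phi.shape[1])
    for _ in range(n_passes):
        for phi_n, t_n in zip(Phi, t):
            w += eta * (t_n - w @ phi_n) * phi_n
    return w
```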

1st Order Polynomial

3rd Order Polynomial

9th Order Polynomial

Over-fitting Root-Mean-Square (RMS) Error: E_RMS = √( 2 E(w*) / N )

Polynomial Coefficients

Regularized Least Squares (1) Consider the error function E_D(w) + λ E_W(w) (data term + regularization term), where λ is called the regularization coefficient. With the sum-of-squares error function and a quadratic regularizer, we get ½ Σ_{n=1..N} { t_n − w^T φ(x_n) }² + (λ/2) w^T w, which is minimized by w = (λ I + Φ^T Φ)⁻¹ Φ^T t.
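The regularized solution is a single line of linear algebra; a sketch, where lam stands for the regularization coefficient λ:

```python
import numpy as np

def fit_ridge(Phi, t, lam):
    """Regularized least squares: w = (lambda I + Phi^T Phi)^(-1) Phi^T t."""
    M = Phi.shape[1]
    return np.linalg.solve(lam * np.eye(M) + Phi.T @ Phi, Phi.T @ t)
```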

Regularized Least Squares (1) Is it true that … or …?

Regularized Least Squares (2) With a more general regularizer, we have ½ Σ_{n=1..N} { t_n − w^T φ(x_n) }² + (λ/2) Σ_{j=1..M} |w_j|^q. The case q = 1 is known as the lasso; q = 2 gives the quadratic regularizer.

Regularized Least Squares (3) Lasso tends to generate sparser solutions than a quadratic regularizer.

Understanding the Lasso Regularizer The lasso regularizer plays the role of thresholding: coefficients that are small enough are driven exactly to zero.
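One way to see the thresholding behaviour (a standard result that assumes an orthonormal design matrix, an assumption not stated on the slides): the lasso solution is obtained by soft-thresholding the ordinary least-squares coefficients.

```python
import numpy as np

def soft_threshold(w_ls, thresh):
    """Lasso solution for an orthonormal design matrix: shrink each least-squares
    coefficient towards zero, setting it exactly to zero once |w_j| <= thresh
    (thresh is proportional to the regularization coefficient lambda)."""
    return np.sign(w_ls) * np.maximum(np.abs(w_ls) - thresh, 0.0)
```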

Multiple Outputs (1) Analogously to the single-output case we have p(t | x, W, β) = N(t | W^T φ(x), β⁻¹ I). Given observed inputs X = {x_1, …, x_N} and targets T = [t_1, …, t_N]^T, we obtain the log likelihood function ln p(T | X, W, β) = (NK/2) ln(β / 2π) − (β/2) Σ_{n=1..N} || t_n − W^T φ(x_n) ||².

Multiple Outputs (2) Maximizing with respect to W, we obtain W_ML = (Φ^T Φ)⁻¹ Φ^T T. If we consider a single target variable, t_k, we see that w_k = (Φ^T Φ)⁻¹ Φ^T t_k = Φ† t_k, where t_k = (t_{1k}, …, t_{Nk})^T, which is identical to the single-output case.

The Bias-Variance Decomposition (1) Recall the expected squared loss, E[L] = ∫ { y(x) − h(x) }² p(x) dx + ∫∫ { h(x) − t }² p(x, t) dx dt, where h(x) = E[t | x] = ∫ t p(t | x) dt. The second term of E[L] corresponds to the noise inherent in the random variable t. What about the first term?

The Bias-Variance Decomposition (2) Suppose we were given multiple data sets, each of size N. Any particular data set, D, will give a particular function y(x; D). We then have { y(x; D) − h(x) }² = { y(x; D) − E_D[y(x; D)] }² + { E_D[y(x; D)] − h(x) }² + 2 { y(x; D) − E_D[y(x; D)] } { E_D[y(x; D)] − h(x) }.

The Bias-Variance Decomposition (3) Taking the expectation over D yields E_D[ { y(x; D) − h(x) }² ] = { E_D[y(x; D)] − h(x) }² + E_D[ { y(x; D) − E_D[y(x; D)] }² ] = (bias)² + variance.

The Bias-Variance Decomposition (4) Thus we can write expected loss = (bias)² + variance + noise, where (bias)² = ∫ { E_D[y(x; D)] − h(x) }² p(x) dx, variance = ∫ E_D[ { y(x; D) − E_D[y(x; D)] }² ] p(x) dx, and noise = ∫∫ { h(x) − t }² p(x, t) dx dt.

Bias-Variance Tradeoff

The Bias-Variance Decomposition (5) Example: 25 data sets from the sinusoidal, varying the degree of regularization, λ.

The Bias-Variance Decomposition (6) Example: 25 data sets from the sinusoidal, varying the degree of regularization, λ.

The Bias-Variance Decomposition (7) Example: 25 data sets from the sinusoidal, varying the degree of regularization, λ.

The Bias-Variance Trade-off From these plots, we note that an over-regularized model (large λ) will have a high bias, while an under-regularized model (small λ) will have a high variance.
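The decomposition can also be checked numerically: generate several data sets from a noisy sinusoid, fit a regularized linear model to each, and average. Everything below (25 data sets of 25 points, 9 Gaussian basis functions, the particular λ and noise level) is an illustrative setup in the spirit of the figures, not their exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
L, N, lam, s = 25, 25, np.exp(-2.0), 0.1            # illustrative settings
mu = np.linspace(0, 1, 9)                           # 9 Gaussian basis centres

def design(x):                                      # bias column + Gaussian basis
    return np.hstack([np.ones((len(x), 1)),
                      np.exp(-(x[:, None] - mu[None, :])**2 / (2 * s**2))])

x_test = np.linspace(0, 1, 100)
h = np.sin(2 * np.pi * x_test)                      # "true" regression function h(x)

preds = []
for _ in range(L):                                  # one regularized fit per data set
    x = rng.uniform(0, 1, N)
    t = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, N)
    Phi = design(x)
    w = np.linalg.solve(lam * np.eye(Phi.shape[1]) + Phi.T @ Phi, Phi.T @ t)
    preds.append(design(x_test) @ w)
preds = np.array(preds)

y_bar = preds.mean(axis=0)                          # average prediction over data sets
bias2 = np.mean((y_bar - h) ** 2)                   # (bias)^2, averaged over x
variance = np.mean(preds.var(axis=0))               # variance, averaged over x
print(bias2, variance, bias2 + variance)            # larger lam: bias up, variance down
```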

Bayesian Linear Regression (1) Define a conjugate prior over w: p(w) = N(w | m_0, S_0). Combining this with the likelihood function, and using results for marginal and conditional Gaussian distributions, gives the posterior p(w | t) = N(w | m_N, S_N), where m_N = S_N (S_0⁻¹ m_0 + β Φ^T t) and S_N⁻¹ = S_0⁻¹ + β Φ^T Φ.

Bayesian Linear Regression (2) A common choice for the prior is p(w | α) = N(w | 0, α⁻¹ I), for which m_N = β S_N Φ^T t and S_N⁻¹ = α I + β Φ^T Φ. Next we consider an example …
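Before the example, a minimal sketch of these update equations for the isotropic prior, with α and β treated as known constants:

```python
import numpy as np

def posterior(Phi, t, alpha, beta):
    """Posterior p(w | t) = N(w | m_N, S_N) under the prior N(w | 0, alpha^(-1) I)."""
    M = Phi.shape[1]
    S_N = np.linalg.inv(alpha * np.eye(M) + beta * Phi.T @ Phi)
    m_N = beta * S_N @ Phi.T @ t
    return m_N, S_N
```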

Bayesian Linear Regression (3) 0 data points observed Prior Data Space

Bayesian Linear Regression (4) 1 data point observed Likelihood Posterior Data Space

Bayesian Linear Regression (5) 2 data points observed Likelihood Posterior Data Space

Bayesian Linear Regression (6) 20 data points observed Likelihood Posterior Data Space

Predictive Distribution (1) Predict t for new values of x by integrating over w: p(t | x, t, α, β) = ∫ p(t | x, w, β) p(w | t, α, β) dw = N(t | m_N^T φ(x), σ_N²(x)), where σ_N²(x) = 1/β + φ(x)^T S_N φ(x).

Predictive Distribution (1) How do we compute the predictive distribution?
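A sketch answering that question, reusing the posterior mean m_N and covariance S_N from the earlier sketch; phi_x stands for the basis-function vector φ(x) at the new input (an assumed helper, not defined on the slides).

```python
import numpy as np

def predictive(phi_x, m_N, S_N, beta):
    """Predictive distribution N(t | m_N^T phi(x), sigma_N^2(x)) at a new input,
    where phi_x is the vector of basis functions evaluated at that input."""
    mean = m_N @ phi_x
    var = 1.0 / beta + phi_x @ S_N @ phi_x          # noise + parameter uncertainty
    return mean, var
```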

Predictive Distribution (2) Example: Sinusoidal data, 9 Gaussian basis functions, 1 data point

Predictive Distribution (3) Example: Sinusoidal data, 9 Gaussian basis functions, 2 data points

Predictive Distribution (4) Example: Sinusoidal data, 9 Gaussian basis functions, 4 data points

Predictive Distribution (5) Example: Sinusoidal data, 9 Gaussian basis functions, 25 data points

Salesman’s Problem Given a consumer a described by a vector x, and a product b to sell with base cost c, the estimated price distribution of b in the mind of a is Pr(t | x, w) = N(t | x^T w, σ²). What price should we offer to a?

Salesman’s Problem
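One reading of this problem (an assumption on my part, since the decision rule is not spelled out in the transcript): the consumer buys at price p whenever their valuation t ~ N(x^T w, σ²) exceeds p, so the expected profit is (p − c) Pr(t ≥ p), which can be maximized numerically. A sketch under that assumption:

```python
import math

def expected_profit(p, mean, sigma, c):
    """(p - c) * Pr(t >= p), with the consumer's valuation t ~ N(mean, sigma^2)."""
    prob_buy = 0.5 * (1.0 - math.erf((p - mean) / (sigma * math.sqrt(2.0))))
    return (p - c) * prob_buy

def best_price(mean, sigma, c, n_grid=1000):
    """Illustrative grid search for the price that maximizes the expected profit."""
    hi = mean + 4.0 * sigma                         # prices above this are almost never accepted
    grid = [c + i * (hi - c) / n_grid for i in range(1, n_grid + 1)]
    return max(grid, key=lambda p: expected_profit(p, mean, sigma, c))
```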

Bayesian Model Comparison (1) How do we choose the ‘right’ model?

Bayesian Model Comparison (1) How do we choose the ‘right’ model? Assume we want to compare models M_i, i = 1, …, L, using data D; this requires computing the posterior p(M_i | D) ∝ p(M_i) p(D | M_i), i.e. posterior ∝ prior × model evidence (also called the marginal likelihood).

Bayesian Model Comparison (3) For a model with parameters w, we get the model evidence by marginalizing over w: p(D | M_i) = ∫ p(D | w, M_i) p(w | M_i) dw.
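For the Bayesian linear regression model of this chapter this integral can be done in closed form (the expression from PRML Section 3.5.1, which these slides do not show); a sketch of that evaluation, assuming fixed α and β:

```python
import numpy as np

def log_evidence(Phi, t, alpha, beta):
    """Log marginal likelihood ln p(t | alpha, beta) for Bayesian linear regression."""
    N, M = Phi.shape
    A = alpha * np.eye(M) + beta * Phi.T @ Phi      # = S_N^(-1)
    m_N = beta * np.linalg.solve(A, Phi.T @ t)
    E_mN = beta / 2 * np.sum((t - Phi @ m_N) ** 2) + alpha / 2 * m_N @ m_N
    return (M / 2 * np.log(alpha) + N / 2 * np.log(beta) - E_mN
            - 0.5 * np.linalg.slogdet(A)[1] - N / 2 * np.log(2 * np.pi))
```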

Bayesian Model Comparison (4) For a given model with a single parameter, w, consider the approximation p(D) = ∫ p(D | w) p(w) dw ≈ p(D | w_MAP) (Δw_posterior / Δw_prior), where the posterior is assumed to be sharply peaked, with width Δw_posterior, and the prior is assumed flat, with width Δw_prior.

Bayesian Model Comparison (5) Taking logarithms, we obtain ln p(D) ≈ ln p(D | w_MAP) + ln(Δw_posterior / Δw_prior). With M parameters, all assumed to have the same ratio Δw_posterior / Δw_prior, we get ln p(D) ≈ ln p(D | w_MAP) + M ln(Δw_posterior / Δw_prior). The penalty term is negative, and with M parameters it is negative and linear in M, so it penalizes model complexity.

Bayesian Model Comparison (5) Data matching Model Complexity

Bayesian Model Comparison (6) Matching data and model complexity

What You Should Know Least squares and its maximum-likelihood interpretation; sequential learning for regression; regularization and its effect; Bayesian regression; the predictive distribution; the bias-variance tradeoff; Bayesian model selection.