1 Biointelligence Laboratory, Seoul National University
Ch 3. Linear Models for Regression (1/2). Pattern Recognition and Machine Learning, C. M. Bishop, 2006. Summarized by Yung-Kyun Noh, Biointelligence Laboratory, Seoul National University.

2 Contents
3.1 Linear Basis Function Models: 3.1.1 Maximum likelihood and least squares, 3.1.2 Geometry of least squares, 3.1.3 Sequential learning, 3.1.4 Regularized least squares, 3.1.5 Multiple outputs. 3.2 The Bias-Variance Decomposition. 3.3 Bayesian Linear Regression: 3.3.1 Parameter distribution, 3.3.2 Predictive distribution, 3.3.3 Equivalent kernel.

3 Linear Basis Function Models
Linear regression: a linear model, meaning linearity in the parameters. Using basis functions \phi_j(x) allows the model to be a nonlinear function of the input vector x. This linearity simplifies the analysis of this class of models, but it also leads to some significant limitations. Notation: M is the total number of parameters, \phi_j are the basis functions, and \phi_0(x) = 1 is a dummy basis function so that w_0 acts as a bias.
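The model referred to here (its equation appears only as an image in the original slide) is, in Bishop's standard notation:

y(x, w) = \sum_{j=0}^{M-1} w_j \phi_j(x) = w^T \phi(x)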

4 Basis Functions
Polynomial functions: global functions of the input variable (an alternative with local support is spline functions). Gaussian basis functions. Sigmoidal basis functions, built from logistic sigmoid functions. Other choices include the Fourier basis and wavelets.
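Typical forms of these basis functions (shown only as images in the original slide) are, following PRML:

\phi_j(x) = x^j  (polynomial)
\phi_j(x) = \exp\{ -(x - \mu_j)^2 / (2 s^2) \}  (Gaussian)
\phi_j(x) = \sigma\big( (x - \mu_j) / s \big), \quad \sigma(a) = 1 / (1 + e^{-a})  (sigmoidal)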

5 Maximum Likelihood and Least Squares (1/2)
Assumption: a Gaussian noise model t = y(x, w) + \epsilon, where \epsilon is a zero-mean Gaussian random variable with precision (inverse variance) \beta. Result: the conditional mean is simply E[t | x] = y(x, w), and the conditional distribution is unimodal. For a dataset of inputs X = {x_1, ..., x_N} with targets t = (t_1, ..., t_N)^T, the likelihood factorizes over the data points (the explicit dependence on x is dropped from the notation).
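In equations (reconstructed from PRML, since they appear only as images in the transcript):

p(t \mid x, w, \beta) = \mathcal{N}(t \mid y(x, w), \beta^{-1})
p(\mathbf{t} \mid w, \beta) = \prod_{n=1}^{N} \mathcal{N}(t_n \mid w^T \phi(x_n), \beta^{-1})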

6 Maximum Likelihood and Least Squares (2/2)
Maximization of the likelihood function under a conditional Gaussian noise distribution for a linear model is equivalent to minimizing a sum-of-squares error function. Setting the gradient of the log likelihood with respect to w to zero and solving gives the maximum likelihood solution w_ML, where \Phi is the N x M design matrix with elements \Phi_{nj} = \phi_j(x_n).
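The resulting solution (the formula is an image in the original slide) is:

w_{ML} = (\Phi^T \Phi)^{-1} \Phi^T \mathbf{t}, \qquad \Phi_{nj} = \phi_j(x_n)

The matrix (\Phi^T \Phi)^{-1} \Phi^T is the Moore-Penrose pseudo-inverse \Phi^\dagger of the design matrix.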

7 Bias and Precision Parameter by ML
Other quantities are obtained by setting the corresponding derivatives of the log likelihood to zero. Bias: maximizing the log likelihood with respect to w_0 shows that the bias compensates for the difference between the average (over the training set) of the target values and the weighted sum of the averages of the basis function values. Noise precision: maximizing the log likelihood with respect to \beta gives the inverse precision as the residual variance of the targets around the fitted regression function.
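Explicitly (reconstructed in Bishop's notation):

w_0 = \bar{t} - \sum_{j=1}^{M-1} w_j \bar{\phi}_j, \quad \bar{t} = \frac{1}{N} \sum_n t_n, \quad \bar{\phi}_j = \frac{1}{N} \sum_n \phi_j(x_n)
\frac{1}{\beta_{ML}} = \frac{1}{N} \sum_{n=1}^{N} \{ t_n - w_{ML}^T \phi(x_n) \}^2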

8 Geometry of Least Squares
If the number M of basis functions is smaller than the number N of data points, then the M vectors \varphi_j (the jth column of the design matrix \Phi) span a linear subspace S of dimensionality M. The vector y, with elements y(x_n, w), is a linear combination of these columns. The least-squares solution for w corresponds to the choice of y that lies in the subspace S and is closest to t, i.e. the orthogonal projection of t onto S.
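A compact way to state this (not written out in the transcript) is:

y = \Phi w_{ML} = \Phi (\Phi^T \Phi)^{-1} \Phi^T \mathbf{t}

i.e. y is obtained by applying the orthogonal projection onto the column space of \Phi to the target vector t.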

9 Sequential Learning
On-line learning via the technique of stochastic gradient descent (also called sequential gradient descent): the parameters are updated one data point at a time. For the case of the sum-of-squares error function this gives the least-mean-squares (LMS) algorithm.
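The LMS update rule (shown only as an image in the original slide) is:

w^{(\tau+1)} = w^{(\tau)} + \eta \big( t_n - w^{(\tau)T} \phi(x_n) \big) \phi(x_n)

where \tau is the iteration index and \eta is a learning-rate parameter.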

10 Regularized Least Squares
Regularization is a way to control over-fitting. The total error function is the sum of the data-dependent sum-of-squares error and a regularization term weighted by a coefficient \lambda. With the simple quadratic (weight-decay) regularizer the total error remains a quadratic function of w, so the minimizer has a closed-form solution. A more general regularizer raises the absolute values |w_j| to a power q.
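The error function, its closed-form solution, and the general regularizer (reconstructed from PRML) are:

E(w) = \frac{1}{2} \sum_{n=1}^{N} \{ t_n - w^T \phi(x_n) \}^2 + \frac{\lambda}{2} w^T w
w = (\lambda I + \Phi^T \Phi)^{-1} \Phi^T \mathbf{t}
E(w) = \frac{1}{2} \sum_{n} \{ t_n - w^T \phi(x_n) \}^2 + \frac{\lambda}{2} \sum_{j} |w_j|^q

A minimal sketch of the closed-form solution in Python/NumPy (illustrative only; the function name and arguments are not from the slides):

import numpy as np

def fit_regularized_ls(Phi, t, lam):
    # Closed-form regularized (ridge) least squares:
    # w = (lambda * I + Phi^T Phi)^{-1} Phi^T t
    M = Phi.shape[1]
    return np.linalg.solve(lam * np.eye(M) + Phi.T @ Phi, Phi.T @ t)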

11 General Regularizer
The case q = 1 in the general regularizer is known as the 'lasso' in the statistical literature. For a sufficiently large \lambda some of the coefficients w_j are driven to zero, giving a sparse model in which the corresponding basis functions play no role. Regularization can equivalently be viewed as minimizing the unregularized sum-of-squares error subject to the constraint \sum_j |w_j|^q \le \eta. [Figures: contours of the regularization term for different q, and an illustration of why the lasso gives a sparse solution.]

12 Multiple Outputs
For K > 1 target variables there are two options: 1. Introduce a different, independent set of basis functions for each component of t. 2. Use the same set of basis functions to model all of the components of the target vector, so that y(x, w) = W^T \phi(x), where W is an M x K matrix of parameters. The maximum likelihood solution then decouples: for each target variable t_k we have w_k = \Phi^\dagger t_k, where \Phi^\dagger is the pseudo-inverse of \Phi.
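Collecting the target vectors into an N x K matrix T, the full solution (reconstructed from PRML) is:

W_{ML} = (\Phi^T \Phi)^{-1} \Phi^T T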

13 The Bias-Variance Decomposition (1/4)
Frequentist viewpoint of the model complexity issue: the bias-variance trade-off. The expected squared loss splits into two terms: one that depends on our estimate y(x), and hence on the particular data set D used to construct it, and one that arises from the intrinsic noise on the data and represents the minimum achievable loss. Bayesian view: the uncertainty in our model is expressed through a posterior distribution over w. Frequentist view: make a point estimate of w based on the data set D and assess it by averaging over an ensemble of data sets.
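The decomposition of the expected squared loss being referred to (shown as an image on the slide; see PRML, Sec. 3.2) is:

E[L] = \int \{ y(x) - h(x) \}^2 p(x)\, dx + \iint \{ h(x) - t \}^2 p(x, t)\, dx\, dt

where h(x) = E[t \mid x] is the optimal regression function; the second term is the intrinsic noise.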

14 The Bias-Variance Decomposition (2/4)
(Bias)^2: the extent to which the average prediction over all data sets differs from the desired regression function. Variance: the extent to which the solutions for individual data sets vary around their average, i.e. the extent to which the function y(x; D) is sensitive to the particular choice of data set. Overall: expected loss = (bias)^2 + variance + noise.
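Written out (reconstructed in Bishop's notation), the three terms are:

(bias)^2 = \int \{ E_D[y(x; D)] - h(x) \}^2 p(x)\, dx
variance = \int E_D\big[ \{ y(x; D) - E_D[y(x; D)] \}^2 \big] p(x)\, dx
noise = \iint \{ h(x) - t \}^2 p(x, t)\, dx\, dt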

15 The Bias-Variance Decomposition (3/4)
[Figure: illustration of the bias-variance trade-off.] Averaging many solutions for the complex model (M = 25) is a beneficial procedure. A weighted averaging of multiple solutions (although with respect to the posterior distribution of parameters, not with respect to multiple data sets) lies at the heart of the Bayesian approach.

16 The Bias-Variance Decomposition (4/4)
In practice, the average prediction is estimated by averaging the individual predictions obtained from an ensemble of data sets, and the integrated bias and variance are then computed from this average. The bias-variance decomposition is therefore based on averages with respect to ensembles of data sets (a frequentist perspective); if we actually had multiple data sets, we would be better off combining them into a single large training set.
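With L data sets, the estimators being referred to (reconstructed from PRML) are:

\bar{y}(x) = \frac{1}{L} \sum_{l=1}^{L} y^{(l)}(x)
(bias)^2 = \frac{1}{N} \sum_{n=1}^{N} \{ \bar{y}(x_n) - h(x_n) \}^2
variance = \frac{1}{N} \sum_{n=1}^{N} \frac{1}{L} \sum_{l=1}^{L} \{ y^{(l)}(x_n) - \bar{y}(x_n) \}^2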

17 Bayesian Linear Regression (1/2)
Because the likelihood is the exponential of a quadratic function of w, the conjugate prior over w is Gaussian, and the posterior is then also Gaussian.
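The prior and posterior (their equations appear only as images in the transcript) are, in Bishop's notation:

p(w) = \mathcal{N}(w \mid m_0, S_0)
p(w \mid \mathbf{t}) = \mathcal{N}(w \mid m_N, S_N)
m_N = S_N (S_0^{-1} m_0 + \beta \Phi^T \mathbf{t}), \qquad S_N^{-1} = S_0^{-1} + \beta \Phi^T \Phi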

18 Bayesian Linear Regression (2/2)
Consider the simpler zero-mean isotropic Gaussian prior governed by a single precision parameter \alpha. The corresponding posterior is again Gaussian, and the log of the posterior is (up to an additive constant) the sum-of-squares error plus a quadratic regularization term, so maximizing the posterior is equivalent to regularized least squares with \lambda = \alpha / \beta. Other forms of prior over the parameters (e.g. a generalized Gaussian corresponding to the q-regularizer) are also possible.
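Reconstructed in Bishop's notation:

p(w \mid \alpha) = \mathcal{N}(w \mid 0, \alpha^{-1} I)
m_N = \beta S_N \Phi^T \mathbf{t}, \qquad S_N^{-1} = \alpha I + \beta \Phi^T \Phi
\ln p(w \mid \mathbf{t}) = -\frac{\beta}{2} \sum_{n=1}^{N} \{ t_n - w^T \phi(x_n) \}^2 - \frac{\alpha}{2} w^T w + \text{const}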

19 Predictive Distribution (1/2)
Our real interest is usually in predicting t for new inputs x, i.e. the predictive distribution obtained by integrating over w. Its variance contains two contributions: the noise on the data and the uncertainty associated with the parameters w; the second contribution goes to 0 as N goes to infinity. [Figure: mean of the Gaussian predictive distribution (red line) and predictive uncertainty (shaded region) as the number of data points increases.]
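The predictive distribution (reconstructed from PRML) is:

p(t \mid x, \mathbf{t}, \alpha, \beta) = \mathcal{N}(t \mid m_N^T \phi(x), \sigma_N^2(x))
\sigma_N^2(x) = \frac{1}{\beta} + \phi(x)^T S_N \phi(x)

The first term of \sigma_N^2(x) is the data noise, the second the parameter uncertainty, which vanishes as N grows.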

20 Predictive Distribution (2/2)
Draw samples of w from the posterior distribution over w and plot the corresponding functions y(x, w); this illustrates the spread of plausible functions under the posterior.
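A minimal sketch of this procedure in Python/NumPy (assuming the Gaussian posterior from the previous slides; the helper names are illustrative, not from the slides):

import numpy as np

def posterior(Phi, t, alpha, beta):
    # Gaussian posterior N(m_N, S_N) for the isotropic prior N(0, alpha^{-1} I)
    M = Phi.shape[1]
    S_N = np.linalg.inv(alpha * np.eye(M) + beta * Phi.T @ Phi)
    m_N = beta * S_N @ Phi.T @ t
    return m_N, S_N

def sample_functions(Phi_new, m_N, S_N, n_samples=5, seed=0):
    # Draw weight vectors from the posterior and evaluate y(x, w) = Phi_new @ w
    rng = np.random.default_rng(seed)
    W = rng.multivariate_normal(m_N, S_N, size=n_samples)  # one w per row
    return Phi_new @ W.T  # each column is one sampled function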

21 Equivalent Kernel
The mean of the predictive distribution at a point x can be written as a linear combination of the training-set target values t_n, where the coefficient of each t_n is an inner product of nonlinear functions of x and x_n. This weighting function is known as the smoother matrix or equivalent kernel.
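Written out (reconstructed from PRML):

y(x, m_N) = m_N^T \phi(x) = \sum_{n=1}^{N} \beta\, \phi(x)^T S_N \phi(x_n)\, t_n = \sum_{n=1}^{N} k(x, x_n)\, t_n
k(x, x') = \beta\, \phi(x)^T S_N \phi(x')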

