Ordinary Least-Squares

Emmanuel Iarussi, Inria

Outline
- Linear regression
- Geometry of least-squares
- Discussion of the Gauss-Markov theorem

One-dimensional regression

Given data points (a_i, b_i), find a line that represents the "best" linear relationship between a and b.

Problem: the data points do not all lie on a single line. Instead, find the line b = x_1 a + x_2 that minimizes the sum of squared errors:

    E(x_1, x_2) = sum_i (x_1 a_i + x_2 - b_i)^2

We are looking for the pair (x_1, x_2) that minimizes E.
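As a concrete sketch of this one-dimensional fit (not part of the original slides), here is a minimal numpy example; the data points are made up, scattered around the hypothetical line b = 2a + 1:

```python
import numpy as np

# Made-up data, roughly on the line b = 2a + 1.
a = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
b = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Stack each measurement as a row (a_i, 1) and minimize ||Ax - b||^2.
A = np.column_stack([a, np.ones_like(a)])
x, *_ = np.linalg.lstsq(A, b, rcond=None)
slope, intercept = x
print(slope, intercept)  # close to 2 and 1
```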

Matrix notation

Using the notations

    A = [[a_1, 1], [a_2, 1], ..., [a_n, 1]],   x = (x_1, x_2)^T,   b = (b_1, ..., b_n)^T,

we can rewrite the error function using linear algebra as:

    E(x) = ||Ax - b||^2

Multidimensional linear regression

Using a model with m parameters,

    b ≈ x_1 a^(1) + x_2 a^(2) + ... + x_m a^(m),

and n measurements, we again minimize

    E(x) = ||Ax - b||^2,

where A is now an n x m matrix, x is in R^m, and b is in R^n.
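One common instance of the m-parameter model is polynomial fitting, where the columns of A are powers of a. A sketch with made-up, noise-free data (the coefficients 0.5, -1.0, 2.0 are invented for the example):

```python
import numpy as np

# Fit b ≈ x_1 + x_2 a + x_3 a^2: m = 3 parameters, n = 20 measurements.
a = np.linspace(-1.0, 1.0, 20)
b = 0.5 - 1.0 * a + 2.0 * a**2  # exact quadratic, no noise

# Each row of A is one measurement; each column one parameter.
A = np.vander(a, 3, increasing=True)  # columns: 1, a, a^2
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)  # recovers [0.5, -1.0, 2.0]
```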

[Figure: the n x m matrix A, with each column labeled by a parameter (parameter 1, ..., parameter m) and each row by a measurement (measurement 1, ..., measurement n).]

Minimizing E

A point x* minimizes E if:
- E is flat at x*: the gradient vanishes, grad E(x*) = 0;
- E does not go down around x*: the Hessian of E at x* is positive semi-definite.

Positive semi-definite

In 1-D: the second derivative is non-negative, f''(x) >= 0.
In 2-D (and higher): a symmetric matrix H is positive semi-definite if v^T H v >= 0 for every vector v.

Minimizing E

    E(x) = ||Ax - b||^2 = x^T A^T A x - 2 x^T A^T b + b^T b

Setting the gradient to zero,

    grad E(x) = 2 A^T A x - 2 A^T b = 0.

The Hessian 2 A^T A is always positive semi-definite (always true, since v^T A^T A v = ||Av||^2 >= 0), so any solution of

    A^T A x = A^T b        (the normal equation)

is a minimizer of E.
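The normal equation can be solved directly, and the result matches a library least-squares solver; the random data below is only for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))  # n = 50 measurements, m = 3 parameters
b = rng.standard_normal(50)

# Solve A^T A x = A^T b directly ...
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# ... and compare with the library least-squares solver.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_normal, x_lstsq))  # True
```

In practice np.linalg.lstsq is usually preferable: forming A^T A squares the condition number of the problem.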

Geometric interpretation

- b is a vector in R^n.
- The columns of A span a vector space, range(A).
- Ax is an arbitrary vector in range(A).
- The minimizer x* makes Ax* the orthogonal projection of b onto range(A): the residual b - Ax* is orthogonal to every column of A, which is exactly what the normal equation A^T (Ax* - b) = 0 expresses.
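The orthogonality can be verified numerically: the residual of the least-squares solution is, up to rounding, orthogonal to every column of A. Random data again, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 2))
b = rng.standard_normal(6)

# Least-squares solution; Ax* is the projection of b onto range(A).
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
residual = b - A @ x_star

# A^T r = 0: the residual is orthogonal to the columns of A.
print(A.T @ residual)  # numerically zero
```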

The normal equation: A^T A x = A^T b

Existence: the normal equation always has a solution.
Uniqueness: the solution is unique if and only if the columns of A are linearly independent (equivalently, A^T A is invertible).
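A quick way to check uniqueness in practice is to compare the rank of A with its number of columns; the small matrices here are invented examples:

```python
import numpy as np

# Independent columns: rank equals the number of columns, unique solution.
A_good = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(np.linalg.matrix_rank(A_good))  # 2

# Dependent columns (second = 2 * first): A^T A is singular,
# so the normal equation has infinitely many solutions.
A_bad = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])
print(np.linalg.matrix_rank(A_bad))    # 1
print(np.linalg.det(A_bad.T @ A_bad))  # 0 (up to rounding)
```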

Under-constrained problem

When the columns of A are linearly dependent, infinitely many x achieve the minimum. Typical causes:
- Poorly selected data
- One or more of the parameters are redundant

Remedy: add constraints, e.g. pick the minimum-norm minimizer or add a regularization term.
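Two standard ways to restore uniqueness, sketched on an invented under-constrained system (2 measurements, 3 parameters): take the minimum-norm minimizer, or add a ridge penalty lam * ||x||^2:

```python
import numpy as np

# Under-constrained: 2 measurements, 3 parameters.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 2.0])

# lstsq returns the minimum-norm solution among all minimizers.
x_min_norm, *_ = np.linalg.lstsq(A, b, rcond=None)

# Ridge regularization: minimize ||Ax - b||^2 + lam * ||x||^2,
# which makes A^T A + lam * I invertible and the solution unique.
lam = 1e-6
x_ridge = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

print(x_min_norm)  # [0, 1, 1]
print(x_ridge)     # approaches the minimum-norm solution as lam -> 0
```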

How good is the least-squares criterion?

Optimality: the Gauss-Markov theorem. Let {a_i} and {e_i} be two sets of random variables and define

    b_i = a_i^T x + e_i.

If

    E[e_i] = 0,   Var(e_i) = sigma^2 for all i,   Cov(e_i, e_j) = 0 for i != j,

then the least-squares estimate x* is the best linear unbiased estimator of x (it has the smallest variance among all linear unbiased estimators).
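A Monte Carlo sketch of the "unbiased" part (an illustration, not a proof): with zero-mean, equal-variance, uncorrelated noise, the average of the least-squares estimates over many repetitions approaches the true parameters. All numbers here are invented:

```python
import numpy as np

rng = np.random.default_rng(42)
x_true = np.array([2.0, -1.0])
A = rng.standard_normal((30, 2))  # fixed design, n = 30, m = 2

estimates = []
for _ in range(2000):
    e = rng.normal(0.0, 0.5, size=30)  # i.i.d. zero-mean noise
    b = A @ x_true + e
    x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
    estimates.append(x_hat)

mean_estimate = np.mean(estimates, axis=0)
print(mean_estimate)  # close to [2, -1]
```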

When the assumptions break (illustrated in the original figures):

- No errors in a_i vs. errors in a_i: ordinary least squares assumes the a_i are exact; measurement errors in a_i bias the fit.
- Homogeneous vs. non-homogeneous errors: if the error variance differs across measurements, the equal-variance assumption fails and weighted least squares is preferable.
- No outliers vs. outliers: squaring the errors gives outliers a large influence, so a few outliers can pull the fitted line far from the bulk of the data.
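The outlier effect is easy to reproduce: below, ten made-up points lie exactly on b = a, and a single corrupted measurement drags the fitted slope far from 1:

```python
import numpy as np

a = np.arange(10.0)
b_clean = a.copy()        # points exactly on b = a
b_outlier = a.copy()
b_outlier[-1] += 30.0     # one corrupted measurement

A = np.column_stack([a, np.ones_like(a)])
slope_clean = np.linalg.lstsq(A, b_clean, rcond=None)[0][0]
slope_out = np.linalg.lstsq(A, b_outlier, rcond=None)[0][0]
print(slope_clean, slope_out)  # 1.0 vs. roughly 2.6
```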