MathematicalMarketing Slide 5.1 OLS Chapter 5: Ordinary Least Squares Regression. We will be discussing:
- The Linear Regression Model
- Estimation of the Unknowns in the Regression Model
- Some Special Cases of the Model

MathematicalMarketing Slide 5.2 OLS The Basic Regression Model. The model expressed in scalars: $y_i = \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_k x_{ik} + e_i$. The model elaborated in matrix terms: $y$ is the $n \times 1$ vector of observations on the dependent variable, $X$ is the $n \times k$ matrix of observations on the independent variables, $\beta$ is the $k \times 1$ vector of unknown parameters, and $e$ is the $n \times 1$ vector of errors. A succinct matrix expression for it: $y = X\beta + e$.
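A minimal simulation sketch of this model (not from the slides): the dimensions, coefficient values, and variable names below are illustrative assumptions, and NumPy is assumed as the tool.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 3                       # n observations, k regressors (hypothetical)
X = np.column_stack([np.ones(n),    # intercept column
                     rng.normal(size=(n, k - 1))])
beta = np.array([1.0, 2.0, -0.5])   # assumed "true" parameter vector
e = rng.normal(scale=1.0, size=n)   # errors with mean 0 and constant variance
y = X @ beta + e                    # the matrix form of the model: y = X beta + e
```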

MathematicalMarketing Slide 5.3 OLS Prediction Based on a Linear Combination. The expectation of $y$ is given by $E(y) = X\beta$. Key Question: How do we get values for the $\beta$ vector? Together with the previous slide, we can say that $y = E(y) + e$.

MathematicalMarketing Slide 5.4 OLS Parameter Estimation of $\beta$. We will cover two philosophies of parameter estimation, the least squares principle and maximum likelihood. Each of these has the following steps:
- Pick an objective function $f$ to optimize
- Find the derivative of $f$ with respect to the unknown parameters
- Find values of the unknowns where that derivative is zero
The two differ in the first step. The least squares principle would have us pick $f = e'e$ as our objective function, which we will minimize.

MathematicalMarketing Slide 5.5 OLS We Wish to Minimize $e'e$. We want to minimize $f = e'e = (y - X\beta)'(y - X\beta)$ over all possible values of the elements of the vector $\beta$. The function $f$ depends on $\beta$.

MathematicalMarketing Slide 5.6 OLS Minimizing $e'e$ Cont'd. Expanding, $f = (y - X\beta)'(y - X\beta) = y'y - y'X\beta - \beta'X'y + \beta'X'X\beta$. The two middle terms are the same scalar, so $f = y'y - 2y'X\beta + \beta'X'X\beta$.

MathematicalMarketing Slide 5.7 OLS What Is the Derivative? Our objective function is the sum $f = y'y - 2y'X\beta + \beta'X'X\beta$. The derivative of a sum is equal to the sum of the derivatives, so we can handle it in pieces. We need to determine the derivative $\partial f / \partial \beta$ and set it equal to a column of $k$ zeroes.

MathematicalMarketing Slide 5.8 OLS A Quickie Review of Some Derivative Rules. The derivative of a quadratic form: $\partial(\beta'A\beta)/\partial\beta = 2A\beta$ for symmetric $A$. The derivative of a linear combination: $\partial(a'\beta)/\partial\beta = a$. The derivative of a transpose: $\partial(\beta'a)/\partial\beta = a$. The derivative of a constant: $\partial c/\partial\beta = 0$.

MathematicalMarketing Slide 5.9 OLS The Derivative of the Sum Is the Sum of the Derivatives. For $f = y'y - 2y'X\beta + \beta'X'X\beta$: the $y'y$ term contributes $0$ (the derivative of a constant); the $-2y'X\beta$ term contributes $-2X'y$ (the derivative of a linear combination and the derivative of a transpose); and the $\beta'X'X\beta$ term contributes $2X'X\beta$ (the derivative of a quadratic form).

MathematicalMarketing Slide 5.10 OLS Beta Gets a Hat. Add these all together and set equal to zero: $-2X'y + 2X'X\beta = 0$. And with some algebra, $X'X\hat\beta = X'y$ (this one has a name: the normal equations) and $\hat\beta = (X'X)^{-1}X'y$ (this one has a hat).
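A hedged sketch of computing $\hat\beta$ from the normal equations, continuing the simulated data above; solving the system directly is preferred over forming an explicit inverse.

```python
import numpy as np

def ols(X, y):
    """Solve the normal equations X'X beta = X'y for beta-hat."""
    return np.linalg.solve(X.T @ X, X.T @ y)

# With the earlier simulated (X, y):
# beta_hat = ols(X, y)   # should be close to the assumed beta = [1.0, 2.0, -0.5]
```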

MathematicalMarketing Slide 5.11 OLS Is Our Formula Any Good? Substituting the model into the estimator, $\hat\beta = (X'X)^{-1}X'y = (X'X)^{-1}X'(X\beta + e) = \beta + (X'X)^{-1}X'e$, so $E(\hat\beta) = \beta$ when $E(e) = 0$.

MathematicalMarketing Slide 5.12 OLS What Do We Really Mean by "Good"?
- Unbiasedness: $E(\hat\beta) = \beta$
- Consistency: $\hat\beta \to \beta$ as $n \to \infty$
- Sufficiency: the distribution of the sample given the estimator does not depend on $\beta$
- Efficiency: the variance of the estimator is smaller than that of other estimators

MathematicalMarketing Slide 5.13 OLS Two Key Assumptions. The behavior of the estimator is driven by the error input: we assume $E(e) = 0$, and according to the Gauss-Markov assumption, $V(e) = \sigma^2 I$, which looks like
$$V(e) = \begin{bmatrix} \sigma^2 & 0 & \cdots & 0 \\ 0 & \sigma^2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma^2 \end{bmatrix}.$$

MathematicalMarketing Slide 5.14 OLS The Likelihood Principle. Consider a sample of 3: 10, 11 and 12. What is $\mu$?

MathematicalMarketing Slide 5.15 OLS Maximum Likelihood. According to ML, we should pick values for the parameters that maximize the probability of the sample. To do this we need to follow these steps:
- Derive the probability of an observation
- Assuming independent observations, calculate the likelihood of the sample using multiplication
- Take the log of the sample likelihood
- Derive the derivative of the log likelihood with respect to the parameters
- Figure out what the parameters must be so that the derivative is equal to a vector of zeroes
With linear models we can do this analytically using algebra. With non-linear models we sometimes have to use brute-force hill-"climbing" routines, as in the sketch below.
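An illustrative sketch of the brute-force route: maximizing the normal log likelihood numerically rather than analytically. SciPy is an assumed tool here, and the log-scale parameterization of $\sigma$ is a convenience choice, not from the slides.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, X, y):
    """Negative of L = -(n/2) ln(2 pi sigma^2) - e'e / (2 sigma^2)."""
    beta, log_sigma = params[:-1], params[-1]
    sigma2 = np.exp(2.0 * log_sigma)     # keeps sigma^2 positive during search
    resid = y - X @ beta
    n = len(y)
    return 0.5 * n * np.log(2.0 * np.pi * sigma2) + resid @ resid / (2.0 * sigma2)

# With the earlier simulated (X, y):
# fit = minimize(neg_log_likelihood, np.zeros(X.shape[1] + 1), args=(X, y))
# fit.x[:-1] should match the least squares beta-hat
```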

MathematicalMarketing Slide 5.16 OLS The Probability of Observation $y_i$. Assuming normally distributed errors, $\Pr(y_i) = (2\pi\sigma^2)^{-1/2} \exp\!\left[-\frac{(y_i - x_i'\beta)^2}{2\sigma^2}\right]$.

MathematicalMarketing Slide 5.17 OLS Multiply Out the Probability of the Whole Sample. With independent observations, $l = \prod_{i=1}^n \Pr(y_i) = (2\pi\sigma^2)^{-n/2} \exp\!\left[-\frac{(y - X\beta)'(y - X\beta)}{2\sigma^2}\right]$.

MathematicalMarketing Slide 5.18 OLS Take the Log of the Sample Likelihood. Using $\ln \exp(a) = a$, $\ln 1 = 0$ and $\ln a^b = b \ln a$, we get $L = \ln l = -\frac{n}{2}\ln(2\pi\sigma^2) - \frac{(y - X\beta)'(y - X\beta)}{2\sigma^2}$.

MathematicalMarketing Slide 5.19 OLS Figure Out the Derivative and Set Equal to Zero. $\partial L/\partial\beta = \frac{1}{\sigma^2}(X'y - X'X\beta) = 0$. From here we are just a couple of easy algebraic steps away from the normal equations, $X'X\hat\beta = X'y$, and the least squares formula, $\hat\beta = (X'X)^{-1}X'y$. If an ML estimator exists for a model, that estimator is guaranteed to be consistent, asymptotically normally distributed and asymptotically efficient.
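A quick numerical check of this first-order condition (a sketch assuming the earlier simulated data): the least squares residuals are orthogonal to the columns of $X$, which is exactly the zero-gradient condition.

```python
import numpy as np

# With the earlier simulated (X, y):
# beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
# np.allclose(X.T @ (y - X @ beta_hat), 0.0)   # X'(y - X beta-hat) = 0 holds
```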

MathematicalMarketing Slide 5.20 OLS Sums of Squares. $SS_{Error} = y'y - y'X(X'X)^{-1}X'y$, that is, $SS_{Error} = SS_{Total} - SS_{Predictable}$, where $SS_{Total} = y'y$ and $SS_{Predictable} = y'X(X'X)^{-1}X'y = \hat{y}'\hat{y}$.
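A sketch of the decomposition on the simulated data; the function name is illustrative, and these are the raw (uncorrected) sums of squares from the slide.

```python
import numpy as np

def sums_of_squares(X, y):
    """Return (SS_Total, SS_Predictable, SS_Error) for the raw decomposition."""
    P = X @ np.linalg.solve(X.T @ X, X.T)   # X(X'X)^{-1}X'
    ss_total = y @ y                        # y'y
    ss_pred = y @ (P @ y)                   # y'X(X'X)^{-1}X'y
    return ss_total, ss_pred, ss_total - ss_pred
```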

MathematicalMarketing Slide 5.21 OLS Using the Covariance Matrix Instead of the Raw SSCP Matrix. The covariance matrix of the y and x variables looks like
$$S = \begin{bmatrix} s_{yy} & \mathbf{s}_{yx}' \\ \mathbf{s}_{yx} & S_{xx} \end{bmatrix},$$
and the slope estimates can be written $\hat\beta = S_{xx}^{-1}\mathbf{s}_{yx}$, with the intercept recovered from the variable means.

MathematicalMarketing Slide 5.22 OLS Using Z Scores. We can calculate a standardized version of $\hat\beta$ using Z scores, or use the correlation matrix of all the variables: $\hat\beta^* = R_{xx}^{-1}\mathbf{r}_{yx}$.
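A sketch of the correlation-matrix route; the function and argument names are illustrative, and the regressor matrix is assumed to exclude the intercept column (Z scores make it unnecessary).

```python
import numpy as np

def standardized_betas(Xr, y):
    """Beta weights via R_xx^{-1} r_yx; Xr excludes the intercept column."""
    R = np.corrcoef(np.column_stack([Xr, y]), rowvar=False)
    Rxx, ryx = R[:-1, :-1], R[:-1, -1]      # partition out x-x and x-y blocks
    return np.linalg.solve(Rxx, ryx)        # equals OLS of Z-scored y on Z-scored x
```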

MathematicalMarketing Slide 5.23 OLS The Concept of Partialing. Imagine that we divided up the x variables into two sets, $X = [X_1 \;\; X_2]$, so that the betas were also divided the same way, $\beta' = [\beta_1' \;\; \beta_2']$. The model becomes $y = X_1\beta_1 + X_2\beta_2 + e$.

MathematicalMarketing Slide 5.24 OLS The Normal Equations. The normal equations would then be $X_1'X_1\hat\beta_1 + X_1'X_2\hat\beta_2 = X_1'y$ and $X_2'X_1\hat\beta_1 + X_2'X_2\hat\beta_2 = X_2'y$. Subtracting $X_1'X_2\hat\beta_2$ from both sides of the first equation gives us $X_1'X_1\hat\beta_1 = X_1'y - X_1'X_2\hat\beta_2$, or $X_1'X_1\hat\beta_1 = X_1'(y - X_2\hat\beta_2)$.

MathematicalMarketing Slide 5.25 OLS The Estimator for the First Set. Solving for the estimator for the first set yields $\hat\beta_1 = (X_1'X_1)^{-1}X_1'(y - X_2\hat\beta_2)$. Factoring, this is the usual formula $(X_1'X_1)^{-1}X_1'$ applied not to $y$ itself but to $y - X_2\hat\beta_2$. What is this? It is $y$ with the contribution of $X_2$ partialed out.
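A sketch demonstrating this algebra numerically (function name illustrative): using $\hat\beta_2$ from the joint fit, the partialed formula recovers exactly the joint-fit $\hat\beta_1$.

```python
import numpy as np

def beta1_by_partialing(X1, X2, y):
    """Recover beta-hat_1 as (X1'X1)^{-1} X1'(y - X2 beta-hat_2)."""
    X = np.hstack([X1, X2])
    b = np.linalg.solve(X.T @ X, X.T @ y)       # joint estimates for both sets
    b2 = b[X1.shape[1]:]                        # the beta-hat_2 block
    y_partialed = y - X2 @ b2                   # remove X2's contribution from y
    return np.linalg.solve(X1.T @ X1, X1.T @ y_partialed)
    # ...which equals b[:X1.shape[1]] from the joint fit
```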

MathematicalMarketing Slide 5.26 OLS The P and M Matrices. Define $P = X(X'X)^{-1}X'$ and define $M = I - P = I - X(X'X)^{-1}X'$. $P$ projects $y$ onto the columns of $X$, so $Py = \hat{y}$, while $My = y - \hat{y}$ gives the residuals; both matrices are symmetric and idempotent.
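A small verification sketch of these properties on any full-rank $X$ (names illustrative):

```python
import numpy as np

def p_and_m(X):
    """Return the projection matrix P and the residual-maker M = I - P."""
    P = X @ np.linalg.solve(X.T @ X, X.T)
    M = np.eye(X.shape[0]) - P
    assert np.allclose(P @ P, P) and np.allclose(M @ M, M)   # idempotent
    assert np.allclose(P @ X, X)    # P reproduces anything in the span of X
    assert np.allclose(M @ X, 0.0)  # M annihilates X
    return P, M
```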

MathematicalMarketing Slide 5.27 OLS An Intercept Only Model. Suppose the only predictor is the intercept, so that $X$ reduces to $\mathbf{1}$, a single column of ones.

MathematicalMarketing Slide 5.28 OLS The Intercept Only Model 2. The model becomes $y = \mathbf{1}\beta_0 + e$, and the estimator becomes $\hat\beta_0 = (\mathbf{1}'\mathbf{1})^{-1}\mathbf{1}'y = \frac{1}{n}\sum_i y_i = \bar{y}$: the least squares estimate of the intercept is just the sample mean.
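A tiny sketch of this collapse to the mean, reusing the sample of 10, 11 and 12 from slide 5.14:

```python
import numpy as np

y = np.array([10.0, 11.0, 12.0])   # the sample from slide 5.14
ones = np.ones((3, 1))             # X is a single column of ones
b0 = np.linalg.solve(ones.T @ ones, ones.T @ y)   # (1'1)^{-1} 1'y
print(b0, y.mean())                # both give 11.0
```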

MathematicalMarketing Slide 5.29 OLS The P Matrix in This Case. With $X = \mathbf{1}$, $P = \mathbf{1}(\mathbf{1}'\mathbf{1})^{-1}\mathbf{1}' = \frac{1}{n}\mathbf{1}\mathbf{1}'$, an $n \times n$ matrix with every element equal to $1/n$, so that $Py = \mathbf{1}\bar{y}$.

MathematicalMarketing Slide 5.30 OLS The M Matrix. Here $M = I - \frac{1}{n}\mathbf{1}\mathbf{1}'$, the centering matrix: $My = y - \mathbf{1}\bar{y}$ converts the observations to deviations from their mean.
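The same toy sample shows the centering at work (a sketch, assuming the sample above):

```python
import numpy as np

n = 3
M = np.eye(n) - np.ones((n, n)) / n   # I - (1/n)11'
y = np.array([10.0, 11.0, 12.0])
print(M @ y)                          # [-1.  0.  1.]: deviations from the mean of 11
```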

MathematicalMarketing Slide 5.31 OLS Response Surface Models: Linear vs Quadratic. A linear response surface, $E(y) = \beta_0 + \beta_1 x$, changes at a constant rate, while a quadratic surface, $E(y) = \beta_0 + \beta_1 x + \beta_2 x^2$, can bend and therefore exhibit an interior maximum or minimum.

MathematicalMarketing Slide 5.32 OLS Response Surface Models: The Sign of the Betas. In the quadratic model the sign of $\beta_2$ determines the shape of the surface: $\beta_2 < 0$ gives a concave surface with an interior maximum, while $\beta_2 > 0$ gives a convex surface with an interior minimum.