Chapter 4-5: Analytical Solutions to OLS


Chapter 4-5: Analytical Solutions to OLS (EC339: Lecture 7)

The Linear Regression Model
Postulate: the dependent variable, Y, is a function of the explanatory variable, X, or Yi = f(Xi). The relationship is not deterministic, however: the value of Y is not completely determined by the value of X. We therefore incorporate an error term, ui, into the model, which turns it into a statistical relationship: Yi = f(Xi) + ui.

The Simple Linear Regression Model (SLR)
Remember, we are trying to predict Y for a given X. We assume a relationship that is linear in the parameters (the betas), ceteris paribus (all else held equal). To account for our error in prediction, we add an error term, typically written u or epsilon, representing anything else that might cause a deviation between actual and predicted values. We are interested in determining the intercept (β0) and slope (β1).

SLR Uses Multivariate Expectations
Univariate distributions: means, variances, standard deviations.
Multivariate distributions: correlation and covariance; marginal, joint, and conditional probabilities; joint, marginal, and conditional probability density functions; conditional expectation.

Joint Distributions
We now consider how X and Y are distributed when taken together, via the joint probability density function. Independence: when the outcomes of X and Y have no influence on one another, the joint density equals the product of the marginal densities. Think of the binomial distribution, where each trial is independent and has no effect on subsequent trials. Also, think of a marginal distribution much like a histogram of a single variable.

Conditional Distributions
We now consider how Y is distributed given a certain value of X. The conditional probability of Y occurring given X equals the joint probability of X and Y divided by the marginal probability of X occurring in the first place. Independence: if X and Y are independent, the conditional distribution is just the marginal distribution, as if knowing X carried no new information. By contrast, a joint probability is like the probability of observing a "high school graduate" with an hourly wage between "$8 and $10" in education and wage data.
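In symbols, writing f for densities, the relationship on this slide is:

```latex
f_{Y|X}(y \mid x) = \frac{f_{X,Y}(x,y)}{f_X(x)} \quad (f_X(x) > 0);
\qquad \text{independence: } f_{X,Y}(x,y) = f_X(x)\, f_Y(y)
\;\Rightarrow\; f_{Y|X}(y \mid x) = f_Y(y).
```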

Discrete Bivariate Distributions: Joint Probability Function
For example, assume we flip a coin 3 times, recording the number of heads (H). Let X = number of heads on the last (3rd) flip and Y = total number of heads in three flips, so S = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}; X takes on the values {0,1} and Y takes on the values {0,1,2,3}. There are 8 possible joint outcomes: (X=0,Y=0), (X=0,Y=1), (X=0,Y=2), (X=0,Y=3), (X=1,Y=0), (X=1,Y=1), (X=1,Y=2), (X=1,Y=3). Attaching a probability to each joint outcome gives a discrete bivariate probability distribution, or joint probability function: f(x,y) gives the probability that the random variables X and Y assume the joint outcome (x,y). Note that (X=0,Y=3) and (X=1,Y=0) are impossible here and so get probability zero; the sketch below tabulates the rest.
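A minimal Python sketch (not part of the original slides) that enumerates the sample space and tabulates this joint probability function:

```python
from itertools import product
from collections import Counter

# Enumerate the 8 equally likely outcomes of three fair coin flips.
outcomes = list(product("HT", repeat=3))  # ('H','H','H'), ('H','H','T'), ...

# X = 1 if the third flip is heads; Y = total number of heads.
joint = Counter()
for flips in outcomes:
    x = 1 if flips[2] == "H" else 0
    y = flips.count("H")
    joint[(x, y)] += 1 / len(outcomes)

# Print the joint probability function f(x, y); impossible cells never appear.
for (x, y), p in sorted(joint.items()):
    print(f"f(X={x}, Y={y}) = {p:.3f}")
```

Running it gives f(0,0) = f(0,2) = f(1,1) = f(1,3) = 1/8 and f(0,1) = f(1,2) = 1/4, which sum to one.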

Properties of Covariance
If X and Y are discrete: cov(X,Y) = Σx Σy (x − μX)(y − μY) f(x,y). If X and Y are continuous: cov(X,Y) = ∫∫ (x − μX)(y − μY) f(x,y) dx dy. Equivalently, cov(X,Y) = E[XY] − E[X]E[Y]. If X and Y are independent, then cov(X,Y) = 0 (though the converse does not hold in general).
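Continuing the coin-flip example, a short sketch computing cov(X,Y) directly from the joint pmf; here cov(X,Y) = 0.25 ≠ 0, consistent with X and Y not being independent:

```python
# Covariance from a discrete joint pmf: cov(X,Y) = E[XY] - E[X]E[Y].
# Joint pmf for the coin-flip example above (zero-probability cells omitted).
joint = {(0, 0): 1/8, (0, 1): 2/8, (0, 2): 1/8,
         (1, 1): 1/8, (1, 2): 2/8, (1, 3): 1/8}

ex  = sum(x * p for (x, y), p in joint.items())      # E[X]  = 0.5
ey  = sum(y * p for (x, y), p in joint.items())      # E[Y]  = 1.5
exy = sum(x * y * p for (x, y), p in joint.items())  # E[XY] = 1.0
print("cov(X,Y) =", exy - ex * ey)                   # 0.25
```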

Properties of Conditional Expectations
E[c(X)|X] = c(X) for any function c(X); linearity: E[a(X)Y + b(X)|X] = a(X)E[Y|X] + b(X); if X and Y are independent, then E[Y|X] = E[Y]; and by the law of iterated expectations, E[E[Y|X]] = E[Y].

The Linear Regression Model
Ceteris paribus: all else held equal. Conditional expectations can be linear or nonlinear; we will only examine linear functions here.

The Linear Regression Model
For any given level of X, many possible values of Y can exist. If Y is a linear function of X, then Yi = β0 + β1Xi + ui, where u represents the deviation between the actual value of Y and the predicted value of Y (that is, β0 + β1Xi). We are interested in determining the intercept (β0) and slope (β1).

The Simple Linear Regression Model (SLR)
Thus, what we are looking for is the conditional expectation of Y given values of X; this is what we have called Y-hat thus far. We are trying to predict values of Y given values of X, and to do this we must hold all other factors fixed (ceteris paribus).

The Simple Linear Regression Model (SLR)
Linear population regression function. We can assume that the expected value of our error term is zero: if it were not, we could make it zero by adjusting the intercept to absorb the nonzero mean. This by itself says nothing about how X and the errors are related. If u and X are unrelated linearly, their correlation equals zero; but correlation is not sufficient, since they could be related nonlinearly. The conditional expectation gives a sufficient condition, since it restricts the mean of u at every value of X. This is the zero conditional mean error assumption, E[u|X] = 0, whose implications are written out below.
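In symbols, the zero conditional mean assumption delivers both restrictions at once:

```latex
E[u \mid X] = 0 \;\Rightarrow\;
E[u] = E\big[ E[u \mid X] \big] = 0
\quad \text{and} \quad
E[Xu] = E\big[ X \, E[u \mid X] \big] = 0,
\;\text{hence}\; \operatorname{Cov}(X, u) = E[Xu] - E[X]\,E[u] = 0 .
```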

The Linear Regression Model: Assumptions
Several assumptions must be made about the random error term. The mean error is zero, E(ui) = 0: errors above and below the regression line tend to balance out. Errors can arise from human behavior (which may be unpredictable), from the large number of explanatory variables omitted from the model, and from imperfect measurement of the dependent variable.

The Simple Linear Regression Model (SLR)
Beginning with the simple linear regression, taking conditional expectations, and using our current assumptions gives us the population regression function. (Notice there are no hats over the betas, and that y equals the predicted value plus an error.)
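The one-line derivation behind this slide:

```latex
E[Y \mid X] = E[\beta_0 + \beta_1 X + u \mid X]
            = \beta_0 + \beta_1 X + E[u \mid X]
            = \beta_0 + \beta_1 X .
```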

The Linear Regression Model
The regression model asserts that the expected value of Y is a linear function of X: E(Yi) = β0 + β1Xi. This is known as the population regression function. From a practical standpoint, not all of a population's observations are available, so we typically estimate the slope and intercept using sample data.

The Simple Linear Regression Model (SLR)
Knowing that E[u|x] = 0, we can also write two population restrictions: E(u) = 0 and E(xu) = 0. We now have two equations in two unknowns (the betas are the unknowns); written out below, this is how the method of moments estimator is constructed.
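The two population moment conditions, written out:

```latex
E[u] = E[\,y - \beta_0 - \beta_1 x\,] = 0
\qquad \text{and} \qquad
E[xu] = E[\,x\,(y - \beta_0 - \beta_1 x)\,] = 0 .
```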

The Linear Regression Model: Assumptions
Additional assumptions are necessary to develop confidence intervals and perform hypothesis tests. var(ui) = σu² for all i: errors are drawn from a distribution with a constant variance (heteroskedasticity exists if this assumption fails). ui and uj are independent, i.e., Cov(ui,uj) = 0 for all i ≠ j: one observation's error does not influence another observation's error, so errors are uncorrelated (serial correlation of errors exists if this assumption fails).

The Linear Regression Model: Assumptions
Cov(Xi,ui) = 0 for all i: the error term is uncorrelated with the explanatory variable, X. ui ~ N(0, σu²): the error term follows a normal distribution.

Ordinary Least Squares: Fit (figure slides; graphics not preserved in the transcript)

Estimation (Three Ways; We will not discuss Maximum Likelihood)
We need a formal method to determine the line that "fits" the data well: the distance of the line from the observations should be minimized. Let Ŷi = b0 + b1Xi, where b0 and b1 are the estimated intercept and slope. The deviation of an observation from the line is the estimated error, or residual: ûi = Yi − Ŷi.

Ordinary Least Squares
The most popular method, ordinary least squares, is designed to minimize the magnitude of the estimated residuals: it selects the estimated slope and intercept that minimize the sum of squared errors.

Ordinary Least Squares: Minimize the Sum of Squared Errors
Identifying the parameters (estimated slope and estimated y-intercept) that minimize the sum of squared errors is a standard optimization problem in multivariable calculus. Take the first derivatives with respect to the estimated slope and intercept, set both equations equal to zero, and solve the two equations, as written out below.
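The optimization problem and its first-order conditions:

```latex
\min_{b_0,\, b_1} \; S(b_0, b_1) = \sum_{i=1}^{n} (Y_i - b_0 - b_1 X_i)^2 , \\
\frac{\partial S}{\partial b_0} = -2 \sum_{i=1}^{n} (Y_i - b_0 - b_1 X_i) = 0,
\qquad
\frac{\partial S}{\partial b_1} = -2 \sum_{i=1}^{n} X_i (Y_i - b_0 - b_1 X_i) = 0 .
```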

Ordinary Least Squares: Derivation (formula slides; algebra not preserved in the transcript)

Ordinary Least Squares
This results in the normal equations: Σ(Yi − b0 − b1Xi) = 0 and ΣXi(Yi − b0 − b1Xi) = 0. The first suggests an estimator for the intercept, b0 = Ȳ − b1X̄, so the means of X and Y are always on the regression line.

Ordinary Least Squares
Solving the second normal equation yields an estimator for the slope of the line: b1 = Σ(Xi − X̄)(Yi − Ȳ) / Σ(Xi − X̄)². No other choice of intercept and slope results in a smaller sum of squared errors.
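A Python sketch (with illustrative data, not from the slides) computing these formulas by hand and cross-checking against NumPy's least-squares fit:

```python
import numpy as np

# Toy data; any paired sample works as long as x varies (SLR.3).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# OLS slope and intercept from the normal equations.
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
print(f"intercept = {b0:.4f}, slope = {b1:.4f}")

# Cross-check against NumPy's built-in least-squares fit.
check = np.polyfit(x, y, deg=1)  # returns [slope, intercept]
print("polyfit  =", check)
```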

SLR Assumption 1: Linear in Parameters
SLR.1 defines the population model: the dependent variable y is related to the independent variable x and the error (or disturbance) u as y = β0 + β1x + u, where β0 and β1 are population parameters.

SLR Assumption 2: Random Sampling
Use a random sample of size n, {(xi, yi): i = 1, 2, …, n}, from the population model. This allows us to rewrite SLR.1 for each observation as yi = β0 + β1xi + ui: we want to use data to estimate the population parameters β0 and β1.

SLR Assumption 3: Sample Variation in the Independent Variable
The x values must vary: the sample variance of x cannot equal zero.

SLR Assumption 4: Zero Conditional Mean
E(u|x) = 0. For a random sample, the implication is that no independent variable is correlated with any unobservable (remember, the error includes unobservable factors).

SLR Theorem 1
Unbiasedness of OLS: the estimators should equal the population values in expectation. This holds because x and u are assumed to be uncorrelated, so in expectation our estimator equals the actual value of beta.

SLR Theorem 1 (continued)
Unbiasedness of OLS: the estimators should equal the population values in expectation. Since the expected value of the errors is zero, the estimator equals the actual value of beta in expectation, as the simulation below illustrates.
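A Monte Carlo sketch of this theorem (simulated data under the SLR assumptions; the parameter values are illustrative):

```python
import numpy as np

# With E[u|x] = 0, the OLS slope averages out to the true beta1.
rng = np.random.default_rng(0)
beta0, beta1, n, reps = 1.0, 2.0, 100, 5000

slopes = np.empty(reps)
for r in range(reps):
    x = rng.uniform(0, 10, n)
    u = rng.normal(0, 3, n)          # error drawn independently of x
    y = beta0 + beta1 * x + u
    slopes[r] = (np.sum((x - x.mean()) * (y - y.mean()))
                 / np.sum((x - x.mean()) ** 2))

print("mean of estimated slopes:", slopes.mean())  # close to the true 2.0
```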

SLR Assumption 5: Homoskedasticity
Var(u|x) = σ²: the variance of the errors is independent of the values of x.

Method of Moments
Seeks to equate the moments implied by a statistical model of the population distribution to the actual moments found in the sample. Certain restrictions are implied in the population: E(u) = 0 and Cov(Xi, uj) = 0 for all i, j. This results in the same estimators as the least squares method; the sketch below checks the sample analogues.
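A short sketch (simulated data) verifying that the sample analogues of the moment restrictions hold exactly at the OLS estimates:

```python
import numpy as np

# Sample analogues of E(u) = 0 and E(xu) = 0 hold exactly for OLS residuals.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 50)
y = 1.0 + 2.0 * x + rng.normal(0, 3, 50)

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
resid = y - (b0 + b1 * x)

print("mean(resid)   =", resid.mean())        # ~0 up to floating-point error
print("mean(x*resid) =", (x * resid).mean())  # ~0 up to floating-point error
```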

Interpretation of the Regression Slope Coefficient
The coefficient β1 tells us the effect X has on Y: increasing X by one unit changes the mean value of Y by β1 units.

Units of Measurement and Regression Coefficients
The magnitude of the regression coefficients depends on the units in which the dependent and explanatory variables are measured; for example, measuring the dependent variable in dollars rather than cents shrinks every coefficient by a factor of 100. Scaling both Y and X by the same factor does not affect the slope, although it does rescale the y-intercept, as the sketch below illustrates.
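A quick sketch with made-up numbers showing both rescaling effects:

```python
import numpy as np

def ols(x, y):
    """Return (intercept, slope) from the OLS formulas."""
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    return y.mean() - b1 * x.mean(), b1

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])     # e.g., years of education
y = np.array([8.0, 9.5, 11.2, 12.8, 14.1])  # e.g., hourly wage in dollars

print(ols(x, y))        # dollars:  (b0, b1)
print(ols(x, 100 * y))  # cents:    both coefficients multiplied by 100
print(ols(x / 10, y))   # x rescaled by 1/10: slope multiplied by 10
```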

Models Including Logarithms
For a log-linear model, the slope represents the proportionate (percentage-like) change in Y arising from a unit change in X; the coefficient is the semi-elasticity of Y with respect to X. For a log-log model, the slope represents the proportionate change in Y arising from a proportionate change in X; the coefficient is the elasticity of Y with respect to X (this is the constant elasticity model). For a linear-log model, the slope is the unit change in Y arising from a proportionate change in X. The three forms are summarized below.
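Summary of the three functional forms (standard interpretations; the approximations hold for small changes):

```latex
\begin{array}{lll}
\textbf{Model} & \textbf{Equation} & \textbf{Interpretation of } \beta_1 \\
\text{Log-linear} & \log y = \beta_0 + \beta_1 x + u
  & \%\Delta y \approx (100\,\beta_1)\,\Delta x \quad \text{(semi-elasticity)} \\
\text{Log-log} & \log y = \beta_0 + \beta_1 \log x + u
  & \%\Delta y \approx \beta_1\,\%\Delta x \quad \text{(elasticity)} \\
\text{Linear-log} & y = \beta_0 + \beta_1 \log x + u
  & \Delta y \approx (\beta_1 / 100)\,\%\Delta x
\end{array}
```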

Regression in Excel
Step 1: Reorganize the data so that the variables are in adjacent columns. Step 2: Data Analysis → Regression.

Regression in Excel: Ex. 2.11 (screenshot slides of the worksheet and regression output; images not preserved in the transcript)

Regression in Excel
The t-statistics show that the coefficient on ceoten is insignificant at the 5% level. The p-value for ceoten is 0.128368, which is greater than .05: if the null hypothesis were true, you would see a value this extreme about 13% of the time. Each t-test inherently tests the null hypothesis that the corresponding coefficient equals zero; here you fail to reject the null hypothesis on β1.
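A sketch of the t-test behind that p-value. The coefficient and standard error below are illustrative values chosen to be roughly consistent with the slide's reported p-value, not taken from the Excel output:

```python
from scipy import stats

coef, se, df = 0.0097, 0.0063, 175   # hypothetical coefficient, std. error, dof
t_stat = coef / se
p_value = 2 * (1 - stats.t.cdf(abs(t_stat), df))  # two-sided p-value
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")     # fail to reject H0 if p > .05
```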
