Presentation on theme: "Session 3 – Linear Regression Amine Ouazad, Asst. Prof. of Economics"— Presentation transcript:
1 Econometrics — Session 3: Linear Regression. Amine Ouazad, Asst. Prof. of Economics
3 Outline of the course: Introduction: Identification; Introduction: Inference; Linear Regression; Identification Issues in Linear Regressions; Inference Issues in Linear Regressions.
4 This session — Introduction: Linear Regression. What is the effect of X on Y? Hands-on problems: What is the effect of the death of the CEO (X) on firm performance (Y)? (Morten Bennedsen) What is the effect of child safety seats (X) on the probability of death (Y)? (Steve Levitt)
5 This session: Linear Regression. Notations. Assumptions. The OLS estimator. Implementation in Stata. The OLS estimator is CAN: Consistent and Asymptotically Normal. The OLS estimator is BLUE*: Best Linear Unbiased Estimator. Essential statistics: t-stat, R squared, adjusted R squared, F stat, confidence intervals. Tricky questions. (*Conditions apply.)
10 Assumptions. A1: Linearity. A2: Full rank. A3: Exogeneity of the covariates. A4: Homoskedasticity and non-autocorrelation. A5: Exogenously generated covariates. A6: Normality of the residuals.
11 Assumption A1: Linearity. y = f(x1, x2, …, xK) + e is specified as y = x1 b1 + x2 b2 + … + xK bK + e. In plain English: the effect of xk is constant, and the effect of xk does not depend on the value of xk'. Not satisfied if squares or higher powers of x matter, or if interaction terms matter.
12 Notations: the data generating process, in scalar notation and in matrix notation.
13 Assumption A2: Full rank. We assume that X'X is invertible. Notes: A2 may be satisfied in the data generating process but fail in the observed sample. Examples where it fails: a full set of month-of-the-year dummies with year dummies, country dummies, gender dummies (each alongside a constant).
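The dummy-variable examples above can be checked numerically. This is a minimal numpy sketch (the course itself uses Stata; variable names like `female`/`male` are illustrative): including a constant plus a *complete* set of gender dummies makes the columns of X linearly dependent, so X'X is singular and A2 fails; dropping one dummy restores full rank.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
female = rng.integers(0, 2, n)
male = 1 - female

# Constant + both gender dummies: female + male = constant,
# so X'X is singular (the classic dummy-variable trap, A2 fails).
X_bad = np.column_stack([np.ones(n), female, male])
rank_bad = np.linalg.matrix_rank(X_bad.T @ X_bad)

# Dropping one dummy restores full rank.
X_ok = np.column_stack([np.ones(n), female])
rank_ok = np.linalg.matrix_rank(X_ok.T @ X_ok)
```

This is what Stata's `regress` resolves automatically by dropping one of the collinear variables.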
14 Assumption A3: Exogeneity, i.e. mean independence of the residual and the covariates: E(e | x1, …, xK) = 0. This is a property of the data generating process. Link with selection bias in Session 1?
15 Dealing with endogeneity. You are assuming that there is no omitted variable that is both correlated with the Xs and has an effect on Y. If a variable is correlated with X but has no effect on Y, that is fine. If it affects Y but is uncorrelated with X, that is also fine. Example of a problem: health and hospital stays. What covariate should you add? Conclusion: be creative! Think about unobservables!
16 Assumption A4: Homoskedasticity and non-autocorrelation. Var(e | x1, …, xK) = σ², and Corr(ei, ej | X) = 0 for i ≠ j. Visible on a scatterplot? Link with the t-tests of Session 2? Examples of violations: correlated/random effects.
17 Assumption A5: Exogenously generated covariates. Instead of requiring mean independence of the residual and the covariates, we might require their full independence. (Recall: X and e are independent if f(X, e) = f(X) f(e).) Sometimes we will think of X as fixed rather than exogenously generated.
18 Assumption A6: Normality of the residuals. The asymptotic properties of OLS (discussed below) do not depend on normality of the residuals: this is the semi-parametric approach. But for results with a fixed number of observations, we need normality of the residuals for OLS to have nice finite-sample properties (defined below).
19 3. The Ordinary Least Squares Estimator
20 The OLS estimator. Formula: b = (X'X)⁻¹ X'y. Two interpretations: minimization of the sum of squared residuals (Gauss's interpretation); the coefficient b that makes the residuals orthogonal to the observed X in the sample, the sample analogue of A3.
21 OLS estimator. Exercise: find the OLS estimator in the case where both y and x are scalars (i.e. not vectors). Learn the formula by heart (if correct!).
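For the exercise above, the scalar answer (with a constant) is slope = Cov(x, y)/Var(x) and intercept = ȳ − b·x̄. A minimal numpy sketch, verifying that this coincides with the matrix formula (X'X)⁻¹X'y on simulated data (all names and the simulated DGP are illustrative, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
y = 2.0 + 3.0 * x + rng.normal(size=n)

# Scalar-case OLS with a constant: slope = Cov(x, y) / Var(x).
b_scalar = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
a_scalar = y.mean() - b_scalar * x.mean()

# Matrix formula (X'X)^{-1} X'y gives the same coefficients.
X = np.column_stack([np.ones(n), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)
```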
22 Implementation in Stata: the regress command. regress y x1 x2 x3 x4 x5 … What does Stata do? It drops variables that are perfectly correlated (to make sure A2 is satisfied) — always check the number of observations! Options will be seen in the following sessions. Dummies (e.g. for years) can be included using "xi: i.year". Again, A2 must be satisfied.
23 First things first: descriptive statistics. For each variable used in the analysis: mean and standard deviation for the sample and the subsamples. Other possible outputs: min, max, median (only if you care). Source of the dataset. Why? To show the reader the variables are "well behaved": no outlier driving the regression, and values consistent with intuition. The number of observations should be constant across regressions (next slide).
24 Reading a table … from the Levitt paper (2006 working paper).
25 Other important advice. As a best practice, always start by regressing y on x with no controls except the most essential ones. No effect? Then maybe you should think twice about going further. Then add controls one by one, or group by group, and explain why the coefficient of interest changes from one column to the next (see next session).
26 Stata tricks. Output the estimation results using estout or outreg. These commands display stars for coefficient significance, output the essential statistics (F, R², t-tests), and stack the columns of regression output for regressions with different sets of covariates. Output formats: LaTeX and text (for Microsoft Word).
27 4. Large Sample Properties of the OLS Estimator
28 The OLS estimator is CAN: Consistent and Asymptotically Normal. Proof sketch: use the "true" relationship between y and X to show that b = β + (N⁻¹ X'X)⁻¹ (N⁻¹ X'e). Use the Slutsky theorem and A3 to show consistency: plim b = β. Use the CLT and A3 to show asymptotic normality: √N (b − β) →d N(0, V), with V = σ² plim(N⁻¹ X'X)⁻¹ under A4.
29 OLS is CAN: numerical simulation. Typical design of a study: recruit X% of a population (for instance a random sample of students at INSEAD); collect the data; perform the regression and get the OLS estimate. If you performed these steps independently a large number of times (a thought experiment), you would get a normal distribution of estimates.
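The thought experiment above can be run as an actual Monte Carlo. A minimal numpy sketch (sample size, number of replications, and the true coefficient are arbitrary choices for illustration): repeat "draw a sample, run OLS, store the slope" many times; the estimates cluster around the true β with a spread shrinking like 1/√N, as asymptotic normality predicts.

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps, true_b = 200, 2000, 1.5
estimates = np.empty(reps)
for r in range(reps):
    # Each replication: fresh sample, run OLS, keep the slope.
    x = rng.normal(size=n)
    y = true_b * x + rng.normal(size=n)
    X = np.column_stack([np.ones(n), x])
    estimates[r] = np.linalg.solve(X.T @ X, X.T @ y)[1]

# The simulated distribution is centered on the true beta,
# with standard deviation close to sigma / sqrt(n * Var(x)) = 1/sqrt(200).
mean_b = estimates.mean()
sd_b = estimates.std()
```

A histogram of `estimates` would show the bell shape directly.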
30 Important assumptions. A1, A2, A3 are needed to solve the identification problem: with them, the estimator is consistent. A4 affects the variance-covariance matrix, and hence the standard errors. Violations of A3? Next session (identification issues). Violations of A4? Session on inference issues.
31 5. Finite Sample Properties of the OLS Estimator
32 The OLS estimator is BLUE. Best … i.e. has minimum variance. Linear … i.e. is a linear function of y (with coefficients that may depend on X). Unbiased … i.e. E(b | X) = β. Estimator … i.e. it is just a function of the observations. Proof: the Gauss-Markov theorem.
33 OLS is BLUE: steps of the proof. OLS is LUE because of A1 and A3. OLS is Best: for any other linear unbiased estimator b0 = Cy, unbiasedness requires CX = Id. Write Dy = Cy − b (where b is the OLS estimator). Show that Var(b0 | X) = Var(b | X) + σ² D'D. The result follows since σ² D'D is positive semi-definite.
34 Finite sample distribution. The OLS estimator is normally distributed for a fixed N, as long as one assumes normality of the residuals (A6). What is a "large" N? Small: e.g. Acemoglu, Johnson and Robinson. Large: e.g. Bennedsen and Perez Gonzalez. Statistical question: the rate of convergence in the law of large numbers.
36 Other examples. Large N: Compustat (thousands of observations), Execucomp, scanner data. Small N: cross-country regressions (fewer than 100 points).
37 6. Statistics for Reading the Output of OLS Estimation
38 Statistics. R squared: what share of the variance of the outcome variable is explained by the covariates? t-test: is the coefficient on the variable of interest significant? Confidence intervals: what interval includes the true coefficient with probability 95%? F statistic: is the model better than random noise?
40 R squared. Measures the share of the variance of Y (the dependent variable) explained by the model Xb, hence R² = Var(Xb)/Var(Y). Note that if you regress Y on itself, the R² is 100%: the R² is not a good indicator of the quality of a model.
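Both claims on the slide can be verified numerically. A minimal numpy sketch (the simulated model and all names are illustrative): compute R² as Var(Xb)/Var(Y) for a genuine regression, then "regress Y on itself" (Y as its own covariate, plus a constant) and confirm R² = 1 even though the model is meaningless.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)

# Ordinary regression of y on a constant and x.
X = np.column_stack([np.ones(n), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)
r2 = np.var(X @ beta) / np.var(y)   # R^2 = Var(Xb)/Var(Y), here about 0.8

# "Regress Y on itself": the fit is perfect, so R^2 = 1 exactly,
# even though this model explains nothing.
X_self = np.column_stack([np.ones(n), y])
b_self = np.linalg.solve(X_self.T @ X_self, X_self.T @ y)
r2_self = np.var(X_self @ b_self) / np.var(y)
```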
41 Tricky question: should I choose the model with the highest R squared? No: adding a variable mechanically raises the R squared, and a model with endogenous variables (thus neither interpretable nor causal) can have a high R squared.
42 Adjusted R-squared. Corrects for the number of variables in the regression: Adj-R² = 1 − [(N−1)/(N−K)] (1 − R²). Proposition: when adding a variable to a regression model, the adjusted R-squared increases if and only if the square of the t-statistic on the added variable is greater than 1. The threshold of 1 is arbitrary (why 1?), but the adjusted R-squared is still interesting.
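The proposition on this slide can be checked numerically. A numpy sketch (the simulated data and names are illustrative): regress y with and without an extra covariate, compute the t-statistic on the added variable, and verify that the adjusted R-squared rises exactly when t² exceeds 1.

```python
import numpy as np

def adj_r2(r2, n, k):
    # Adj-R2 = 1 - (N-1)/(N-K) * (1 - R2), K counting the constant.
    return 1 - (n - 1) / (n - k) * (1 - r2)

def ols(X, y):
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    return beta, y - X @ beta

rng = np.random.default_rng(4)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)              # candidate extra covariate
y = 1.0 + 2.0 * x1 + rng.normal(size=n)
sst = np.sum((y - y.mean()) ** 2)

X_small = np.column_stack([np.ones(n), x1])
X_big = np.column_stack([np.ones(n), x1, x2])
_, e_small = ols(X_small, y)
beta_big, e_big = ols(X_big, y)
r2_small = 1 - e_small @ e_small / sst
r2_big = 1 - e_big @ e_big / sst

# Squared t-statistic on the added regressor x2 in the big model.
s2 = e_big @ e_big / (n - 3)
V = s2 * np.linalg.inv(X_big.T @ X_big)
t2 = (beta_big[2] / np.sqrt(V[2, 2])) ** 2

# Change in adjusted R-squared from adding x2.
gain = adj_r2(r2_big, n, 3) - adj_r2(r2_small, n, 2)
```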
43 t-test and p-value. The p-value is the significance level for the coefficient: significance at the 5% level means a p-value below 0.05; more generally, significance at level α means a p-value below α. The typical critical value for t is 1.96 (when N is large, t is approximately normal). Important significance levels: 10%, 5%, 1%, depending on the size of the dataset. The t-test is valid asymptotically under A1–A4, and valid at finite distance under A6. For small-sample t-tests, see Wooldridge's NBER lectures, "Recent Advances in Econometrics." Formula: t = bk / √(σ̂² Skk), which follows a Student t(N−K) distribution, where Skk is the k-th diagonal element of (X'X)⁻¹.
44 F statistic. Is the model as a whole significant? Null hypothesis H0: all coefficients equal zero, except the constant. Alternative hypothesis: at least one coefficient is nonzero. Under the null hypothesis, F = [R²/(K−1)] / [(1−R²)/(N−K)] follows an F(K−1, N−K) distribution.
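As a check on the formula above, a numpy sketch (simulated data; all names illustrative): compute F from R² and confirm it matches the equivalent sum-of-squares form F = [(SST−SSR)/(K−1)] / [SSR/(N−K)], which is exactly what regression software reports.

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 300, 3                        # constant + 2 slopes
x = rng.normal(size=(n, 2))
y = 0.5 + x @ np.array([1.0, -1.0]) + rng.normal(size=n)

X = np.column_stack([np.ones(n), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ beta
sst = np.sum((y - y.mean()) ** 2)
ssr = e @ e
r2 = 1 - ssr / sst

# F statistic from R^2, as on the slide.
F = (r2 / (k - 1)) / ((1 - r2) / (n - k))

# Algebraically identical sum-of-squares form.
F_direct = ((sst - ssr) / (k - 1)) / (ssr / (n - k))
```

With truly relevant covariates, F is far above any conventional critical value.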
45 7. Tricky Questions
46 Tricky questions. Can I drop a non-significant variable? What if two variables are very strongly correlated (but not perfectly correlated)? How do I deal (simply) with missing/miscoded data? How do I identify influential observations?
47 Tricky questions: can I drop a non-significant variable? A variable may be non-significant but still significantly correlated with other covariates. Dropping the non-significant covariate may then unduly increase the significance of the coefficient of interest (recently seen in an OECD working paper). Conclusion: controls stay.
48 Tricky questions: what if two variables are very strongly correlated (but not perfectly)? One coefficient tends to be very significant and positive… while the coefficient of the other variable is very significant and negative! Beware of multicollinearity.
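The multicollinearity symptom above is easy to reproduce. A numpy sketch (simulated data; names illustrative): x2 is x1 plus tiny noise, so their correlation is near 1. Individually the two coefficients are very imprecisely estimated (huge standard errors, often large with opposite signs), yet their *sum* is pinned down well, because the data can only identify the combined effect.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)   # nearly collinear with x1
y = 1.0 * x1 + rng.normal(size=n)     # only x1 truly matters

X = np.column_stack([np.ones(n), x1, x2])
beta = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ beta

# Standard error of the coefficient on x1: inflated by collinearity.
s2 = e @ e / (n - 3)
V = s2 * np.linalg.inv(X.T @ X)
se1 = np.sqrt(V[1, 1])

# The sum of the two coefficients is still well estimated (near 1).
coef_sum = beta[1] + beta[2]
```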
49 Tricky questions: how do I deal (simply) with missing data? For missing covariates, create dummies flagging the missing values instead of dropping the observations from the regression. If the dependent variable is missing, focus on the subset of non-missing dependents. Argue in the paper that the data are missing at random (if possible). For more advanced material, see the session on the Heckman selection model.
50 How do I identify influential points? Re-run the regression on the dataset excluding the point in question and compare the estimates. Identify influential observations by making a scatterplot of the dependent variable against the prediction Xb.
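The drop-one-point diagnostic above can be sketched in a few lines of numpy (simulated data with one planted outlier; names illustrative): refit the regression leaving out each observation in turn, and flag the observation whose removal moves the slope the most.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)
x[0], y[0] = 10.0, -20.0             # plant one influential outlier

def slope(x, y):
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.solve(X.T @ X, X.T @ y)[1]

b_full = slope(x, y)
# Leave-one-out: a large change in the slope flags an influential point.
shifts = np.array([abs(slope(np.delete(x, i), np.delete(y, i)) - b_full)
                   for i in range(n)])
most_influential = int(shifts.argmax())
```

Here the planted outlier at index 0 dominates the leave-one-out shifts.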
51 Tricky questions. Can I drop the constant in the model? No. Can I include an interaction term (or a square) without including the simple terms?
52 Next sessions … looking forward
53 Next session. What if some of my covariates are measured with error? (Income, degrees, performance, networks.) What if some variable is not included (because you forgot it or don't have it) and still has an impact on y? "Omitted variable bias."
54 Important points from this session. REMEMBER A1 to A6 by heart. Which assumptions are crucial for the asymptotics? Which assumptions are crucial for the finite-sample validity of the OLS estimator? START REGRESSING IN STATA TODAY! regress and outreg2.