1 Lecture 2: ANOVA, Prediction, Assumptions and Properties
Graduate School Social Science Statistics II
Gwilym Pryce, g.pryce@socsci.gla.ac.uk

2 Notices:
• Register

3 Aims and Objectives:
• Aim:
 – to complete our introduction to multiple regression
• Objectives: by the end of this lecture students should be able to:
 – understand and apply ANOVA
 – understand how to use regression for prediction
 – understand the assumptions underlying regression and the properties of the estimates when those assumptions are met

4 Last week:
• 1. Correlation coefficients
• 2. Multiple regression
 – OLS with more than one explanatory variable
• 3. Interpreting coefficients
 – b_k estimates how much y changes if x_k increases by one unit.
• 4. Inference
 – b_k is only a sample estimate, so b_k has a distribution across repeated samples drawn from a given population
 – confidence intervals
 – hypothesis testing
• 5. Coefficient of determination: R² and adjusted R²

5 Plan of today's lecture:
• 1. Prediction
• 2. ANOVA in regression
• 3. F-test
• 4. Regression assumptions
• 5. Properties of OLS estimates

6 1. Prediction
• Given that the regression procedure provides estimates of the coefficient values, we can use these estimates to predict the value of y for given values of x:
 – e.g. the income, education & experience example from Lecture 1 implies the following equation:
   ŷ = -4.2 + 1.45 x_1 + 2.63 x_2

7 Predicting y for particular values of x_k
• We can use this equation to predict the value of y for particular values of x_k:
 – e.g. what is the predicted income of someone with 3 years of post-school education and 1 year of work experience?
   ŷ = -4.2 + 1.45 x_1 + 2.63 x_2 = -4.2 + 1.45 × (3) + 2.63 × (1) = 2.78, i.e. £2,780 (y is measured in £000s)
 – How does this compare with the predicted income of someone with 1 year of post-school education and 3 years of work experience?
   ŷ = -4.2 + 1.45 x_1 + 2.63 x_2 = -4.2 + 1.45 × (1) + 2.63 × (3) = 5.14, i.e. £5,140
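A minimal sketch of these two predictions in Python. The coefficients are those of the estimated equation above; the function name and variable labels are purely illustrative:

```python
# Coefficients from the estimated equation: y-hat = -4.2 + 1.45*x1 + 2.63*x2
# (y measured in £000s; x1 = years of post-school education, x2 = years of work experience)
b0, b1, b2 = -4.2, 1.45, 2.63

def predict_income(x1, x2):
    """Return predicted income in £000s for given education and experience."""
    return b0 + b1 * x1 + b2 * x2

print(round(predict_income(3, 1), 2))  # 2.78 -> £2,780
print(round(predict_income(1, 3), 2))  # 5.14 -> £5,140
```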

8 Predicting y for each value of x_k in the data set:
   ŷ_i = -4.2 + 1.45 x_1i + 2.63 x_2i

9 Residuals, ê_i = prediction error:
   ê_i = y_i – ŷ_i = y_i – (b_0 + b_1 x_1i + b_2 x_2i)
 where b_0, b_1, and b_2 are our sample estimates of the population coefficients β_0, β_1, β_2, so that
   y_i = -4.2 + 1.45 x_1i + 2.63 x_2i + ê_i

10 Forecasting
• If the observations in the regression are not individuals but time periods
 – e.g. observation 1 = 1970, observation 2 = 1971
• and if you know (or can guess) what the value of x_k will be in the next period, then you can use the estimated regression equation to predict what y will be next period.
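For example, reusing the hypothetical predict_income function sketched above: if average post-school education and experience next period were expected to be 4 and 2 years (values invented purely for illustration), the forecast would simply be predict_income(4, 2).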

11 2. ANOVA in regression
• The variance of y is calculated as the sum of squared deviations from the mean divided by the degrees of freedom:
   var(y) = Σ(y_i – ȳ)² / (n – 1)
• Analysis of variance examines the proportion of this variance that is explained by the regression, and the proportion that cannot be explained by the regression (reflected in the random error term).

12
• This amounts to an analysis of the numerator of the variance equation – the sum of squared deviations of y from the mean.
 – the denominator is constant for all analyses on a particular sample: the error variance, for example, will have the same denominator as the variance of y.
 – the sum of squared deviations from the mean, without dividing by (n – 1), is called the "Total Sum of Squares" (TSS).

13
 – The variation in ŷ – the predicted values of y for the observed values of the explanatory variables in our sample – can be thought of as the explained variation in y. If we sum the squared deviations of ŷ from the mean value of y, we get the explained sum of squares, often called the Regression Sum of Squares (REGSS). REGSS measures the sample variation in ŷ.

14
• When a line of best fit is calculated, we get errors (unless the line fits perfectly), and these can be thought of as unexplained variation in y.
 – We calculate the residual or error for a particular observation i as the difference between our observed value of the dependent variable, y_i, and the value predicted by our model, ŷ_i: ê_i = y_i – ŷ_i
 – if we square these errors – or residuals – before adding them up, we get the Residual Sum of Squares (RSS)
 – RSS represents the degree of unexplained variation in y.

15
• The total variation in y is called the Total Sum of Squares (TSS).
• If the REGSS, the explained variation in y, is large relative to the total variation in y, then the regression line is doing a good job of explaining y
 – i.e. the model fits the data well
• If the REGSS, the explained variation in y, is small relative to the total variation in y, then the regression model is not doing a good job of explaining y
 – i.e. the model fits the data poorly

16 A useful measure that we have already come across is the proportion of improvement due to the model:
   R² = regression sum of squares / total sum of squares
     = proportion of the variation in y that can be explained by the model

17 TSS = REGSS + RSS
• The sum of squared deviations of y from the mean (i.e. the numerator in the variance-of-y equation) is called the TOTAL SUM OF SQUARES (TSS).
• The sum of squared residuals (errors) ê is called the RESIDUAL SUM OF SQUARES (RSS)*
 * sometimes called the "error sum of squares"
• The difference between TSS and RSS is called the REGRESSION SUM OF SQUARES (REGSS)#
 # sometimes called the "explained sum of squares" or "model sum of squares"
 ⇒ TSS = REGSS + RSS

18
• R² is the proportion of the variation in y that is explained by the regression: R² = REGSS / TSS
• Thus, the explained sum of squares is equal to R² times the total variation in y: REGSS = R² × TSS
• Given that RSS is the unexplained variation in y, we can say that: RSS = (1 – R²) × TSS
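A minimal numerical sketch of this decomposition in Python. The data are invented purely for illustration; any small dataset would do:

```python
import numpy as np

# Hypothetical data, for illustration only
y = np.array([12.0, 15.0, 9.0, 20.0, 17.0, 11.0])
X = np.column_stack([np.ones(len(y)),      # intercept
                     [3, 5, 2, 7, 6, 3],   # x1
                     [1, 2, 1, 4, 3, 2]])  # x2

# OLS estimates, fitted values and residuals
b, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ b
resid = y - y_hat

TSS   = np.sum((y - y.mean())**2)        # total sum of squares
RSS   = np.sum(resid**2)                 # residual (error) sum of squares
REGSS = np.sum((y_hat - y.mean())**2)    # regression (explained) sum of squares

print(np.isclose(TSS, REGSS + RSS))      # True: TSS = REGSS + RSS
print(REGSS / TSS)                       # R-squared
```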

19 SPSS ANOVA table explained

[Slides 20–26: annotated SPSS ANOVA table output – screenshots not reproduced in this transcript.]
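As a reminder of the output being annotated, the ANOVA table that SPSS reports for a regression has the following general layout (k = number of slope coefficients, n = number of observations); the entries below are the defining formulas rather than values taken from the original slides:

   Model        Sum of Squares   df          Mean Square         F                         Sig.
   Regression   REGSS            k           REGSS / k           Regression MS / Residual MS   p-value
   Residual     RSS              n – k – 1   RSS / (n – k – 1)
   Total        TSS              n – 1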

27 3. The F-test
• These sums of squares, particularly the RSS, are useful for doing hypothesis tests about groups of coefficients.
• The test statistic used in such tests follows the F distribution:
   F = [ (RSS_R – RSS_U) / r ] / [ RSS_U / (n – k – 1) ]
 where:
   RSS_U = unrestricted residual sum of squares = RSS under H_1
   RSS_R = restricted residual sum of squares = RSS under H_0
   r = number of restrictions

28 Test for β_k = 0 ∀k
The most common group coefficient test is that β_k = 0 ∀k (NB ∀ means "for all")
 – i.e. there is no relationship between y and any of the explanatory variables.
 – The hypothesis test has 4 steps:
   (1) H_0: β_k = 0 ∀k;  H_1: β_k ≠ 0 for at least one k
   (2) α = 0.05
   (3) Reject H_0 iff Prob(F > F_c) < α
   (4) Calculate P = Prob(F > F_c) and conclude. (P is the "Sig." value reported by SPSS in the ANOVA table.)

29
• For this particular test:
   RSS_U = RSS under H_1 = RSS
   RSS_R = RSS under H_0 = TSS
   (RSS_R = TSS under H_0 because if all slope coefficients were zero, the explained variation would be zero, so the error element would comprise 100% of the variation, i.e. RSS under H_0 = 100% of TSS = TSS)
   r = number of restrictions = number of slope coefficients being restricted = all slope coefficients = k
• For this particular test, the F statistic therefore reduces to F = (R²/k) / ((1 – R²)/(n – k – 1)), so it isn't telling us much more than the R².
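A quick Python check that the two forms of F agree, using the same kind of invented data as the earlier sketch (all values illustrative only):

```python
import numpy as np
from scipy.stats import f as f_dist

# Hypothetical data, for illustration only
y = np.array([12.0, 15.0, 9.0, 20.0, 17.0, 11.0])
X = np.column_stack([np.ones(len(y)), [3, 5, 2, 7, 6, 3], [1, 2, 1, 4, 3, 2]])
n, k = len(y), X.shape[1] - 1                  # k = number of slope coefficients

b, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ b
RSS = np.sum(resid**2)                         # RSS_U (unrestricted)
TSS = np.sum((y - y.mean())**2)                # RSS_R under H0: all slopes = 0
R2  = 1 - RSS / TSS

F_from_rss = ((TSS - RSS) / k) / (RSS / (n - k - 1))
F_from_r2  = (R2 / k) / ((1 - R2) / (n - k - 1))
print(np.isclose(F_from_rss, F_from_r2))       # True: the two forms are identical
print(f_dist.sf(F_from_rss, k, n - k - 1))     # p-value (the "Sig." column in SPSS)
```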

30 Proof of alternative F calculation:
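The original slides present this proof as images; the standard derivation runs along the following lines (a reconstruction, not a transcription of the slides):

```latex
% Substituting RSS_R = TSS, RSS_U = RSS and r = k into the general F statistic:
\[
F \;=\; \frac{(\mathrm{TSS}-\mathrm{RSS})/k}{\mathrm{RSS}/(n-k-1)}
  \;=\; \frac{\mathrm{REGSS}/k}{\mathrm{RSS}/(n-k-1)} .
\]
% Dividing numerator and denominator by TSS, and using
% REGSS/TSS = R^2 and RSS/TSS = 1 - R^2:
\[
F \;=\; \frac{(\mathrm{REGSS}/\mathrm{TSS})/k}{(\mathrm{RSS}/\mathrm{TSS})/(n-k-1)}
  \;=\; \frac{R^2/k}{(1-R^2)/(n-k-1)} .
\]
```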


33
• Very simply, the ANOVA table F-test can be thought of as the ratio of the mean regression sum of squares to the mean residual sum of squares (each sum of squares divided by its degrees of freedom):
   F = regression mean square / residual mean square = (REGSS/k) / (RSS/(n – k – 1))
 – if the line of best fit is good, F is large:
   the improvement in prediction due to the regression will be large (so the regression mean square is large)
   the difference between the regression line and the observed data will be small (so the residual mean square is small)

34 House Price Equation Example:

35 4. Regression assumptions
For estimation of the coefficients and for regression inference to be correct:
• 1. The equation is correctly specified:
 – linear in parameters (variables can still be transformed)
 – contains all relevant variables
 – contains no irrelevant variables
 – contains no variables measured with error
• 2. The error term has zero mean
• 3. The error term has constant variance

36
• 4. The error term is not autocorrelated
 – i.e. not correlated with the error terms from previous time periods
• 5. The explanatory variables are fixed
 – we observe the (normal) distribution of y for repeated fixed values of x
• 6. No linear relationship between the RHS variables
 – i.e. no "multicollinearity"
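Several of these assumptions can be eyeballed from the residuals of a fitted model. A minimal sketch with invented data and crude correlation-based checks (in practice, formal tests such as Breusch–Pagan or Durbin–Watson would normally be used):

```python
import numpy as np

# Hypothetical data, for illustration only
rng = np.random.default_rng(0)
x1 = rng.uniform(0, 10, 50)
x2 = rng.uniform(0, 5, 50)
y  = 2.0 + 1.5 * x1 + 0.8 * x2 + rng.normal(0, 1, 50)

X = np.column_stack([np.ones_like(y), x1, x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ b
resid  = y - fitted

# Assumption 2: residuals should have (near-)zero mean
print(resid.mean())

# Assumption 3: residual spread should not grow or shrink with the fitted values
# (crude check: correlation between |residuals| and fitted values close to zero)
print(np.corrcoef(np.abs(resid), fitted)[0, 1])

# Assumption 4 (time-ordered data): successive residuals should not be correlated
print(np.corrcoef(resid[:-1], resid[1:])[0, 1])
```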

37 5. Properties of OLS estimates
• If the above assumptions are met, OLS estimates are said to be BLUE:
 – Best: i.e. most efficient = least variance
 – Linear: i.e. best amongst linear estimators
 – Unbiased: i.e. in repeated samples, the mean of b = β
 – Estimates: i.e. estimates of the population parameters.

38 Summary
• 1. Prediction
• 2. ANOVA in regression
• 3. F-test
• 4. Regression assumptions
• 5. Properties of OLS estimates

39 Reading:
 – Chapter 2 of Pryce's notes on Advanced Regression in SPSS
 – Chapters 1 and 2 of Kennedy, "A Guide to Econometrics"
 – Achen, Christopher H., Interpreting and Using Regression (London: Sage, 1982)
 – Chapter 4 of Andy Field, "Discovering Statistics Using SPSS for Windows: Advanced Techniques for the Beginner"
 – Chapters 1 & 2 of Wooldridge, Introductory Econometrics

