Classical Regression III


1 Classical Regression III
Lecture 11

2 Today’s plan
Chi-squared test
F test

3 More on the ANOVA table
Remember the sum of squares identity: Total = Explained + Unexplained. The sum of squares identity can also be written as
$\sum_i (y_i - \bar{y})^2 = \sum_i (\hat{y}_i - \bar{y})^2 + \sum_i (y_i - \hat{y}_i)^2$
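A minimal sketch verifying the identity numerically; the simulated data and variable names are my own, not from the lecture:

```python
import numpy as np

# Simulated data (placeholder, not the lecture's dataset)
rng = np.random.default_rng(0)
x = rng.normal(size=30)
y = 2 + 3 * x + rng.normal(size=30)

# Fit y = a + b*x by least squares; polyfit returns slope first
b, a = np.polyfit(x, y, 1)
y_hat = a + b * x

tss = np.sum((y - y.mean()) ** 2)       # total sum of squares
ess = np.sum((y_hat - y.mean()) ** 2)   # explained (model) sum of squares
rss = np.sum((y - y_hat) ** 2)          # residual (unexplained) sum of squares

print(tss, ess + rss)  # the two numbers agree up to rounding error
```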

4 More on the ANOVA table (2)
The components of the sum of squares identity appear in the Stata output as the ANOVA table, which is laid out like this (Model = explained, Residual = unexplained):

Source   | SS             | df    | MS
---------|----------------|-------|--------
Model    | explained SS   | 1     | SS/df
Residual | unexplained SS | n - 2 | SS/df
Total    | total SS       | n - 1 | SS/df

5 Chi-squared test
The next test we’ll look at is the χ² (chi-squared) test. This does not appear in the Stata output. With the χ² test, we are testing whether a variance equals a particular value. Recall that the Z statistic looks like
$Z = \frac{X - \mu}{\sigma}$
The square of this standard normal gives the χ² distribution with one degree of freedom: $Z^2 \sim \chi^2_1$.
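A quick simulation (my own illustration, hedged) of the Z² ~ χ²₁ fact: the empirical quantiles of squared standard normals match the χ²₁ quantiles.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
z = rng.standard_normal(100_000)

# Compare the empirical 95th percentile of Z^2 with the chi2(1) quantile
print(np.quantile(z**2, 0.95))     # ~3.84 (since 1.96^2 = 3.84)
print(stats.chi2.ppf(0.95, df=1))  # 3.841...
```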

6 Chi-squared test (2)
With the chi-squared test we want to test H0 : σ²_{Y·X} = 1 at a 5% significance level. The χ² statistic for n − 2 degrees of freedom is distributed as
$\chi^2_{n-2} = \frac{(n-2)\, s^2_{Y \cdot X}}{\sigma^2_{Y \cdot X}}$
where σ²_{Y·X} is the hypothesized value given under the null hypothesis.

7 Chi-squared test (3)
The Root MSE in the Stata or Excel output is the square root of $s^2_{Y \cdot X}$, so our χ² statistic will be
$\chi^2 = \frac{(n-2)\,(\text{Root MSE})^2}{\sigma^2_{Y \cdot X}}$
Now look at the χ² table. The format is similar to the F table: the first column gives the degrees of freedom, and across the top row are the areas remaining in the right-hand tail of the curve. The H0 region is the area to the left of the tabulated χ² value, and the H1 region is the area to the right.
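A hedged sketch of the computation; root_mse and n are placeholder values (the lecture's actual numbers are not in the transcript):

```python
n = 30
root_mse = 1.1    # hypothetical Root MSE from the Stata/Excel output
sigma0_sq = 1.0   # hypothesized variance under H0

s_sq = root_mse ** 2                     # s^2_{Y.X} = (Root MSE)^2
chi2_stat = (n - 2) * s_sq / sigma0_sq   # (n-2) s^2 / sigma_0^2
print(chi2_stat)                         # 33.88 with these placeholder inputs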

8 Chi-squared test (4)
From the χ² table, we find that at the 5% significance level with 28 degrees of freedom the critical value is χ²_{0.05, 28} ≈ 41.34. Since our χ² < χ²_{0.05, 28}, we fail to reject the null.
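The table lookup can be checked directly in scipy:

```python
from scipy import stats

crit = stats.chi2.ppf(0.95, df=28)   # 5% in the right tail, 28 df
print(crit)                          # 41.337...
# Reject H0 only if the computed chi-squared statistic exceeds this value.
```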

9 F-test
The second line down in the right-hand column of the Stata output is an F test, which assesses the significance of the regression equation using the sum of squares information from the ANOVA table. Using the sample data, the F test is designed to test the difference of variances between samples drawn from independent populations. The F statistic looks like
$F_{m,n} = \frac{\chi^2_m / m}{\chi^2_n / n}$
where m and n are the respective degrees of freedom.
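A hedged sketch of the two-sample variance-ratio version of this test; the data are simulated and the sample sizes are my own choices:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sample1 = rng.normal(scale=2.0, size=11)   # m = 10 degrees of freedom
sample2 = rng.normal(scale=2.0, size=16)   # n = 15 degrees of freedom

# Ratio of sample variances is F(m, n) under equal population variances
f_stat = sample1.var(ddof=1) / sample2.var(ddof=1)
p_right = stats.f.sf(f_stat, 10, 15)       # area in the right-hand tail
print(f_stat, p_right)
```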

10 F-test (2) Properties of the F distribution:
1) The F distribution is skewed to the right and takes only positive values.
2) As the degrees of freedom increase for both samples, the F distribution approaches the normal — see the sketch below.
(Figure: density curves of F(10,2) and F(50,50).)
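A small illustration (my own, hedged) of both properties using the two distributions pictured on the slide: for a symmetric distribution the median sits midway between the quartiles, so a large gap on the right signals positive skew.

```python
from scipy import stats

for dfn, dfd in [(10, 2), (50, 50)]:
    q25, q50, q75 = stats.f.ppf([0.25, 0.5, 0.75], dfn, dfd)
    # Right gap vs left gap: F(10,2) is heavily right-skewed,
    # F(50,50) is far more symmetric.
    print((dfn, dfd), q75 - q50, q50 - q25)
```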

11 F-test (3)
Last time we looked at the spreadsheet L9.xls and learned how to use the LINEST function in Excel to run a regression. The LINEST output gives you the model (or explained) sum of squares, the residual sum of squares, and the F statistic. The estimated regression equation was reported with the standard errors for the coefficients in parentheses.
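A rough Python analogue (my sketch, not the lecture's spreadsheet) of what LINEST reports when its stats flag is on: coefficients, the model and residual sums of squares, and the F statistic. The simulated data are placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=30)
y = 1.0 + 0.5 * x + rng.normal(size=30)

b, a = np.polyfit(x, y, 1)              # slope, intercept
y_hat = a + b * x
ess = np.sum((y_hat - y.mean()) ** 2)   # model (explained) SS
rss = np.sum((y - y_hat) ** 2)          # residual SS
n, k = len(y), 1                        # k = number of slope coefficients
f_stat = (ess / k) / (rss / (n - k - 1))

print(a, b, ess, rss, f_stat)
```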

12 F-test (4)
Here the F test is of H0 : b = 0. Our F statistic will be
$F = \frac{\text{Model SS} / 1}{\text{Residual SS} / (n-2)}$
So from our spreadsheet, we can plug in the sums of squares to calculate our F statistic by hand. The value we get this way is close to the F statistic reported in the LINEST output [it’s not too far off].
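A side check (my own addition, hedged): with a single X variable, the F statistic for H0 : b = 0 equals the square of the t statistic on b, which is why the two tests agree in simple regression. Data are simulated placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = rng.normal(size=30)
y = 1.0 + 0.5 * x + rng.normal(size=30)

res = stats.linregress(x, y)             # slope, intercept, stderr, ...
y_hat = res.intercept + res.slope * x
ess = np.sum((y_hat - y.mean()) ** 2)
rss = np.sum((y - y_hat) ** 2)
f_stat = ess / (rss / (len(y) - 2))

print(f_stat, (res.slope / res.stderr) ** 2)  # the two numbers coincide
```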

13 F-test (5)
In the F tables, the first column gives the degrees of freedom for the denominator (df2), and across the top row are the degrees of freedom for the numerator (df1). For each pair of degrees of freedom there are four values, for significance levels from 0.25 to 0.01; the significance level is the area remaining in the right-hand tail of the F distribution.

14 F-test (6)
Now let’s look up the F value with df1 = 1 and df2 = 28. The F value at a 5% significance level is 4.20: F_{0.05, 1, 28} = 4.20. Therefore, if F > F_{0.05, 1, 28}, we reject the null that b = 0.
You will use an F test when you calculate within-sample predictions, or when you do a Chow test. You will also use an F test when you want to test the significance of a regression with multiple independent (X) variables.
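Verifying the table lookup in scipy:

```python
from scipy import stats

crit = stats.f.ppf(0.95, dfn=1, dfd=28)   # 5% in the right tail
print(crit)                               # 4.196..., which the table rounds to 4.20
# Reject H0: b = 0 whenever the computed F statistic exceeds this value.
```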

