The Squared Correlation r² – What Does It Tell Us?


1 The Squared Correlation r² – What Does It Tell Us?
Lecture 51 Sec. 13.9 Tue, May 2, 2006

2 Residual Sum of Squares
Recall that the line of “best” fit is the line with the smallest sum of squared residuals. This is also called the residual sum of squares:

SSE = Σ(y - ŷ)²

3 Other Sums of Squares
There are two other sums of squares associated with y. The regression sum of squares:

SSR = Σ(ŷ - ȳ)²

The total sum of squares:

SST = Σ(y - ȳ)²

4 Other Sums of Squares The regression sum of squares, SSR, measures the variability in y that is predicted by the model, i.e., the variability in ŷ. The total sum of squares, SST, measures the observed variability in y.
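
To make the definitions concrete, here is a minimal sketch in Python (the slides themselves use a TI-83; the function name sums_of_squares is ours), computing all three sums from the observed values y and the fitted values ŷ:

    def sums_of_squares(y, y_hat):
        """Return (SSE, SSR, SST) for observed y and fitted values y_hat."""
        n = len(y)
        y_bar = sum(y) / n                                      # mean of the observed y
        sse = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))   # residual sum of squares
        ssr = sum((yh - y_bar) ** 2 for yh in y_hat)            # regression sum of squares
        sst = sum((yi - y_bar) ** 2 for yi in y)                # total sum of squares
        return sse, ssr, sst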

5 Example – SST, SSR, and SSE
Plot the data in Example 13.10, p. 847, with ȳ.
[Scatterplot of the data with the horizontal line ȳ; x axis 8–16, y axis 8–20.]

6 Example – SST, SSR, and SSE
The deviations of y from ȳ (observed).
[Scatterplot showing the deviations y - ȳ, labeled SST; x axis 8–16, y axis 8–20.]

7 Example – SST, SSR, and SSE
The deviations of ŷ from ȳ (predicted).
[Scatterplot showing the deviations ŷ - ȳ, labeled SSR; x axis 8–16, y axis 8–20.]

8 Example – SST, SSR, and SSE
The deviations of y from ŷ (residual deviations).
[Scatterplot showing the residuals y - ŷ, labeled SSE; x axis 8–16, y axis 8–20.]

9 The Squared Correlation
It turns out that SST = SSR + SSE. It also turns out that r² = SSR/SST.

10 The Squared Correlation
Consequently, 1 - r² = SSE/SST.

11 Example
Compute SSE, SSR, and SST for the following data.

x     y
8     9
10    13
12    14
14    15
16    19

12 Example
Compute SSE, SSR, and SST for the following data.

x     y     ŷ
8     9     9.6
10    13    11.8
12    14    14.0
14    15    16.2
16    19    18.4

13 Example
Compute SSE, SSR, and SST for the following data.

x     y     ŷ       y - ȳ    ŷ - ȳ
8     9     9.6     -5       -4.4
10    13    11.8    -1       -2.2
12    14    14.0     0        0.0
14    15    16.2     1        2.2
16    19    18.4     5        4.4

14 Example
Compute SSE, SSR, and SST for the following data.

x     y     ŷ       y - ȳ    ŷ - ȳ    (y - ȳ)²   (ŷ - ȳ)²
8     9     9.6     -5       -4.4     25         19.36
10    13    11.8    -1       -2.2      1          4.84
12    14    14.0     0        0.0      0          0.00
14    15    16.2     1        2.2      1          4.84
16    19    18.4     5        4.4     25         19.36

15 Example
Compute SSE, SSR, and SST for the following data.

x     y     ŷ       y - ȳ    ŷ - ȳ    (y - ȳ)²   (ŷ - ȳ)²
8     9     9.6     -5       -4.4     25         19.36
10    13    11.8    -1       -2.2      1          4.84
12    14    14.0     0        0.0      0          0.00
14    15    16.2     1        2.2      1          4.84
16    19    18.4     5        4.4     25         19.36
Sum                                   52         48.40

16 Example We have now found that SST = 52 and SSR = 48.4.

17 Example
Now calculate SSE.

x     y     ŷ
8     9     9.6
10    13    11.8
12    14    14.0
14    15    16.2
16    19    18.4

18 Example
Now calculate SSE.

x     y     ŷ       y - ŷ    (y - ŷ)²
8     9     9.6     -0.6     0.36
10    13    11.8     1.2     1.44
12    14    14.0     0.0     0.00
14    15    16.2    -1.2     1.44
16    19    18.4     0.6     0.36

19 Example
Now calculate SSE.

x     y     ŷ       y - ŷ    (y - ŷ)²
8     9     9.6     -0.6     0.36
10    13    11.8     1.2     1.44
12    14    14.0     0.0     0.00
14    15    16.2    -1.2     1.44
16    19    18.4     0.6     0.36
Sum                          3.60

20 Example Note that 48.4 + 3.6 = 52. That is, SSR + SSE = SST.
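
The arithmetic above can be double-checked with a short Python sketch (Python is our substitute for the TI-83 used in the slides; the data are the example's):

    y     = [9, 13, 14, 15, 19]
    y_hat = [9.6, 11.8, 14.0, 16.2, 18.4]
    y_bar = sum(y) / len(y)                                 # 14.0

    sst = sum((yi - y_bar) ** 2 for yi in y)                # 52.0
    ssr = sum((yh - y_bar) ** 2 for yh in y_hat)            # 48.4
    sse = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))   # 3.6
    print(sst, ssr + sse)   # both about 52 (floating-point rounding aside)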

21 TI-83 – Finding SST, SSR, and SSE
Put the x values into L1 and the y values into L2. Use LinReg(a+bx) L1,L2,Y1 to put the values of ŷ in L3. To get SST, use 1-Var Stats L2 and compute (n - 1)s². To get SSR, use 1-Var Stats L3 and again compute (n - 1)s².

22 TI-83 – Finding SST, SSR, and SSE
To get SSE, compute sum((L2 - L3)²).
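
The shortcut behind these calculator steps — a sum of squares is (n - 1) times a sample variance — can be verified in Python (a sketch assuming numpy, with the example's data; (n - 1)s² of the fitted values gives SSR because the fitted values have the same mean as y):

    import numpy as np

    y     = np.array([9, 13, 14, 15, 19], dtype=float)   # L2: observed y
    y_hat = np.array([9.6, 11.8, 14.0, 16.2, 18.4])      # L3: fitted y-hat
    n = len(y)

    sst = (n - 1) * y.var(ddof=1)        # (n - 1)s^2 of L2 -> 52.0
    ssr = (n - 1) * y_hat.var(ddof=1)    # (n - 1)s^2 of L3 -> 48.4
    sse = np.sum((y - y_hat) ** 2)       # sum((L2 - L3)^2)  -> 3.6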

23 Explaining Variation One goal of regression is to “explain” the variation in y. For example, if x were height and y were weight, how would we explain the variation in weight? That is, why do some people weigh more than others? Or if x were the hours spent studying for a math test and y were the score on the test, how would we explain the variation in scores? That is, why do some people score higher than others?

24 Explaining Variation A certain amount of the variation in y can be explained by the variation in x. Some people weigh more than others because they are taller. Some people score higher on math tests because they studied more. But that is never the full explanation. Not all taller people weigh more. Not everyone who studies more scores higher.

25 Explaining Variation High degree of correlation between x and y ⇒ variation in x explains most of the variation in y. Low degree of correlation between x and y ⇒ variation in x explains only a little of the variation in y. In other words, the amount of variation in y that is explained by the variation in x should be related to r.

26 Explaining Variation Statisticians consider the predicted variation SSR to be the amount of variation in y (i.e., SST) that is explained by the model. The remaining variation in y, i.e., the residual variation SSE, is the amount that is not explained by the model.

27 Explaining Variation SST = SSE + SSR

28 Explaining Variation SST = SSE + SSR
SST is the total variation in y (to be explained).

29 Explaining Variation SST = SSE + SSR
SST is the total variation in y (to be explained); SSR is the variation in y that is explained by the model.

30 Explaining Variation SST = SSE + SSR
SST is the total variation in y; SSE is the variation in y that is unexplained by the model; SSR is the variation in y that is explained by the model.

31 Example – SST, SSR, and SSE
The total (observed) variation in y.
[Scatterplot; x axis 8–16, y axis 8–20.]

32 Example – SST, SSR, and SSE
The variation in y that is explained by the model (i.e., due to the variation in x).
[Scatterplot; x axis 8–16, y axis 8–20.]

33 Example – SST, SSR, and SSE
The variation in y that is not explained by the model (i.e., “random” variation).
[Scatterplot; x axis 8–16, y axis 8–20.]

34 Explaining Variation Therefore,
r² = SSR/SST is the proportion of variation in y that is explained by the model, and 1 - r² = SSE/SST is the proportion that is not explained by the model.
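
For the running example, r² = SSR/SST = 48.4/52 ≈ 0.93, so the model explains about 93% of the variation in y. A quick Python check (a sketch assuming numpy; the slides use a TI-83) confirms that this ratio equals the squared correlation:

    import numpy as np

    x = np.array([8, 10, 12, 14, 16], dtype=float)
    y = np.array([9, 13, 14, 15, 19], dtype=float)

    r = np.corrcoef(x, y)[0, 1]   # Pearson correlation of x and y
    print(r ** 2)                 # about 0.9308, equal to SSR/SST = 48.4/52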

35 TI-83 – Calculating r² To calculate r² on the TI-83,
Follow the procedure that produces the regression line and r. In the same window, the TI-83 reports r².

