
Lesson 10: Regressions Part I.


1 Lesson 10: Regressions Part I

2 Outline
Correlation analysis
Regression analysis
Standard error of estimate
Confidence interval and prediction interval
Inference about the regression slope
Cautions about the interpretation of significance
Evaluating the model

3 Correlation Analysis Correlation Analysis is a group of statistical techniques used to measure the strength of the association between two variables. A Scatter Diagram is a chart that portrays the relationship between the two variables. The Dependent Variable is the variable being predicted or estimated. The Independent Variable provides the basis for estimation. It is the predictor variable.

4 Example Suppose a university administrator wishes to determine whether any relationship exists between a student’s score on an entrance examination and that student’s cumulative GPA. A sample of eight students is taken. The results are shown below.

Student  Exam Score  GPA
A        74          2.6
B        69          2.2
C        85          3.4
D        63          2.3
E        82          3.1
F        60          2.1
G        79          3.2
H        91          3.8

5 Scatter Diagram: GPA vs. Exam Score
We would like to know whether there is a strong linear relationship between the two variables. [Scatter diagram: Cumulative GPA (vertical axis) against Exam Score (horizontal axis).]

6 The Coefficient of Correlation, r
The Coefficient of Correlation (r) is a measure of the strength of the linear relationship between two variables. It can range from −1.00 to 1.00. Values of −1.00 or 1.00 indicate perfect correlation. Values close to 0.0 indicate weak correlation. Negative values indicate an inverse relationship and positive values indicate a direct relationship.

7 Formula for r We calculate the coefficient of correlation as r = Sxy / (Sx Sy), where Sxy = Σ(xi − mx)(yi − my)/(n − 1) is the sample covariance between x and y, and Sx and Sy are the sample standard deviations of x and y.
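The formula above can be sketched in Python, using the exam-score and GPA sample from the earlier slide (pure standard library; this is an illustration, not part of the original slides):

```python
import math

# Exam score (x) and GPA (y) for the eight sampled students
exam = [74, 69, 85, 63, 82, 60, 79, 91]
gpa = [2.6, 2.2, 3.4, 2.3, 3.1, 2.1, 3.2, 3.8]
n = len(exam)

mx = sum(exam) / n
my = sum(gpa) / n

# Sample covariance and sample standard deviations (divisor n - 1)
s_xy = sum((x - mx) * (y - my) for x, y in zip(exam, gpa)) / (n - 1)
s_x = math.sqrt(sum((x - mx) ** 2 for x in exam) / (n - 1))
s_y = math.sqrt(sum((y - my) ** 2 for y in gpa) / (n - 1))

r = s_xy / (s_x * s_y)
print(round(r, 3))  # a strong positive correlation, close to 1
```

The computed r is close to 1, matching the visual impression of the scatter diagram.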

8 Coefficient of Determination
The coefficient of determination (r²) is the proportion of the total variation in the dependent variable (Y) that is explained or accounted for by the variation in the independent variable (X). It is the square of the coefficient of correlation. It ranges from 0 to 1. It does not give any information on the direction of the relationship between the variables. Special cases: No correlation: r = 0, r² = 0. Perfect negative correlation: r = −1, r² = 1. Perfect positive correlation: r = +1, r² = 1.

9 EXAMPLE 1 Dan Ireland, the student body president at Toledo State University, is concerned about the cost to students of textbooks. He believes there is a relationship between the number of pages in the text and the selling price of the book. To provide insight into the problem he selects a sample of eight textbooks currently on sale in the bookstore. Draw a scatter diagram. Compute the correlation coefficient.

Book                Pages  Price ($)
Intro to History    500    84
Basic Algebra       700    75
Intro to Psyc       800    99
Intro to Sociology  600    72
Bus. Mgt.           400    69
Intro to Biology           81
Fund. of Jazz              63
Princ. of Nursing          93

10 Example 1 continued

11 Example 1 continued

Book                Pages (X)  Price ($) (Y)
Intro to History    500        84
Basic Algebra       700        75
Intro to Psyc       800        99
Intro to Sociology  600        72
Bus. Mgt.           400        69
Intro to Biology               81
Fund. of Jazz                  63
Princ. of Nursing              93
Total               4,900      636

The correlation between the number of pages and the selling price of the book is r ≈ 0.614 (the square root of the R-Sq = 37.7% reported in the regression output). This indicates a moderate association between the variables.

12 EXAMPLE 1 continued Is there a linear relation between number of pages and price of books?
Test the hypothesis that there is no correlation in the population. Use a .02 significance level. Under the null hypothesis of no correlation in the population, the statistic t = r√(n − 2)/√(1 − r²) follows the Student t-distribution with (n − 2) degrees of freedom.

13 EXAMPLE 1 continued Step 1: H0: The correlation in the population is zero. H1: The correlation in the population is not zero. Step 2: H0 is rejected if t > 3.143 or if t < −3.143. There are 6 degrees of freedom, found by n − 2 = 8 − 2 = 6. Step 3: To find the value of the test statistic we use t = r√(n − 2)/√(1 − r²), which falls between the two critical values. Step 4: H0 is not rejected. We cannot reject the hypothesis that there is no correlation in the population. The amount of association could be due to chance.
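The test statistic can be sketched as follows. The slide's r value did not survive transcription, so here r is back-solved from the R-Sq = 37.7% reported in the later regression output; that back-solved value is an assumption of this sketch:

```python
import math

# Assumed sample correlation, inferred from R-Sq = 37.7% in the output
r = math.sqrt(0.377)
n = 8  # sample size from Example 1

# Test statistic for H0: population correlation is zero,
# distributed as Student t with n - 2 degrees of freedom
t = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

critical = 3.143  # t(.01, 6 df): two-tailed critical value at alpha = .02
print(round(t, 2), t > critical or t < -critical)
```

Since the statistic falls between the two critical values, H0 is not rejected, matching Step 4.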

14 Regression Analysis In regression analysis we use the independent variable (X) to estimate the dependent variable (Y). The relationship between the variables is linear.

15 Simple Linear Regression Model
The relationship between the variables is a linear function: y = b0 + b1x + e, where b0 is the Y intercept, b1 is the slope (rise/run), and e is a random error term. Y is the dependent (response) variable and X is the independent (explanatory) variable. b0 and b1 are unknown and therefore are estimated from the data.

16 Finance Application: Market Model
One of the most important applications of linear regression is the market model. It is assumed that rate of return on a stock (R) is linearly related to the rate of return on the overall market (Rm). R = b0 + b1Rm +e Rate of return on a particular stock Rate of return on some major stock index The beta coefficient measures how sensitive the stock’s rate of return is to changes in the level of the overall market.

17 Estimation Method of moments
Given a set of data (x1,y1),…,(xn,yn), how should we estimate the parameters b0 and b1? Start from the simple case: suppose b1 is known to be 0, so that we have only one parameter to estimate. yi = b0 + ei
Possible assumption #1: E(e) = 5. Then E(y) = b0 + E(e) = b0 + 5.
Possible assumption #2: E(e) = 0. Then E(y) = b0 + E(e) = b0, so b0 has the interpretation of the mean of y. The better assumption!

18 Estimation of b0 Method of moments
Given a set of data (x1,y1),…,(xn,yn), how should we estimate the parameter b0? yi = b0 + ei
Assumption: E(e) = 0. Then E(y) = b0 + E(e) = b0, so b0 has the interpretation of the mean of y.
Use the sample analog: E(e) = 0 implies b0 = E(y).
Take the estimate b0 = (y1 + y2 + … + yn)/n = ∑ yi /n

19 Estimation of b0 Method of moments
Given a set of data (x1,y1),…,(xn,yn), how should we estimate the parameter b0? yi = b0 + ei
Assumption: E(e) = 0. Then E(y) = b0 + E(e) = b0, so b0 has the interpretation of the mean of y.
Use the sample analog of E(e) = 0 in the population. Set (e1 + e2 + … + en)/n = 0:
(y1 − b0) + (y2 − b0) + … + (yn − b0) = 0
(y1 + y2 + … + yn) − nb0 = 0
b0 = (y1 + y2 + … + yn)/n = ∑ yi /n
Each (yi − b0) is the deviation of y from the assumed value of b0.

20 Estimation of b1 Method of moments
Given a set of data (x1,y1),…,(xn,yn), how should we estimate the parameters b0 and b1? Start from another simple case: suppose b0 is known to be 0, so that we have only one parameter to estimate. yi = b1xi + ei
Possible assumption #1: E(e) = 0. Then E(y) = b1E(x) + E(e) = b1E(x), and b1 = E(y)/E(x).
Possible assumption #2: E(ex) = 0. Then E(yx) = b1E(x²) + E(ex) = b1E(x²), and b1 = E(yx)/E(x²).

21 Estimation of b0 and b1 Method of moments
Given a set of data (x1,y1),…,(xn,yn), how should we estimate the parameters b0 and b1? yi = b0 + b1xi + ei
Assumption #1: E(e) = 0.
If x is known and non-random: E(y) = b0 + b1x + E(e) = b0 + b1x.
If x is unknown and random: E(y) = b0 + b1E(x) + E(e) = b0 + b1E(x).
This assumption gives us only one equation. We need an extra equation: assumption #2.

22 Estimation of b0 and b1 Method of moments
Given a set of data (x1,y1),…,(xn,yn), how should we estimate the parameters b0 and b1? yi = b0 + b1xi + ei
Assumption #1: E(e) = 0 implies E(y) − b0 − b1E(x) = 0.
Assumption #2: How about E(ex) = 0? Since Cov(e, x) = E(ex) − E(e)E(x) = E(ex), the assumption really says that e and x are uncorrelated.
E(ex) = 0 implies E[(y − b0 − b1x)x] = 0, i.e., E[yx − b0x − b1x²] = E(yx) − b0E(x) − b1E(x²) = 0.
Two equations are adequate to solve for the two unknowns.

23 Estimation of b0 and b1 Method of moments
Two equations are adequate to solve for the two unknowns:
E(y) − b0 − b1E(x) = 0
E(yx) − b0E(x) − b1E(x²) = 0
Multiplying the first equation by E(x) gives E(y)E(x) − b0E(x) − b1E(x)² = 0. Subtracting it from the second:
E(yx) − E(y)E(x) − b1E(x²) + b1E(x)² = 0
Cov(x,y) − b1Var(x) = 0
b1 = Cov(x,y)/Var(x)
b0 = E(y) − b1E(x)

24 Estimation of b0 and b1 Method of moments
The two assumptions, Assumption #1: E(e) = 0 and Assumption #2: E(ex) = 0, imply b1 = Cov(x,y)/Var(x) and b0 = E(y) − b1E(x). To estimate the parameters, use the sample analogs: it suffices to compute the sample covariance between x and y (Sxy), the sample variance of x (Sxx), the sample mean of y (my), and the sample mean of x (mx), and use them to replace the corresponding population quantities.

25 Estimation of b0 and b1 Method of moments
The two assumptions: Assumption #1: E(e) = 0. Assumption #2: E(ex) = 0.
To estimate the parameters, we can also use the sample analogs of the two assumptions directly.
Sample analog of E(e) = 0: ∑(yi − b0 − b1xi) = 0, i.e., ∑yi − nb0 − b1∑xi = 0.
Sample analog of E(ex) = 0: ∑(yi − b0 − b1xi)xi = 0, i.e., ∑(yixi) − b0∑xi − b1∑xi² = 0.

26 Estimation of b0 and b1 Which way is better?
Matching the moments of the two assumptions directly: Assumption #1: E(e) = 0. Assumption #2: E(ex) = 0. Or use the implied parameters in terms of the moments: b1 = Cov(x,y)/Var(x) and b0 = E(y) − b1E(x). It is no surprise that both approaches yield the same estimator: b1 = Sxy / Sxx and b0 = my − b1mx.
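The equivalence of the two routes can be sketched on a small made-up data set (the data here are illustrative, not from the slides): route (a) plugs the sample moments into b1 = Sxy/Sxx and b0 = my − b1·mx; route (b) solves the two sample-analog equations directly as a 2×2 linear system.

```python
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]
n = len(x)

mx = sum(x) / n
my = sum(y) / n

# Route (a): moment formulas b1 = Sxy/Sxx, b0 = my - b1*mx
s_xy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
s_xx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
b1_a = s_xy / s_xx
b0_a = my - b1_a * mx

# Route (b): solve the sample analogs sum(e) = 0 and sum(e*x) = 0:
#   n*b0      + sum(x)*b1    = sum(y)
#   sum(x)*b0 + sum(x^2)*b1  = sum(x*y)
sx, sy = sum(x), sum(y)
sxx = sum(xi * xi for xi in x)
sxy = sum(xi * yi for xi, yi in zip(x, y))
det = n * sxx - sx * sx
b0_b = (sy * sxx - sx * sxy) / det
b1_b = (n * sxy - sx * sy) / det

print(round(b1_a, 4), round(b0_a, 4))  # identical to (b1_b, b0_b)
```

Both routes produce the same slope and intercept, as the slide states.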

27 E(e |x) = 0 one assumption implies two
By the law of iterated expectations, E(e|x) = 0 implies E[E(e|x)] = E[e] = 0 (assumption #1). E(e|x) = 0 also implies E(ex|x) = 0, which implies E[E(ex|x)] = E[ex] = 0 (assumption #2). What does E(e|x) = 0 mean?
E(e|x) = 0
E[(y − b0 − b1x) | x] = 0
E(y|x) − b0 − b1x = 0
E(y|x) = b0 + b1x
That is, the expectation of y conditional on x is b0 + b1x.

28 What if the assumption E(ex) = 0 fails?
We still have E[e] = 0 (assumption #1). We need another moment condition: find another variable z such that E[ez] = 0 (assumption #2’). Such a z is called an instrumental variable.

29 Maximum likelihood Maximum likelihood estimation (MLE) is a popular statistical method used to make inferences about parameters of the underlying probability distribution from a given data set. That is to say, you have a sample of data x1, x2, …, xn and you want to infer the distribution of the random variable x. Commonly, one assumes the data are independent, identically distributed (iid) draws from a particular distribution with unknown parameters, and uses the MLE technique to create estimators for the unknown parameters that define the distribution.

30 Maximum likelihood Given a sample of data x1, x2, …, xn
We would like to find the distribution P0 such that for all feasible distributions P, Prob(P0 | x1,x2,…,xn) ≥ Prob(P | x1,x2,…,xn). For example, the distribution P0 = N(m0, s0²).

31 An example: Maximum likelihood
Consider tossing an unfair coin 80 times (i.e., we sample something like x1=H, x2=T, ..., x80=T, and count the number of HEADS "H" observed). Call the probability of tossing a HEAD p and the probability of tossing TAILS 1 − p; here p is the unknown parameter to be estimated. Suppose we toss 49 HEADS and 31 TAILS, and suppose the coin was taken from a box containing three coins: one which gives HEADS with probability p = 1/3, one which gives HEADS with probability p = 1/2, and another which gives HEADS with probability p = 2/3. The coins have lost their labels, so we don't know which one it was. Using maximum likelihood estimation we can calculate which coin has the largest likelihood, given the data that we observed.

32 An example: Maximum likelihood
The likelihood function takes one of three values:
Pr(H=49 | p=1/3) = C(80,49) (1/3)^49 (2/3)^31 ≈ 0.000
Pr(H=49 | p=1/2) = C(80,49) (1/2)^49 (1/2)^31 ≈ 0.012
Pr(H=49 | p=2/3) = C(80,49) (2/3)^49 (1/3)^31 ≈ 0.054
Which coin is more likely? The one with p = 2/3, because among the three possibilities, the observed sample data have the highest probability when p = 2/3.

33 An example: Maximum likelihood
Now suppose we had only one coin but its p could have been any value 0 ≤ p ≤ 1. We must maximize the likelihood function L(p) = C(80,49) p^49 (1−p)^31 over all possible values of 0 ≤ p ≤ 1. To find the maximum, take the first derivative of L(p) with respect to p and set it to zero:
dL/dp ∝ p^48 (1−p)^30 (49 − 80p) = 0
The possible solutions are p = 0, p = 1 and p = 49/80. Since L(0) = L(1) = 0 and L(49/80) > 0, the maximum likelihood estimator for p is 49/80.
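The coin example can be sketched numerically: evaluate L(p) on a grid and confirm that the maximizer agrees with the calculus answer 49/80, and that among the three labeled coins p = 2/3 is most likely (an illustration, not part of the original slides):

```python
from math import comb

# Likelihood of observing 49 heads in 80 tosses for a given p
def likelihood(p):
    return comb(80, 49) * p ** 49 * (1 - p) ** 31

# Grid search over p; step 1/2000 so that 49/80 = 0.6125 is a grid point
grid = [i / 2000 for i in range(2001)]
best = max(grid, key=likelihood)
print(best)  # 0.6125, i.e. 49/80

# The three-coin comparison from the previous slide
print(likelihood(1 / 3), likelihood(1 / 2), likelihood(2 / 3))
```

The grid maximizer reproduces the analytic MLE, and the three-coin likelihoods match the ordering 0.000 < 0.012 < 0.054 given above.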

34 [Plot of L(p) = C(80,49) p^49 (1−p)^31 over all possible values of 0 ≤ p ≤ 1, peaking at p = 49/80.]

35 To estimate b0 and b1 using ML
Assume the ei to be independent, identically distributed with a normal distribution of zero mean and variance s². Let f denote the normal density (which has a rather ugly formula); since ei = yi − b0 − b1xi, the density of ei is f(ei) = f(yi − b0 − b1xi). The joint likelihood of observing e1, e2, …, en is L = f(e1)·f(e2)·…·f(en).

36 To estimate b0 and b1 using ML (Computer)
We do not know b0 and b1, nor do we know ei. In fact, our objective is to estimate b0 and b1. The procedure of ML: Assume a combination of values for b0 and b1. Compute the implied ei = yi − b0 − b1xi and f(ei) = f(yi − b0 − b1xi). Compute the joint likelihood conditional on the assumed values of b0 and b1: L(b0,b1) = f(e1)·f(e2)·…·f(en). Assume many more combinations of b0 and b1, and repeat the above two steps, using a computer program (such as Excel). Choose the b0 and b1 that yield the largest joint likelihood.
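The grid-search procedure above can be sketched in Python. The data are made up for illustration, and the error standard deviation is fixed at sigma = 1 (an arbitrary choice for the sketch); with a fixed sigma, maximizing the joint normal likelihood is the same as minimizing the sum of squared errors:

```python
import math

x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1.1, 2.9, 5.2, 7.0, 8.8]

def log_likelihood(b0, b1, sigma=1.0):
    # Joint normal log-likelihood of the implied errors e_i = y_i - b0 - b1*x_i
    ll = 0.0
    for xi, yi in zip(x, y):
        e = yi - b0 - b1 * xi
        ll += -0.5 * math.log(2 * math.pi * sigma ** 2) - e ** 2 / (2 * sigma ** 2)
    return ll

# Try many (b0, b1) combinations and keep the one with the largest likelihood
best = max(
    ((i * 0.05, j * 0.05) for i in range(0, 41) for j in range(20, 61)),
    key=lambda pair: log_likelihood(*pair),
)
print(best)  # close to the least-squares line for these data
```

The winning combination coincides with the least-squares fit, previewing the MM-versus-ML slide below.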

37 To estimate b0 and b1 using ML (Calculus)
The procedure of ML: Assume a combination of values for b0 and b1. Compute the implied ei = yi − b0 − b1xi and f(ei) = f(yi − b0 − b1xi). Compute the joint likelihood conditional on the assumed values: L(b0,b1) = f(e1)·f(e2)·…·f(en). Choose b0 and b1 to maximize the likelihood function L(b0,b1) using calculus: take the first derivative of L(b0,b1) with respect to b0 and set it to zero; take the first derivative of L(b0,b1) with respect to b1 and set it to zero; solve for b0 and b1 using the two equations.

38 MM versus ML in the estimation of b0 and b1
It turns out that in this special case, the ML estimators of b0 and b1 are the same as those obtained by the method of moments and by OLS (to be discussed later).

39 Alternative assumption X and Y are bivariate normal
Suppose (X1, X2) ~ BVN(μ1, μ2, σ1², σ2², σ12). The marginal distribution of X1 is normal: X1 ~ N(μ1, σ1²). The conditional distribution of X2 given x1 is normal: X2|x1 ~ N(α + βx1, σ²), where α = μ2 − βμ1, β = ρσ2/σ1 = σ12/σ1², and σ² = σ2²(1 − ρ²) = σ2² − β²σ1². The conditional mean E(X2|X1) is linear in X1. Reference: Goldberger, Arthur S. (1991): A Course in Econometrics, Harvard University Press. See page 75.

40 Estimation Ordinary least squares
For each value of X, there is a group of Y values, and these Y values are normally distributed: Yi ~ N(E(Y|X), σi²), i = 1,2,…,n. The means of these normal distributions of Y values all lie on the straight line of regression: E(Y|X) = β0 + β1X. The standard deviations of these normal distributions are equal: σi² = σ² for i = 1,2,…,n, i.e., homoskedasticity.

41 Assumptions Underlying Linear Regression
yi and yk are independently drawn from the population, say, as in sampling with replacement. Cov(ei,ej) = 0 for all i ≠ j Note that independence implies much more than zero covariance. For two discrete random variables, they are independent if P(x,y) = P(x)P(y)

42 Choosing the line that fits best
The question is: which straight line fits best? [Scatter of data points in the (x, y) plane.]

43 Choosing the line that fits best
The best line is the one that minimizes the sum of squared vertical differences between the points and the line. Let us compare two lines through the four points (1,2), (2,4), (3,1.5) and (4,3.2). For the first line, the sum of squared differences is (2 − 1)² + (4 − 2)² + … = 6.89. For the second line, horizontal at height 2.5, the sum is (2 − 2.5)² + (4 − 2.5)² + (1.5 − 2.5)² + (3.2 − 2.5)² = 3.99. The smaller the sum of squared differences, the better the fit of the line to the data. That is, the line with the least sum of squares (of differences) fits the data best.

44 Choosing the line that fits best Ordinary Least Squares (OLS) Principle
Straight lines can be described generally by Y = b0 + b1X. Finding the best line with the smallest sum of squared differences is the same as solving Min S(b0,b1) = Σ[yi − (b0 + b1xi)]². Let b0* and b1* be the solution of this problem. Y* = b0* + b1*X is known as the “average predicted value” (or simply “predicted value”) of y for any X.

45 Coefficient estimates from the ordinary least squares (OLS) principle
Solving the minimization problem Min S(b0,b1) = Σ[yi − (b0 + b1xi)]² implies the first-order conditions:
∂S(b0,b1)/∂b0 = Σ 2[yi − (b0 + b1xi)](−1) = 0, i.e., Σ[yi − (b0 + b1xi)] = 0, i.e., Σei = 0
∂S(b0,b1)/∂b1 = Σ 2[yi − (b0 + b1xi)](−xi) = 0, i.e., Σ[yi − (b0 + b1xi)]xi = 0, i.e., Σeixi = 0
These are the same equations we had earlier in the discussion of the method of moments.
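The two first-order conditions can be verified numerically: fit the least-squares line to the four points from the comparison-of-lines slide and check that Σei = 0 and Σeixi = 0 (an illustration, not part of the original slides):

```python
# The four points from the comparison-of-lines example
x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 1.5, 3.2]

n = len(x)
mx = sum(x) / n
my = sum(y) / n
b1 = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
    (xi - mx) ** 2 for xi in x
)
b0 = my - b1 * mx

residuals = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
# Both first-order conditions hold (up to floating-point rounding)
print(sum(residuals), sum(e * xi for e, xi in zip(residuals, x)))
```

Both sums come out as zero up to rounding, which is exactly what the first-order conditions impose.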

46 The consistency of b0 and b1 as estimators of b0 and b1
Same equations implies same estimator: b1 = Sxy / Sxx and b0 = my – b1 mx The estimators are consistent as long as Sxy, Sxx, my and mx are consistent estimators of the corresponding population quantities. Sxy converges to Cov(x,y) as sample size increases. Sxx converges to Var(x) as sample size increases. my converges to E(y) as sample size increases. mx converges to E(x) as sample size increases

47 The b0 and b1 are linear in yi
The slope estimator can be written as b1 = Σ(xi − mx)(yi − my)/Σ(xj − mx)² = Σ ci yi, where ci = (xi − mx)/Σ(xj − mx)², so b1 is a linear combination of the yi. Similarly, it can be shown that b0 is also a linear combination of yi.

48 The unbiasedness of b0 and b1 as estimators of b0 and b1
Write the estimator as b1 = Σ ci yi, where the weights ci satisfy Σci = 0 and Σcixi = 1. Then
E(b1|x) = Σ ci E(yi) = Σ ci E(b0 + b1xi + ei) = b0·Σci + b1·Σcixi + Σci E(ei) = b0·0 + b1·1 + 0 = b1

49 Best estimator It can also be shown that the estimators b0 and b1 have the smallest variance among all unbiased estimators that are linear combinations of yi. That is, if there is another unbiased estimator b1’ that is a linear combination of yi, say b1’ = Σ ci’ yi, we must have Var(b1) ≤ Var(b1’).

50 BLUE
Best: smallest variance
Linear: a linear combination of yi
Unbiased: E(b0) = b0, E(b1) = b1
Estimator

51 Interpretation of Coefficients
yi = b0 + b1xi + ei
Slope (b1): the estimated Y changes by b1 for each 1-unit increase in X. From y* + Δy* = b0 + b1(x + 1) we get Δy* = b1. More generally, y* + Δy* = b0 + b1(x + Δx), so Δy*/Δx = b1.
Y-intercept (b0): the estimated value of Y when X = 0.

52 EXAMPLE 2 continued from Example 1
Develop a regression equation for the information given in EXAMPLE 1. The information there can be used to estimate the selling price based on the number of pages. The regression equation is: Y* = 48 + 0.05143X. The equation crosses the Y-axis at $48. A book with no pages would cost $48. The slope of the line is 0.05143. Each additional page costs about $0.05 or five cents. Note: the sign of the b1 value and the sign of r will always be the same.

53 Result part 1
The output reports, for each coefficient: Coefficients, Standard Error, t Stat, and P-value. The Intercept row shows 48; the X row shows the slope b1 and its standard error Sb1. The t Stat equals (b1 − 0)/Sb1, which tests H0: b1 = 0, and the P-value is Pr(t > t Stat).

54 Example 2 continued from Example 1
We can use the regression equation to estimate values of Y. The estimated selling price of an 800-page book is $89.14, found by Y* = 48 + 0.05143X = 48 + 0.05143(800) = 89.14.

55 Standard Error of Estimate (denoted se or Sy.x)
Additional assumption: Var(ei) = Var(ej) = Var(e) = σe² for all i, j. Recall yi = b0 + b1xi + ei. If Var(e) = 0, the y values are perfectly linear in x.

56 Scatter Around the Regression Line
A small Var(e) gives a more accurate estimate of the X, Y relationship; a large Var(e) gives a less accurate one.

57 Standard Error of Estimate (denoted se or Sy.x)
Additional assumption: Var(ei) = Var(ej) = σe² for all i, j. Se² is an estimate of σe², with Se = √(SSE/(n − 2)). Hence, it may be interpreted as a measure of the reliability of the estimating equation. It is a measure of dispersion: it measures the variability, or scatter, of the observed values around the regression line.

58 Interpreting the Standard Error of the Estimate
Assumptions: Observed Y values are normally distributed around each estimated value of Y* Constant variance se measures the dispersion of the points around the regression line If se = 0, equation is a “perfect” estimator se may be used to compute confidence intervals of the estimated value
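The standard error of estimate can be sketched as Se = √(SSE/(n − 2)). The book-price data are not fully legible in this transcript, so the sketch uses made-up data:

```python
import math

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.2, 3.8, 6.1, 8.0, 9.9]
n = len(x)

# Least-squares fit
mx, my = sum(x) / n, sum(y) / n
b1 = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum(
    (a - mx) ** 2 for a in x
)
b0 = my - b1 * mx

# SSE = sum of squared residuals; divide by n - 2 because two
# parameters (b0 and b1) were estimated from the data
sse = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
se = math.sqrt(sse / (n - 2))
print(round(se, 4))
```

A small Se relative to the scale of y indicates the points scatter tightly around the fitted line.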

59 Variation of Errors Around the Regression Line
The y values are normally distributed around the regression line. For each x value, the “spread” or variance around the regression line is the same. [Figure: normal error densities f(e) of equal spread centered on the regression line at X1 and X2.]

60 Variation of Errors Around the Regression Line
[Figure: normal densities centered on the line E(y|x) = b0 + b1x.] y is distributed normal with mean E(y|x) = b0 + b1x and variance σ². Confidence intervals may be constructed in the usual fashion.

61 Scatter around the Regression Line
Around the regression line Y = b0 + b1X, about 68% of observations lie within ±1se of the line (between Y = b0 + b1X − 1se and Y = b0 + b1X + 1se), and about 95.5% lie within ±2se.

62 Example 3 continued from Example 1 and 2.
Find the standard error of estimate for the problem involving the number of pages in a book and the selling price.

63 Equations for the Interval Estimates
Confidence interval for the mean of y (y* = b0 + b1x): y* ± t·Se·√(1/n + (x − mx)²/Σ(xi − mx)²)
Prediction interval for y (y = b0 + b1x + e): y* ± t·Se·√(1 + 1/n + (x − mx)²/Σ(xi − mx)²)
Both interval widths are minimized when x = mx, the sample mean of x.

64 Confidence Interval Estimate for Mean Response
y* = b0 + b1xi. The following factors influence the width of the interval: the standard error, the sample size, and the distance of the X value from the mean of X.

65 Confidence Interval continued from Example 1, 2 and 3.
For books of 800 pages long, what is that 95% confidence interval for the mean price? This calls for a confidence interval on the average price of books of 800 pages long.

66 Prediction Interval continued from Example 1, 2 and 3.
For a book of 800 pages long, what is the 95% prediction interval for its price? This calls for a prediction interval on the price of an individual book of 800 pages long.

67 Test of Slope Coefficient (b1)
Tests whether there is a linear relationship between X and Y. Involves the population slope b1. Hypotheses: H0: b1 = 0 (no linear relationship); H1: b1 ≠ 0 (linear relationship). The theoretical basis is the sampling distribution of slopes.

68 Sampling Distribution of the Least Squares Coefficient Estimator
If the standard least squares assumptions hold, then b1 is an unbiased estimator of β1 and has population variance σb1² = σ²/Σ(xi − mx)² and an unbiased sample variance estimator sb1² = Se²/Σ(xi − mx)².

69 Basis for Inference About the Population Regression Slope
Let 1 be a population regression slope and b1 its least squares estimate based on n pairs of sample observations. Then, if the standard regression assumptions hold and it can also be assumed that the errors i are normally distributed, the random variable is distributed as Student’s t with (n – 2) degrees of freedom. In addition the central limit theorem enables us to conclude that this result is approximately valid for a wide range of non-normal distributions and large sample sizes, n.

70 Tests of the Population Regression Slope
If the regression errors εi are normally distributed and the standard least squares assumptions hold (or if the distribution of b1 is approximately normal), the following tests have significance level α. To test either null hypothesis H0: β1 = β1* or H0: β1 ≤ β1* against the alternative H1: β1 > β1*, the decision rule is to reject H0 if t = (b1 − β1*)/sb1 > t(n−2, α).

71 Tests of the Population Regression Slope
To test either null hypothesis H0: β1 = β1* or H0: β1 ≥ β1* against the alternative H1: β1 < β1*, the decision rule is to reject H0 if t = (b1 − β1*)/sb1 < −t(n−2, α).

72 Tests of the Population Regression Slope
To test the null hypothesis H0: β1 = β1* against the two-sided alternative H1: β1 ≠ β1*, the decision rule is to reject H0 if t = (b1 − β1*)/sb1 > t(n−2, α/2) or t < −t(n−2, α/2). Equivalently, reject H0 if |t| > t(n−2, α/2).

73 Confidence Intervals for the Population Regression Slope 1
If the regression errors εi are normally distributed and the standard regression assumptions hold, a 100(1 − α)% confidence interval for the population regression slope β1 is given by b1 − t(n−2, α/2)·sb1 < β1 < b1 + t(n−2, α/2)·sb1.

74 Some cautions about the interpretation of significance tests
Rejecting H0: b1 = 0 and concluding that the relationship between x and y is significant does not enable us to conclude that a cause-and-effect relationship is present between x and y. Causation requires association, an accurate time sequence, and the elimination of other explanations for the correlation. Correlation ≠ Causation.

75 Some cautions about the interpretation of significance tests
Just because we are able to reject H0: b1 = 0 and demonstrate statistical significance does not enable us to conclude that the relationship between x and y is linear. A linear relationship is only a very small subset of the possible relationships between variables. A test of linear versus nonlinear relationship requires another batch of analysis.

76 Evaluating the Model yi* = b0 + b1xi
Variation measures
Coefficient of determination
Standard error of estimate
Testing coefficients for significance

77 Variation Measures
Total sum of squares: SST = Σ(Yi − Ȳ)²
Explained sum of squares: SSR = Σ(Yi* − Ȳ)², where yi* = b0 + b1xi
Unexplained sum of squares: SSE = Σ(Yi − Yi*)²

78 Measures of Variation in Regression
Total Sum of Squares (SST) Measures variation of observed Yi around the mean,Y Explained Variation (SSR) Variation due to relationship between X & Y Unexplained Variation (SSE) Variation due to other factors SST=SSR+SSE

79 Variation in y (SST) = SSR + SSE
SST = Σ(yi − ȳ)² = Σ[(yi − yi*) + (yi* − ȳ)]² = SSE + SSR + 2Σei(yi* − ȳ). The cross-product term equals zero, as imposed in the estimation by E(ex) = 0 (the fitted residuals satisfy Σei = 0 and Σeixi = 0), so SST = SSR + SSE.

80 Variation in y (SST) = SSR + SSE
R² (= r², the coefficient of determination) measures the proportion of the variation in y that is explained by the variation in x. R² takes on any value between zero and one. R² = 1: perfect match between the line and the data points. R² = 0: there is no linear relationship between x and y.
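The decomposition SST = SSR + SSE and the identity R² = r² can be verified numerically on a small made-up data set (an illustration, not part of the original slides):

```python
import math

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.2, 3.8, 6.1, 8.0, 9.9]
n = len(x)

mx, my = sum(x) / n, sum(y) / n
Sxx = sum((a - mx) ** 2 for a in x)
Syy = sum((b - my) ** 2 for b in y)
Sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
b1 = Sxy / Sxx
b0 = my - b1 * mx
fitted = [b0 + b1 * xi for xi in x]

sst = Syy                                            # total variation
ssr = sum((f - my) ** 2 for f in fitted)             # explained
sse = sum((yi - f) ** 2 for yi, f in zip(y, fitted)) # unexplained

r2 = ssr / sst
r = Sxy / math.sqrt(Sxx * Syy)
print(round(sst - (ssr + sse), 10), round(r2 - r ** 2, 10))  # both ~0
```

Both differences are zero up to floating-point rounding, confirming the decomposition and R² = r² in the simple-regression case.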

81 Summarizing the Example’s results (Example 1, 2 and 3)
The estimated selling price for a book with 800 pages is $89.14. The standard error of estimate is $10.41. The 95 percent confidence interval for all books with 800 pages is $89.14 ± $15.31. This means the limits are between $73.83 and $104.45. The 95 percent prediction interval for a particular book with 800 pages is $89.14 ± $29.72. This means the limits are between $59.42 and $118.86. These results appear in the following output.

82 Example 3 continued Regression Analysis: Price versus Pages
The regression equation is Price = 48.0 + 0.0514 Pages

Predictor  Coef    SE Coef  T  P
Constant   48.0
Pages      0.0514

S = 10.41   R-Sq = 37.7%   R-Sq(adj) = 27.3%

Analysis of Variance
Source          DF  SS  MS  F  P
Regression      1
Residual Error  6
Total           7

83 Testing for Linearity Key Argument:
If the value of y does not change linearly with the value of x, then using the mean value of y is the best predictor for the actual value of y. This implies y* = ȳ is preferable. If the value of y does change linearly with the value of x, then using the regression model gives a better prediction for the value of y than using the mean of y. This implies y* = b0 + b1x is preferable.

84 Three Tests for Linearity
Testing the coefficient of correlation: H0: ρ = 0 (there is no linear relationship between x and y); H1: ρ ≠ 0 (there is a linear relationship between x and y). Test statistic: t = r√(n − 2)/√(1 − r²), with n − 2 degrees of freedom.
Testing the slope of the regression line: H0: b1 = 0 (there is no linear relationship between x and y); H1: b1 ≠ 0 (there is a linear relationship between x and y). Test statistic: t = b1/sb1, with n − 2 degrees of freedom.

85 Three Tests for Linearity
The Global F-test: H0: there is no linear relationship between x and y; H1: there is a linear relationship between x and y. Test statistic: F = MSR/MSE = (SSR/1)/(SSE/(n − 2)). [Variation in y] = SSR + SSE, so a large F results from a large SSR. Then much of the variation in y is explained by the regression model; the null hypothesis should be rejected, and the model is valid. Note: at the level of simple linear regression, the global F-test is equivalent to the t-test on b1. When we conduct regression analysis of multiple variables, the global F-test will take on a unique function.
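The equivalence noted above (in simple regression, F equals the square of the slope's t statistic) can be checked numerically on made-up data (an illustration, not part of the original slides):

```python
import math

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 3.5, 5.2, 6.1, 8.3, 9.0]
n = len(x)

mx, my = sum(x) / n, sum(y) / n
Sxx = sum((a - mx) ** 2 for a in x)
b1 = sum((a - mx) * (b - my) for a, b in zip(x, y)) / Sxx
b0 = my - b1 * mx
fitted = [b0 + b1 * xi for xi in x]

ssr = sum((f - my) ** 2 for f in fitted)
sse = sum((yi - f) ** 2 for yi, f in zip(y, fitted))
mse = sse / (n - 2)

F = (ssr / 1) / mse            # MSR / MSE, with 1 numerator df
t = b1 / math.sqrt(mse / Sxx)  # slope t statistic, s_b1 = sqrt(MSE/Sxx)
print(round(F - t ** 2, 8))    # ~0: the two tests agree
```

The difference is zero up to rounding, so in simple regression the F-test and the slope t-test always reach the same conclusion.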

86 Residual Analysis
Purposes: examine linearity and evaluate violations of assumptions.
Graphical analysis of residuals: plot the residuals (the differences between the actual Yi and the predicted Yi*) versus the Xi values.
Studentized residuals allow consideration of the magnitude of the residuals.

87 Residual Analysis for Linearity
[Residual plots of e versus X: a curved pattern indicates a non-linear relationship; a patternless band indicates linearity is OK.] For example, if the truth is y = b0 + b1x + b2x² + e but a straight line is fitted, the estimated residuals will behave like ê ≈ b2x² + e.

88 Residual Analysis for Homoscedasticity
When the requirement of a constant variance (homoscedasticity) is violated, we have heteroscedasticity. Plot the standardized residuals (e/se) against X: a fan-shaped pattern, e.g., Var(ei|xi) > Var(ej|xj) for xi > xj, indicates heteroscedasticity; a constant-width band indicates homoscedasticity.

89 Residual Analysis for Independence
Plot the standardized residuals (e/se) against X: a systematic pattern indicates the errors are not independent; a patternless scatter is consistent with independence.

90 Non-independence of error variables
Data collected over time constitute a time series. If the errors are independent, no pattern should be observed when the residuals are examined over time. When a pattern is detected, the errors are said to be autocorrelated. Autocorrelation can be detected by graphing the residuals against time.

91 Patterns in the appearance of the residuals over time indicate that autocorrelation exists. [Two residual-versus-time plots: one shows runs of positive residuals followed by runs of negative residuals; the other shows oscillating behavior of the residuals around zero.]

92 The Durbin-Watson Statistic
Used when data are collected over time to detect autocorrelation (residuals in one time period are related to residuals in another period). It measures violation of the independence assumption: d = Σ(et − et−1)²/Σet² should be close to 2. If not, examine the model for autocorrelation. Intuition: if x and y are independent, Var(x − y) = Var(x) + Var(y).
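The statistic can be sketched on two hand-made residual sequences mimicking the patterns from the previous slide (runs versus oscillation); the data are illustrative:

```python
# Durbin-Watson statistic d = sum((e_t - e_{t-1})^2) / sum(e_t^2).
# Near 2: independence; near 0: positive autocorrelation;
# near 4: negative autocorrelation.
def durbin_watson(e):
    num = sum((e[t] - e[t - 1]) ** 2 for t in range(1, len(e)))
    return num / sum(r ** 2 for r in e)

runs = [1.0, 1.0, 1.0, -1.0, -1.0, -1.0]         # long runs of same sign
oscillating = [1.0, -1.0, 1.0, -1.0, 1.0, -1.0]  # sign flips every period
print(durbin_watson(runs), durbin_watson(oscillating))
```

The run-pattern residuals give a d well below 2 (positive autocorrelation) and the oscillating residuals give a d well above 2 (negative autocorrelation), matching the interpretation on the slide.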

93 Outliers An outlier is an observation that is unusually small or large. Several possibilities need to be investigated when an outlier is observed: there was an error in recording the value; the point does not belong in the sample; the observation is valid. Identify outliers from the scatter diagram. It is customary to suspect an observation is an outlier if its |standardized residual| > 2.

94 An influential observation
[Two scatter plots: in the first, an outlier lies far from the regression line without changing it much; in the second, an influential observation causes a shift in the regression line.] Some outliers may be very influential.

95 Remedying violations of the required conditions
Nonnormality or heteroscedasticity can be remedied using transformations on the y variable. The transformations can improve the linear relationship between the dependent variable and the independent variables. Many computer software systems allow us to make the transformations easily.

96 A brief list of transformations
y’ = log y (for y > 0): use when se increases with y, or when the error distribution is positively skewed.
y’ = y²: use when se² is proportional to E(y), or when the error distribution is negatively skewed.
y’ = y^(1/2) (for y > 0): use when se² is proportional to E(y).
y’ = 1/y: use when se² increases significantly when y increases beyond some value.

97 Example: Transformation to get linearity
[Residual plots before and after transformation: fitting Yi = b0 + b1Xi + ei leaves a curved pattern in the residuals (not linear); fitting Yi = b0 + b1Xi + b2Xi² + ei removes it (linear, OK).]

98 Lesson 10: Regressions Part I - END -

