
Slide 1: Simple Linear Regression and Correlation (Chapter 17)

Slide 2: 17.1 Introduction
In this chapter we employ regression analysis to examine the relationship among quantitative variables. The technique is used to predict the value of one variable (the dependent variable, y) based on the values of other variables (the independent variables x1, x2, ..., xk).

Slide 3: 17.2 The Model
The first-order linear model is

y = β0 + β1x + ε

where
– y = dependent variable
– x = independent variable
– β0 = y-intercept
– β1 = slope of the line (rise/run)
– ε = error variable
β0 and β1 are unknown and are therefore estimated from the data.

Slide 4: 17.3 Estimating the Coefficients
The estimates are determined by
– drawing a sample from the population of interest,
– calculating sample statistics, and
– producing a straight line that cuts into the data.
The question is: which straight line fits best?

Slide 5: The best line is the one that minimizes the sum of squared vertical differences between the points and the line.
Compare two lines for the four points (1,2), (2,4), (3,1.5), (4,3.2):
– Line 1: sum of squared differences = (2 − 1)² + (4 − 2)² + (1.5 − 3)² + (3.2 − 4)² = 7.89
– Line 2 (the horizontal line y = 2.5): sum of squared differences = (2 − 2.5)² + (4 − 2.5)² + (1.5 − 2.5)² + (3.2 − 2.5)² = 3.99
The smaller the sum of squared differences, the better the fit of the line to the data; here the horizontal line fits better.

Slide 6: To calculate the estimates of the coefficients that minimize the sum of squared differences between the data points and the line, use the formulas

b1 = s_xy / s_x² = Σ(xi − x̄)(yi − ȳ) / Σ(xi − x̄)²
b0 = ȳ − b1·x̄

The regression equation that estimates the first-order linear model is ŷ = b0 + b1x.
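As a quick check on these formulas, here is a minimal Python sketch (the helper name least_squares is illustrative), applied to the four points from slide 5:

```python
import numpy as np

def least_squares(x, y):
    """Least-squares estimates b0, b1 for y-hat = b0 + b1*x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    b1 = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)  # b1 = s_xy / s_x^2
    b0 = y.mean() - b1 * x.mean()                        # b0 = y-bar - b1*x-bar
    return b0, b1

b0, b1 = least_squares([1, 2, 3, 4], [2, 4, 1.5, 3.2])
print(f"y-hat = {b0:.3f} + {b1:.3f}x")  # the line minimizing SSE for these points
```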

Slide 7: Example 17.1 (relationship between odometer reading and a used car's selling price)
– A car dealer wants to find the relationship between the odometer reading (the independent variable x) and the selling price of used cars (the dependent variable y).
– A random sample of 100 cars is selected, and the data recorded.
– Find the regression line.

Slide 8: Solution
– Solving by hand: to calculate b0 and b1 we first need to calculate several sample statistics, where n = 100.

Slide 9: Using the computer (see file Xm17-01.xls)
Tools > Data Analysis > Regression > [Shade the y range and the x range] > OK

Slide 10: The resulting line is ŷ = 6533 − .0312x.
– b1 = −.0312 is the slope of the line: for each additional mile on the odometer, the price decreases by an average of $0.0312.
– The intercept is b0 = 6533, but the sample contains no cars with odometer readings near zero, so do not interpret the intercept as the "price of cars that have not been driven."

Slide 11: 17.4 Error Variable: Required Conditions
The error ε is a critical part of the regression model. Four requirements involving the distribution of ε must be satisfied:
– The probability distribution of ε is normal.
– The mean of ε is zero: E(ε) = 0.
– The standard deviation of ε is σ_ε, a constant, for all values of x.
– The errors associated with different values of y are all independent.

Slide 12: From the first three assumptions we have: y is normally distributed with mean E(y) = β0 + β1x and constant standard deviation σ_ε. The standard deviation remains constant, but the mean value changes with x.

Slide 13: 17.5 Assessing the Model
The least squares method will produce a regression line whether or not there is a linear relationship between x and y. Consequently, it is important to assess how well the linear model fits the data. Several methods are used to assess the model:
– testing and/or estimating the coefficients;
– using descriptive measurements.

Slide 14: Sum of squares for errors (SSE)
– This is the sum of squared differences between the points and the regression line: SSE = Σ(yi − ŷi)².
– It can serve as a measure of how well the line fits the data.
– This statistic plays a role in every statistical technique we employ to assess the model.

Slide 15: Standard error of estimate
– The mean error is equal to zero.
– If σ_ε is small, the errors tend to be close to zero (close to the mean error), and the model fits the data well.
– Therefore we can use σ_ε as a measure of the suitability of a linear model.
– An unbiased estimator of σ_ε² is s_ε² = SSE/(n − 2); the standard error of estimate is s_ε = √(SSE/(n − 2)).
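A short sketch of this estimator in Python, assuming paired arrays x and y (np.polyfit is used here only as a convenient way to get the fitted line):

```python
import numpy as np

def standard_error_of_estimate(x, y):
    """s_eps = sqrt(SSE / (n - 2)), estimating the error standard deviation."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    b1, b0 = np.polyfit(x, y, 1)            # slope, intercept of the LS line
    sse = np.sum((y - (b0 + b1 * x)) ** 2)  # sum of squares for errors
    return np.sqrt(sse / (len(x) - 2))      # divide by n - 2, not n
```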

Slide 16: Example 17.2
– Calculate the standard error of estimate for Example 17.1, and describe what it tells you about the model fit.
Solution
– Using the SSE calculated before, s_ε = √(SSE/(n − 2)). It is hard to assess the model based on s_ε alone, even when it is compared with the mean value of y.

Slide 17: Testing the slope
– When no linear relationship exists between two variables, the regression line should be horizontal.
– Linear relationship: different inputs (x) yield different outputs (y); the slope is not equal to zero.
– No linear relationship: different inputs (x) yield the same output (y); the slope is equal to zero.

Slide 18: We can draw inferences about β1 from b1 by testing
H0: β1 = 0
H1: β1 ≠ 0 (or < 0, or > 0)
– The test statistic is t = (b1 − β1)/s_b1, where s_b1 = s_ε / √((n − 1)s_x²) is the standard error of b1.
– If the error variable is normally distributed, the statistic has a Student t distribution with d.f. = n − 2.
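In Python, scipy's linregress performs exactly this two-tailed test of a zero slope; a minimal sketch (the x and y arrays are illustrative, not the textbook sample):

```python
import numpy as np
from scipy import stats

x = np.array([1, 2, 3, 4, 5, 6], dtype=float)   # illustrative data
y = np.array([2.1, 3.9, 6.2, 7.8, 9.9, 12.1])

res = stats.linregress(x, y)
print(f"b1 = {res.slope:.4f}, s_b1 = {res.stderr:.4f}")
print(f"t = {res.slope / res.stderr:.2f}, p-value = {res.pvalue:.4g}")  # df = n - 2
```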

Slide 19: Solution
– Solving by hand: to compute t we need the values of b1 and s_b1.
– Using the computer: the regression output reports the t statistic and its p-value directly.
There is overwhelming evidence to infer that the odometer reading affects the auction selling price.

Slide 20: Coefficient of determination
– When we want to measure the strength of the linear relationship, we use the coefficient of determination, R².

Slide 21: To understand the significance of this coefficient, note that the overall variability in y is explained in part by the regression model; the part that remains unexplained is attributed to the error.

Slide 22: Two data points (x1,y1) and (x2,y2) of a certain sample are shown.
Total variation in y = variation explained by the regression line + unexplained variation (error)

Slide 23: R² measures the proportion of the variation in y that is explained by the variation in x:
R² = SSR / (SSR + SSE), where variation in y = SSR + SSE.
R² takes on any value between zero and one.
– R² = 1: perfect match between the line and the data points.
– R² = 0: there is no linear relationship between x and y.
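A sketch of two equivalent ways to compute this in Python: square the sample correlation reported by linregress, or build it from SSE directly (the data are illustrative):

```python
import numpy as np
from scipy import stats

x = np.array([1, 2, 3, 4, 5, 6], dtype=float)   # illustrative data
y = np.array([2.1, 3.9, 6.2, 7.8, 9.9, 12.1])

res = stats.linregress(x, y)
r2_from_r = res.rvalue ** 2                      # R^2 = r^2 in simple regression

sse = np.sum((y - (res.intercept + res.slope * x)) ** 2)
r2_from_sse = 1 - sse / np.sum((y - y.mean()) ** 2)  # 1 - SSE / total variation
print(r2_from_r, r2_from_sse)                    # the two agree
```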

Slide 24: Example 17.4
– Find the coefficient of determination for Example 17.1. What does this statistic tell you about the model?
Solution (solving by hand, or reading R² from the computer's regression output):
– 65% of the variation in the auction selling price is explained by the variation in odometer reading. The rest (35%) remains unexplained by this model.

Slide 25: 17.6 Finance Application: Market Model
One of the most important applications of linear regression is the market model. It is assumed that the rate of return on a stock (R) is linearly related to the rate of return on the overall market (R_m):
R = β0 + β1·R_m + ε
where R is the rate of return on a particular stock and R_m is the rate of return on some major stock index. The beta coefficient (β1) measures how sensitive the stock's rate of return is to changes in the level of the overall market.

Slide 26: Example 17.5 (the market model)
Estimate the market model for Nortel, a stock traded on the Toronto Stock Exchange. The data consist of monthly percentage returns for Nortel and monthly percentage returns for all the stocks in the index.
– The estimated beta is a measure of the stock's market-related risk: in this sample, for each 1% increase in the TSE return, the average increase in Nortel's return is .8877%.
– R² measures the share of the total risk embedded in the Nortel stock that is market-related: specifically, 31.37% of the variation in Nortel's returns is explained by the variation in the TSE's returns.
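Since beta is just the least-squares slope, the estimation is a one-liner once the two return series are available. The sketch below uses simulated returns, not the Nortel data:

```python
import numpy as np

rng = np.random.default_rng(0)
market = rng.normal(1.0, 4.0, 60)                  # simulated monthly % returns
stock = 0.5 + 0.9 * market + rng.normal(0, 5, 60)  # "true" beta of 0.9 plus noise

beta = np.cov(market, stock, ddof=1)[0, 1] / np.var(market, ddof=1)
print(f"estimated beta = {beta:.4f}")              # sensitivity to the market
```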

Slide 27: 17.7 Using the Regression Equation
Before using the regression model, we need to assess how well it fits the data. If we are satisfied with the fit, we can use the model to make predictions for y.
Illustration
– Predict the selling price of a three-year-old Taurus with 40,000 miles on the odometer (Example 17.1).

Slide 28: Prediction interval and confidence interval
– Two intervals can be used to discover how closely the predicted value will match the true value of y:
a prediction interval, for a particular value of y;
a confidence interval, for the expected value of y.
– The prediction interval: ŷ ± t_(α/2, n−2) · s_ε · √(1 + 1/n + (x_g − x̄)²/((n − 1)s_x²))
– The confidence interval: ŷ ± t_(α/2, n−2) · s_ε · √(1/n + (x_g − x̄)²/((n − 1)s_x²))
The prediction interval is wider than the confidence interval.
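A sketch of both intervals in Python, assuming sample arrays x and y and a new value xg; the only difference between the two formulas is the extra 1 under the square root:

```python
import numpy as np
from scipy import stats

def interval(x, y, xg, alpha=0.05, prediction=True):
    """PI (one value of y) or CI (expected y) at x = xg for a simple linear fit."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    sx2 = np.var(x, ddof=1)
    b1, b0 = np.polyfit(x, y, 1)
    s_eps = np.sqrt(np.sum((y - b0 - b1 * x) ** 2) / (n - 2))
    core = 1 / n + (xg - x.mean()) ** 2 / ((n - 1) * sx2)
    if prediction:
        core += 1                                 # the extra 1 widens the PI
    half = stats.t.ppf(1 - alpha / 2, n - 2) * s_eps * np.sqrt(core)
    y_hat = b0 + b1 * xg
    return y_hat - half, y_hat + half
```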

Slide 29: Example 17.6 (interval estimates for the car auction price)
– Provide an interval estimate for the bidding price on a Ford Taurus with 40,000 miles on the odometer.
– Solution: the dealer would like to predict the price of a single car, so the 95% prediction interval applies, with t.025,98.

Slide 30:
– The car dealer wants to bid on a lot of 250 Ford Tauruses, where each car has been driven for about 40,000 miles.
– Solution: the dealer needs to estimate the mean price per car, so the 95% confidence interval applies.

Slide 31: The effect of the given value of x on the interval
– As x_g moves away from x̄, the interval becomes longer. That is, the shortest interval is found at x_g = x̄.

Slide 32: 17.8 Coefficient of Correlation
The coefficient of correlation is used to measure the strength of association between two variables. The coefficient values range between −1 and 1.
– If r = −1 (negative association) or r = +1 (positive association), every point falls on the regression line.
– If r = 0, there is no linear pattern.
The coefficient can be used to test for a linear relationship between two variables.

Slide 33: Testing the coefficient of correlation
– When there is no linear relationship between two variables, ρ = 0.
– The hypotheses are H0: ρ = 0 versus H1: ρ ≠ 0.
– The test statistic is t = r·√((n − 2)/(1 − r²)), which has a Student t distribution with d.f. = n − 2, provided the variables are bivariate normally distributed.
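scipy's pearsonr runs this exact test; a minimal sketch on illustrative data, with the hand formula alongside for comparison:

```python
import numpy as np
from scipy import stats

x = np.array([1, 2, 3, 4, 5, 6], dtype=float)   # illustrative data
y = np.array([2.1, 3.9, 6.2, 7.8, 9.9, 12.1])

r, p = stats.pearsonr(x, y)                     # tests H0: rho = 0
n = len(x)
t = r * np.sqrt((n - 2) / (1 - r**2))           # the hand-computed statistic
print(f"r = {r:.3f}, t = {t:.2f}, p-value = {p:.4g}")
```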

Slide 34: Example 17.7 (testing for linear relationship)
– Test the coefficient of correlation to determine whether a linear relationship exists in the data of Example 17.1.
Solution
– We test H0: ρ = 0 versus H1: ρ ≠ 0.
– Solving by hand: the rejection region is |t| > t_(α/2, n−2) = t.025,98 ≈ 1.984. The sample coefficient of correlation is r = cov(X,Y)/(s_x·s_y) = −.806, so the value of the t statistic is t = r·√((n − 2)/(1 − r²)) ≈ −13.5.
– Conclusion: there is sufficient evidence at α = 5% to infer that there is a linear relationship between the two variables.

Slide 35: Spearman rank correlation coefficient
– The Spearman rank test is used to test whether a relationship exists between two variables in cases where at least one variable is ranked, or where both variables are quantitative but the normality requirement is not satisfied.

Slide 36:
– The hypotheses are H0: ρ_s = 0 versus H1: ρ_s ≠ 0.
– The test statistic is r_s = cov(a, b)/(s_a·s_b), where a and b are the ranks of the data.
– For a large sample (n > 30), r_s is approximately normally distributed.
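scipy computes this coefficient directly, ranking the data and averaging the ranks of ties as described on the following slides. The sketch below uses made-up scores, not the data from Example 17.8:

```python
from scipy import stats

aptitude = [59, 47, 58, 66, 77, 57, 62, 68, 69, 36]  # hypothetical test scores
rating = [3, 2, 4, 3, 2, 4, 3, 3, 5, 1]              # hypothetical ratings, 1-5

r_s, p = stats.spearmanr(aptitude, rating)           # tests H0: rho_s = 0
print(f"r_s = {r_s:.3f}, p-value = {p:.3f}")
```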

Slide 37: Example 17.8
– A production manager wants to examine the relationship between the aptitude test score given prior to hiring and the performance rating three months after starting work.
– A random sample of 20 production workers was selected, and the test scores as well as the performance ratings were recorded.

Slide 38: Solution
– The problem objective is to analyze the relationship between two variables.
– Performance rating is ranked (scores range from 1 to 5); aptitude test scores range from 0 to 100.
– The hypotheses are H0: ρ_s = 0 versus H1: ρ_s ≠ 0.
– The test statistic is r_s, and the rejection region is |r_s| > r_critical (taken from the Spearman rank correlation table).

Slide 39: Solving by hand
– Rank each variable separately; ties are broken by averaging the ranks.
– Calculate s_a = 5.92, s_b = 5.50, cov(a,b) = 12.34. Thus r_s = cov(a,b)/(s_a·s_b) = .379.
– The critical value for α = .05 and n = 20 is .450.
– Conclusion: do not reject the null hypothesis. At the 5% significance level there is insufficient evidence to infer that the two variables are related to one another.

Slide 40: 17.9 Regression Diagnostics - I
The three conditions required for the validity of the regression analysis are:
– the error variable is normally distributed;
– the error variance is constant for all values of x;
– the errors are independent of each other.
How can we diagnose violations of these conditions?

Slide 41: Residual analysis
– By examining the residuals (or standardized residuals), we can identify violations of the required conditions.
– Example 17.1 (continued), checking nonnormality:
Use Excel to obtain the standardized residual histogram.
Examine the histogram and look for a bell-shaped diagram with a mean close to zero.

Slide 42: For each residual we calculate the standard deviation, and then
standardized residual_i = residual_i / standard deviation.
We can also apply the Lilliefors test or the χ² test of normality.
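A simplified Python sketch: here every residual is scaled by the single standard error of estimate (Excel's standardized residuals are computed slightly differently, so treat this as an approximation):

```python
import numpy as np

def standardized_residuals(x, y):
    """Residuals divided by the standard error of estimate (simplified)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    b1, b0 = np.polyfit(x, y, 1)
    resid = y - (b0 + b1 * x)
    s_eps = np.sqrt(np.sum(resid**2) / (len(x) - 2))
    return resid / s_eps   # |value| > 2 flags a potential outlier (slide 48)
```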

Slide 43: Heteroscedasticity
– When the requirement of a constant variance is violated, we have heteroscedasticity.
– In a plot of the residuals against the predicted values ŷ, the spread increases with ŷ.

Slide 44: When the requirement of a constant variance is not violated, we have homoscedasticity.
– In the residual plot, the spread of the data points does not change much across ŷ.

Slide 45: Homoscedasticity (continued)
– As far as the even spread is concerned, this is a much better situation.

Slide 46: Nonindependence of error variables
– A time series is constituted if data were collected over time.
– Examining the residuals over time, no pattern should be observed if the errors are independent.
– When a pattern is detected, the errors are said to be autocorrelated.
– Autocorrelation can be detected by graphing the residuals against time.

Slide 47: Patterns in the appearance of the residuals over time indicate that autocorrelation exists.
– Note the runs of positive residuals, replaced by runs of negative residuals.
– Note the oscillating behavior of the residuals around zero.

Slide 48: Outliers
– An outlier is an observation that is unusually small or large.
– Several possibilities need to be investigated when an outlier is observed:
there was an error in recording the value;
the point does not belong in the sample;
the observation is valid.
– Identify outliers from the scatter diagram.
– It is customary to suspect that an observation is an outlier if its |standardized residual| > 2.

Slide 49:
– An outlier causes a shift in the regression line.
– Some outliers may be very influential observations.

Slide 50: Procedure for regression diagnostics
– Develop a model that has a theoretical basis.
– Gather data for the two variables in the model.
– Draw the scatter diagram to determine whether a linear model appears to be appropriate.
– Check the required conditions for the errors.
– Assess the model fit.
– If the model fits the data, use the regression equation.

