1 Chapter 12 Simple Linear Regression

2 Chapter Outline
- Simple Linear Regression Model
- Least Squares Method
- Coefficient of Determination
- Model Assumptions
- Testing for Significance

3 Simple Linear Regression
- Managerial decisions often are based on the relationship between two or more variables.
- Regression analysis can be used to develop an equation showing how the variables are related.
- The variable being predicted is called the dependent variable and is denoted by y.
- The variables being used to predict the value of the dependent variable are called the independent variables and are denoted by x.

4 Simple Linear Regression
- Simple linear regression involves one independent variable and one dependent variable.
- The relationship between the two variables is approximated by a straight line (hence the "linear" in linear regression).
- Regression analysis involving two or more independent variables is called multiple regression (covered in the next chapter).

5 Simple Linear Regression Model
- The equation that describes how y is related to x and an error term is called the regression model.
- The simple linear regression model is:
  y = β0 + β1x + ε
  where β0 and β1 are called parameters of the model, and ε is a random variable called the error term.
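One way to get a feel for this model is to simulate data from it. The sketch below draws y values from a straight line plus a normal error term; the parameter values (β0 = 2, β1 = 0.5, σ = 0.1) are purely illustrative, not taken from the slides:

```python
import random

# Simulate n observations from y = beta0 + beta1*x + eps,
# where eps is a normally distributed error term with mean 0.
def simulate_model(beta0, beta1, sigma, n, seed=0):
    rng = random.Random(seed)
    x = [i / 10 for i in range(n)]
    # each y deviates from the line E(y) = beta0 + beta1*x
    # only by the random error term eps
    y = [beta0 + beta1 * xi + rng.gauss(0, sigma) for xi in x]
    return x, y

x, y = simulate_model(beta0=2.0, beta1=0.5, sigma=0.1, n=100)
```

Because the errors average out to roughly zero, the simulated points scatter closely around the line E(y) = 2 + 0.5x.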

6 Simple Linear Regression Equation
- The simple linear regression equation is:
  E(y) = β0 + β1x
- The graph of the regression equation is a straight line.
- β0 is the y-intercept of the regression line.
- β1 is the slope of the regression line.
- E(y) is the expected value of y for a given value of x.
- Note that both β0 and β1 are population parameters, depicting the true relationship between y and x.

7 Simple Linear Regression
- Example: Stock Market Risk
  The systematic risk (a common risk shared by all stocks) of the stock market has different impacts on different stocks. Stocks that are more sensitive to systematic risk are riskier. We can conduct a regression analysis to estimate the sensitivity of an individual stock to systematic market risk. The next slide shows the data for a sample of the 20 most recent quarterly returns of Netflix and SPY (an index fund that tracks the S&P 500).

8 Simple Linear Regression
- Example: Stock Market Risk (data)

9 Simple Linear Regression
- Example: Stock Market Risk (scatter diagram with trend line)

10 Simple Linear Regression
- Example: Stock Market Risk
  From the scatter diagram, we observe the following:
  1. The points are scattered around, indicating that the relationship between the returns of SPY and Netflix is not perfect.
  2. The trend line has a positive slope, indicating that the relationship is positive, i.e., as the returns of SPY go up, the returns of Netflix tend to go up too.
  3. The vertical distance between a point and the trend line is the difference between the actual return of Netflix and its estimated value, given an actual return of SPY. The difference is simply the estimated error, similar to y − E(y).

11 Simple Linear Regression Equation
- Positive Linear Relationship: the regression line has intercept β0 and a positive slope β1, so E(y) increases as x increases.

12 Simple Linear Regression Equation
- Negative Linear Relationship: the regression line has intercept β0 and a negative slope β1, so E(y) decreases as x increases.

13 Simple Linear Regression Equation
- No Relationship: the slope β1 is 0, so the regression line is horizontal at the intercept β0.

14 Estimated Simple Linear Regression Equation
- The estimated simple linear regression equation is:
  ŷ = b0 + b1x
- The graph is called the estimated regression line.
- b0 is the y-intercept of the estimated regression line.
- b1 is the slope of the estimated regression line.
- ŷ is the estimated value of y for a given value of x.
- Note that b0 and b1 are sample estimates of β0 and β1, respectively, depicting the estimated sample relationship between y and x.

15 Estimation Process
- Regression Model: y = β0 + β1x + ε
- Regression Equation: E(y) = β0 + β1x
- Unknown Parameters: β0, β1
- Sample Data: (x1, y1), …, (xn, yn)
- Sample Statistics: b0, b1, giving the estimated regression equation ŷ = b0 + b1x
- b0 and b1 provide estimates of β0 and β1.

16 Least Squares Method
- Least Squares Criterion:
  min Σ(yi − ŷi)²
  where:
  yi = observed value of the dependent variable for the ith observation
  ŷi = estimated value of the dependent variable for the ith observation

17 Least Squares Method
- Least Squares Criterion
  - yi − ŷi is the estimated error for the ith observation.
  - Squaring the error means that it is the magnitude of the error, not its sign (positive or negative), that matters.
  - The purpose of the least squares criterion is to find the b0 and b1 that minimize the sum of the squared estimated errors over all the observations in the sample, i.e., the best-fit straight line (with the smallest overall error) that approximates the relationship between y and x.

18 Least Squares Method
- Slope for the Estimated Regression Equation:
  b1 = Σ(xi − x̄)(yi − ȳ) / Σ(xi − x̄)²
  where:
  xi = value of the independent variable for the ith observation
  yi = value of the dependent variable for the ith observation
  x̄ = average value of the independent variable
  ȳ = average value of the dependent variable

19 Least Squares Method
- y-Intercept for the Estimated Regression Equation:
  b0 = ȳ − b1x̄
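The slope and intercept formulas from the last two slides translate directly into code. This is a minimal sketch; the (x, y) values at the bottom are made-up illustrative numbers, not the Netflix/SPY sample:

```python
# Least squares estimates, computed directly from the textbook formulas.
def least_squares(x, y):
    """Return (b0, b1) minimizing the sum of squared errors."""
    n = len(x)
    x_bar = sum(x) / n
    y_bar = sum(y) / n
    # b1 = sum((xi - x_bar)(yi - y_bar)) / sum((xi - x_bar)^2)
    sxy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
    sxx = sum((xi - x_bar) ** 2 for xi in x)
    b1 = sxy / sxx
    # b0 = y_bar - b1 * x_bar
    b0 = y_bar - b1 * x_bar
    return b0, b1

# illustrative data, roughly following y = 2x
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.0, 9.8]
b0, b1 = least_squares(x, y)
```

For these values b1 works out to 1.95 and b0 to 0.15, so the estimated regression equation is ŷ = 0.15 + 1.95x.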

20 Simple Linear Regression
- Example: Stock Market Risk
  Quarterly returns (first rows of the 20-quarter sample):

  SPY (x)    Netflix (y)
  0.0630     0.1366
  0.0470     0.0191
  0.2537    -0.0302
  0.3190     0.2693
  …          …

  Σx = 0.9143    Σy = 4.0419

21 Estimated Regression Equation
- Slope for the Estimated Regression Equation:
  b1 = Σ(xi − x̄)(yi − ȳ) / Σ(xi − x̄)² = 2.87
- y-Intercept for the Estimated Regression Equation:
  b0 = ȳ − b1x̄ = 4.0419/20 − 2.87(0.9143/20) ≈ 0.071
- Estimated Regression Equation:
  ŷ = 0.071 + 2.87x

22 Estimated Regression Line – Stock Market Risk Example

23 Coefficient of Determination
- Relationship Among SST, SSR, SSE:
  SST = SSR + SSE
  where:
  SST = total sum of squares (the total variability of y)
  SSR = sum of squares due to regression (the variability of y that is explained by the regression)
  SSE = sum of squares due to error (the variability of y that cannot be explained by the regression)

24 Coefficient of Determination
- The coefficient of determination is:
  r² = SSR/SST
- r² represents the percentage of the total variability of y that is explained by the regression.

25 Coefficient of Determination
r² = SSR/SST = 0.404/2.741 = 0.147
The regression relationship is actually weak: only 14.7% of the variability in the returns of Netflix can be explained by the linear relationship between the market returns (SPY) and the returns of Netflix.
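The r² calculation can be sketched from the SST = SSR + SSE decomposition. The tightly fitting (x, y) data in the test are illustrative, not the Netflix/SPY sample:

```python
def coefficient_of_determination(x, y, b0, b1):
    """r^2 = SSR/SST, computed via SST = SSR + SSE."""
    y_bar = sum(y) / len(y)
    # SST: total variability of y around its mean
    sst = sum((yi - y_bar) ** 2 for yi in y)
    # SSE: variability left unexplained by the fitted line
    sse = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
    # SSR: variability explained by the regression
    ssr = sst - sse
    return ssr / sst
```

For data that hug the fitted line, r² is close to 1; for the slides' Netflix example (SSR = 0.404, SST = 2.741), the same ratio gives 0.147.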

26 Sample Correlation Coefficient
  r_xy = (sign of b1) · √r²
  where b1 = the slope of the estimated regression equation

27 Sample Correlation Coefficient
The sign of b1 in the equation is "+", so:
  r_xy = +√0.147 ≈ +0.383
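The sign rule on this slide amounts to attaching the sign of b1 to the square root of r², which one line of code captures:

```python
import math

def sample_correlation(b1, r_squared):
    # r_xy = (sign of b1) * sqrt(r^2)
    return math.copysign(math.sqrt(r_squared), b1)
```

With the slide's values b1 = 2.87 and r² = 0.147, this gives r_xy ≈ +0.383; a negative slope would flip the sign.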

28 Assumptions About the Error Term ε
  y = β0 + β1x + ε
1. The error ε is a random variable with mean of zero.
2. The variance of ε, denoted by σ², is the same for all values of the independent variable.
3. The values of ε are independent.
4. The error ε is a normally distributed random variable.

29 Test for Significance
- To test for a significant regression relationship, we must conduct a hypothesis test to determine whether the value of β1 (the slope) is zero.
  y = β0 + β1x + ε
  β1 determines the relationship between y and x.
- Two tests are commonly used: the t test and the F test.
- Both the t test and the F test require an estimate of σ², the variance of ε in the regression model.

30 Test for Significance
- An Estimate of σ²:
  s² = MSE = SSE/(n − 2)
  The mean square error (MSE) provides the estimate (the sample variance s²) of σ².

31 Test for Significance
- An Estimate of σ:
  To estimate σ we take the square root of s². The resulting s is called the standard error of the estimate:
  s = √MSE = √(SSE/(n − 2))
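The standard error of the estimate follows directly from SSE and the degrees of freedom n − 2. A minimal sketch (the test data are the same illustrative numbers used earlier, not the Netflix/SPY sample):

```python
import math

def standard_error_of_estimate(x, y, b0, b1):
    """s = sqrt(MSE) = sqrt(SSE / (n - 2))."""
    n = len(x)
    # SSE: sum of squared residuals around the fitted line
    sse = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
    mse = sse / (n - 2)   # s^2, the estimate of sigma^2
    return math.sqrt(mse)  # s, the standard error of the estimate
```

Note the divisor is n − 2, not n, because two parameters (b0 and b1) were estimated from the sample.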

32 Test for Significance: t Test
- Hypotheses:
  H0: β1 = 0
  Ha: β1 ≠ 0
- Test Statistic:
  t = b1 / s_b1
  where s_b1 is the estimated standard deviation of b1:
  s_b1 = s / √Σ(xi − x̄)²

33 Test for Significance: t Test
- Rejection Rule:
  Reject H0 if p-value < α, or if t ≤ −t_{α/2} or t ≥ t_{α/2}
  where:
  t_{α/2} is based on a t distribution with n − 2 degrees of freedom;
  n is the number of observations in the regression; 2 is the number of parameters (β0 and β1) in the regression.

34 Test for Significance: t Test
1. Determine the hypotheses:
   H0: β1 = 0
   Ha: β1 ≠ 0
2. Specify the level of significance: α = .05
3. Calculate the test statistic:
   t = b1 / s_b1 = 2.87/1.63 = 1.76

35 Test for Significance: t Test
4. Determine whether to reject H0.
- p-Value approach: t = 1.76 provides an area of .0473 in the upper tail. Hence, the p-value is 2 × 0.0473 = 0.0946. Since the p-value is larger than 0.05, we do not reject H0.
- Critical Value approach: For α = 5%, the critical value is 2.1 (a two-tailed test). Since our test statistic t = 1.76 is less than 2.1, we do not reject H0.
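The critical-value version of this test takes just a few lines. Here b1 = 2.87 and s_b1 = 1.63 are the values reported in the slides, and 2.101 is the two-tailed critical value t_{.025, 18}:

```python
def t_test_slope(b1, s_b1, t_crit):
    """Two-tailed t test of H0: beta1 = 0.
    Return (t statistic, whether H0 is rejected)."""
    t = b1 / s_b1
    # reject when |t| exceeds the two-tailed critical value
    return t, abs(t) > t_crit

t, reject = t_test_slope(2.87, 1.63, 2.101)
```

For the slide's numbers, t ≈ 1.76 < 2.101, so H0 is not rejected.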

36 Confidence Interval for β1
- We can use a 95% confidence interval for β1 to test the hypotheses just used in the t test.
- H0 is rejected if the hypothesized value of β1 is not included in the confidence interval for β1.

37 Confidence Interval for β1
- The form of a confidence interval for β1 is:
  b1 ± t_{α/2} · s_b1
  where b1 is the point estimator and t_{α/2} · s_b1 is the margin of error; t_{α/2} is the t value providing an area of α/2 in the upper tail of a t distribution with n − 2 degrees of freedom.

38 Confidence Interval for β1
- Rejection Rule: Reject H0 if 0 is not included in the confidence interval for β1.
- 95% Confidence Interval for β1:
  b1 ± t_{α/2} · s_b1 = 2.87 ± 2.1(1.63) = 2.87 ± 3.42, or −0.55 to 6.29
- Conclusion: 0 is included in the confidence interval. Do not reject H0.
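The interval arithmetic above can be sketched as a small helper, using the slide's values b1 = 2.87, s_b1 = 1.63 and t_{α/2} = 2.1:

```python
def slope_confidence_interval(b1, s_b1, t_half_alpha):
    """Confidence interval for beta1: b1 ± t_{alpha/2} * s_b1."""
    margin = t_half_alpha * s_b1
    return b1 - margin, b1 + margin

lo, hi = slope_confidence_interval(2.87, 1.63, 2.1)
```

Since the resulting interval, roughly (−0.55, 6.29), contains 0, H0: β1 = 0 is not rejected, matching the t test conclusion.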

39 Test for Significance: F Test
- Hypotheses:
  H0: β1 = 0
  Ha: β1 ≠ 0
- Test Statistic:
  F = MSR/MSE
- Note that the hypotheses of the F test are the same as those of the t test, which is always the case for a simple linear regression (where there is only one independent variable).

40 Test for Significance: F Test
- Rejection Rule:
  Reject H0 if p-value < α or F > F_α
  where F_α is based on an F distribution with 1 degree of freedom in the numerator and n − 2 degrees of freedom in the denominator.

41 ANOVA Table for a Regression Analysis

  Source of Variation | Sum of Squares | Degrees of Freedom | Mean Square        | F       | p-Value
  Regression          | SSR            | k − 1              | MSR = SSR/(k − 1)  | MSR/MSE |
  Error               | SSE            | nT − k             | MSE = SSE/(nT − k) |         |
  Total               | SST            | nT − 1             |                    |         |

  k is the number of parameters in the regression; nT is the number of observations.

42 ANOVA Table for a Regression Analysis – Stock Market Risk Example

  Source of Variation | Sum of Squares | Degrees of Freedom | Mean Square | F    | p-Value
  Regression          | 0.404          | 1                  | 0.404       | 3.11 | 0.095
  Error               | 2.337          | 18                 | 0.13        |      |
  Total               | 2.741          | 19                 |             |      |
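The F statistic in the table follows mechanically from the sums of squares and their degrees of freedom. A sketch, using k = 2 parameters (b0 and b1) as in the slides:

```python
def anova_f(ssr, sse, n, k=2):
    """F = MSR/MSE with (k - 1, n - k) degrees of freedom.
    k counts the regression parameters (b0 and b1)."""
    msr = ssr / (k - 1)   # mean square due to regression
    mse = sse / (n - k)   # mean square error
    return msr / mse
```

Plugging in the example's values (SSR = 0.404, SSE = 2.337, n = 20) reproduces the table's F ≈ 3.11.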

43 Test for Significance: F Test
1. Determine the hypotheses:
   H0: β1 = 0
   Ha: β1 ≠ 0
2. Specify the level of significance: α = .05
3. Calculate the test statistic:
   F = MSR/MSE = 0.404/0.13 = 3.11
The relationship between the F value and the t value is F = t², which is only true for simple linear regressions.

44 Test for Significance: F Test
4. Determine whether to reject H0.
- p-Value approach: F = 3.11 provides an area of .0946 in the upper tail. Hence, the p-value is 0.0946. Since the p-value is larger than 0.05, we do not reject H0.
- Critical Value approach: For α = 5%, the critical value is 4.41. Since our test statistic F = 3.11 is less than 4.41, we do not reject H0.

45 Some Cautions about the Interpretation of Significance Tests
- Just because we are able to reject H0: β1 = 0 and demonstrate statistical significance does not enable us to conclude that there is a linear relationship between x and y.
- Rejecting H0: β1 = 0 and concluding that the relationship between x and y is significant does not enable us to conclude that a cause-and-effect relationship is present between x and y.

