Chapter 14: Inference for Regression (Business Statistics: A First Course, © 2011 Pearson Education, Inc.)



14.1 The Population and the Sample

We already know that we can model the relationship between two quantitative variables by fitting a straight line, ŷ = b0 + b1x, to a sample of ordered pairs. But observations differ from sample to sample.

We can imagine a line that summarizes the true relationship between x and y for the entire population, μy = β0 + β1x, where μy is the population mean of y at a given value of x. NOTE: We are assuming an idealized case in which the points (x, μy) are in fact exactly linear.

For a given value x:
- The value of ŷ for a specific value of x obtained from a particular sample may not lie on the line μy.
- These values of ŷ will be distributed about μy.
- We can account for the error between ŷ and μy by adding an error term (ε) to the model: y = β0 + β1x + ε.

Regression Inference
- Collect a sample and estimate the population β's by finding a regression line: ŷ = b0 + b1x.
- The residuals e = y − ŷ are the sample-based versions of ε.
- Account for the uncertainties in b0 and b1 by making confidence intervals, as we've done for means and proportions.
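The estimation step above can be sketched in a few lines of Python. This is a minimal illustration using hypothetical data (the x and y values are made up for this example, not from the text):

```python
import numpy as np

# Hypothetical sample data, for illustration only.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 2.9, 3.8, 5.2, 5.9, 7.1])

# Least-squares estimates b0, b1 of the population betas
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()

y_hat = b0 + b1 * x   # fitted values from this sample's line
e = y - y_hat         # residuals: the sample-based versions of epsilon

print(round(b0, 3), round(b1, 3))   # intercept and slope estimates
print(round(e.sum(), 10))           # least-squares residuals sum to ~0
```

A different sample would give different b0 and b1, which is exactly the sampling variability that the inference methods in this chapter quantify.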

14.2 Assumptions and Conditions

Inference in regression is based on these assumptions, which should be checked in this order:
1. Linearity Assumption
2. Independence Assumption
3. Equal Variance Assumption
4. Normal Population Assumption

Testing the Assumptions
1. Make a scatterplot of the data to check for linearity. (Linearity Assumption)
2. Fit a regression and find the residuals, e, and predicted values ŷ.
3. Plot the residuals against time (if appropriate) and check for evidence of patterns. (Independence Assumption)
4. Make a scatterplot of the residuals against x or the predicted values. This plot should not exhibit a "fan" or "cone" shape. (Equal Variance Assumption)

5. Make a histogram and/or Normal probability plot of the residuals. (Normal Population Assumption)
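The checks above can be sketched as follows. This is a hedged example on simulated data (the straight-line model and its Normal errors are generated here, not taken from the text), with the plot-based checks noted as comments and a Shapiro-Wilk test standing in numerically for the Normality plot:

```python
import numpy as np
from scipy import stats

# Hypothetical data: a straight-line model with Normal errors, simulated.
rng = np.random.default_rng(42)
x = np.linspace(0.0, 10.0, 40)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, size=x.size)

fit = stats.linregress(x, y)
y_hat = fit.intercept + fit.slope * x
e = y - y_hat  # residuals

# 1. Linearity: the scatterplot of y vs. x should look straight.
# 2. Independence: plot e in time order and look for patterns.
# 3. Equal variance: plot e against y_hat; no fan or cone shape.
# 4. Normality: histogram / Normal probability plot of e.
#    A Shapiro-Wilk test is one numeric stand-in for that plot:
w_stat, p_value = stats.shapiro(e)
print(f"Shapiro-Wilk p = {p_value:.3f}")  # large p: no evidence against Normality
```

The residual plots themselves (steps 1-4) are still the primary diagnostic; a formal test only supplements what the pictures show.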

Graphical Summary of Assumptions and Conditions (figure).

14.3 Regression Inference

For a sample, we expect b1 to be close to the model slope β1. For similar samples, the standard error of the slope is a measure of the variability of b1 about the true slope β1.

Which of these scatterplots would give the more consistent regression slope estimate if we were to sample repeatedly from the underlying population? Hint: compare the se's.

Which of these scatterplots would give the more consistent regression slope estimate if we were to sample repeatedly from the underlying population? Hint: compare the sx's.

Which of these scatterplots would give the more consistent regression slope estimate if we were to sample repeatedly from the underlying population? Hint: compare the n's.

The standard error of the slope is estimated by

SE(b1) = se / (sx · √(n − 1)).

It is smaller when the residuals have less spread (smaller se), when the x-values are more spread out (larger sx), and when the sample is larger (larger n), matching the three comparisons above. When the assumptions and conditions are met, the standardized slope

t = (b1 − β1) / SE(b1)

follows a Student's t-model with n − 2 degrees of freedom. We therefore test H0: β1 = 0 with t = b1 / SE(b1), and build a confidence interval for β1 as b1 ± t*(n−2) × SE(b1).
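The t-test and confidence interval for the slope can be sketched as follows, again on hypothetical data (the x and y values are invented for illustration; `scipy.stats.linregress` reports the slope's standard error directly):

```python
import numpy as np
from scipy import stats

# Hypothetical data for illustration.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
y = np.array([1.8, 3.1, 3.9, 5.2, 5.8, 7.1, 8.2, 8.8])
n = x.size

fit = stats.linregress(x, y)   # fit.stderr is SE(b1)

# t-statistic for H0: beta1 = 0, on n - 2 degrees of freedom
t_stat = fit.slope / fit.stderr
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 2)

# 95% confidence interval for the slope: b1 +/- t* x SE(b1)
t_star = stats.t.ppf(0.975, df=n - 2)
ci = (fit.slope - t_star * fit.stderr, fit.slope + t_star * fit.stderr)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, "
      f"95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

The two-sided p-value computed by hand here matches the one `linregress` reports, which is a useful sanity check on the degrees of freedom.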

14.4 Standard Errors for Predicted Values

SE becomes larger the further x gets from x̄. That is, the confidence interval broadens as you move away from x̄. (See figure at right.)

SE, and the confidence interval, become smaller with increasing n. SE, and the confidence interval, are larger for samples with more spread around the line (when se is larger).

Because of the extra se² term under the square root, the prediction interval for individual values is broader than the confidence interval for predicted mean values.

14.5 Using Confidence and Prediction Intervals

Confidence interval for a mean: the result at 95% means "We are 95% confident that the mean value of y is between 4.40 and 4.70" at the given value of x.

Prediction interval for an individual value: the result at 95% means "We are 95% confident that a single particular value of y will be between 2.95 and 5.15" at the given value of x.
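The contrast between the two intervals can be sketched as follows. The data and the choice of x_new are hypothetical; the prediction interval uses the same formula as the confidence interval plus the extra se² term:

```python
import numpy as np
from scipy import stats

# Hypothetical data for illustration.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
y = np.array([1.8, 3.1, 3.9, 5.2, 5.8, 7.1, 8.2, 8.8])
n = x.size
fit = stats.linregress(x, y)

x_new = 5.0                                # the x at which we predict
y_hat = fit.intercept + fit.slope * x_new  # predicted value

e = y - (fit.intercept + fit.slope * x)
s_e = np.sqrt(np.sum(e ** 2) / (n - 2))    # residual standard deviation

# SE for the mean response at x_new (confidence interval) ...
se_mean = s_e * np.sqrt(1.0 / n
                        + (x_new - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2))
# ... and for a single new response (prediction interval): extra s_e^2 term
se_indiv = np.sqrt(se_mean ** 2 + s_e ** 2)

t_star = stats.t.ppf(0.975, df=n - 2)
ci = (y_hat - t_star * se_mean, y_hat + t_star * se_mean)
pi = (y_hat - t_star * se_indiv, y_hat + t_star * se_indiv)
print(f"95% CI for the mean y: ({ci[0]:.2f}, {ci[1]:.2f})")
print(f"95% PI for a single y: ({pi[0]:.2f}, {pi[1]:.2f})")  # always wider
```

Both intervals are centered at ŷ; only their widths differ, and the prediction interval is always the wider of the two.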

14.6 Extrapolation and Prediction

Extrapolating: predicting a y value by extending the regression model to regions outside the range of the x-values of the data.

Why is extrapolation dangerous? It introduces the questionable and untested assumption that the relationship between x and y does not change.

Cautionary Example: Oil Prices in Constant Dollars

Model prediction (extrapolation): on average, a barrel of oil will increase by $7.39 per year from 1983 to 1998.

Actual price behavior: extrapolating the 1971-1982 model to the '80s and '90s led to grossly erroneous forecasts.

Remember: linear models ought not to be trusted beyond the span of the x-values of the data. If you extrapolate far into the future, be prepared for the actual values to be (possibly quite) different from your predictions.

14.7 Unusual and Extraordinary Observations

In regression, an outlier can stand out in two ways. It can have:
1) a large residual;

2) a large distance from x̄: a "high-leverage point." A high-leverage point is influential if omitting it gives a regression model with a very different slope.

Practice: for each scatterplot (shown on the slides), tell whether the point is a high-leverage point, whether it has a large residual, and whether it is influential.
- Example 1: not high-leverage; large residual; not very influential.
- Example 2: high-leverage; small residual; not very influential.
- Example 3: high-leverage; medium residual; very influential (omitting the red point will change the slope dramatically!).

What should you do with a high-leverage point?
- Sometimes these points are important. They can indicate that the underlying relationship is in fact nonlinear.
- Other times they simply do not belong with the rest of the data and ought to be omitted. When in doubt, create and report two models: one with the outlier and one without.
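The "report two models" advice can be sketched as follows. This hypothetical example plants one point far from x̄ that also sits off the line, then refits with and without it to expose its influence:

```python
import numpy as np
from scipy import stats

# Hypothetical data plus one point far from x-bar that is also off the line:
# high leverage and, as the refit shows, influential.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 2.0, 2.9, 4.1, 5.0])
x_out, y_out = 15.0, 2.0

with_pt = stats.linregress(np.append(x, x_out), np.append(y, y_out))
without_pt = stats.linregress(x, y)

print(f"slope with the point:    {with_pt.slope:.3f}")
print(f"slope without the point: {without_pt.slope:.3f}")
# A dramatic change in slope when the point is omitted marks it as influential.
```

Here the slope collapses from roughly 1 to nearly 0 when the single high-leverage point is included, which is exactly the behavior Example 3 above warns about.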

What Have We Learned?
- Do not fit a linear regression to data that are not straight.
- Watch out for changing spread.
- Watch out for non-Normal errors.
- Beware of extrapolating, especially far into the future.
- Look for unusual points. Consider setting aside outliers and re-running the regression.
- Treat unusual points honestly.

- Under certain conditions, the sampling distribution for the slope of a regression line can be modeled by a Student's t-model with n − 2 degrees of freedom.
- Check four conditions, in order, before proceeding to inference: Linearity, Independence, Equal Variance, Normality.

- Use the appropriate t-model to test a hypothesis (H0: β1 = 0) about the slope.
- Create and interpret a confidence interval for the slope.
