1 STA291 Statistical Methods Lecture 28

2 Extrapolation and Prediction Extrapolating – predicting a y value by extending the regression model to regions outside the range of the x-values of the data.

3 Why is extrapolation dangerous? It introduces the questionable and untested assumption that the relationship between x and y does not change outside the range of the observed data.

4 Cautionary Example: Oil Prices in Constant Dollars Model prediction (extrapolation): based on a line fit of Price on Time, the price of a barrel of oil would rise by about $7.39 per year beyond 1983.

5 Cautionary Example: Oil Prices in Constant Dollars Actual price behavior: extrapolating the fitted Price-on-Time model into the ’80s and ’90s led to grossly erroneous forecasts.

6 Remember: Linear models should not be trusted beyond the span of the x-values of the data. If you extrapolate far into the future, be prepared for the actual values to be (possibly quite) different from your predictions.
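A quick illustration of this danger, as a sketch rather than anything from the lecture: the data, the coefficients, and the prediction point x = 50 below are all made up. We fit a least squares line to data whose true relationship is only locally linear, then predict far outside the observed x-range and compare with the truth.

import numpy as np

rng = np.random.default_rng(1)

# Observed data: x only spans 0..10, where the true curve is nearly linear.
x = np.linspace(0, 10, 50)
y = 2.0 * x + 0.05 * x**2 + rng.normal(0, 1.0, size=x.size)

slope, intercept = np.polyfit(x, y, deg=1)        # least squares line

x_new = 50.0                                      # far outside the data
prediction = intercept + slope * x_new            # extrapolated value
truth = 2.0 * x_new + 0.05 * x_new**2             # the actual mean response

print(f"fitted line: y-hat = {intercept:.2f} + {slope:.2f} x")
print(f"extrapolated prediction at x = 50: {prediction:.1f}")
print(f"true mean response at x = 50:      {truth:.1f}")
# Inside 0..10 the line fits well; at x = 50 it misses badly, because the
# assumption that the relationship stays the same was never tested there.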

7 Unusual and Extraordinary Observations Outliers, Leverage, and Influence In regression, an outlier can stand out in two ways. It can have… 1) a large residual: the point lies far from the regression line in the vertical (y) direction.

8 Outliers, Leverage, and Influence In regression, an outlier can stand out in two ways. It can have… 2) a large distance from x̄, the mean of the x-values: a “high-leverage point.” A high-leverage point is influential if omitting it gives a regression model with a very different slope.
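Both kinds of “standing out” can be computed directly. Below is a minimal sketch with made-up data, using the standard simple-regression leverage formula h_i = 1/n + (x_i − x̄)² / Σ(x_j − x̄)²; the variable names are just for illustration.

import numpy as np

def leverages(x):
    # Leverage of each point in simple regression:
    # h_i = 1/n + (x_i - x_bar)^2 / sum_j (x_j - x_bar)^2
    x = np.asarray(x, dtype=float)
    dev = x - x.mean()
    return 1.0 / x.size + dev**2 / np.sum(dev**2)

rng = np.random.default_rng(2)
x = np.append(rng.uniform(0, 10, 30), 25.0)   # last point sits far from x-bar
y = 3.0 + 1.5 * x + rng.normal(0, 1.0, size=x.size)

slope, intercept = np.polyfit(x, y, 1)
residuals = y - (intercept + slope * x)

print("leverage of last point:", round(leverages(x)[-1], 2))  # much larger than the others
print("residual of last point:", round(residuals[-1], 2))     # can still be small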

9 Outliers, Leverage, and Influence Tell whether the point is a high-leverage point, whether it has a large residual, and whether it is influential. • Not high-leverage • Large residual • Not very influential

10 Outliers, Leverage, and Influence Tell whether the point is a high-leverage point, whether it has a large residual, and whether it is influential. • High-leverage • Small residual • Not very influential

11 Outliers, Leverage, and Influence Tell whether the point is a high-leverage point, whether it has a large residual, and whether it is influential. • High-leverage • Medium (large?) residual • Very influential (omitting the red point will change the slope dramatically!)

12 Outliers, Leverage, and Influence What should you do with a high-leverage point? • Sometimes, these points are important. They can indicate that the underlying relationship is in fact nonlinear. • Other times, they simply do not belong with the rest of the data and ought to be omitted. When in doubt, create and report two models: one with the outlier and one without.
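As a sketch of the “report two models” advice (simulated data, not the textbook example): fit the line once with every point and once with the suspect point removed, then compare the slopes.

import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 25)
y = 2.0 + 0.8 * x + rng.normal(0, 0.5, size=x.size)

# Append one influential point: high leverage and far off the pattern.
x = np.append(x, 30.0)
y = np.append(y, 10.0)

slope_all, intercept_all = np.polyfit(x, y, 1)
slope_wo, intercept_wo = np.polyfit(x[:-1], y[:-1], 1)   # suspect point omitted

print(f"with the point:    y-hat = {intercept_all:.2f} + {slope_all:.2f} x")
print(f"without the point: y-hat = {intercept_wo:.2f} + {slope_wo:.2f} x")
# A large change in slope is exactly what marks the point as influential.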

13 Example: Hard Drive Prices Prices for external hard drives are linearly associated with Capacity (in GB). A least squares regression line was fit without a 200 GB drive that sold for an unusually high price, and another was fit with the original data including it. How are the two equations different? The intercepts are different, but the slopes are similar. Does the new point have a large residual? Explain. Yes. The hard drive’s price doesn’t fit the pattern, since it pulled the line up but didn’t decrease the slope very much.

14 Working with Summary Values Scatterplots of summarized (averaged) data tend to show less variability than the un-summarized data. Example: wind speeds at two locations, collected at 6 AM, noon, 6 PM, and midnight. R² rises as the data are more heavily summarized, from the raw data, to the daily-averaged data, to the monthly-averaged data (R² = 0.942).

15 WARNING: Be suspicious of conclusions based on regressions of summary data. Regressions based on summary data may look better than they really are! In particular, the strength of the correlation will be misleading.
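A small simulation (not the wind-speed data from the slides) of why averaged data look stronger: averaging smooths away much of the observation-to-observation scatter, so R² climbs even though the underlying relationship is unchanged.

import numpy as np

rng = np.random.default_rng(4)

day = np.repeat(np.arange(100), 4)                  # 4 readings per day
x = day + rng.normal(0, 5, size=day.size)           # noisy predictor
y = 0.5 * x + rng.normal(0, 10, size=x.size)        # noisy response

def r_squared(a, b):
    return np.corrcoef(a, b)[0, 1] ** 2

# Collapse to one averaged (x, y) pair per day.
x_daily = np.array([x[day == d].mean() for d in range(100)])
y_daily = np.array([y[day == d].mean() for d in range(100)])

print("raw data       R^2 =", round(r_squared(x, y), 3))
print("daily averages R^2 =", round(r_squared(x_daily, y_daily), 3))
# The averaged version reports a noticeably higher R^2 from the same data.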

16 Autocorrelation Time-series data are sometimes autocorrelated, meaning points near each other in time will be related. First-order autocorrelation: Adjacent measurements are related Second-order autocorrelation: Every other measurement is related etc… Autocorrelation violates the independence condition. Regression analysis of autocorrelated data can produce misleading results.
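One simple diagnostic, shown as a sketch with simulated data rather than anything from the lecture: regress y on time, then correlate the residuals with themselves shifted by one time step. A lag-1 correlation well above zero signals first-order autocorrelation.

import numpy as np

rng = np.random.default_rng(5)

t = np.arange(200, dtype=float)
e = np.zeros(t.size)
for i in range(1, t.size):
    e[i] = 0.8 * e[i - 1] + rng.normal(0, 1.0)      # each error carries 80% of the previous one

y = 1.0 + 0.3 * t + e                               # trend plus autocorrelated noise

slope, intercept = np.polyfit(t, y, 1)
resid = y - (intercept + slope * t)

lag1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]     # adjacent residuals
print("lag-1 autocorrelation of residuals:", round(lag1, 2))
# Independence would put this near 0; here it is large, so the usual
# regression inferences would be misleading.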

17 Transforming (Re-expressing) Data An aside on using technology.

18 Linearity Some data show departures from linearity. Example: auto weight vs. fuel efficiency. The scatterplot bends, so the linearity condition is not satisfied.

19 Linearity In cases involving upward bends of negatively correlated data, try analyzing –1/y (the negative reciprocal of y) vs. x instead. The linearity condition now appears satisfied.

20 The auto weight vs. fuel economy example illustrates the principle of transforming data. There is nothing sacred about the way x-values or y-values are measured. From the standpoint of measurement, all of the following may be equally reasonable: x vs. y, x vs. –1/y, x² vs. y, x vs. log(y). One or more of these transformations may be useful for making data more linear, more normal, etc.
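Here is a sketch of that idea with simulated data shaped roughly like the weight vs. fuel-efficiency example; the numbers and the 100/weight form are invented for illustration. The response is curved against x, but –1/y is nearly linear in x, and the correlation reflects that.

import numpy as np

rng = np.random.default_rng(6)

weight = rng.uniform(1.5, 5.0, 80)                              # thousands of pounds
mpg = 100.0 / (weight + rng.normal(0, 0.15, size=weight.size))  # curved, negatively associated

def r_squared(a, b):
    return np.corrcoef(a, b)[0, 1] ** 2

print("mpg vs. weight    R^2 =", round(r_squared(weight, mpg), 3))
print("-1/mpg vs. weight R^2 =", round(r_squared(weight, -1.0 / mpg), 3))
# The re-expressed version is straighter, so a linear model describes it better.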

21 Goals of Re-expression Goal 1: Make the distribution of a variable more symmetric.

22 Goals of Re-expression Goal 2: Make the spread of several groups more alike. We’ll see methods later in the book that can be applied only to groups with a common standard deviation.
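Both goals can be seen on made-up right-skewed data (think incomes or prices): taking logs pulls in the long right tail and brings the spreads of two groups much closer together. The lognormal groups below are purely illustrative.

import numpy as np

rng = np.random.default_rng(7)

group_a = rng.lognormal(mean=10.0, sigma=0.5, size=500)   # right-skewed
group_b = rng.lognormal(mean=11.0, sigma=0.5, size=500)   # right-skewed, much larger spread

def skewness(v):
    v = np.asarray(v, dtype=float)
    return float(np.mean((v - v.mean()) ** 3) / v.std() ** 3)

for name, g in (("A", group_a), ("B", group_b)):
    print(f"group {name}: skewness raw = {skewness(g):.2f}, after log = {skewness(np.log(g)):.2f}")

print("ratio of standard deviations, raw:      ", round(group_b.std() / group_a.std(), 2))
print("ratio of standard deviations, after log:", round(np.log(group_b).std() / np.log(group_a).std(), 2))
# After the log re-expression the groups are nearly symmetric and share a
# common spread, which is what the later methods will require.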

23 Looking back • Make sure the relationship is straight enough to fit a regression model. • Beware of extrapolating. • Treat unusual points honestly. You must not eliminate points simply to “get a good fit”. • Watch out when dealing with data that are summaries. • Re-express your data when necessary.

