CSC323 – Week 3 Regression line
1 CSC323 – Week 3 Regression line Residual analysis and diagnostics for linear regression

2 Regression line – Fitting a line to data
If the scatter plot shows a clear linear pattern, a straight line through the points can describe the overall pattern. Fitting a line means drawing a line that is as close as possible to the points: the "best" straight line is the regression line.
(Scatter plot: birth rate per 1,000 population vs. log G.N.P.)

3 Prediction errors
For a given x, use the regression line to predict the response. The accuracy of the prediction depends on how spread out the observations are around the line.
(Figure: an observed value y, the predicted value on the line, and the prediction error between them.)

4 Simple Example: Productivity level
To see how productivity was related to the level of maintenance, a firm randomly selected 5 of its high-speed machines for an experiment. Each machine was randomly assigned a different level of maintenance X (hours per week) and then had its average number of stoppages Y recorded. These are the data:

Hours X | Average interruptions Y
4 | 1.6
6 | 1.2
8 | 1.1
10 | 0.5
12 | 0.6

Summary statistics: ave(x) = 8, s(x) = 3.16; ave(y) = 1, s(y) = 0.45; correlation coefficient r = –0.94.
(Scatter plot: number of interruptions vs. hours of maintenance.)

5 Least squares regression line
Definition: the regression line of y on x is the line that makes the sum of the squares of the vertical distances (deviations) of the data points from the line as small as possible. It is defined as
ŷ = a + b·x
where
b = r · s.d.(y) / s.d.(x)   (note: the slope b has the same sign as r)
a = ave(y) – b · ave(x)
We use ŷ to distinguish the values predicted by the regression line from the observed values y.

6 Example: cont. The regression line of the number of interruptions on the hours of maintenance per week is calculated as follows. The descriptive statistics for x and y are: ave(x) = 8, s(x) = 3.16; ave(y) = 1, s(y) = 0.45; and r = –0.94.
Slope: b = r · s(y)/s(x) = –0.94 × 0.45/3.16 ≈ –0.135
Intercept: a = ave(y) – b · ave(x) = 1 – (–0.135) × 8 = 2.08
Regression line: ŷ = 2.08 – 0.135 x = 2.08 – 0.135 hours
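As a quick check, the slope and intercept above can be reproduced with a short computation (a sketch in Python, using the algebraically equivalent formula b = Σ(x − x̄)(y − ȳ) / Σ(x − x̄)²):

```python
# Maintenance data from the slides.
x = [4, 6, 8, 10, 12]            # hours of maintenance per week
y = [1.6, 1.2, 1.1, 0.5, 0.6]    # average number of interruptions

n = len(x)
mean_x = sum(x) / n              # 8.0
mean_y = sum(y) / n              # 1.0

# Slope via b = Sxy / Sxx, algebraically equal to r * s(y)/s(x).
sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
sxx = sum((xi - mean_x) ** 2 for xi in x)
b = sxy / sxx                    # -0.135
a = mean_y - b * mean_x          # 2.08

print(f"slope b = {b:.3f}, intercept a = {a:.2f}")
```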

7 Regression line ŷ = 2.08 – 0.135 hours
To draw the line: find two points that satisfy the regression equation and connect them!
Point of averages: (8, 1).
A point on the line: (6, 1.27), found by plugging x = 6 into the regression equation: ŷ = 2.08 – 0.135 × 6 = 1.27.
(Scatter plot: number of interruptions vs. hours of maintenance, with the regression line, a residual, and the point of averages marked; r = –0.94.)

8 Example: CPU usage
A study was conducted to examine what factors affect CPU usage. A set of 38 processes written in a programming language was considered. For each program, data were collected on the CPU usage in seconds (time) and on the number of lines, in thousands, generated by the program execution (line). The scatter plot of CPU usage vs. number of lines shows a clear positive association. We'll fit a regression line to model the association!

9 The regression line
(SAS output: summary statistics N, mean, std dev, sum, minimum, and maximum for Y = time and X = line, together with the Pearson correlation coefficient; the fitted regression line follows from these.)

10 Coefficient of determination
Goodness-of-fit measure: the coefficient of determination R² = (correlation coefficient)² describes how well the regression line explains the response y. It is the fraction of the variation in the values of y that is explained by the regression line of y on x, and it varies between 0 and 1. If R² is close to 1, the regression line provides a good explanation of the data; if it is close to zero, the regression line is not able to capture the variability in the data.
EXAMPLE (cont.): the correlation coefficient is r = –0.94, so R² = (–0.94)² ≈ 0.883: the regression line is able to capture 88.3% of the variability in the data.

11 A prediction error in statistics is called residual
2. Residuals
The vertical distances between the observed points and the regression line can be regarded as the "left-over" variation in the response after fitting the regression line. A residual is the difference between an observed value of the response variable y and the value ŷ predicted by the regression line:
residual e = observed y – predicted ŷ = y – ŷ
A special property: the average of the residuals is always zero.

12 EXAMPLE: Residuals for the regression line ŷ = 2.08 – 0.135 x
for the number of interruptions Y on the hours of maintenance X.

Hours X | Average interr. Y | Predicted ŷ | Residual y – ŷ
4 | 1.6 | 2.08 – 0.135×4 = 1.54 | 1.6 – 1.54 = 0.06
6 | 1.2 | 2.08 – 0.135×6 = 1.27 | 1.2 – 1.27 = –0.07
8 | 1.1 | 2.08 – 0.135×8 = 1.00 | 1.1 – 1.00 = 0.10
10 | 0.5 | 2.08 – 0.135×10 = 0.73 | 0.5 – 0.73 = –0.23
12 | 0.6 | 2.08 – 0.135×12 = 0.46 | 0.6 – 0.46 = 0.14
Average = 0
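The same table can be sketched in Python, which also confirms the special property that the residuals average to zero:

```python
x = [4, 6, 8, 10, 12]            # hours of maintenance
y = [1.6, 1.2, 1.1, 0.5, 0.6]    # average interruptions

predicted = [2.08 - 0.135 * xi for xi in x]           # fitted values from the line
residuals = [yi - pi for yi, pi in zip(y, predicted)]

for xi, yi, pi, ei in zip(x, y, predicted, residuals):
    print(f"x={xi:2d}  y={yi}  predicted={pi:.2f}  residual={ei:+.2f}")

# The residuals always sum to zero (up to floating-point rounding).
print(f"sum of residuals = {sum(residuals):.10f}")
```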

13 3. Accuracy of the predictions
If the cloud of points is football-shaped, the prediction errors are similar along the regression line. One possible measure of the accuracy of the regression predictions is the root mean square error (r.m.s. error). The r.m.s. error is defined as the square root of the average squared residual:
r.m.s. error = √( Σ (y – ŷ)² / n )
This is an estimate of the variation of y about the regression line.

14 Within 1 r.m.s. error of the regression line lie roughly 68% of the points; within 2 r.m.s. errors, roughly 95% of the points.

15 Computing the r.m.s. error

Hours X | Average interr. Y | Predicted ŷ | Residual | Squared residual
4 | 1.6 | 1.54 | 0.06 | 0.0036
6 | 1.2 | 1.27 | –0.07 | 0.0049
8 | 1.1 | 1.00 | 0.10 | 0.0100
10 | 0.5 | 0.73 | –0.23 | 0.0529
12 | 0.6 | 0.46 | 0.14 | 0.0196
Total: 0.0910

The r.m.s. error is √(0.0910/4) ≈ 0.151. If the company schedules 7 hours of maintenance per week, the predicted weekly number of interruptions of the machine will be ŷ = 2.08 – 0.135 × 7 = 1.135 on average. Using the r.m.s. error, the number of interruptions will most likely be between 1.135 – 2 × 0.151 = 0.833 and 1.135 + 2 × 0.151 = 1.437.
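The r.m.s. error and the prediction band above can be sketched as follows (following the slide's computation, the residual sum of squares is divided by 4):

```python
import math

residuals = [0.06, -0.07, 0.10, -0.23, 0.14]   # from the residual table
ss = sum(e ** 2 for e in residuals)            # residual sum of squares, ~0.091
rms = math.sqrt(ss / 4)                        # ~0.151, as on the slide

# Prediction for 7 hours of maintenance, with a rough +/- 2 r.m.s. band.
y_hat = 2.08 - 0.135 * 7                       # 1.135 interruptions on average
low, high = y_hat - 2 * rms, y_hat + 2 * rms   # ~0.833 to ~1.437
print(f"rms = {rms:.3f}, likely range = ({low:.3f}, {high:.3f})")
```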

16 Looking at vertical strips
When all the vertical strips in a scatter plot show a similar amount of spread, the diagram is said to be homoscedastic. A football-shaped cloud of points is homoscedastic!! Consider the data on the birth rate and the GNP index in 97 countries.
(Scatter plot: birth rate per 1,000 population vs. log G.N.P., with the predicted points marked in the corresponding vertical strips.)

17 In a football-shaped scatter diagram, consider the points in a vertical strip. The value ŷ predicted by the regression line can be regarded as the average of their y-values, and their standard deviation is about equal to the r.m.s. error of the regression line.
(Figure: in each strip, ŷ is the average of the y-values in the strip and the s.d. is roughly the r.m.s. error.)

18 Computing the r.m.s. error
In large data sets, the r.m.s. error is approximately equal to
r.m.s. error ≈ √(1 – r²) × s.d.(y)
Consider the example on birth rate & GNP index:
Birth rate Y: average 29.33, st. dev. 13.55
Log G.N.P. X: average 7.51, st. dev. 1.65
r = –0.74
The regression line is ŷ ≈ 74.97 – 6.08 x, since b = r × s.d.(y)/s.d.(x) = –0.74 × 13.55/1.65 ≈ –6.08 and a = 29.33 + 6.08 × 7.51 ≈ 74.97. For x = 8 the predicted birth rate is ŷ ≈ 26.35. How accurate is this prediction?

19 The r.m.s. error is √(1 – 0.74²) × 13.55 = 9.11.
Thus 68% of the countries with log GNP = 8 (equal to about $3,000 per capita) have a birth rate between 26.35 – 9.11 = 17.24 and 26.35 + 9.11 = 35.46. Most likely, the countries with log GNP = 8 have a birth rate between 8.13 and 44.57, since 26.35 – 2 × 9.11 = 8.13 and 26.35 + 2 × 9.11 = 44.57.
(Figure: birth rate vs. log G.N.P.; 95% of the points in the strip for log GNP = 8 lie between birth rates 8.13 and 44.57.)
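A sketch of these calculations in Python, using only the summary statistics from the slides (r = –0.74, averages 29.33 and 7.51, standard deviations 13.55 and 1.65):

```python
import math

r = -0.74
mean_x, sd_x = 7.51, 1.65      # log G.N.P.
mean_y, sd_y = 29.33, 13.55    # birth rate

# Shortcut for large data sets: r.m.s. error ~ sqrt(1 - r^2) * sd(y).
rms = math.sqrt(1 - r ** 2) * sd_y          # ~9.11

# Regression line and prediction for log GNP = 8.
b = r * sd_y / sd_x
a = mean_y - b * mean_x
pred = a + b * 8                            # ~26.35

print(f"rms = {rms:.2f}, predicted birth rate = {pred:.2f}")
print(f"68% band: ({pred - rms:.2f}, {pred + rms:.2f})")
```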

20 Detect problems in the regression analysis: the residual plots
The analysis of the residuals is useful for detecting possible problems and anomalies in the regression. A residual plot is a scatter plot of the regression residuals against the explanatory variable. The points should be randomly scattered inside a band centered around the horizontal line at zero (the mean of the residuals).

21 Residual plots: "good case" and "bad cases"
(Figures: a "good case" residual plot with points randomly scattered around zero, and two "bad cases": a curved pattern, indicating a non-linear relationship, and a fan shape, indicating that the variation of y changes with x.)

22 Anomalies in the regression analysis
If the residual plot displays a curve, the straight line is not a good description of the association between x and y. If the residual plot is fan-shaped, the variation of y is not constant. In the fan-shaped case, predictions of y will be less precise as x increases, since y shows higher variability for higher values of x. Be careful if you use the r.m.s. error!!

23 Example of CPU usage data Residual plot
Do you see any striking pattern?

24 Example: 100-meter dash
At the 1987 World Championships in Rome, Ben Johnson set a new world record in the 100-meter dash. The data: Y = the elapsed time from the start of the race, recorded in 10-meter increments, for Ben Johnson; X = meters. Correlation = 0.999.
(Scatter plot: Johnson's elapsed times vs. meters.)

25 Regression line
The fitted regression line is ŷ = 1.11 + 0.09 × meters. The value of R² is 0.999; therefore 99.9% of the variability in the data is explained by the regression line.
(Scatter plot: elapsed time vs. meters, with the fitted line.)

26 Residual plot
(Residual plot: residuals vs. meters.) Does the graph show any anomaly?

27 Confounding factor
A confounding factor is a variable that has an important effect on the relationship among the variables in a study but is not included in the study.
Example: the mathematics department of a large university must plan the timetable for the following year. Data are collected on the enrollment year, the number x of first-year students, and the number y of students enrolled in elementary math courses. The fitted regression line has R² = 0.694.

28 Residual Analysis Do the residuals have a random pattern?

29 Scatter plot of residuals vs year
The plot of the residuals against the enrollment year suggests that a change took place after 1994, which caused a higher number of students to take math courses (one school changed its curriculum).

30 Outliers and Influential points
An outlier is an observation that lies outside the overall pattern of the other observations.
(Figure: a point with a large residual, marked as an outlier.)

31 Influential point
An observation is influential for the regression line if removing it would change the fitted line considerably. An influential point pulls the regression line towards itself.
(Figure: the regression line, and the different line obtained if the influential point is omitted.)
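The effect can be demonstrated with hypothetical data (a minimal sketch: four points lying exactly on a line, then the same points plus one influential observation far from the cloud):

```python
def fit(xs, ys):
    """Least-squares slope and intercept, as defined on the earlier slides."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return b, my - b * mx

x, y = [1, 2, 3, 4], [1, 2, 3, 4]      # four points exactly on y = x
b1, a1 = fit(x, y)                     # slope 1.0
b2, a2 = fit(x + [10], y + [0])        # add one influential point at (10, 0)
print(f"slope without the point: {b1:.1f}, with the point: {b2:.1f}")
```

Adding the single point at (10, 0) pulls the slope from 1.0 down to –0.2: the fitted line tilts towards the influential observation.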

32 Example: house prices in Albuquerque.
What does the value of the coefficient of determination R² say?
(Scatter plot: annual tax vs. selling price.)

33 Diagnostics plots
(Residual plot, studentized residual plots, and a DFFITS plot, each against price.) Are there any possible outliers and/or influential points?

34 New analysis: omitting the influential points
The coefficient of determination is now R² = 0.8273: the new regression line explains about 82% of the variation in y.
(Scatter plot: annual tax vs. selling price, showing the previous and the new regression line.)

35 Extrapolation
Extrapolation is the use of a regression equation to predict values outside the range of the data used to fit it. This is dangerous, often inappropriate, and may produce unreasonable answers. Example: a linear model relating weight gain to age for young children; applying such a model to adults, or even teenagers, would be absurd. In the example on the selling prices of houses, the regression line should not be used to predict the annual taxes of expensive houses that cost over 500,000 dollars.
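A minimal sketch of the children's-weight example (the coefficients here are made up purely for illustration, not fit to any real data):

```python
# Hypothetical linear model relating weight (kg) to age (years),
# imagined as fit only to children aged 0-10.
def predicted_weight(age_years):
    return 3.5 + 2.5 * age_years   # made-up intercept and slope

print(predicted_weight(5))    # 16.0 kg: plausible inside the observed range
print(predicted_weight(60))   # 153.5 kg: absurd, far outside the fitted range
```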

36 Summary – Warnings
Correlation measures linear association; the regression line should be used only when the association is linear.
Extrapolation: do not use the regression line to predict values outside the observed range; such predictions are not reliable.
Correlation and the regression line are sensitive to influential / extreme points.
Check residual plots to detect anomalies and "hidden" patterns that are not captured by the regression line.

37 Example of regression analysis
Leaning Tower of Pisa.
Response variable: the lean (Y), the distance between where a point at the top of the tower is and where it would be if the tower were straight. The units for the lean are tenths of a millimeter above 2.9 meters.
Explanatory variable: time (X).
Steps of our analysis: plot the data; fit a line; predict the future lean.

38 Regression line
The equation of the regression line is reported in the SAS output below.

39 Residual plot and normal probability plot

40 Predicted values and residuals
(SAS output: for each observation, YEAR, LEAN, predicted value, and residual, together with the prediction for 2002.)

41 PROC REG in SAS
PROC REG;
  MODEL yvar = xvar1;
  PLOT yvar*xvar1 / NOSTAT;                    * scatter plot and regression line;
  PLOT residual.*xvar1 residual.*predicted.;   * residual plots;
  PLOT npp.*residual.;                         * normal probability plot for the residuals;
  PLOT yvar*xvar1 / PRED;                      * scatter plot with upper and lower prediction bounds;
RUN;
The option NOSTAT in the PLOT statement eliminates the equation of the regression line that is displayed in the regression plot. The option LINEPRINTER produces line-printer plots (low-level graphics).

42 SAS data step, PROC GPLOT & PROC REG
data pisa;
  input year lean @@;
  datalines;
87 757
;
proc gplot;
  plot lean*year;
proc reg;
  model lean=year;
  plot lean*year/pred;
  plot residual.*year;
  plot npp.*residual.;
run;

43 The REG Procedure
(SAS output: Dependent Variable lean, in tenths of a millimeter; the analysis-of-variance table with Model, Error, and Corrected Total rows and the model F test with Pr > F < .0001; Root MSE, R-Square, Adj R-Sq, Dependent Mean, and Coeff Var; and the parameter estimates with standard errors and t tests for Intercept and year, with Pr > |t| < .0001 for year.)

