
Slide 1: Outliers and Influential Observations (KNN Ch. 10, pp. 390-406)

Slide 2: Outlying Observations
- At times data sets have observations that are outlying or extreme.
- These outliers usually have a strong effect on the regression analysis.
- We have to identify such observations and then decide whether they need to be eliminated or whether their influence needs to be reduced.
- When dealing with more than one variable, simple plots (boxplots, scatterplots, etc.) may not be useful for identifying outliers, and we have to use the residuals or functions of the residuals.
- We will now look at some of these functions.

Slide 3: Residuals and Semistudentized Residuals
Previously, we examined:
- Residuals
- Semistudentized residuals
We will now introduce a few refinements that are more effective in identifying Y outliers. First we need to recall the hat matrix.
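The residual definitions on this slide were equation images that did not come through in the transcript. As a reconstruction in KNN's standard notation (an assumption about what the slide displayed, not a copy of it):

    e_i = Y_i - \hat{Y}_i ,
    \qquad
    e_i^{*} = \frac{e_i}{\sqrt{MSE}} \quad \text{(semistudentized residual)}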

Slide 4: Leverages
- We previously defined the hat matrix as H = X(X'X)^{-1}X'.
- Using the hat matrix, Ŷ = HY and e = (I - H)Y.
- The diagonal elements of the hat matrix, h_ii, with 0 ≤ h_ii ≤ 1, are called leverages.
- They are used to detect influential X observations.
- Leverage values are also useful for detecting hidden extrapolations when p > 3.
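A minimal sketch (not part of the original slides) of how the hat matrix and leverages could be computed; the toy design matrix is an illustrative assumption:

    import numpy as np

    # Toy design matrix: intercept column plus two predictors (illustrative data only).
    rng = np.random.default_rng(0)
    n, p = 20, 3
    X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])

    # Hat matrix H = X (X'X)^{-1} X'; its diagonal elements are the leverages h_ii.
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    leverages = np.diag(H)

    print(leverages.sum())    # the leverages sum to p (here, 3)
    print(leverages.mean())   # so their mean is p/n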

Slide 5: Measures for Y-Outlier Detection
- An estimator of the standard deviation of the i-th residual is used.
- Dividing each residual by this estimated standard deviation gives the studentized residuals; a reconstruction of both formulas follows.
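The formulas on this slide were images that are missing from the transcript. A reconstruction of the standard forms in KNN's notation (assumed, not copied from the slide):

    s\{e_i\} = \sqrt{MSE\,(1 - h_{ii})},
    \qquad
    r_i = \frac{e_i}{\sqrt{MSE\,(1 - h_{ii})}}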

Slide 6: Measures for Y-Outlier Detection (continued)
Another effective measure for Y-outlier identification is obtained when we delete observation i, fit the regression function to the remaining n - 1 observations, and obtain the predicted value for that observation given its X levels. The difference between the observed value and this prediction is called a deleted residual, and it can also be expressed using the leverage value.
- Deleted residuals
- Studentized deleted residuals
(Reconstructed formulas follow.)
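A reconstruction of the deleted-residual formulas in KNN's standard notation (an assumption about the missing slide equations):

    d_i = Y_i - \hat{Y}_{i(i)} = \frac{e_i}{1 - h_{ii}},
    \qquad
    t_i = \frac{d_i}{s\{d_i\}} = e_i \sqrt{\frac{n - p - 1}{SSE\,(1 - h_{ii}) - e_i^{2}}}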

Slide 7: Detection of Outlying Y Observations
Criterion for outliers:
- To establish that the i-th observation is an outlier, compare |t_i| with the 100(1 - α/2n)th percentile of the t distribution with n - p - 1 degrees of freedom (a Bonferroni-adjusted critical value).
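A minimal sketch (not from the slides) of computing this Bonferroni-adjusted critical value; alpha is an illustrative choice, while n = 141 and p = 4 are consistent with the Minitab output shown later (Total df = 140, three predictors plus an intercept):

    from scipy import stats

    n, p = 141, 4          # sample size and number of regression parameters
    alpha = 0.05           # illustrative significance level

    # Bonferroni-adjusted critical value: t(1 - alpha/(2n); n - p - 1).
    t_crit = stats.t.ppf(1 - alpha / (2 * n), df=n - p - 1)
    print(t_crit)

    # Observation i is flagged as a Y outlier when |t_i| exceeds t_crit.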

Slide 8: Outlying X Observations
- Because the leverages sum to p, their average value is p/n.
Criterion for outliers:
- If h_ii > 2p/n, then observation i is an X outlier.
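Continuing the earlier leverage sketch, a short illustration (with assumed data) of applying the 2p/n rule:

    import numpy as np

    def high_leverage_flags(X: np.ndarray) -> np.ndarray:
        """Boolean mask of X outliers using the h_ii > 2p/n rule."""
        n, p = X.shape
        h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)
        return h > 2 * p / n

    # Illustrative data: intercept plus two predictors, with one extreme row in X space.
    rng = np.random.default_rng(1)
    X = np.column_stack([np.ones(25), rng.normal(size=(25, 2))])
    X[0, 1:] = [8.0, -7.0]
    print(np.where(high_leverage_flags(X))[0])   # expected to flag row 0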

Slide 9: A Simple Example
[Figure: the example data, showing Y and the predictors X1, X2, X3.]

Slide 10: A Simple Example (continued)

Regression Analysis

The regression equation is
Y = 0.236 + 9.09 X1 - 0.203 X2 - 0.330 X3

Predictor      Coef     StDev      T      P
Constant     0.2361    0.2545   0.93  0.355
X1            9.090     1.718   5.29  0.000
X2         -0.20286   0.05894  -3.44  0.001
X3          -0.3303    0.2229  -1.48  0.141

S = 1.802   R-Sq = 95.7%   R-Sq(adj) = 95.6%

Analysis of Variance
Source            DF        SS      MS        F      P
Regression         3    9833.0  3277.7  1009.04  0.000
Residual Error   137     445.0  3.2482
Total            140   10278.1

Diagnostics for two observations:
      Y      Pred.    Resid.  Stud.Res.  Del.Stud.Res.      h_ii
 69.678    61.4729   8.20508    5.57580        6.31840  0.333353
 39.699    45.9067  -6.20772   -3.78041       -3.97989  0.169902

Slide 11: Influence of Outlying X/Y Observations
- Influence on a single fitted value: the influence that case i has on its own fitted value; omitting the case and examining the change is the test.
- If exclusion of a case causes major changes in the fitted regression function, the case is indeed influential.
Criteria for influential observations:
- |DFFITS_i| > 1 (small to medium data sets), or
- |DFFITS_i| > 2*sqrt(p/n) (large data sets).
(A reconstruction of the DFFITS formula follows.)
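The formula referenced by the slide's "Where:" is missing from the transcript; a reconstruction of the standard DFFITS definition in KNN's notation (assumed):

    (\mathrm{DFFITS})_i
      = \frac{\hat{Y}_i - \hat{Y}_{i(i)}}{\sqrt{MSE_{(i)}\, h_{ii}}}
      = t_i \sqrt{\frac{h_{ii}}{1 - h_{ii}}}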

Slide 12: Influence of Outlying X/Y Observations (continued)
- An aggregate measure is also required: one that measures the effect of omitting case i on all n fitted values, not just the i-th fitted value.
- The statistic is Cook's distance, D_i.
Criterion for influential observations:
- Compare D_i with the F distribution with (p, n - p) degrees of freedom. If the percentile that D_i cuts off from the left side of the distribution is about 10 or 20, the observation has little influence; if this percentile is 50 or more, the influence is large.
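A reconstruction of Cook's distance in KNN's standard notation (an assumption about the missing slide formula):

    D_i = \frac{\sum_{j=1}^{n} \left(\hat{Y}_j - \hat{Y}_{j(i)}\right)^{2}}{p\,MSE}
        = \frac{e_i^{2}}{p\,MSE}\,\frac{h_{ii}}{(1 - h_{ii})^{2}}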

Slide 13: Influence of Outliers on Betas
- Another measure is required: one that measures the effect of omitting case i on the OLS estimates of the regression coefficients (betas).
- Here, c_kk is the k-th diagonal element of (X'X)^{-1}.
Criteria for influential observations:
- |DFBETAS_k(i)| > 1 for small data sets, or
- |DFBETAS_k(i)| > 2/sqrt(n) for large data sets.
(A reconstruction of the DFBETAS formula follows.)
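A reconstruction of the DFBETAS statistic in KNN's standard notation (assumed, since the slide's formula image is not in the transcript):

    (\mathrm{DFBETAS})_{k(i)} = \frac{b_k - b_{k(i)}}{\sqrt{MSE_{(i)}\, c_{kk}}},
    \qquad k = 0, 1, \ldots, p - 1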

Slide 14: A Simple Example (continued)

Regression Analysis

The regression equation is
Y = 0.236 + 9.09 X1 - 0.203 X2 - 0.330 X3

Predictor      Coef     StDev      T      P
Constant     0.2361    0.2545   0.93  0.355
X1            9.090     1.718   5.29  0.000
X2         -0.20286   0.05894  -3.44  0.001
X3          -0.3303    0.2229  -1.48  0.141

S = 1.802   R-Sq = 95.7%   R-Sq(adj) = 95.6%

Analysis of Variance
Source            DF        SS      MS        F      P
Regression         3    9833.0  3277.7  1009.04  0.000
Residual Error   137     445.0  3.2482
Total            140   10278.1

Diagnostics for the same two observations, now with DFFITS and Cook's D:
      Y     Resid.  Stud.Res.  Del.Stud.Res.      h_ii    DFFITS     COOKD
 69.678   8.20508    5.57580        6.31840  0.333353   4.46798   3.88654
 39.699  -6.20772   -3.78041       -3.97989  0.169902  -1.80055   0.73128
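A minimal sketch (not from the slides) of how diagnostics of this kind could be reproduced with statsmodels; the simulated data merely stand in for the example's data set, which is not reproduced in the transcript:

    import numpy as np
    import statsmodels.api as sm
    from scipy import stats

    # Illustrative data with n = 141 and three predictors, echoing the example's layout.
    rng = np.random.default_rng(2)
    n = 141
    X = rng.normal(size=(n, 3))                       # stand-ins for X1, X2, X3
    y = (0.2 + 9.0 * X[:, 0] - 0.2 * X[:, 1] - 0.3 * X[:, 2]
         + rng.normal(scale=1.8, size=n))

    results = sm.OLS(y, sm.add_constant(X)).fit()
    infl = results.get_influence()

    h_ii = infl.hat_matrix_diag                  # leverages
    stud = infl.resid_studentized_internal       # studentized residuals
    stud_del = infl.resid_studentized_external   # studentized deleted residuals
    dffits, _ = infl.dffits                      # DFFITS values
    cooks_d, _ = infl.cooks_distance             # Cook's distances
    dfbetas = infl.dfbetas                       # DFBETAS, one column per coefficient

    # Slide 12's rule: the percentile Cook's D cuts off in the F(p, n - p) distribution.
    p = int(results.df_model) + 1
    cooks_percentile = stats.f.cdf(cooks_d, p, n - p)
    print(np.where(cooks_percentile >= 0.5)[0])  # cases with large influence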

