
1 Topic 18: Model Selection and Diagnostics

2 Variable Selection
We want to choose a “best” model that is a subset of the available explanatory variables
Two separate problems:
How many explanatory variables should we use (i.e., the subset size)?
Given the subset size, which variables should we choose?

3 KNNL Example
Page 350, Section 9.2
Y: survival time of patient (liver operation)
X's (explanatory variables):
Blood clotting score
Prognostic index
Enzyme function test
Liver function test

4 KNNL Example cont.
n = 54 patients
Start with the usual plots and descriptive statistics
Time-to-event data is often heavily skewed and is typically log-transformed

5 Data
Tab-delimited; alcmod and alcheavy are dummy variables for alcohol use, and lsurv = ln(surv)
data a1;
  infile 'U:\.www\datasets512\CH09TA01.txt' delimiter='09'x;
  input blood prog enz liver age gender alcmod alcheavy surv lsurv;
run;

6 Data
Obs  blood  prog  enz  liver  age  surv  lsurv
1    6.7    62    81   2.59   50   695   6.544
2    5.1    59    66   1.70   39   403   5.999
3    7.4    57    83   2.16   55   710   6.565
4    6.5    73    41   2.01   48   349   5.854
5    7.8    65    115  4.30   45   2343  7.759
6    5.8    38    72   1.42   .    348   5.852
(gender, alcmod, and alcheavy values were not preserved in the transcript)

7-10 [Descriptive statistics and plots of the data; images not preserved]

11 Log Transform of Y
Recall that the regression model does not require Y to be Normally distributed
In this case, the transform reduces the influence of the long right tail and often stabilizes the variance of the residuals
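The data step above reads lsurv directly from the file; if the file held only surv, the transform could be applied in the data step. A minimal sketch (SAS log() is the natural logarithm):
data a1;
  set a1;
  lsurv = log(surv);  *natural log of survival time;
run;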

12 Scatterplots
proc corr data=a1 plots=matrix;
  var blood prog enz liver;
run;
proc corr data=a1 plots=scatter;
  var blood prog enz liver;
  with lsurv;
run;

13-17 [Scatterplot matrix of the explanatory variables and scatterplots of lsurv versus each X; images not preserved]

18 Correlation Summary
Pearson Correlation Coefficients, N = 54
Prob > |r| under H0: Rho=0
[Correlations of lsurv with blood, prog, enz, and liver; the numeric entries were not preserved apart from two p-values of <.0001]

19 The Two Problems in Variable Selection
To determine an appropriate subset size:
Might use adjusted R2, Cp, MSE, PRESS, AIC, SBC (BIC)
To determine the best model of a fixed size:
Might use R2

20 Adjusted R2
R2 by its construction is guaranteed to be non-decreasing in p: SSE cannot increase when an X is added, while SSTO stays constant
Adjusted R2 uses degrees of freedom to account for p

21 Adjusted R2
Want to find the model that maximizes
Ra2 = 1 - ((n-1)/(n-p))(SSE/SSTO) = 1 - MSE/MSTO
Since MSTO = SSTO/(n-1) remains constant for a given data set (it depends only on Y), adjusted R2 carries the same information as MSE
Thus one could equivalently find the model that minimizes MSE
See KNNL for details
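A sketch of using this criterion directly in the search, assuming the data set a1 read earlier; selection=adjrsq orders the subsets by adjusted R2, and the mse option adds MSE to the listing:
proc reg data=a1;
  model lsurv = blood prog enz liver
        / selection=adjrsq mse best=3;
run;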

22 Cp Criterion
The basic idea is to compare subset models with the full model
A subset model is good if there is no substantial “bias” in the predicted values (relative to the full model)
Looks at the ratio of total mean squared error to the true error variance
See KNNL for details

23 Cp Criterion
Cp = SSEp/MSEfull - (n - 2p)
SSEp is based on a specific choice of p-1 explanatory variables; MSEfull is based on the full set of variables
For the full model, SSEfull/MSEfull = n - p, so Cp = (n-p) - (n-2p) = p

24 Use of Cp
p is the number of regression coefficients, including the intercept
A model is good according to this criterion if Cp ≤ p
Rule: pick the smallest model for which Cp is smaller than p, or pick the model that minimizes Cp, provided the minimum Cp is smaller than p

25 SBC (BIC) and AIC
Criteria based on log(likelihood) plus a penalty for more complexity
AIC: minimize n·ln(SSEp) - n·ln(n) + 2p
SBC: minimize n·ln(SSEp) - n·ln(n) + (ln n)·p

26 Other approaches
PRESS (prediction sum of squares): for each case i,
Delete the case and predict Yi using a model fit to the other n-1 cases
PRESS = Σ(Yi - Ŷi(i))², the SS of observed minus predicted
Want to minimize PRESS (see the sketch below)
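PRESS does not have to be computed by literally refitting n times; proc reg can report it as a model selection statistic. A sketch, assuming data set a1:
proc reg data=a1;
  model lsurv = blood prog enz liver
        / selection=rsquare cp press best=3;
run;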

27 Variable Selection
Additional proc reg model statement options useful in variable selection:
INCLUDE=n forces the first n explanatory variables into all models
BEST=n limits the output to the best n models of each subset size or in total
START=n limits output to models that include at least n explanatory variables

28 Variable Selection
Step-type procedures:
Forward selection (step up)
Backward elimination (step down)
Stepwise (forward selection with a backward glance)
Very popular, but there are now much better search techniques like BEST
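A sketch of the stepwise procedure in proc reg; slentry= and slstay= set the significance levels to enter and stay (the values here are illustrative choices, not defaults from the source):
proc reg data=a1;
  model lsurv = blood prog enz liver
        / selection=stepwise slentry=0.10 slstay=0.15;
run;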

29 Ordering models of the same subset size
Use R2 or SSE
This approach can lead us to consider several models that give approximately the same predicted values
May need to apply knowledge of the subject matter to make a final selection
Less important if prediction is the key goal

30 Proc Reg
proc reg data=a1;
  model lsurv = blood prog enz liver
        / selection=rsquare cp aic sbc b best=3;
run;

31 Selection Results (best 3 models per subset size)
Number in Model   R-Square
1                 0.4276   0.4215   0.2208
2                 0.6633   0.5995   0.5486
3                 0.7573   0.7178   0.6121
4                 0.7592
(The C(p), AIC, and SBC columns were largely not preserved; the best size-3 model has C(p) = 3.3905 and the full model has C(p) = 5.0000)

32 Selection Results
[Parameter estimates (Intercept, blood, prog, enz, liver) for each of the listed models; numeric values not preserved in the transcript]

33 Proc Reg
proc reg data=a1;
  model lsurv = blood prog enz liver
        / selection=cp aic sbc b best=3;
run;

34 Selection Results
Number in Model   C(p)     R-Square
3                 3.3905   0.7573
4                 5.0000   0.7592
(AIC and SBC values, and the rest of a third model's row with R-Square 0.7178, were not preserved in the transcript)
WARNING: selection=cp just lists the models in order of lowest C(p), regardless of whether a model is good or not

35 How to Choose with C(p)
Want small C(p)
Want C(p) near p
The original paper suggested plotting C(p) versus p and considering the smallest model that satisfies these criteria
Can be somewhat subjective when determining “near”

36 Proc Reg
The outest= option creates a data set with the estimates and selection criteria
proc reg data=a1 outest=b1;
  model lsurv = blood prog enz liver
        / selection=rsquare cp aic sbc b;
run; quit;
symbol1 v=circle i=none;
symbol2 v=none i=join;
proc gplot data=b1;
  plot _Cp_*_P_ _P_*_P_ / overlay;
run;

37 [Plot of C(p) versus p with the C(p)=p reference line; the models start to approach the C(p)=p line here]

38 Model Validation
Since the data were used to generate the parameter estimates, you'd expect the model to predict the fitted Y's well
Want to check the model's predictive ability on a separate data set
Various techniques of cross-validation (data splitting, leave-one-out) are possible; see the sketch below
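A minimal data-splitting sketch, assuming data set a1: hold out a random quarter of the cases by setting the response to missing, so proc reg fits on the rest but still produces predictions for the holdout, then summarize the squared prediction errors. The seed 123, the 0.75 split fraction, and the variable names are illustrative assumptions.
data split;
  set a1;
  if ranuni(123) < 0.75 then lsurv_fit = lsurv;  *training case;
  else lsurv_fit = .;                            *holdout: excluded from the fit;
run;
proc reg data=split;
  model lsurv_fit = blood prog enz liver;
  output out=pred p=yhat;                        *predictions for all cases;
run;
data holdout;
  set pred;
  if lsurv_fit = . then sqerr = (lsurv - yhat)**2;  *prediction error on holdout only;
run;
proc means data=holdout mean;
  var sqerr;  *mean squared prediction error;
run;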

39 Regression Diagnostics
Partial regression plots
Studentized deleted residuals
Hat matrix diagonals
DFFITS, Cook's D, DFBETAS
Variance inflation factor
Tolerance

40 KNNL Example
Page 386, Section 10.1
Y is amount of life insurance
X1 is average annual income
X2 is a risk aversion score
n = 18 managers

41 Read in the data set
data a1;
  infile '../data/ch10ta01.txt';
  input income risk insur;
run;


43 Partial regression plots
Also called added-variable plots or adjusted-variable plots
One plot for each Xi

44 Partial regression plots
These plots show the strength of the marginal relationship between Y and Xi in the full model
They can also detect:
Nonlinear relationships
Heterogeneous variances
Outliers

45 Partial regression plots
Consider the plot for X1:
Use the other X's to predict Y
Use the other X's to predict X1
Plot the residuals from the first regression vs the residuals from the second regression

46 The partial option with proc reg and plots=
proc reg data=a1 plots=partialplot;
  model insur = income risk / partial;
run;

47 Output
Analysis of Variance
Source            DF   Sum of Squares   Mean Square   F Value   Pr > F
Model              2   173919           86960         542.33    <.0001
Error             15     2405             160.3
Corrected Total   17   176324
Root MSE 12.66   R-Square 0.9864   Adj R-Sq 0.9845
(Dependent Mean and Coeff Var not preserved in the transcript)

48 Output
Parameter Estimates
Variable    DF   Parameter Estimate   Standard Error   t Value   Pr > |t|   Tolerance
Intercept    1                                         -18.06    <.0001     .
income       1                                          30.80
risk         1                                           3.44    0.0037
(Parameter estimates, standard errors, and the remaining entries were not preserved in the transcript)

49 Output
The partial option on the model statement in proc reg generates graphs in the output window
These are OK for some purposes, but we prefer better-looking plots
To generate these plots, we follow the regression steps outlined earlier and use gplot, or use plots=partialplot

50 Partial regression plots
*partial regression plot for risk;
proc reg data=a1;
  model insur risk = income;
  output out=a2 r=resins resris;
symbol1 v=circle i=sm70;
proc gplot data=a2;
  plot resins*resris;
run;

51 The plot for risk

52 Partial plot for income (code not shown)

53 [The partial regression plot for income]

54 Residual plot (vs risk)
proc reg data=a1;
  model insur = risk income;
  output out=a2 r=resins;
symbol1 v=circle i=sm70;
proc sort data=a2; by risk;
proc gplot data=a2;
  plot resins*risk;
run;

55 Residuals vs Risk

56 Residual plot (vs income)
proc sort data=a2; by income;
proc gplot data=a2;
  plot resins*income;
run;

57 Residuals vs Income


61 Other “Residuals”
There are several versions of residuals
Our usual residuals: ei = Yi - Ŷi
Studentized residuals: ei divided by its standard error, ei/√(MSE(1-hii))
These are distributed approximately t(n-p) (≈ Normal)

62 Other “Residuals”
Studentized deleted residuals:
Delete case i and refit the model
Compute the predicted value for case i using this refitted model
Compute the “studentized residual”
Don't do this literally, but this is the concept

63 Studentized Deleted Residuals
We use the notation (i) to indicate that case i has been deleted from the model-fit computations
di = Yi - Ŷi(i) is the deleted residual
It turns out that di = ei/(1-hii)
Also Var(di) = Var(ei)/(1-hii)², estimated by MSE(i)/(1-hii)
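Because of the identity above, the deleted residuals can be computed from a single fit. A sketch using the output statement's residuals and hat diagonals (the variable names e, hii, and d are arbitrary):
proc reg data=a1;
  model insur = income risk;
  output out=diag r=e h=hii;  *raw residuals and leverages;
run;
data diag;
  set diag;
  d = e / (1 - hii);  *deleted residual di = ei/(1-hii);
run;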

64 Residuals
When we examine the residuals, regardless of version, we are looking for:
Outliers
Non-normal error distributions
Influential observations

65 The r option and studentized residuals
proc reg data=a1;
  model insur = income risk / r;
run;

66 Output
Obs   Student Residual
1     -1.206
2     -0.910
3      2.121
4     -0.363
5     -0.210

67 The influence option and studentized deleted residuals
proc reg data=a1;
  model insur = income risk / influence;
run;

68 Output
Obs   Residual   RStudent
1     -14.7311   -1.2259
2     -10.9321   -0.9048

69 Hat matrix diagonals
hii is a measure of how much Yi contributes to its own prediction: Ŷi = hi1Y1 + hi2Y2 + hi3Y3 + …
hii is sometimes called the leverage of the ith observation
It is a measure of the distance between the X values for the ith case and the means of the X values

70 Hat matrix diagonals
0 ≤ hii ≤ 1 and Σ(hii) = p
A large value of hii suggests that the ith case is distant from the center of all X's
The average value is p/n
Values far above this average point to cases that should be examined carefully
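A sketch that flags cases whose leverage is well above the average p/n, using the common 2p/n cutoff (here p = 3 coefficients and n = 18 for the insurance example; the data set and variable names lev, hii, highlev are assumptions):
proc reg data=a1;
  model insur = income risk;
  output out=lev h=hii;  *hat diagonals;
run;
data lev;
  set lev;
  highlev = (hii > 2*3/18);  *1 if hii exceeds 2p/n;
run;
proc print data=lev;
  where highlev = 1;
  var income risk hii;
run;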

71 Influence option gives hat diagonals
[Listing of Obs and H (hat diagonal) values; not preserved in the transcript]

72 DFFITS
A measure of the influence of case i on Ŷi (a single case)
Thus, it is closely related to hii
It is a standardized version of the difference between Ŷi computed with and without case i
Concern if greater than 1 for small data sets, or greater than 2√(p/n) for large data sets

73 Cook's Distance
A measure of the influence of case i on all of the Ŷ's (all the cases)
It is a standardized version of the sum of squares of the differences between the predicted values computed with and without case i
Compare with the F(p, n-p) distribution
Concern if the distance is above the 50th percentile

74 DFBETAS
A measure of the influence of case i on each of the regression coefficients
It is a standardized version of the difference between the regression coefficient computed with and without case i
Concern if DFBETAS is greater than 1 in small data sets, or greater than 2/√n for large data sets
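Several of these case statistics can be captured in a data set through the output statement; a sketch (the DFBETAS values appear in the printed influence output rather than in the output data set, and the names infl, tres, dff, cd, hii are assumptions):
proc reg data=a1;
  model insur = income risk / influence;
  output out=infl rstudent=tres dffits=dff cookd=cd h=hii;
run;
proc print data=infl;
  var income risk tres dff cd hii;  *per-case influence statistics;
run;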

75 Variance Inflation Factor
The VIF is related to the variance of the estimated regression coefficients: VIFk = 1/(1-R2k), with R2k defined on the next slide
We calculate it for each explanatory variable
One suggested rule is that a value of 10 or more for VIF indicates excessive multicollinearity

76 Tolerance
TOL = 1-R2k, where R2k is the squared multiple correlation obtained when all the other explanatory variables are used to predict Xk
TOL = 1/VIF
Described in a comment on p. 410
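Both statistics can be requested together on the model statement; a minimal sketch:
proc reg data=a1;
  model insur = income risk / vif tol;  *multicollinearity diagnostics;
run;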

77 Output
[Tolerance values for Intercept (.), income, and risk; numeric values not preserved in the transcript]

78 Full diagnostics
proc reg data=a1;
  model insur = income risk / r partial influence tol;
  id income risk;
  plot r.*(income risk);
run;

79 Plot statement inside Reg
Can generate several plots within proc reg
Need to know the symbol names, which are listed in Table 1 of the PLOT statement section of the proc reg documentation
r. represents the usual residuals
rstudent. represents the deleted residuals
p. represents the predicted values

80 Last slide
We went over KNNL Chapters 9 and 10
We used program topic18.sas to generate the output

