
1 Multiple Regression: Selecting the Best Equation

2 Techniques for Selecting the "Best" Regression Equation
The best regression equation is not necessarily the one that explains the most variance in Y (the highest R²); that will always be the equation with all the variables included. The best equation should also be simple and interpretable (i.e. contain a small number of variables). Simplicity (interpretability) and reliability of fit are opposing criteria, and the best equation is a compromise between the two.

3 We will discuss several strategies for selecting the best equation:
1. All Possible Regressions. Uses R², s², and Mallows Cp, where Cp = RSSp / s²_complete - [n - 2(p+1)].
2. "Best Subset" Regression. Uses R², adjusted R² (Ra²), and Mallows Cp.
3. Backward Elimination.
4. Stepwise Regression.

4 An Example
In this example the following four chemical compounds are measured:
X1 = amount of tricalcium aluminate, 3CaO·Al2O3
X2 = amount of tricalcium silicate, 3CaO·SiO2
X3 = amount of tetracalcium alumino ferrite, 4CaO·Al2O3·Fe2O3
X4 = amount of dicalcium silicate, 2CaO·SiO2
Y = heat evolved, in calories per gram of cement.

5 The data are given below:
X1   X2   X3   X4    Y
 7   26    6   60    79
 1   29   15   52    74
11   56    8   20   104
11   31    8   47    88
 7   52    6   33    96
11   55    9   22   109
 3   71   17    6   103
 1   31   22   44    73
 2   54   18   22    93
21   47    4   26   116
 1   40   23   34    84
11   66    9   12   113
10   68    8   12   109

6 I All Possible Regressions
Suppose we have p independent variables X1, X2, ..., Xp. Then there are 2^p subsets of variables (including the empty subset).

7 Variables in Equation      Model
no variables:                Y = β0 + ε
X1:                          Y = β0 + β1X1 + ε
X2:                          Y = β0 + β2X2 + ε
X3:                          Y = β0 + β3X3 + ε
X1, X2:                      Y = β0 + β1X1 + β2X2 + ε
X1, X3:                      Y = β0 + β1X1 + β3X3 + ε
X2, X3:                      Y = β0 + β2X2 + β3X3 + ε
and X1, X2, X3:              Y = β0 + β1X1 + β2X2 + β3X3 + ε

8 Use of R²
1. Assume we carry out 2^p runs, one for each subset of variables. Divide the runs into the following sets:
   Set 0: no variables
   Set 1: one independent variable
   ...
   Set p: p independent variables
2. Order the runs in each set according to R².
3. Examine the leaders in each set, looking for consistent patterns, and take into account the correlations between the independent variables.

9 Example (k = 4): X1, X2, X3, X4
Variables in the leading runs      100 R² (%)
Set 1: X4                          67.5%
Set 2: X1, X2                      97.9%
       X1, X4                      97.2%
Set 3: X1, X2, X4                  98.234%
Set 4: X1, X2, X3, X4              98.237%
Examination of the correlation coefficients reveals a high correlation between X1 and X3 (r13 = -0.824) and between X2 and X4 (r24 = -0.973).
Best equation: Y = β0 + β1X1 + β4X4 + ε
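The all-subsets search behind this table can be reproduced with a short script. The following is only a minimal sketch, assuming Python with numpy and statsmodels (the slides do not prescribe any particular package), using the data from slide 5; the leading subsets in each set should match the table above up to the rounding of Y.

```python
from itertools import combinations

import numpy as np
import statsmodels.api as sm

# Hald cement data from slide 5 (Y = heat evolved, calories per gram).
X = np.array([[7, 26, 6, 60], [1, 29, 15, 52], [11, 56, 8, 20],
              [11, 31, 8, 47], [7, 52, 6, 33], [11, 55, 9, 22],
              [3, 71, 17, 6], [1, 31, 22, 44], [2, 54, 18, 22],
              [21, 47, 4, 26], [1, 40, 23, 34], [11, 66, 9, 12],
              [10, 68, 8, 12]], dtype=float)
y = np.array([79, 74, 104, 88, 96, 109, 103, 73, 93, 116, 84, 113, 109], dtype=float)

# Fit every one of the 2^p subsets and record R^2, grouped by subset size.
results = {}
for size in range(1, X.shape[1] + 1):
    for cols in combinations(range(X.shape[1]), size):
        fit = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
        results.setdefault(size, []).append((cols, fit.rsquared))

# Print the leader of each set, as on this slide.
for size, runs in results.items():
    cols, r2 = max(runs, key=lambda r: r[1])
    names = ", ".join(f"X{i + 1}" for i in cols)
    print(f"Set {size}: best subset {names}  R^2 = {100 * r2:.1f}%")
```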

10 Use of R² (plot of R² against p): the number of variables required, p, coincides with where R² begins to level out.

11 Use of the Residual Mean Square (RMS) (s²)
When all of the variables having a non-zero effect have been included in the model, the residual mean square is an estimate of s². If "significant" variables have been left out, the RMS will be biased upward.

12 No. of variables p    RMS s²(p)                              Average s²(p)
1                        115.06, 82.39, 176.31, 80.35           113.53
2                        5.79*, 122.71, 7.48**, 86.59, 17.57    47.00
3                        5.35, 5.33, 5.65, 8.20                 6.13
4                        5.98                                   5.98
* run X1, X2    ** run X1, X4
s² is approximately 6.
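As a companion to this table, here is a minimal sketch (same assumed Python setup; X and y as defined in the earlier sketch, helper name purely illustrative) that computes the residual mean square s²(p) for every subset and averages it within each subset size.

```python
from itertools import combinations

import numpy as np
import statsmodels.api as sm

def rms_by_size(X, y):
    """Residual mean square for every subset, averaged within each subset size."""
    p = X.shape[1]
    by_size = {}
    for size in range(1, p + 1):
        for cols in combinations(range(p), size):
            fit = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
            by_size.setdefault(size, []).append(fit.mse_resid)  # RSS / (n - size - 1)
    return {size: (vals, float(np.mean(vals))) for size, vals in by_size.items()}

# Usage (X, y as in the earlier sketch): the averages should level out near s^2 ~ 6.
# for size, (vals, avg) in rms_by_size(X, y).items():
#     print(size, [f"{v:.2f}" for v in vals], f"average = {avg:.2f}")
```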

13 Use of s² (plot of s²(p) against p): the number of variables required, p, coincides with where s² levels out.

14 Use of Mallows Cp
If the equation with p variables is adequate, then both s²_complete and RSSp/(n - p - 1) are estimates of s². If "significant" variables have been left out, RSSp/(n - p - 1) will be biased upward.

15 Then, if the p-variable equation is adequate, E[Cp] ≈ p + 1. Thus, if we plot Cp against p for each run and look for Cp close to p + 1, we will be able to identify models giving a reasonable fit.

16 Run                Cp                              p + 1
no variables          443.2                           1
1, 2, 3, 4            202.5, 142.5, 315.2, 138.7      2
12, 13, 14            2.7, 198.1, 5.5                 3
23, 24, 34            62.4, 138.2, 22.4               3
123, 124, 134, 234    3.0, 3.0, 3.5, 7.5              4
1234                  5.0                             5
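These Cp values follow directly from the formula on slide 3. A minimal sketch in the same assumed Python setup (X and y as in the first sketch; the function name is only illustrative):

```python
from itertools import combinations

import numpy as np
import statsmodels.api as sm

def mallows_cp(X, y):
    """C_p = RSS_p / s^2_complete - [n - 2(p + 1)] for every subset of predictors."""
    n, k = X.shape
    s2_complete = sm.OLS(y, sm.add_constant(X)).fit().mse_resid  # full-model MSE
    rows = []
    for size in range(0, k + 1):
        for cols in combinations(range(k), size):
            design = sm.add_constant(X[:, cols]) if cols else np.ones((n, 1))
            rss = sm.OLS(y, design).fit().ssr
            rows.append((cols, rss / s2_complete - (n - 2 * (size + 1)), size + 1))
    return rows

# Usage (X, y as in the first sketch): look for runs where Cp is close to p + 1.
# for cols, cp, p_plus_1 in mallows_cp(X, y):
#     print(cols or "no variables", f"Cp = {cp:.1f}", f"p + 1 = {p_plus_1}")
```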

17 Use of Cp (plot of Cp against p): the number of variables required, p, coincides with where Cp becomes close to p + 1.

18 II "Best Subset" Regression Similar to all possible regressions. If p, the number of variables, is large then the number of runs, 2 p, performed could be extremely large. In this algorithm the user supplies the value K and the algorithm identifies the best K subsets of X 1, X 2,..., X p for predicting Y.

19 III Backward Elimination
In this procedure the complete regression equation containing all the variables X1, X2, ..., Xp is determined first. Variables are then checked one at a time, and the least significant is dropped from the model at each stage. The procedure terminates when all of the variables remaining in the equation provide a significant contribution to the prediction of the dependent variable Y.

20 The precise algorithm proceeds as follows:
1. Fit a regression equation containing all of the variables.

21 2. A partial F-test is computed for each of the independent variables still in the equation. The partial F statistic is
F = (RSS2 - RSS1) / MSE1
where RSS1 = the residual sum of squares with all variables presently in the equation, RSS2 = the residual sum of squares with one of the variables removed, and MSE1 = the mean square for error with all variables presently in the equation.

22 3. The lowest partial F value is compared with Fα for some pre-specified α. If F_lowest ≤ Fα, remove that variable and return to step 2. If F_lowest > Fα, accept the equation as it stands.
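Steps 1-3 can be written out as a short routine. The sketch below follows the partial-F rule above, in the same assumed Python setup (statsmodels and scipy; X, y as in the first sketch; function name illustrative), with α = 0.10 as in the worked example that follows.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

def backward_elimination(X, y, alpha=0.10):
    """Backward elimination by the partial F test (slides 20-22)."""
    active = list(range(X.shape[1]))           # start with every variable in the equation
    while active:
        full = sm.OLS(y, sm.add_constant(X[:, active])).fit()
        rss1, mse1 = full.ssr, full.mse_resid
        # Partial F = (RSS2 - RSS1) / MSE1 for each variable still in the equation.
        partial_f = {}
        for j in active:
            others = [c for c in active if c != j]
            design = sm.add_constant(X[:, others]) if others else np.ones((len(y), 1))
            partial_f[j] = (sm.OLS(y, design).fit().ssr - rss1) / mse1
        weakest = min(partial_f, key=partial_f.get)
        f_crit = stats.f.ppf(1 - alpha, 1, int(full.df_resid))
        if partial_f[weakest] <= f_crit:
            active.remove(weakest)              # drop the least significant variable
        else:
            break                               # all remaining variables are significant
    return [f"X{j + 1}" for j in active]

# Usage (X, y from the first sketch): should drop X3, then X4, keeping X1 and X2.
# print(backward_elimination(X, y))
```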

23 Example (k = 4) (same example as before): X1, X2, X3, X4
1. X1, X2, X3, X4 in the equation. The lowest partial F = 0.018 (X3) is compared with F0.10(1,8) = 3.46 for α = 0.10. Remove X3.

24 2. X1, X2, X4 in the equation. The lowest partial F = 1.86 (X4) is compared with F0.10(1,9) = 3.36 for α = 0.10. Remove X4.

25 3. X1, X2 in the equation. The partial F values for both X1 and X2 exceed F0.10(1,10) = 3.36, so the equation is accepted as it stands:
Y = 52.58 + 1.47 X1 + 0.66 X2
Note: "F to remove" = partial F.

26 IV Stepwise Regression
This procedure starts with no variables in the model. Variables are then checked one at a time, using the partial correlation coefficient as a measure of importance in predicting the dependent variable Y. At each stage the variable with the highest significant partial correlation coefficient is added to the model. Once this has been done, the partial F statistic is computed for every variable now in the model, to check whether any of the variables previously added can now be deleted.

27 This procedure is continued until no further variables can be added to or deleted from the model. The partial correlation coefficient for a given variable is the correlation between that variable and the response when the independent variables already in the equation are held fixed. It is also the correlation between that variable and the residuals computed from fitting an equation with the present independent variables.
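The procedure just described can be sketched as follows (same assumed Python setup; X, y as in the first sketch; names illustrative). It follows the slides' description: enter the candidate with the highest partial correlation, taken here as the correlation between the candidate and the current residuals, then re-check the partial F of every variable in the model. Real packages usually guard against entry/removal cycling by using a stricter entry level than removal level; this sketch uses a single α and a simple repeated-state check.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

def stepwise(X, y, alpha=0.10):
    """Stepwise regression (slides 26-27): forward entry by partial correlation,
    followed by a backward partial-F check of the variables already entered."""
    n, p = X.shape
    active, seen = [], set()
    while frozenset(active) not in seen:
        seen.add(frozenset(active))             # guard against cycling
        design = sm.add_constant(X[:, active]) if active else np.ones((n, 1))
        current = sm.OLS(y, design).fit()
        candidates = [j for j in range(p) if j not in active]
        if not candidates:
            break
        # Forward step: highest partial correlation = corr(candidate, current residuals).
        best = max(candidates, key=lambda j: abs(np.corrcoef(X[:, j], current.resid)[0, 1]))
        trial = sm.OLS(y, sm.add_constant(X[:, active + [best]])).fit()
        f_enter = (current.ssr - trial.ssr) / trial.mse_resid
        if f_enter <= stats.f.ppf(1 - alpha, 1, int(trial.df_resid)):
            break                               # nothing left enters significantly
        active.append(best)
        # Backward check: remove any variable whose partial F has dropped below F_alpha.
        full = trial
        for j in list(active):
            others = [c for c in active if c != j]
            reduced = sm.OLS(y, sm.add_constant(X[:, others]) if others
                             else np.ones((n, 1))).fit()
            f_j = (reduced.ssr - full.ssr) / full.mse_resid
            if f_j <= stats.f.ppf(1 - alpha, 1, int(full.df_resid)):
                active.remove(j)
                full = sm.OLS(y, sm.add_constant(X[:, active]) if active
                              else np.ones((n, 1))).fit()
    return [f"X{j + 1}" for j in active]

# Usage (X, y from the first sketch): as in slides 28-31, the procedure should
# enter X4, then X1, then X2, drop X4, and stop with X1 and X2 in the model.
```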

28 Example (k = 4) (same example as before): X1, X2, X3, X4
1. With no variables in the equation, the correlation of each independent variable with the dependent variable Y is computed. The highest significant correlation (r = -0.821) is with variable X4, so the decision is made to include X4. Regressing Y on X4 is significant, so we keep X4.

29 2. Compute the partial correlation coefficients of Y with each remaining independent variable, given X4 in the equation. The highest partial correlation is with X1 ([r_Y1.4]² = 0.915), so the decision is made to include X1.

30 Regress Y on X1 and X4: R² = 0.972, F = 176.63.
Check to see if variables in the equation can be eliminated:
For X1 the partial F value = 108.22 (F0.10(1,8) = 3.46); retain X1.
For X4 the partial F value = 154.295 (F0.10(1,8) = 3.46); retain X4.

31 3. Compute the partial correlation coefficients of Y with each remaining independent variable, given X1 and X4 in the equation. The highest partial correlation is with X2 ([r_Y2.14]² = 0.358), so the decision is made to include X2.
Regress Y on X1, X2, X4: R² = 0.982.
Check to see if variables in the equation can be eliminated: the lowest partial F value = 1.863, for X4 (F0.10(1,9) = 3.36). Remove X4, leaving X1 and X2.

32 Examples Using Statistical Packages
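As one concrete illustration (assuming Python with statsmodels, one package among many; the slides do not name a specific one), the equation selected above, Y = β0 + β1X1 + β2X2, can be fitted and summarized in a few lines. The coefficients should come out close to the values on slide 25 (52.58, 1.47, 0.66), up to the rounding of Y in the data table.

```python
import numpy as np
import statsmodels.api as sm

# X1 and X2 columns of the Hald cement data from slide 5.
X = np.array([[7, 26], [1, 29], [11, 56], [11, 31], [7, 52], [11, 55], [3, 71],
              [1, 31], [2, 54], [21, 47], [1, 40], [11, 66], [10, 68]], dtype=float)
y = np.array([79, 74, 104, 88, 96, 109, 103, 73, 93, 116, 84, 113, 109], dtype=float)

# Fit the selected equation Y = b0 + b1*X1 + b2*X2 and inspect the full summary:
# coefficients, R^2, the overall F, and the t statistics (t^2 = partial F here).
fit = sm.OLS(y, sm.add_constant(X)).fit()
print(fit.summary())
```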

