
1
**Best subsets regression**

Model selection: Best subsets regression

2
Statement of the problem: A common situation is having a large set of candidate predictor variables. The goal is to choose a small subset from the larger set so that the resulting regression model is simple, yet has good predictive ability.

3
Example: Cement data
Response y: heat evolved, in calories per gram, during hardening of cement
Predictor x1: % of tricalcium aluminate
Predictor x2: % of tricalcium silicate
Predictor x3: % of tetracalcium alumino ferrite
Predictor x4: % of dicalcium silicate

4
Example: Cement data

5
**Two basic methods of selecting predictors**

Stepwise regression: Enter and remove predictors, in a stepwise manner, until there is no justifiable reason to enter or remove any more.
Best subsets regression: Select the subset of predictors that does the best at meeting some well-defined objective criterion.

6
**Why best subsets regression?**

# of predictors (p−1) → # of regression models
1 → 2: ( ), (x1)
2 → 4: ( ), (x1), (x2), (x1, x2)
3 → 8: ( ), (x1), (x2), (x3), (x1, x2), (x1, x3), (x2, x3), (x1, x2, x3)
4 → 16: 1 with none, 4 with one, 6 with two, 4 with three, 1 with four

7
**Why best subsets regression?**

If there are p−1 possible predictors, then there are 2^(p−1) possible regression models containing subsets of those predictors. For example, 10 predictors yields 2^10 = 1024 possible regression models. A best subsets algorithm determines the best subsets of each size, so that the choice of the final model can be made by the researcher.
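The count above is easy to verify by brute-force enumeration. A minimal Python sketch (the function name `all_subsets` is my own, for illustration):

```python
from itertools import combinations

def all_subsets(predictors):
    """Enumerate every subset of the candidate predictors,
    including the empty (intercept-only) model."""
    subsets = []
    for k in range(len(predictors) + 1):
        subsets.extend(combinations(predictors, k))
    return subsets

# Four candidate predictors, as in the cement example:
models = all_subsets(["x1", "x2", "x3", "x4"])
print(len(models))  # 2**4 = 16 candidate models
```

This exhaustive enumeration is exactly why the model count explodes: every added candidate predictor doubles the number of subsets to consider.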

8
**What is used to judge “best”?**

R-squared
Adjusted R-squared
MSE (or S = square root of MSE)
Mallows' Cp

9
R-squared: Use the R-squared values to find the point where adding more predictors is not worthwhile, because it leads to only a very small increase in R-squared.

10
**Adjusted R-squared or MSE**

Adjusted R-squared increases only if MSE decreases, so adjusted R-squared and MSE provide equivalent information. Find a few subsets for which MSE is smallest (or adjusted R-squared is largest) or so close to the smallest (largest) that adding more predictors is not worthwhile.
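The equivalence of the two criteria follows from writing adjusted R-squared in terms of MSE: since SST/(n−1) is fixed for a given response, ranking models by smallest MSE is identical to ranking them by largest adjusted R-squared. A small sketch with illustrative (made-up) numbers:

```python
def adjusted_r_squared(sse, sst, n, p):
    """Adjusted R^2 expressed via MSE:
    adj R^2 = 1 - MSE / (SST / (n - 1)),
    where p is the number of parameters, intercept included.
    SST/(n-1) is constant for a fixed response, so adj R^2
    moves in the opposite direction of MSE."""
    mse = sse / (n - p)
    return 1.0 - mse / (sst / (n - 1))

# Two hypothetical subset models fit to the same response (n = 13):
low_mse_model = adjusted_r_squared(sse=48.0, sst=2716.0, n=13, p=5)   # MSE = 6.0
high_mse_model = adjusted_r_squared(sse=74.0, sst=2716.0, n=13, p=3)  # MSE = 7.4
```

The model with the smaller MSE necessarily has the larger adjusted R-squared, which is the "equivalent information" point made above.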

11
Mallows' Cp criterion: The goal is to minimize the total standardized mean square error of prediction:

$$\Gamma_p = \frac{1}{\sigma^2}\sum_{i=1}^{n} E\!\left[\left(\hat{y}_i - E(y_i)\right)^2\right]$$

which equals:

$$\Gamma_p = \frac{1}{\sigma^2}\left(\sum_{i=1}^{n}\left[E(\hat{y}_i) - E(y_i)\right]^2 + \sum_{i=1}^{n}\operatorname{Var}(\hat{y}_i)\right)$$

which in English is: the sum of the squared biases of the fitted values, plus the sum of their variances, all divided by the true error variance.

12
**Mallows' Cp criterion**

Mallows' Cp statistic estimates $\Gamma_p$:

$$C_p = \frac{SSE_p}{MSE(X_1, \ldots, X_{p-1})} - (n - 2p)$$

where:
SSEp is the error sum of squares for the fitted (subset) regression model with p parameters.
MSE(X1, …, Xp-1) is the MSE of the model containing all p−1 candidate predictors. It is an unbiased estimator of σ2.
p is the number of parameters in the (subset) model, including the intercept.
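The statistic is a one-line computation once the two sums of squares are in hand. A sketch (function name `mallows_cp` is mine), including a sanity check of the fact, noted later in the deck, that the full model always has Cp = p:

```python
def mallows_cp(sse_p, mse_full, n, p):
    """Mallows' Cp = SSE_p / MSE_full - (n - 2p), where p counts the
    parameters (intercept included) in the subset model and MSE_full
    is the MSE of the model with all candidate predictors."""
    return sse_p / mse_full - (n - 2 * p)

# For the full model itself, SSE_full / MSE_full = n - p_full exactly,
# so Cp = (n - p_full) - (n - 2 * p_full) = p_full, always.
n, p_full, sse_full = 13, 5, 48.0
mse_full = sse_full / (n - p_full)
cp_full = mallows_cp(sse_full, mse_full, n, p_full)
```

The numbers here (n = 13, SSE = 48) are illustrative placeholders, not values from the cement output.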

13
**Facts about Mallows' Cp**

Subset models with small Cp values have a small total standardized MSE of prediction. When the Cp value is:
near p, the bias is small (next to none);
much greater than p, the bias is substantial;
below p, attribute it to sampling error; interpret as no bias.
For the largest model, with all possible predictors, Cp = p (always).

14
**Using the Cp criterion**

So, identify subsets of predictors for which:
the Cp value is smallest, and
the Cp value is near p (if possible).
In general, though, don't always choose the largest model just because it yields Cp = p.
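The whole selection procedure can be sketched end to end: fit every subset by least squares, compute Cp for each, and pick the smallest. This is a brute-force illustration on synthetic data (not the cement data; numpy assumed available), whereas real best subsets software prunes the search:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n = 50
X = rng.normal(size=(n, 4))          # four candidate predictors
# True model uses only x0 and x1; x2 and x3 are pure noise predictors.
y = 3.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=n)

def sse(cols):
    """Error sum of squares for the model: intercept + the given columns."""
    Xd = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    return float(resid @ resid)

full = (0, 1, 2, 3)
p_full = len(full) + 1
mse_full = sse(full) / (n - p_full)  # unbiased estimate of sigma^2

# Cp for every subset of every size
results = []
for k in range(len(full) + 1):
    for cols in combinations(full, k):
        p = len(cols) + 1            # parameters incl. intercept
        cp = sse(cols) / mse_full - (n - 2 * p)
        results.append((cp, cols))

best_cp, best_cols = min(results)
print(best_cols)                     # the truly active predictors survive
```

Subsets that omit a truly active predictor get an enormous Cp (large bias term), while the full model lands exactly at Cp = p, matching the facts above.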

15
**Best Subsets Regression: y versus x1, x2, x3, x4**

Response is y. [Minitab best subsets output table not recoverable: for each number of predictors (Vars), the table reports R-Sq, R-Sq(adj), C-p, and S, with X marks indicating which of x1, x2, x3, x4 are in each best subset.]

16
**Stepwise Regression: y versus x1, x2, x3, x4**

[Minitab stepwise output not recoverable: Alpha-to-Enter value not shown; Alpha-to-Remove: 0.15. Response is y on 4 predictors, with N = 13. Each step reports the Constant and, per entered predictor, its coefficient, T-Value, and P-Value, plus S, R-Sq, R-Sq(adj), and C-p.]

17
Example: Modeling PIQ

18
**Best Subsets Regression: PIQ versus MRI, Height, Weight**

Response is PIQ. [Minitab best subsets output table not recoverable: for each number of predictors (Vars), the table reports R-Sq, R-Sq(adj), C-p, and S, with X marks indicating which of MRI, Height, Weight are in each best subset.]

19
**Stepwise Regression: PIQ versus MRI, Height, Weight**

[Minitab stepwise output not recoverable: Alpha-to-Enter value not shown; Alpha-to-Remove: 0.15. Response is PIQ on 3 predictors, with N = 38. The steps entered MRI and then Height, each with coefficient, T-Value, and P-Value, plus S, R-Sq, R-Sq(adj), and C-p per step.]

20
Example: Modeling BP

21
**Best Subsets Regression: BP versus Age, Weight, ...**

Response is BP. [Minitab best subsets output table not recoverable: for each number of predictors (Vars), the table reports R-Sq, R-Sq(adj), C-p, and S, with X marks indicating which of Age, Weight, BSA, Duration, Pulse, Stress are in each best subset.]

22
**Stepwise Regression: BP versus Age, Weight, BSA, Duration, Pulse, Stress**

[Minitab stepwise output not recoverable: Alpha-to-Enter value not shown; Alpha-to-Remove: 0.15. Response is BP on 6 predictors, with N = 20. The steps entered Weight, Age, and then BSA, each with coefficient, T-Value, and P-Value, plus S, R-Sq, R-Sq(adj), and C-p per step.]

23
**Best subsets regression**

Stat >> Regression >> Best subsets …
Specify the response and all candidate predictors.
If desired, specify predictors that must be included in every model. (Researcher's knowledge!)
Select OK. Results appear in the session window.
