Presentation on theme: "Dealing with data All variables ok? / getting acquainted Base model Final model(s) Assumption checking on final model(s) Conclusion(s) / Inference Better."— Presentation transcript:

1 Dealing with data
All variables ok? / getting acquainted
Base model
Final model(s)
Assumption checking on final model(s)
Conclusion(s) / Inference
Better models:
Better variables (interaction, transformations)
Assumption checking
Outliers and influential cases
Creating subsets of the data

2 Finding help

3 Stata manuals You have all of these as PDFs! Check the folder /Stata12/docs

4 ASSUMPTION CHECKING AND OTHER NUISANCES In regression analysis with Stata In logistic regression analysis with Stata NOTE: THIS WILL BE EASIER IN Stata THAN IT WAS IN SPSS

5 Assumption checking in “normal” multiple regression with Stata

6 Assumptions in regression analysis
No multi-collinearity
All relevant predictor variables included
Homoscedasticity: all residuals are from a distribution with the same variance
Linearity: the “true” model should be linear
Independent errors: having information about the value of a residual should not give you information about the value of other residuals
Errors are distributed normally

7 FIRST THE ONE THAT LEADS TO NOTHING NEW IN STATA (NOTE: SLIDE TAKEN LITERALLY FROM MMBR)
Independent errors: having information about the value of a residual should not give you information about the value of other residuals.
Detect: ask yourself whether it is likely that knowledge about one residual would tell you something about the value of another residual. Typical cases:
-repeated measures
-clustered observations (people within firms / pupils within schools)
Consequences: as for heteroscedasticity. Usually, your confidence intervals are estimated too small (think about why that is!).
Cure: use multi-level analyses  part 2 of this course

8 The rest, in Stata. Example: the Stata “auto.dta” data set
sysuse auto
corr (correlation matrix)
vif (variance inflation factors)
ovtest (omitted-variable test)
hettest (heteroskedasticity test)
predict e, resid
swilk (test for normality)
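As a sketch, the commands on this slide can be strung together in one do-file; the model (price regressed on mpg, weight, and foreign) is chosen purely for illustration:

```stata
* Sketch: the checks above on the auto data; the model is illustrative only
sysuse auto, clear
regress price mpg weight foreign
corr mpg weight foreign    // correlation matrix of the predictors
vif                        // variance inflation factors
ovtest                     // Ramsey RESET / omitted-variable test
hettest                    // Breusch-Pagan test for heteroskedasticity
predict e, resid           // save the residuals
swilk e                    // Shapiro-Wilk test for normality of the residuals
```

Note that vif, ovtest, and hettest are postestimation commands: they only work after a regress command has been run.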

9 Finding the commands “help regress”  “regress postestimation” and you will find most of them (and more) there

10 10 Multi-collinearity A strong correlation between two or more of your predictor variables You don’t want it, because: 1.It is more difficult to get higher R’s 2.The importance of predictors can be difficult to establish (b-hats tend to go to zero) 3.The estimates for b-hats are unstable under slightly different regression attempts (“bouncing beta’s”) Detect: 1.Look at correlation matrix of predictor variables 2.calculate VIF-factors while running regression Cure: Delete variables so that multi-collinearity disappears, for instance by combining them into a single variable

11 Stata: calculating the correlation matrix (“corr” or “pwcorr”) and VIF statistics (“vif”)
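A minimal sketch of these two checks; the variables are chosen only for illustration:

```stata
* Sketch: correlation matrix and VIFs; variables chosen for illustration
sysuse auto, clear
pwcorr mpg weight length, sig   // pairwise correlations with p-values
regress price mpg weight length
vif                             // common rule of thumb: VIF > 10 signals trouble
```

pwcorr uses all available pairs of observations, while corr restricts itself to complete cases; with missing data the two can differ.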

12 Misspecification tests (covers: “all relevant predictor variables included” [Ramsey]). Also run “ovtest, rhs” here. Both tests should be non-significant. Note that there are two ways to interpret “all relevant predictor variables included”.
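A sketch of running both Ramsey tests after a regression; the model is illustrative only:

```stata
* Sketch: both Ramsey RESET variants; model chosen for illustration
sysuse auto, clear
regress price mpg weight
ovtest        // adds powers of the fitted values
ovtest, rhs   // adds powers of the right-hand-side variables
```

A significant result suggests the model is misspecified (for instance, a relevant variable or non-linear term is missing).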

13 Homoscedasticity: all residuals are from a distribution with the same variance
Consequences: Heteroscedasticity does not necessarily lead to biases in your estimated coefficients (b-hat), but it does lead to biases in the estimate of the width of the confidence interval, and the estimation procedure itself is not efficient.

14 Testing for heteroscedasticity in Stata
Your residuals should have the same variance for all values of Y  hettest
Your residuals should have the same variance for all values of X  hettest, rhs
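The two variants side by side, as a sketch; the model is illustrative only:

```stata
* Sketch: the two hettest variants; model chosen for illustration
sysuse auto, clear
regress price mpg weight
hettest        // is the residual variance constant across fitted values of Y?
hettest, rhs   // is the residual variance constant across the predictors (X)?
```

In both cases a significant test statistic means heteroskedasticity is present.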

15 Errors distributed normally
Errors should be distributed normally (just the errors, not the variables themselves!)
Detect: look at the residual plots, test for normality, or save the residuals and test directly
Consequences: rule of thumb: if n>600, no problem. Otherwise confidence intervals are wrong.
Cure: try to fit a better model (or use more difficult ways of modeling instead - ask an expert).

16 Errors distributed normally
First calculate the residuals (after regress): predict e, resid
Then test for normality: swilk e

17 Assumption checking in multi-level multiple regression with Stata

18 In multi-level models: Test everything you would test for in multiple regression. Poor man’s test: do this using multiple regression! (e.g. “hettest”) Add: xttest0 (see last week). Add (extra): test visually whether the normality assumption holds.
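A sketch of the xttest0 step; the nlswork example data and the model are chosen only for illustration:

```stata
* Sketch: Breusch-Pagan LM test after a random-effects model (illustrative data)
webuse nlswork, clear
xtset idcode year
xtreg ln_wage age ttl_exp, re
xttest0    // H0: no panel-level variance, i.e. random effects not needed
```

A significant xttest0 result says the panel-level variance component matters, so plain OLS on the pooled data would be inappropriate.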

19 Note: extra material (= not on the exam, bonus points if you know how to use it)
tab school, gen(sch_)
reg y sch_2-sch_28
gen coefs = .
forvalues X = 2/28 {
    replace coefs = _b[sch_`X'] if _n == `X'
}
swilk coefs

20 Assumption checking in logistic regression with Stata Note: based on http://www.ats.ucla.edu/stat/stata/webbooks/logistic/chapter3/statalog3.htm

21 Assumptions in logistic regression
Y is 0/1
Independence of errors (as in multiple regression)
No cases where you have complete separation (Stata will try to remove these cases automatically)
Linearity in the logit (comparable to “the true model should be linear” in multiple regression): “specification error”
No multi-collinearity (as in multiple regression)

22 What will happen if you try logit y x1 x2 in this case?

23 This! Because all cases with x1==1 lead to y==1, the weight of x1 should be +infinity. Stata therefore rightly disregards these cases. Do realize that, even though you do not see them in the regression, these are extremely important cases!
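The separation situation on this slide can be reproduced with a small hypothetical data set (all variable names and values here are made up for illustration):

```stata
* Sketch: complete separation with hypothetical data
clear
set obs 40
set seed 1
gen x1 = _n > 20
gen x2 = runiform() > .5
gen y  = cond(x1 == 1, 1, x2)   // every case with x1==1 has y==1
logit y x1 x2                   // Stata reports that x1 predicts success
                                // perfectly and drops x1 plus those cases
```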

24 (checking for) multi-collinearity In regression, we had “vif”. Here we need to download a user-written command: “collin” (try “findit collin” in Stata)
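Once collin is installed, usage is a one-liner; the variables below are chosen only for illustration:

```stata
* Sketch: collinearity diagnostics with the user-written collin command
* (install it first, e.g. via: findit collin)
sysuse auto, clear
collin mpg weight length   // VIFs, tolerance, condition number for the predictors
```

Unlike vif, collin works directly on a list of variables, so you do not need to run a regression first.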

25 (checking for) specification error The equivalent for “ovtest” is the command “linktest”
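A sketch of linktest after a logit; the model is illustrative only:

```stata
* Sketch: linktest as a specification check after a logit
sysuse auto, clear
logit foreign mpg weight
linktest   // _hat should be significant; _hatsq should NOT be
```

If _hatsq is significant, the linear predictor is not correctly specified (for instance, a transformation or interaction is missing).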

26 (checking for) specification error – part 2

27 Further things to do:
Check for useful transformations of variables, and for interaction effects
Check for outliers / influential cases:
1) using a plot of stdres (against n) and of dbeta (against n)
2) using a plot of the ldfbeta’s (against n)
3) using regress and diag (but don’t tell anyone that I suggested this)
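A sketch of the first two plots after a logit; the model is chosen only for illustration:

```stata
* Sketch: outlier / influence diagnostics after a logit
sysuse auto, clear
logit foreign mpg weight
predict stdres, rstandard   // standardized Pearson residuals
predict db, dbeta           // Pregibon's dbeta influence statistic
gen n = _n
scatter stdres n            // flag cases with |stdres| well above 2
scatter db n                // flag isolated, large dbeta values
ldfbeta                     // as on the slide: one DFBETA variable per coefficient
```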

28 Checking for outliers / influential cases … check the file auto_outliers.do for this …

29 Dealing with data
All variables ok? / getting acquainted
Base model
Final model(s)
Assumption checking on final model(s)
Conclusion(s) / Inference
Better models:
Better variables (interaction, transformations)
Assumption checking
Outliers and influential cases
Creating subsets of the data

30 Example analyses on ideas.dta

31 For next week: improve the logistic regression you had
Annotated output, as if you were writing an exam assignment:
1. Create a do-file with comments in it
2. Run it and add further comments on the outcomes in the log file
3. Submit the do-file and the log file
Use your own assignment and the skills you mastered today. Deadline: coming Wednesday

32 Online also: the taxi tipping data

