VI. Logistic Regression


1 VI. Logistic Regression

2 An event occurs or doesn’t.
A category applies to an observation or doesn’t. A student passes or fails. A patient survives or dies. A candidate wins or loses. A person is poor or not poor. A person is a citizen or not.

3 These are examples of categorical data. They are also examples of binary discrete phenomena. Binary discrete phenomena usually take the form of a dichotomous indicator, or dummy, variable. It's best to code binary discrete phenomena 0/1 so that the mean of the dummy variable equals the proportion of cases with a value of 1, & can be interpreted as a probability: e.g., mean of female=.545 (= the sample's probability of being female).
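For instance, in hsb2.dta (used later in these slides) this can be checked directly; a minimal sketch, assuming hsb2.dta is in the working directory:

. u hsb2, clear
. tab female
. su female
. * the mean that -su- reports for a 0/1 dummy equals the proportion of cases coded 1 (the sample's probability of being female)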

4 Here’s what we’re going to figure out how to interpret:

5 . logit hsci read math female, or nolog
Logit estimates                       Number of obs =
                                      LR chi2(3)    =
                                      Prob > chi2   =
Log likelihood =                      Pseudo R2     =

        hsci | Odds Ratio   Std. Err.      z    P>|z|     [95% Conf. Interval]
        read |
        math |
      female |

6 OLS regression encounters serious problems in dealing with a binary dependent variable:
(1) OLS's explanatory variable coefficients can extend to positive or negative infinity, but binary probabilities & proportions can't exceed 1 or fall below 0. (2) OLS is premised on linearity, but with a binary dependent variable the effects of an explanatory variable are non-linear at the binary variable's lower & upper levels.

(3) OLS is also premised on additivity, but with a binary outcome variable an explanatory variable's effect depends on the relative effects of the other variables: if, say, one explanatory variable pushes the probability of the binary outcome variable near 0 or near 1, then the effects of the other explanatory variables can't have much influence. (4) And because a binary outcome variable has just two values, it violates the OLS assumptions of normality &, more important, constant variance (homoskedasticity) of residuals.

8 What to do? A logit transformation is an advantageous way of representing the S-shaped curve of a binary outcome variable’s y/x distribution. A probit transformation, which has somewhat thinner tails, is also commonly used. A complementary log-log transformation or a scobit transformation is often used if the binary outcome variable is highly skewed. There are other such transformations as well (see, e.g., Long & Freese; & the Stata manuals).

9 A logit transformation changes probabilities into logged odds, which eliminate the binary proportion (or probability) ceiling of 1. Odds express the likelihood of occurrence relative to the likelihood of non-occurrence (i.e. odds are the ratio of the proportions of the two possible outcomes [see Moore & McCabe, chap. 15, pages 40-42]): odds = event's probability/(1 - event's probability); probability = event's odds/(1 + event's odds)

10 To repeat, a logit transformation changes probabilities into logged odds, which eliminate the binary proportion (or probability) ceiling of 1. Logged odds are also known as logits (i.e. the natural log of the odds). Why not stick with odds rather than logged odds? Because logged odds (i.e. logits) also eliminate the binary proportion (or probability) floor of 0.
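To see this concretely, compare odds & logits for probabilities near 0 & near 1 (a small illustrative sketch; the probabilities are arbitrary):

. * odds are still bounded below by 0...
. display .01/(1 - .01)
. display .99/(1 - .99)
. * ...but the corresponding logits (about -4.6 & +4.6) are unbounded & symmetric around 0
. display ln(.01/(1 - .01))
. display ln(.99/(1 - .99))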

11 So, on balance, the logit transformation eliminates the outcome variable's proportion (or probability) ceiling of 1 & floor of 0. Thus an explanatory variable's coefficient can extend to positive or negative infinity.

12 Note: larger sample size is even more important for logistic regression than for OLS regression.

13 So that we get a feel for what’s going on, & can do the exercises in Moore & McCabe, chap. 15, let’s compute some odds & logged odds (i.e. logits).

14 Display the odds of being in honors math:
. tab hmath

  hmath (>=60) |      Freq.     Percent        Cum.
---------------+------------------------------------
         Total |

. display .245/(1 - .245) = .3245

Interpretation? The event occurs .3245 times per each time it does not occur. That is, there are about 32 occurrences per 100 non-occurrences.

15 Display the logged odds of being in honors math:
. display ln(.3245) = -1.126

Display the odds of not being in honors math:

. di .755/(1 - .755) = 3.083

Display the logged odds of not being in honors math:

. di ln(3.083) = 1.126

16 Although we'll never have to do the following—thankfully, software will do it for us automatically—let's make a variable that combines the odds of being in honors math versus not being in honors math.

. gen ohmath = .3245 if hmath==1
. replace ohmath = 3.083 if hmath==0
. tab ohmath

17 Let’s transform ohmath into another variable that represents logged odds:
. gen lohmath = ln(ohmath)
. su ohmath lohmath

And how could we display lohmath not as logged odds but rather as odds (i.e. as ohmath)?

. display exp(lohmath)

18 From the standpoint of regression analysis, why should we indeed have transformed the variable into logged odds? That is, what are the advantages of doing so?

19 Overall, a logit transformation of a binary outcome variable linearizes the non-linear relationship of X with the probability of Y. It does so as the logit transformation: eliminates the binary variable's probability ceiling of 1 & floor of 0; & is symmetric around the mid-range probability of 0.5, so that probabilities below this value have negative logits (i.e. logged odds) while those above this value have positive logits (i.e. logged odds).

20 Let’s summarize: the effect of X on the probability of binary Y is non-linear; but the effect of X on logit-transformed binary Y is linear. We call the latter either logit or logistic regression: they’re the same thing. Let’s fit a model using hsb2.dta:

21 . logit hsci read math female, nolog
Logit estimates                       Number of obs =
                                      LR chi2(3)    =
                                      Prob > chi2   =
Log likelihood =                      Pseudo R2     =

        hsci |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
        read |
        math |
      female |
       _cons |

22 How, though, do we interpret logit coefficients in meaningful terms?
That is, how do we interpret 'logged odds'? The gain in parsimony via the logit transformation is offset by a loss in interpretability: the metric of logged odds (i.e. logits) is not intrinsically meaningful to us.

23 An alternative, more comprehensible approach is to express regression coefficients not as logged odds but rather as odds: odds = event's probability/(1 - event's probability)

24 The odds are obtained by taking the exponent, or anti-log, of the logged odds (i.e. the logit coefficient):

. odds of honors math: di exp(-1.126) = .325
. odds of not honors math: di exp(1.126) = 3.083

Review: interpretation?
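The same exponentiation applies to any fitted logit coefficient. A minimal sketch with the hsb2 model used in these slides (Stata stores each coefficient in _b[]):

. quietly logit hsci read math female
. * the math coefficient in logged odds...
. display _b[math]
. * ...& the same coefficient expressed as an odds ratio
. display exp(_b[math])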

25 What are the odds of being in honors math? odds: a ratio of probabilities = event's probability/(1 - event's probability)

26 What are the odds of being in honors math versus the odds of not being in honors math? This is called an odds ratio. odds ratio: a ratio of odds. odds ratio = .325/3.083 = .105 Interpretation? The odds of being in honors math are .105 times those of not being in honors math.

27 Via logit or logistic regression, Stata gives us slope coefficients as odds, instead of logged odds, in any of the following ways:

(1) . logit hsci read math female, or nolog
(2) . logistic hsci read math female, nolog
(3) . quietly logit hsci read math female
    . listcoef, factor help

28 But expressing slope coefficient as odds, instead of logged odds, causes a complication: the equation determining the odds is not additive but rather multiplicative.
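To see the multiplicative form, here's a sketch; the covariate values (read=50, math=50, female=1) are chosen only for illustration:

. quietly logit hsci read math female
. * predicted odds computed additively in the logit metric...
. display exp(_b[_cons] + 50*_b[read] + 50*_b[math] + 1*_b[female])
. * ...equal the product of the exponentiated terms: each odds ratio multiplies the baseline odds
. display exp(_b[_cons]) * exp(_b[read])^50 * exp(_b[math])^50 * exp(_b[female])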

29 . logit hsci read math female, or nolog
Logit estimates                       Number of obs =
                                      LR chi2(3)    =
                                      Prob > chi2   =
Log likelihood =                      Pseudo R2     =

        hsci | Odds Ratio   Std. Err.      z    P>|z|     [95% Conf. Interval]
        read |
        math |
      female |

30 For every 1-unit increase in reading score, the odds of being in honors science increase by a multiple of 1.07 on average, holding the other variables constant. For every 1-unit increase in math score, the odds of being in honors science increase by a factor of 1.10 on average, holding the other variables constant. The odds of being in honors science are lower for females than males by a multiple of .338 on average, holding the other variables constant.

31 . logit hsci read math female, or nolog
Logit estimates                       Number of obs =
                                      LR chi2(3)    =
                                      Prob > chi2   =
Log likelihood =                      Pseudo R2     =

        hsci | Odds Ratio   Std. Err.      z    P>|z|     [95% Conf. Interval]
        read |
        math |
      female |

32 The Metrics

logit = 0 is the equivalent of odds = 1 & the equivalent of probability = .5
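A quick check of these equivalences at the Stata prompt:

. * odds when logit = 0
. display exp(0)
. * probability when logit = 0
. display exp(0)/(1 + exp(0))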

33 Regarding odds ratios: an odds ratio of .5, which indicates a negative effect, is of the same magnitude as a positive-effect odds ratio of 2.0. Here's a helpful way to disentangle this complication after estimating a logit (i.e. logistic) model:

. listcoef, reverse help

This reverses the outcome variable.

34 . listcoef, reverse help

[So that the coefficients refer to the odds of not being in honors science.]

logit (N=200): Factor Change in Odds

Odds of: 0 vs 1

    hsci |       b        z    P>|z|     e^b   e^bStdX   SDofX
    read |
    math |
  female |

b = raw coefficient
z = z-score for test of b=0
P>|z| = p-value for z-test
e^b = exp(b) = factor change in odds for unit increase in X
e^bStdX = exp(b*SD of X) = change in odds for SD increase in X
SDofX = standard deviation of X

35 listcoef, reverse related the explanatory variables to the odds of not being in hsci.

36 An easier interpretation than odds ratios is percentage change in odds:

. quietly logit hsci read math female
. listcoef, percent help

    hsci |       b        z    P>|z|       %    %StdX   SDofX
    read |
    math |
  female |

P>|z| = p-value for z-test
% = percent change in odds for unit increase in X
%StdX = percent change in odds for SD increase in X
SDofX = standard deviation of X

37 For every 1-unit increase in reading score, the odds of being in honors science increase by 7.3% on average, holding the other variables constant. For every 1-unit increase in math score, the odds of being in honors science increase by 10.3% on average, holding the other variables constant. The odds of being in honors science are lower by 66.2% on average for females than males, holding the other variables constant.
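These percentages are simply 100*(exp(b) - 1), so they can be recovered from the logit coefficients directly; a minimal sketch:

. quietly logit hsci read math female
. display 100*(exp(_b[read]) - 1)
. display 100*(exp(_b[math]) - 1)
. display 100*(exp(_b[female]) - 1)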

38 The percentage interpretation, then, eliminates the multiplicative aspect of the model.

39 Alternatives to know about are ‘relative risk’ & ‘relative risk ratio’.
See, e.g., Utts, chap. 12. And see the downloadable command 'relrisk'.

. logit hsci read math female, nolog
. relrisk

40 Pseudo-R2: this is not equivalent to OLS R2.
Many specialists (e.g., Pampel) recommend not reporting pseudo-R2. Its metric is different from OLS R2—typically it's much lower than OLS R2—but readers (including many academic & policy specialists) are not aware of this difference.

41 In the logistic equation we could have specified the options robust &/or cluster (as in OLS regression). Specifying ‘cluster’ automatically invokes robust standard errors.
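For example (the cluster variable, schoolid, is hypothetical; substitute whatever grouping identifier your data contain):

. logit hsci read math female, or nolog robust
. logit hsci read math female, or nolog cluster(schoolid)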

42 Before we proceed, keep in mind the following: odds = 1 is the equivalent of logit = 0 & probability = 0.5

43 Besides logits (i.e. logged odds) & odds, we can also interpret the relationships from the perspective of probabilities. Recall, though, that the relationship between X & the probability of binary Y is non-linear. Thus the effect of X on binary Y has to be identified at particular X-values or at particular sets of values for the X's; & we compare the effects across particular X-levels.

44 . logit hsci read math female, or nolog
. prvalue, r(mean) delta brief

logit: Predictions for hsci

Pr(y=1|x):        95% ci: (0.0998,0.2259)
Pr(y=0|x):        95% ci: (0.7741,0.9002)

prvalue, like the rest of the pr-commands, can only be used after estimating a regression model. It summarizes the sample's 1/0 probabilities for the specified binary y-variable, holding all of the data set's (not just the model's) other variables at their means (though medians can be specified alternatively).

45 . prvalue, x(female=0) r(mean) delta brief
logit: Predictions for hsci

Pr(y=1|x):        95% ci: (0.1568,0.3622)
Pr(y=0|x):        95% ci: (0.6378,0.8432)

. prvalue, x(female=1) r(mean) delta brief

Pr(y=1|x):        95% ci: (0.0526,0.1784)
Pr(y=0|x):        95% ci: (0.8216,0.9474)

46 Comparing males across particular scores:
. prvalue, x(read=40 math=40 female=0) delta b save
. prvalue, x(read=60 math=60 female=0) delta b dif

How else could we compare the estimated probabilities of being in honors science? By comparing female=0 versus female=1 at particular reading & math scores.

Note: 'b' = 'brief' (i.e. display only the model's most relevant values)

47 Comparing males & females at particular scores:
. prvalue, x(read=40 math=40 female=0) delta b save
. prvalue, x(read=40 math=40 female=1) delta b dif

Or try prtab to see, e.g., how female versus male estimated probabilities vary across the range of math scores, holding reading scores constant at 40 (though it does not provide a confidence interval):

. prtab math female, x(read=40) brief

48 Or try prchange to see, e.g., how female estimated probabilities vary as female math scores increase from 40 to 60 (which, however, does not provide a confidence interval):

. prchange math, x(female=1 math=40) fromto delta(20) uncentered brief

49 A problem with prtab & prchange is that they don't give confidence intervals, which prvalue delta does provide. Here's a different way of making predictions—for logged odds, odds, or probabilities—that gives confidence intervals:

. adjust math=40, by(female) ci
. adjust math=40, by(female) exp ci
. adjust math=40, by(female) pr ci

Note: the first variant can be used to obtain predictions with OLS regression as well.

50 Remember: we can examine the relationship of X with binary Y via: logits (i.e. logged odds); odds; or probabilities. What are the differences in functional forms & interpretations?

51 While we’re at it, why not give probit a look?
. probit hsci read math, nolog
. logit hsci read math, nolog

Although the logit coefficients exceed the probit coefficients by a factor of about 1.8, the functional y/x relationship is virtually the same: it is very slightly different at the low & high ends because the probit transformation has thinner tails. But there are no odds or probabilities interpretations for probit models, which thus are harder to express in meaningful terms.

52 For all forms of categorical-dependent variable regression, the same options exist for explanatory variables—interactions, power transformations, & categorizing—as for OLS regression.

53 Estimation & Model Fit

Because a dichotomous dependent variable violates the OLS assumptions of normality & constant variance of residuals, models are estimated instead by maximum likelihood estimation. Maximum likelihood estimation finds estimates of model parameters that are most likely to give rise to the pattern of observations in the sample data.

54 Maximum likelihood estimation yields not an F-statistic & F-test for model fit but rather a log likelihood value & likelihood ratio test. The likelihood ratio test compares a model with the constant only (the baseline model) to the fitted (i.e. full) model: what's the likelihood of giving rise to the sample estimates via the baseline model vs. the fitted (i.e. full) model? It then assesses the reduction in the log likelihood value, given the specified degrees of freedom, & computes a chi-squared test of the reduction.
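The reported LR chi-squared can be reproduced by hand from the two log likelihoods; a minimal sketch with the hsb2 model:

. * constant-only (baseline) model
. quietly logit hsci
. scalar ll_null = e(ll)
. * fitted (full) model
. quietly logit hsci read math female
. * twice the gain in log likelihood = the LR chi2(3) shown in the logit header
. display 2*(e(ll) - ll_null)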

55 This is equivalent to the OLS F-statistic & F-test for model fit.
We use the likelihood ratio test to test nested models (via the command lrtest). And as with OLS we examine the p-values of individual coefficients, although in this case these are computed via the Wald-test.

56 Here we begin by testing the 'full' model (& storing its estimates) & then test 'reduced' models versus the full model:

. logit hsci read write math female, or nolog
. estimates store full
. logit hsci read write, or nolog
. lrtest full

likelihood-ratio test                 LR chi2(2)  =
(Assumption: . nested in full)        Prob > chi2 =

57 But if the models use robust standard errors, pweights, or cluster adjustment, the likelihood ratio test can’t be used. And of course, if the number of observations varies from one model to another, the likelihood ratio test can’t be used either. In such cases (aside from varying number of observations) use the Wald-test, which is what is applied in logistic regression via the test command:

58 . logit hsci read write math female, or nolog
. test math female

chi2(  2)   =
Prob > chi2 =

The likelihood ratio test is considered superior to the Wald-test; the results may differ somewhat.

59 There is a way to test non-nested models or nested models with unequal observations: BIC or AIC tests (see STATA Manual [‘estimates,’ table options]; Pampel; & Long/Freese). See Long/Freese’s downloadable ‘fitstat’ command & their discussions of BIC & AIC.
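For reference, AIC & BIC can also be computed from the stored results; a sketch using one common definition (AIC = -2*lnL + 2*k, BIC = -2*lnL + k*ln(N), with k = number of estimated parameters; fitstat's scaling may differ):

. quietly logit hsci read write math female
. display -2*e(ll) + 2*(e(df_m) + 1)
. display -2*e(ll) + (e(df_m) + 1)*ln(e(N))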

60 . logistic hsci read write math female, nolog
. fitstat, saving(m1)
. est store m1

61 . logistic hsci read write, nolog
. fitstat, using(m1) bic
. est store m2
. est table m1 m2, eform star( ) stats(N chi2 df_m bic)

62 BIC test criteria: smaller BIC indicates better-fitting model.

difference 0-2: weak support for complete model
difference 2-6: positive support for complete model
difference 6-10: strong support for complete model
difference >10: very strong support for complete model

63 AIC criterion: smaller AIC*n indicates better-fitting model.
AIC, BIC, & fitstat can be used to test OLS regression models, too.

64 Beware: Pitfalls in Comparing Logistic Coefficients across Nested Models
Unlike Y in OLS, the error variance for Y in logistic changes with the addition/deletion of variables. Hence, unlike OLS, comparing the values of logistic coefficients is tricky. Check Stata listserv on this topic (e.g., commentaries by Richard Williams, Herb Smith, Roger Newsome).

65 If you need to compare the values of logistic coefficients across nested models, one recommended approach is to use Stata’s ‘suest’ (‘seemingly unrelated estimation’) command. Another approach is to standardize Y across the models (e.g., using ‘listcoef, std’).

66 State-of-the-Art on Using Logistic or Probit Regression:
Glenn Hoetker, "The Use of Logit and Probit Models in Strategic Management Research: Critical Issues." Hoetker's Stata command 'complogit':

. net from
. findit oglm

67 Richard Williams’ Stata ‘oglm’ command:
. findit oglm

68 Data preparation is crucial for logistic regression.
Particular attention must be paid to any contingency table of the outcome variable with a categorical explanatory variable that has a zero cell or a cell with <5 observations, which can cause serious problems of model fitting. Options for a zero cell or a cell with <5 observations: collapse the categories in some sensible way to eliminate the problem (as sketched below); eliminate the variable (if advisable on other grounds); or, if the variable is ordinal, model it as if it were continuous.
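For instance, a hypothetical collapse of a sparse categorical variable (the categories & labels here are illustrative only, assuming race is coded 1-4 with 4 = white):

. tab hsci race
. recode race (1 2 3 = 1 "non-white") (4 = 2 "white"), gen(race2)
. tab hsci race2, col chi2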

69 A common problem arising from zero cells or low-count cells is ‘perfect prediction’ (see also ‘separation problem’ or ‘quasi-separation problem’). Stata provides a warning for ‘perfect prediction,’ & in general the standard errors & confidence intervals are inflated. There’s considerable literature on strategies for dealing with these problems, or consult Stata listserv.

70 In sum, for a contingency table of the outcome variable with a categorical explanatory variable:
No zero cells. No more than 20% of cells with less than 5 observations.

71 The following approach to data preparation for logistic regression is based on Hosmer & Lemeshow, Applied Logistic Regression (2ed).

72 Variable Selection

Begin with a careful analysis of each variable. For nominal, ordinal, & continuous variables with few integer values, do a contingency table of the outcome variable (y=0, 1) versus the levels of the explanatory variable, checking for zero cells, cells with <5 obs., &, at this stage, for p<=.25.

. tab hsci female, col chi2
. tab hsci race, col chi2
. tab hsci quintiles_math, col chi2

73 Take appropriate action to eliminate zero cells and minimize cells with <5 observations.
For continuous variables, a form of preparatory analysis is the two-sample t-test:

. ttest read, by(hsci) unequal
. ttest math, by(hsci) unequal

74 Likewise pertinent to continuous explanatory variables is a graph of x versus y=0, 1, to explore for possible curvilinearity:

. sparl hsci write, quad | logx
. twoway fpfit hsci write
. gr7 hsci write, c(s) ba(8)
. twoway mband hsci write, mband(8)
. twoway lowess hsci write, bwidth(.6)

75 Choosing from those variables having p<=.25, conceptualize & list a set of potential explanatory variables. Discuss their anticipated relationship to outcome variable Y, including anticipated linearity or curvilinearity & relationships with the other explanatory variables. Order the potential explanatory variables in terms of their conceptual importance in relation to Y.

76 Fit a preliminary main-effects model, dropping—for now—those variables that test insignificant.
Explore & add pertinent higher-order terms, dropping—for now—those higher-order terms that test insignificant. Explore & add pertinent interaction terms, dropping—for now—those interaction terms that test insignificant.

77 Add those variables that previously tested insignificant, dropping—in the context of substantive judgement, including retention of appropriate controls—those variables that now test insignificant but whose absence doesn't practically affect the other explanatory variables. One by one, drop & re-add each explanatory variable, checking for changes in coefficients (direction, size) & p-values. The set of substantively meaningful & statistically stable variables that's left is the preliminary final model.

78 Re-add all variables that are conceptually/theoretically relevant: this is the final, complete model. Conduct the diagnostic tests. Test nested models (keeping the previously discussed caveats in mind), conducting the diagnostic tests for each model.

79 Let’s work through an example of logistic regression, using toxic.dta.
Toxic wastes contaminated the grounds of two public schools in a small Vermont town (Hamilton, Regression with Graphics, chap. 7). Some residents thought the schools should be closed until proven safe (which would involve high financial costs), while others thought the schools should stay open. Is the following model helpful in explaining the residents' opinions?

80 Let’s assume that we’ve done the preparatory analysis, as outlined above.
. logistic close female kids nodad educ lived contam hsc, nolog

quantitative—educ (years); lived (years in town)
categorical—female; kids (kids <19 years old living in town); contam (whether believes his/her own property was contaminated); hsc (whether respondent attended town Health & Safety meeting)
interaction—nodad: 1 = interaction of male with no children in town, else = 0

81 Multicollinearity: Our model includes two quantitative variables. We can begin by fitting an OLS regression with the binary outcome variable & then assess the possibility of multicollinearity by issuing the command 'vif':

. reg close nodad educ lived contam hsc
. vif

There's no collinearity problem. Try Stata's 'collin' (download).

82 Now we’ll test nested models:

83 . logistic close female kids nodad educ lived contam hsc, nolog
Logit estimates                       Number of obs =
                                      LR chi2(7)    =
                                      Prob > chi2   =
Log likelihood =                      Pseudo R2     =

       close | Odds Ratio   Std. Err.      z    P>|z|     [95% Conf. Interval]
      female |
        kids |
       nodad |
        educ |
       lived |
      contam |
         hsc |

84 . estimates store full
. fitstat, saving(m1) brief

85 . logistic close nodad educ lived contam hsc, nolog
Logit estimates                       Number of obs =
                                      LR chi2(5)    =
                                      Prob > chi2   =
Log likelihood =                      Pseudo R2     =

       close | Odds Ratio   Std. Err.      z    P>|z|     [95% Conf. Interval]
       nodad |
        educ |
       lived |
      contam |
         hsc |

86 . lrtest full

likelihood-ratio test                 LR chi2(2)  =
(Assumption: . nested in full)        Prob > chi2 =

We fail to reject the null hypothesis & thus we select the reduced model.

87 . fitstat, using(m1) bic

                      Current        Saved     Difference
Model:               logistic     logistic
N:
AIC:
AIC*n:
BIC:
BIC':

Difference of        in BIC provides strong support for current model.

88 Caution in Comparing the Logistic Coefficients across Nested Models
Keep in mind the pitfalls of comparing logistic coefficients across nested models. Try experimenting with ‘suest’ and with standardizing Y (‘listcoef, std’).

89 Continuing, let's display the coefficients as percentage change in odds:

. quietly logistic close nodad educ lived contam hsc
. listcoef, percent help

90 . listcoef, percent help

logit (N=153): Percentage Change in Odds

Odds of: close vs open

     close |       b        z    P>|z|       %    %StdX   SDofX
     nodad |
      educ |
 residence |
    contam |
       hsc |

91 Next we’ll look at model specification:
. linktest, nolog

92 . linktest, nolog

Logit estimates                       Number of obs =
                                      LR chi2(2)    =
                                      Prob > chi2   =
Log likelihood =                      Pseudo R2     =

       close |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
        _hat |
      _hatsq |
       _cons |

There's no problem at all.

93 If linktest did test significant, we'd reconsider the model, such as whether or not the outcome variable is notably skewed & the possible need to add, transform, categorize, or drop explanatory variables. If the outcome variable were notably skewed, we'd consider alternatives such as complementary log-log (cloglog) or scobit regression (see Long & Freese; Pampel; UCLA-ATS Stata web site; STATA Manuals).

94 Let’s now consider summary measures of model fit: ‘ldev’ & ‘estat gof, group(#)’. These reflect cell patterns of observed versus fitted values (i.e. residuals), as in contingency tables. We want the tests to test insignificant so that we can fail to reject the null hypothesis that the model correctly predicts the cell-pattern of the data. Covariate pattern: the set of values that corresponds to each combination of levels of the model’s covariates (i.e. explanatory variables); i.e. a configuration of explanatory variables & their values.

95 . ldev

Logistic model deviance goodness-of-fit test

number of observations        =
number of covariate patterns  =
deviance goodness-of-fit      =
degrees of freedom            =
Prob > chi2                   =

96 . estat gof, group(10) table
Logistic model for close, goodness-of-fit test

(Table collapsed on quantiles of estimated probabilities)

number of observations  =
number of groups        =
Hosmer-Lemeshow chi2(8) =
Prob > chi2             =

Adjust the groups (to no fewer than 6) so that the number of observations is relatively large (5+) in each cell.

97 If the summary measures of model fit did test significant, then we’d reconsider the model: re-check contingency tables of the outcome variable and categorical explanatory variables for zero cells & cells with <5 obs, & take remedial action; re-check whether comp log-log (cloglog) or scobit, or some other binary statistic, should be used; & re-consider the explanatory variables.

98 While summary indicators of model fit are helpful, they overlook information about a model’s various components. E.g., even when the overall model fits the data, there may be some interesting deviation from fit for a portion of the observations. So, before concluding that a model fits, we need to inspect the residual diagnostics for the covariate patterns.

99 Regarding the diagnostics for individual observations & model fit: dd = deviance (i.e. residuals for y); hat = leverage (i.e. x-outliers); dx2 = chi2-measure of observation fit; db = influence of each x observation on the beta coefficients in general; & n = identities of the covariate patterns.

100 We begin by predicting the probability of outcome variable Y, followed by predicting the residual diagnostic indicators:

. predict p if e(sample)
. predict dd if e(sample), dd
. predict h if e(sample), hat
. predict db if e(sample), db
. predict dx2 if e(sample), dx2
. predict n if e(sample), n

. su p-n

    Variable |   Obs        Mean    Std. Dev.        Min        Max
           p |
          dd |
         hat |
          db |
         dx2 |
           n |

Is dx2 > 4? Is dd or db > 1? These levels are rough indicators only (see Hosmer & Lemeshow).

102 . scatter dx2 p, yline(4) ml(n)

103 . scatter dx2 p [w=db], yline(4)

104 . scatter db p, yline(1) ml(n)

105 . scatter db p [w=dx2], yline(1)

106 . l n dd h dx2 db if n==60 | n==11 | n==53 | n==109

107 Explanatory Variables
quantitative—educ (years); lived (years in town)
categorical—female; kids (kids <19 years old living in town); contam (whether believes his/her own property was contaminated); hsc (whether respondent attended town Health & Safety meeting)
interaction—nodad: 1 = interaction of male with no children in town, else = 0

108 . l n close nodad educ lived contam hsc if n==60 | n==11 | n==53 | n==109
What about these covariate patterns makes them outliers? That is, how do they deviate from the model's predictions?

    n   close   nodad   educ   lived   contam   hsc
         open      no                     yes   yes
         open      no                     yes   yes
        close      no                      no    no
        close     yes                     yes    no

109 What is it about each of these covariate patterns that the model can’t explain? What insights do these deviations yield concerning the model? Should we take remedial action of some sort, such as the following?

110 Possible Remedial Action
Check for & correct data errors. Possibly incorporate: omitted variables; interactions; log or other transformations. Consider categorizing quantitative explanatory variables. Consider other actions to temper outliers.

111 For our immediate purposes, we’re satisfied, on balance, that the model fits well.

112 What about testing for unequal slopes via a more or less 'full interaction' model? Doing so is routine in OLS regression, but is hugely risky with a categorical dependent variable. This is essentially due to the nonlinearity of logistic & other such models.

113 On the pitfalls of exploring for unequal slopes (i.e. comparing coefficients across groups) with a categorical dependent variable, see Allison, "Comparing Logit and Probit Coefficients Across Groups" (1999); Norton et al., "Computing Interaction Effects and Standard Errors in Logit and Probit Models" (2004); & Hoetker, "Confounded Coefficients: Accurately Comparing Logit and Probit Coefficients Across Groups." Here's a lucid review of the literature by Richard Williams:

114 STATA commands that address the problem: oglm (by Richard Williams, Notre Dame); inteff (by Norton et al., UNC Chapel Hill); complogit (by Glenn Hoetker, U. Illinois); vibl (by Michael Mitchell, ATS/UCLA). In STATA: findit…

115 A low-tech, common-sense approach to the problem: Estimate a separate model for each category (e.g., male vs. female). Compare the patterns of significance. When an explanatory variable is significant in both models, consider the magnitudes of its coefficients to be significantly different from each other if their confidence intervals do not overlap.
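A sketch of that approach with the toxic.dta model (the variable list is illustrative):

. logit close educ lived contam hsc if female==0, or nolog
. estimates store men
. logit close educ lived contam hsc if female==1, or nolog
. estimates store women
. * read the confidence intervals off the two logit outputs; est table lines the odds ratios up side by side
. est table men women, eform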

116 Recalling the cautionary remarks, here's a nifty way to check out possible interactions graphically via a downloaded command (xi3):

. findit xi3 [then download]
. findit postgr3 [then download]
. help postgr3
. xi3: logistic close nodad educ lived contam hsc, nolog

117 . postgr3 lived, by(contam)
No interaction.

118 . postgr3 educ, by(contam)
No interaction.

119 postgr3 can be used in OLS & other kinds of regression analysis, too.

120 Let’s next explore estimated probabilities for particular levels of the explanatory variables.
We’ll begin by obtaining the baseline estimated probabilities of Y=0, 1.

121 . prvalue, delta r(mean)

logit: Predictions for close

Pr(y=close|x):        95% ci: (0.3141,0.5159)
Pr(y=open|x):         95% ci: (0.4841,0.6859)

       nodad    educ   lived  contam     hsc
x=

prvalue: the estimated probability for an average respondent (i.e. holding each explanatory variable constant at its mean).

122 Strategy

Determine the range of values for independent variables having relatively wide ranges of estimated probability (or in cases of extreme X-outliers). Find the extent to which change in one X-variable affects the estimated probability by allowing one X-variable to vary from its minimum to maximum while holding the others constant. Emphasize y/x relations with relatively wide ranges of estimated probability.

123 Let's next inspect the range of each X's effects on estimated probabilities that respondents favor closing the town's schools, holding each of the other explanatory variables constant at its mean:

. prchange, fromto r(mean) brief help

           from:      to:        dif:       from:    to:    dif:
           x=min      x=max      min->max   x=       x=     >1
   nodad
    educ
   lived
  contam
     hsc

124 . prtab educ contam, r(mean) brief
What are the estimated probabilities of 'close' associated with a cross-classification of 2-4 explanatory variables, e.g. respondent's education level & opinion that her/his own property is or isn't contaminated?

logit: Predicted for close

Note: better to categorize a continuous variable such as 'educ'.

125
  highest yr |       contam
       compl |     no       yes
-------------+-------------------

126 The estimated probabilities of 'close' according to whether respondent attended 2+ town meetings on the school problem & whether respondent's property is reportedly contaminated:

. prtab hsc contam, r(mean) brief

logit: Predicted for close

             | believe own
  Attend 2+  | property/water
  HSC        | contam
  meetings?  |     no      yes
-------------+-----------------
          no |
         yes |

127 Next let’s compare particular configurations of respondent traits:
. prvalue, x(educ=12 contam=0 hsc=0) delta r(mean) brief save
. prvalue, x(educ=16 contam=0 hsc=0) delta r(mean) brief dif

128 Finally, let’s graph the relationship of years living in town by years education to estimated probability of opinion that the contaminated schools should be closed. We’ll use the ‘prgen’ command to create pseudo-variable data to be graphed.

129 . prgen lived, from(1) to(81) x(educ=10) gen(p10) n(11)
. la var p10p1 "10 years educ"
. la var p12p1 "12 years educ"
. la var p14p1 "14 years educ"
. la var p16p1 "16 years educ"
. la var p18p1 "18 years educ"
. la var p20p1 "20 years educ"

130 . scatter p10p1 p12p1 p14p1 p16p1 p18p1 p20p1 p20x, c(l l l l l l) title("Opinion by Years Residence & Years Education", box bexpand) l2title(Pr(Open|Close)) yvar("") xvar("Years residence in town") legend(c(ltkhaki))

131 (graph: opinion by years residence & years education)

132 See Long/Freese, Pampel, the UCLA-ATS Stata website, & the Stata manuals for alternatives to logistic regression. Recall that probit generally gives the same results as logistic but gives results in probit coefficients only. complementary log-log (cloglog in Stata) is for highly skewed outcome variables, but doesn’t give odds ratios. scobit, which is used for the same purpose, does give odds ratios but can be tricky to use.
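Both alternatives are fit with the same syntax as logit; a minimal sketch with the hsb2 model:

. cloglog hsci read math female, nolog
. scobit hsci read math female, nolog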

133 We have not discussed predicting probabilities with curvilinear explanatory variables.
On this topic, see Long/Freese, chap. 8.

134 Finally, models with binary outcome variables are useful in exploring patterns of missing values: is the pattern random or does it reflect bias of some kind or another?

. u hsb2_miss, clear
. egen mvals=rmiss(_all)
. tab mvals
. gen mval=(mvals>=1 & mvals<.)
. tab mval

. tab mval female, col chi2
. tab mval ses, col chi2
. tab mval race, col chi2
. tab mval schtyp, col chi2
. ttest read, by(mval) unequal
. ttest math, by(mval) unequal
. xi: logistic mval female i.ses i.race i.schtyp, nolog

136 In sum, use logistic regression or an alternative to analyze the pattern of missing values. The topic of missing values has received increased attention, as its consequences for data analysis are commonly ignored but can be serious in terms of bias. See Allison, Missing Data (Sage Publications).

137 And see Gary King et al., “Analyzing Incomplete Political Science Data…,” APSR (March 2001); & King et al., “Amelia: A Program for Imputing Missing Data.” Note: King et al.’s downloadable-to-Stata program “Clarify” is an alternative way to obtain predicted probabilities.

138 One last question: are there any conditions under which you might start out doing OLS regression & decide to use logistic regression instead?

139 Remember: State-of-the-Art on Using Logistic or Probit Regression:
Glenn Hoetker, “The Use of Logit and Probit Models in Strategic Management Research: Critical Issues”

