Notes on Logistic Regression


1 Notes on Logistic Regression
STAT 4330/8330

2 Introduction Previously, you learned about odds ratios (ORs). We now turn to binary logistic regression. We will see that ORs play an important role in the results of binary logistic models.

3 Binary Logistic Regression
Binary logistic regression is appropriate when:
- The response variable is categorical with 2 categories (binary, dichotomous, etc.). The response categories are often generically labeled "success" and "failure".
- One or more explanatory variables are involved. These can be quantitative, categorical, or a mixture of both.
- One is interested in assessing the relationship between the binary response and the explanatory variables and/or predicting the response category based on the value(s) of the explanatory variable(s).

4 The Model Equation
E(y) = exp(β0 + β1x1 + … + βkxk) / (1 + exp(β0 + β1x1 + … + βkxk))

5 The Model Equation A few points:
E(y) can never fall below 0 or rise above 1 (remember: it is a probability!). The model is not a linear function of the β parameters; this is a type of nonlinear regression model.
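As a quick numeric check of the first point, a minimal sketch with illustrative (hypothetical) coefficients shows the logistic form stays strictly between 0 and 1 even for extreme x:

```python
import math

def logistic_mean(b0, b1, x):
    # E(y) = exp(b0 + b1*x) / (1 + exp(b0 + b1*x))
    z = b0 + b1 * x
    return math.exp(z) / (1.0 + math.exp(z))

# Illustrative coefficients (hypothetical, not from any fitted model)
for x in (-100, 0, 100):
    p = logistic_mean(-2.0, 0.05, x)
    assert 0.0 < p < 1.0   # always a valid probability
```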

6 The Model Function

7 The Model Equation Alternatively, the equation can be transformed to show that it models the natural logarithm of the odds that y = 1: ln(E(y) / (1 − E(y))) = β0 + β1x1 + … + βkxk.

8 The Model Equation The left side is called the “logit”
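A small plain-Python sketch makes the logit and its inverse concrete:

```python
import math

def logit(p):
    # the "logit": natural log of the odds that y = 1
    return math.log(p / (1.0 - p))

def inv_logit(z):
    # back-transform a log-odds value into a probability
    return 1.0 / (1.0 + math.exp(-z))

p = 0.75
z = logit(p)                          # log(3) ≈ 1.0986
assert abs(inv_logit(z) - p) < 1e-12  # the transform round-trips
```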

9 The Model Equation In general, the bi estimates the change in the log-odds when xi is increased by 1 unit, holding all other x’s in the model fixed. Therefore, exp(bi) estimates the OR of a success for each additional 1-unit increase in xi. Furthermore, (exp(bi)-1)*100 gives the percent increase in the odds of a success for each 1-unit increase in xi.
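These three interpretations can be sketched in a few lines, using the age slope that appears in the Outbreak example later in these notes:

```python
import math

b1 = 0.0285  # estimated slope for age (from the Outbreak example below)

log_odds_change = b1                      # change in the log-odds per 1-unit increase
odds_ratio = math.exp(b1)                 # OR of a success per 1-unit increase
pct_increase = (math.exp(b1) - 1) * 100   # percent increase in the odds

print(round(odds_ratio, 3), round(pct_increase, 2))  # 1.029 2.89
```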

10 Example: The Outbreak Data
The Outbreak data contain a sample of N = 196 persons in 2 neighborhoods (sectors) of a large city during a disease outbreak. Can we predict whether or not a person contracts the disease? We will begin with a simple binary logit model (with 1 predictor = age).

11 Example: The Outbreak Data.
Through SAS PROC LOGISTIC, we find that b1 = .0285. Therefore, OR = exp(.0285) = 1.029, indicating that a person's odds of contracting the disease are multiplied by 1.029 for every year they age.

12 Example: The Outbreak Data.
Furthermore, we can state that the odds of contracting the disease increase by 2.89% with each additional year of age: (exp(.0285) − 1) * 100% = 2.89%.

13 Example: The Outbreak Data.
We can transform these results to discuss the increase in odds in 5 & 10 year increments by the following: exp(cbi) = the OR when there is a difference of c units.

14 Example: The Outbreak Data.
Therefore: (exp(5*.0285)-1)*100% = 15.32% (exp(10*.0285)-1)*100% = 32.98% As a result then, a person’s odds of getting the disease increase by 15.32% for every additional 5 years in age.
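The same arithmetic for c = 1, 5, and 10 years, as a quick check:

```python
import math

b1 = 0.0285  # slope for age from the Outbreak fit
for c in (1, 5, 10):
    # percent increase in the odds for a c-year difference in age
    pct = (math.exp(c * b1) - 1) * 100
    print(c, round(pct, 2))
# 1 2.89
# 5 15.32
# 10 32.98
```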

15 Model Fit We ended last session fitting a simple (1-predictor) binary logit model to the Outbreak data using SAS. We will now continue covering the SAS PROC LOGISTIC output.

16 Model Fit Statistics All of these statistics assess model fit, i.e., how well the model explains the sample data.

17 Model Fit Statistics -2 Log L The -2 Log-Likelihood is a transformation of the Likelihood function (L). L is a quantification of how well the model fits the sample data.

18 Model Fit Statistics Both AIC & SC are variants of the -2 Log L that penalize for model complexity (the number of predictor variables).

19 Model Fit Statistics AIC Akaike Information Criterion. Used to compare non-nested models. Smaller is better. AIC is only meaningful in relation to another model’s AIC value.

20 Model Fit Statistics SC Schwarz Criterion. Very much like AIC, however the penalization is different. SC tends to favor simpler models than AIC.

21 Model Fit Statistics Choose either AIC or SC (not both) and use the values under the heading ‘Intercept and Covariates’ to compare to competing models.
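Both criteria are simple functions of −2 Log L. A minimal sketch of the usual formulas (the −2 Log L value here is hypothetical, and the parameter count includes the intercept):

```python
import math

def aic(neg2_log_l, n_params):
    # Akaike Information Criterion: -2 Log L plus 2 per estimated parameter
    return neg2_log_l + 2 * n_params

def sc(neg2_log_l, n_params, n_obs):
    # Schwarz Criterion: the per-parameter penalty is log(n), so it
    # punishes complexity more heavily than AIC once n exceeds e^2 ≈ 7.4
    return neg2_log_l + n_params * math.log(n_obs)

# Hypothetical -2 Log L for a 1-predictor model (intercept + slope = 2 parameters)
print(aic(200.0, 2), round(sc(200.0, 2, 196), 2))  # smaller is better, for either
```

For either criterion, only differences between competing models are meaningful, which is why the slide says to compare against another model's value.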

22 The model equation.

23 Inference: The Coefficients.
Instead of a t-test for the significance of a coefficient (like in linear regression), we have a Wald Chi-Squared test.

24 Inference: The Coefficients.
Remember, typically we do not evaluate the intercept, but rather focus on the test for each predictor.

25 Inference: The Coefficients.
In this case, age is a statistically significant predictor of disease status at the α = .05 level, X2(1) = 11.53, p < .001.
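The p-value for a Wald chi-squared statistic with 1 df can be checked by hand, since a χ²(1) variable is the square of a standard normal (stdlib-only sketch):

```python
import math

def chi2_1df_pvalue(x):
    # P(X > x) for X ~ chi-squared(1). Since X = Z^2 with Z standard normal,
    # P(Z^2 > x) = P(|Z| > sqrt(x)) = erfc(sqrt(x / 2)).
    return math.erfc(math.sqrt(x / 2.0))

p = chi2_1df_pvalue(11.53)
print(round(p, 4))  # 0.0007
```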

26 Inference: The Coefficients.
One can also obtain CI's for the parameter estimates using the CLPARM option in the MODEL statement of PROC LOGISTIC.

27 Inference: The Coefficients.
As we found in linear regression, we can conclude that a given predictor is statistically significant at the α = .05 level if the 95% CI does not include the null value of 0.

28 Inference: The Coefficients.
Therefore, our best estimate of the change in the log-odds for age is .0285; however, we are 95% confident that the change lies between .0120 and .0449 for the population.

29 Inference: The Coefficients.
Furthermore: exp(.0285) = 1.029, exp(.0120) = 1.012, exp(.0449) = 1.046. Therefore, we estimate that a person's odds of contracting the disease are multiplied by 1.029 for every year they age, and we are 95% confident that this multiplier lies between (1.012, 1.046) for the population.

30 Inference: The Coefficients.
Of course, we no longer have to compute these odds ratio estimates by hand, because SAS provides them for us.

31 Inference: The Coefficients.
Furthermore: (exp(.0285)-1)*100% = 2.89%, (exp(.0120)-1)*100% = 1.21%, (exp(.0449)-1)*100% = 4.59%. We can state that the odds of contracting the disease increase by 2.89% with each additional year of age, and we are 95% confident that this increase ranges between (1.21%, 4.59%) for the population.
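Since exp() is monotone, the CI for the OR is simply the exponentiated CI for the coefficient, and the slide's numbers reproduce directly:

```python
import math

b1, lo, hi = 0.0285, 0.0120, 0.0449   # estimate and 95% CI from the notes

or_est = math.exp(b1)
or_lo, or_hi = math.exp(lo), math.exp(hi)              # CI for the odds ratio
pct_lo, pct_hi = (or_lo - 1) * 100, (or_hi - 1) * 100  # CI in percent terms

print(round(or_lo, 3), round(or_hi, 3))    # 1.012 1.046
print(round(pct_lo, 2), round(pct_hi, 2))  # 1.21 4.59
```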

32 Final Note: Model Fitting
Realize that in order to estimate the model parameters, the data must contain a substantial number of observations in each response category. For example, one will not be able to estimate the risk of contracting a disease if the data set does not contain any individuals who have been diagnosed with the disease.

33 Final Note: Model Fitting
Essentially, then, in order to estimate the probability of either a success or failure, the data set must contain a substantial number (> 30 is best) of observations that experienced a success and a substantial number that experienced a failure.

34 More about output. PROC LOGISTIC provides more information concerning how the model fits the sample data.

35 More about Model Fit Percent Concordant A pair of observations with different observed responses is considered concordant if the observation with the lower ordered response value has a lower predicted value than the observation with a higher ordered response value.

36 More about Model Fit Percent Discordant A pair is considered discordant if an observation with a lower ordered response value has a higher predicted value than an observation with a higher order response.

37 More about Model Fit Percent Tied A pair with different responses is considered tied if it is neither concordant nor discordant.

38 More about Model Fit Somers' D, Gamma, & Tau-a These are statistics that measure the strength and direction of the relationship between pairs.

39 More about Model Fit Somers' D & Tau-a Like r, these vary between -1.0 (all pairs discordant) & +1.0 (all pairs concordant). Somers' D = (% concordant − % discordant) / 100.

40 More about Model Fit Gamma Gamma is a similar statistic: its values also range between -1.0 & +1.0, but it ignores tied pairs; 0 = no association & ±1.0 = perfect association.
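A hedged sketch of how these pair-based statistics are computed, on toy data (pairs with different observed responses are compared on their predicted probabilities, per the definitions above):

```python
from itertools import combinations

def pair_stats(y, p):
    # Count concordant/discordant/tied pairs among pairs with different
    # observed responses, then form the rank-association measures.
    nc = nd = nt = 0
    for (yi, pi), (yj, pj) in combinations(zip(y, p), 2):
        if yi == yj:
            continue                 # only pairs with different responses count
        if yi > yj:                  # orient so the first member has the lower y
            pi, pj = pj, pi
        if pi < pj:
            nc += 1                  # lower response also has the lower prediction
        elif pi > pj:
            nd += 1
        else:
            nt += 1
    pairs = nc + nd + nt
    somers_d = (nc - nd) / pairs                       # uses all compared pairs
    gamma = (nc - nd) / (nc + nd)                      # ignores tied pairs
    tau_a = (nc - nd) / (0.5 * len(y) * (len(y) - 1))  # uses all N*(N-1)/2 pairs
    return nc, nd, nt, somers_d, gamma, tau_a

# Toy observed responses and predicted probabilities
y = [0, 0, 1, 1, 0]
p = [0.2, 0.4, 0.3, 0.8, 0.3]
print(pair_stats(y, p))  # (4, 1, 1, 0.5, 0.6, 0.3)
```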

41 Predicted Values The output of a logit model is the predicted probability of a success for each observation.

42 Predicted Values These are obtained and stored in a separate SAS data set using the OUTPUT statement (see the following code).

43 Predicted Values PROC LOGISTIC outputs the predicted values and 95% CI limits to an output data set that also contains the original raw data.

44 Predicted Values Use the PREDPROBS = I option in order to obtain the predicted category (which is saved in the _INTO_ variable).

45 Predicted Values _FROM_ = The observed response category = The same value as the response variable.

46 Predicted Values _INTO_ = The predicted response category.

47 Predicted Values IP_1 = The Individual Probability of a response of 1.

48 Scoring Observations in SAS
Obtaining predicted probabilities and/or predicted outcomes (categories) for new observations (i.e., scoring new observations) is done in logit modeling using the same procedure we used in scoring new observations in linear regression.

49 Scoring Observations in SAS
1. Create a new data set with the desired values of the x variables and the y variable set to missing.
2. Merge the new data set with the original data set.
3. Refit the final model using PROC LOGISTIC with the OUTPUT statement.
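SAS refits the model and scores the merged data in one pass. As an illustrative analogue only (not the SAS mechanism), here is a minimal pure-Python one-predictor logistic fit followed by scoring a new observation, using made-up data:

```python
import math

def fit_logistic(x, y, iters=25):
    # Newton-Raphson for the one-predictor model: logit(p) = b0 + b1*x
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = 0.0            # gradient of the log-likelihood
        h00 = h01 = h11 = 0.0    # entries of the (negative) Hessian
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += yi - p
            g1 += (yi - p) * xi
            w = p * (1.0 - p)
            h00 += w; h01 += w * xi; h11 += w * xi * xi
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det   # solve H * delta = g for 2x2 H
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

def score(b0, b1, x_new):
    # predicted probability of a "success" for a new observation
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x_new)))

# Made-up ages and disease indicators (hypothetical, not the Outbreak data)
age = [10, 20, 25, 30, 35, 40, 45, 50, 55, 60]
sick = [0, 0, 0, 0, 1, 1, 1, 0, 1, 1]
b0, b1 = fit_logistic(age, sick)
print(round(score(b0, b1, 52), 3))  # predicted probability for a new 52-year-old
```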

50 Classification Table & Rates
A Classification Table is used to summarize the results of the predictions and to ultimately evaluate the fitness of the model. Obtain a classification table using PROC FREQ.

51 Classification Table & Rates
The observed (or actual) response is in rows and the predicted response is in columns.

52 Classification Table & Rates
Correct classifications are summarized on the main diagonal.

53 Classification Table & Rates
The total number of correct classifications (i.e., 'hits') is the sum of the main diagonal frequencies: O = 139.

54 Classification Table & Rates
The total-group hit rate is the ratio of O and N. HR = 139/196 = .709

55 Classification Table & Rates
Individual group hit rates can also be calculated. These are essentially the row percents on the main diagonal.
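The table-based rates can be sketched with hypothetical counts arranged to total O = 139 and N = 196 (not the actual Outbreak cell counts, which the slides do not list):

```python
def hit_rates(table):
    # table[i][j] = count of observations observed in class i, predicted as class j
    o = sum(table[i][i] for i in range(len(table)))        # total hits (main diagonal)
    n = sum(sum(row) for row in table)
    overall = o / n                                        # total-group hit rate
    per_group = [table[i][i] / sum(row) for i, row in enumerate(table)]
    return o, overall, per_group

# Hypothetical 2x2 table: rows = observed (0, 1), columns = predicted (0, 1)
tbl = [[110, 30],
       [ 27, 29]]
o, overall, per_group = hit_rates(tbl)
print(o, round(overall, 3))  # 139 0.709
```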

