1 November 5, 2008 Logistic and Poisson Regression: Modeling Binary and Count Data LISA Short Course Series Mark Seiss, Dept. of Statistics

2 Presentation Outline
1. Introduction to Generalized Linear Models
2. Binary Response Data - Logistic Regression Model
3. Count Response Data - Poisson Regression Model

3 Reference Material
Categorical Data Analysis – Alan Agresti
Examples with SAS code at www.stat.ufl.edu/~aa/cda/cda.html
Presentation and data from examples: www.stat.vt.edu/consult/short_courses.html

4 Generalized Linear Models
Generalized linear models (GLMs) extend ordinary regression to non-normal response distributions.
3 Components:
Random – identifies the response Y and its probability distribution
Systematic – explanatory variables in a linear predictor function (Xβ)
Link function – a function g(.) that links the mean of the response (E[Y_i] = μ_i) to the systematic component
Model: g(μ_i) = x_i β, for i = 1 to n
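
In R, the three components correspond directly to the arguments of glm(). A minimal sketch on simulated data (the variable names here are made up for illustration, not from the course examples):
# Simulated illustration of the three GLM components.
set.seed(1)
x <- rnorm(100)                               # systematic component: linear predictor x*beta
y <- rbinom(100, 1, plogis(0.5 + 1.2 * x))    # random component: Y ~ Binomial
fit <- glm(y ~ x, family = binomial(link = "logit"))  # family = random component, link = g()
summary(fit)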

5 Generalized Linear Models
Why do we use GLMs?
Linear regression assumes that the response is distributed normally.
GLMs allow us to analyze the linear relationship between predictor variables and the mean of the response variable when it is not reasonable to assume the response is normally distributed.

6 Generalized Linear Models
Predictor Variables – Two Types: Continuous and Categorical
Continuous Predictor Variables
Examples – Time, Grade Point Average, Test Score, etc.
Coded with one parameter – β_i x_i
Categorical Predictor Variables
Examples – Sex, Political Affiliation, Marital Status, etc.
Actual value assigned to a category is not important
Ex) Sex - Male/Female, M/F, 1/2, 0/1, etc.
Coded differently than continuous variables

7 Generalized Linear Models
Categorical Predictor Variables cont.
Consider a categorical predictor variable with L categories.
One category is selected as the reference category; the assignment is arbitrary.
The variable is represented by L-1 dummy variables (using all L would make the model unidentifiable).
Two types of coding – Dummy and Effect

8 Generalized Linear Models
Categorical Predictor Variables cont.
Dummy Coding (Used in R)
x_k = 1 if the predictor variable equals category k, 0 otherwise
x_k = 0 for all k if the predictor variable equals the reference category L
Effect Coding (Used in JMP)
x_k = 1 if the predictor variable equals category k, 0 otherwise
x_k = -1 for all k if the predictor variable equals the reference category L
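
In R, the two schemes correspond to the built-in contrast functions contr.treatment (dummy coding, the default) and contr.sum (effect coding, matching JMP). A small illustration with a made-up three-level factor grp:
grp <- factor(c("A", "B", "C"))
contr.treatment(3)   # dummy coding (R default): reference category is all zeros
contr.sum(3)         # effect coding (JMP-style): last category is coded -1
model.matrix(~ grp, contrasts.arg = list(grp = "contr.sum"))  # design matrix under effect coding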

9 Generalized Linear Models
Saturated Model
Contains a separate parameter for each observation, giving a perfect fit: μ = y.
Not useful in itself, since there is no data reduction, i.e. the number of parameters equals the number of observations.
Its log likelihood is the maximum achievable – a baseline for comparison to other model fits.

10 Generalized Linear Models
Deviance
Let L(μ|y) = maximum of the log likelihood for the model
L(y|y) = maximum of the log likelihood for the saturated model
Deviance: D(y|μ) = -2[L(μ|y) - L(y|y)]
This is the likelihood ratio statistic for testing the null hypothesis that the model fits as well as the saturated model.
It has an asymptotic chi-squared distribution with N - p degrees of freedom, where p is the number of parameters in the model.
Deviances allow the comparison of one model to another using the likelihood ratio test.

11 Generalized Linear Models
Nested Models
Model 1 – model with p predictor variables {X_1, X_2, X_3, ..., X_p} and vector of fitted values μ_1
Model 2 – model with q < p predictor variables {X_1, X_2, X_3, ..., X_q} and vector of fitted values μ_2
Model 2 is nested within Model 1 if all predictor variables found in Model 2 are included in Model 1, i.e. the predictor variables in Model 2 are a subset of those in Model 1.
Model 2 is the special case of Model 1 in which all the coefficients associated with X_{q+1}, X_{q+2}, ..., X_p are equal to zero.

12 Generalized Linear Models
Likelihood Ratio Test
Null Hypothesis for Nested Models: the predictor variables in Model 1 that are not found in Model 2 are not significant to the model fit (their coefficients are zero), i.e. the smaller model fits as well as the larger one.
Alternate Hypothesis for Nested Models: at least one of the predictor variables in Model 1 that is not found in Model 2 is significant to the model fit.
Likelihood Ratio Statistic = -2[L(μ_2|y) - L(μ_1|y)] = D(y,μ_2) - D(y,μ_1), the difference of the deviances of the two models.
Always D(y,μ_2) ≥ D(y,μ_1), so the LRT statistic is ≥ 0.
Under the null hypothesis, the LRT statistic is distributed chi-squared with p-q degrees of freedom.
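
A minimal sketch of the test in R on simulated data (illustrative names, not the course data): anova() with test = "Chisq" computes the difference in deviances and the chi-squared p-value.
set.seed(2)
x1 <- rnorm(200); x2 <- rnorm(200)
y  <- rbinom(200, 1, plogis(-0.5 + 0.8 * x1))
model2 <- glm(y ~ x1,      family = binomial)  # reduced (nested) model, q = 1 predictor
model1 <- glm(y ~ x1 + x2, family = binomial)  # full model, p = 2 predictors
anova(model2, model1, test = "Chisq")          # LRT = D(y, mu2) - D(y, mu1)
pchisq(deviance(model2) - deviance(model1), df = 1, lower.tail = FALSE)  # same p-value by hand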

13 Generalized Linear Models Likelihood Ratio Test cont. Later, we will use the Likelihood Ratio Test to test the significance of variables in Logistic and Poisson regression models.

14 Generalized Linear Models
Theoretical Example of Likelihood Ratio Test
3 predictor variables – 1 continuous (X_1), 1 categorical with 4 categories (coded by X_2, X_3, X_4), 1 categorical with 2 categories (coded by X_5)
Model 1 – predictor variables {X_1, X_2, X_3, X_4, X_5}
Model 2 – predictor variables {X_1, X_5}
Null Hypothesis – the variable with 4 categories is not significant to the model (β_2 = β_3 = β_4 = 0)
Alternate Hypothesis – the variable with 4 categories is significant
Likelihood Ratio Statistic = D(y,μ_2) - D(y,μ_1), the difference of the deviance statistics from the two models
Chi-squared distribution with 5-2 = 3 degrees of freedom

15 Generalized Linear Models
Model Selection
2 Goals: complex enough to fit the data well; simple to interpret, without overfitting the data
Study the effect of each predictor on the response Y:
Continuous predictor – graph P[Y=1] versus X
Discrete predictor – contingency table of P[Y=1] versus the categories of X
Unbalanced data – few responses of one type
Guideline – at least 10 outcomes of each type for each X term in the model
Example – if Y=1 for only 30 observations out of 1000, the model should contain no more than 3 X terms

16 Generalized Linear Models
Model Selection cont.
Multicollinearity
Correlations among predictors inflate the variances of the coefficient estimates, making individual variables appear less significant.
Becomes more likely as more predictor variables are used in the model.
Determining Model Fit
Criteria other than significance tests (e.g. the Likelihood Ratio Test) can be used to select a model.
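
One common diagnostic for multicollinearity, not covered in the slides, is the variance inflation factor. A sketch using the car package (assumed to be installed) on simulated, nearly collinear predictors:
library(car)                       # provides vif(); install.packages("car") if needed
set.seed(3)
x1 <- rnorm(100)
x2 <- x1 + rnorm(100, sd = 0.1)    # nearly collinear with x1
y  <- rbinom(100, 1, plogis(x1))
fit <- glm(y ~ x1 + x2, family = binomial)
vif(fit)                           # large values (rule of thumb: > 5-10) flag inflated variances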

17 Generalized Linear Models
Model Selection cont.
Determining Model Fit cont.
Akaike Information Criterion (AIC)
Penalizes a model for having many parameters
AIC = Deviance + 2p, where p is the number of parameters in the model
Bayesian Information Criterion (BIC)
BIC = -2 Log L + ln(n)*p, where p is the number of parameters in the model and n is the number of observations
Smaller values of AIC and BIC indicate a better trade-off between fit and complexity.
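
Both criteria are available in R for any fitted GLM. Note that R's AIC() uses -2*logLik + 2p rather than deviance + 2p; for a fixed data set the two differ only by a model-independent constant (the saturated-model term), so model rankings agree. A small illustration on simulated data:
set.seed(4)
x <- rnorm(100)
y <- rbinom(100, 1, plogis(x))
fit <- glm(y ~ x, family = binomial)
AIC(fit)   # -2*logLik + 2*p
BIC(fit)   # -2*logLik + log(n)*p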

18 Generalized Linear Models Model Selection cont. Selection Algorithms Best subset – Tests all combinations of predictor variables to find best subset Algorithmic – Forward, Backward and Stepwise Procedures

19 Generalized Linear Models
Best Subsets Procedure
Run the model with all possible combinations of the predictor variables.
Number of possible models equals 2^p, where p is the number of predictor variables.
Dummy variables for a categorical predictor are considered together.
Ex) For a set of predictors {X_1, X_2, X_3}, runs models with predictor sets {X_1, X_2, X_3}, {X_1, X_2}, {X_2, X_3}, {X_1, X_3}, {X_1}, {X_2}, {X_3}, and no predictors: 2^3 = 8 possible models.
Most programs only allow a small set of predictor variables, since larger sets cannot be run in a reasonable amount of time (2^10 = 1024 models for a set of 10 predictor variables).
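
A brute-force sketch of the procedure in R, comparing all subsets by AIC (simulated data and illustrative names; dedicated best-subset tools are more efficient):
set.seed(5)
dat <- data.frame(x1 = rnorm(100), x2 = rnorm(100), x3 = rnorm(100))
dat$y <- rbinom(100, 1, plogis(dat$x1 - 0.5 * dat$x3))
preds <- c("x1", "x2", "x3")
# all 2^3 = 8 subsets, including the empty (intercept-only) one
subsets <- c(list(character(0)),
             unlist(lapply(1:3, function(k) combn(preds, k, simplify = FALSE)),
                    recursive = FALSE))
aics <- sapply(subsets, function(s) {
  f <- if (length(s) > 0) reformulate(s, response = "y") else y ~ 1
  AIC(glm(f, family = binomial, data = dat))
})
subsets[[which.min(aics)]]   # predictor set with the smallest AIC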

20 Generalized Linear Models
Forward Selection
Idea: start with no variables in the model and add one at a time.
Step One: Fit a model with each single predictor variable and determine fit.
Step Two: Select the predictor variable with the best fit and add it to the model.
Step Three: Add each remaining variable to the model one at a time and determine fit.
Step Four: If at least one variable produces a better fit, return to Step Two; if no variable produces a better fit, use the current model.
Drawback: variables added to the model cannot be taken out.

21 Generalized Linear Models
Backward Selection
Idea: start with all variables in the model and take out one at a time.
Step One: Fit the model with all predictor variables and determine fit.
Step Two: Delete one variable at a time and determine fit.
Step Three: If the deletion of at least one variable produces a better fit, remove the variable that produces the best fit when deleted and return to Step Two; if no deletion produces a better fit, use the current model.
Drawback: variables taken out of the model cannot be added back in.

22 Generalized Linear Models
Stepwise Selection
Idea: a combination of forward and backward selection – a forward step, then a backward step.
Step One: Fit each predictor variable as a single predictor and determine fit.
Step Two: Select the variable that produces the best fit and add it to the model.
Step Three: Add each remaining predictor variable one at a time to the model and determine fit.
Step Four: Select the variable that produces the best fit and add it to the model.
Step Five: Delete each variable in the model one at a time and determine fit.
Step Six: Remove the variable that produces the best fit when deleted.
Step Seven: Return to Step Three.
Loop until no addition or deletion improves the fit.

23 Generalized Linear Models
Summary
3 Components of the GLM: Random (Y), Link Function (g(E[Y])), Systematic (x'β)
Continuous and Categorical Predictor Variables
Coding Categorical Variables – Effect and Dummy Coding
Likelihood Ratio Test for Nested Models – tests the significance of a predictor variable or set of predictor variables in the model
Model Selection – Best Subset, Forward, Backward, Stepwise

24 Generalized Linear Models Questions/Comments

25 Logistic Regression
Consider a binary response variable: a variable with two outcomes, one represented by a 1 and the other by a 0.
Examples:
Does the person have a disease? Yes or No
Who is the person voting for? McCain or Obama
Outcome of a baseball game? Win or Loss

26 Logistic Regression
Logistic Regression Example Data Set
Response Variable – Admission to Grad School (admit): 0 if admitted, 1 if not admitted
Predictor Variables:
GRE Score (gre) – continuous
University Prestige (topnotch) – 1 if prestigious, 0 otherwise
Grade Point Average (gpa) – continuous

27 Logistic Regression
First 10 Observations of the Data Set
ADMIT  GRE  TOPNOTCH  GPA
1      380  0         3.61
0      660  1         3.67
0      800  1         4.00
0      640  0         3.19
1      520  0         2.93
0      760  0         3.00
0      560  0         2.98
1      400  0         3.08
0      540  0         3.39
1      700  1         3.92

28 Logistic Regression
Consider the linear probability model
π(x_i) = x_i β
where y_i = response for observation i, x_i = 1×(p+1) vector of covariates for observation i, and p = number of covariates.
GLM with binomial random component and identity link g(μ) = μ.
Issue: π(x_i) can take on values less than 0 or greater than 1, i.e. the predicted probability for some subjects falls outside the [0,1] range.

29 Logistic Regression
Consider the logistic regression model
logit(π(x_i)) = log[π(x_i)/(1-π(x_i))] = x_i β, i.e. π(x_i) = exp(x_i β)/(1+exp(x_i β))
GLM with binomial random component and logit link g(μ) = logit(μ).
Range of values for π(x_i) is 0 to 1.
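
The slides fit these models in JMP; an equivalent fit in R might look like the sketch below, assuming the course data are loaded in a data frame called admissions with columns admit, gre, gpa, and topnotch. Note the sign flip: glm() models P(admit = 1), while the JMP output below models P(Admit = 0).
# Hypothetical R equivalent of the JMP fit (assumes a data frame 'admissions').
fit <- glm(admit ~ gpa, family = binomial(link = "logit"), data = admissions)
summary(fit)
exp(coef(fit)["gpa"])   # odds ratio for a one-unit increase in GPA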

30 Logistic Regression Consider the logistic regression model And the linear probability model Then the graph of the predicted probabilities for different grade point averages: Important Note: JMP models P(Y=0) and effect coding is used for categorical variables

31 Logistic Regression
[Figure: predicted probability of admission versus GPA under the logistic and linear probability models]

32 Logistic Regression
Interpretation of Coefficient β – Odds Ratio
The odds ratio is a statistic that compares the odds of one event to the odds of another event.
Say the probability of Event 1 is π_1 and the probability of Event 2 is π_2. Then the odds ratio of Event 1 to Event 2 is:
OR = [π_1/(1-π_1)] / [π_2/(1-π_2)]
Values of the odds ratio range from 0 to infinity.
A value between 0 and 1 indicates the odds of Event 2 are greater; a value greater than 1 indicates the odds of Event 1 are greater; a value equal to 1 indicates the events are equally likely.

33 Logistic Regression
Interpretation of Coefficient β – Odds Ratio cont.
Link to logistic regression: under the model, the odds for covariates x are
π(x)/(1-π(x)) = exp(xβ)
Thus the odds ratio between two events with covariates x_1 and x_2 is
[π(x_1)/(1-π(x_1))] / [π(x_2)/(1-π(x_2))] = exp((x_1 - x_2)β)

34 Logistic Regression
Interpretation of Coefficient β – Odds Ratio cont.
Consider Event 1: Y=0 given X+1, and Event 2: Y=0 given X.
From our logistic regression model, odds(X) = exp(α + βX) and odds(X+1) = exp(α + β(X+1)).
Thus the ratio of the odds of Y=0 at X+1 and at X is
odds(X+1)/odds(X) = exp(β)
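
As a numeric check, the R snippet below plugs in the intercept and GPA estimates from the JMP fit on the next slide and verifies that the ratio of fitted odds one GPA unit apart equals exp(β):
alpha <- -4.357587                     # intercept estimate from the GPA fit below
beta  <- 1.0511087                     # GPA estimate from the GPA fit below
exp(beta)                              # 2.86
p1 <- plogis(alpha + beta * 3.0)       # fitted probability at GPA = 3.0
p2 <- plogis(alpha + beta * 2.0)       # fitted probability at GPA = 2.0
(p1 / (1 - p1)) / (p2 / (1 - p2))      # equals exp(beta) = 2.86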

35 Logistic Regression
Single Continuous Predictor Variable – GPA
Generalized Linear Model Fit
Response: Admit, Modeling P(Admit=0)
Distribution: Binomial, Link: Logit
Observations (or Sum Wgts) = 400
Whole Model Test
Model        -LogLikelihood   L-R ChiSquare   DF   Prob>ChiSq
Difference   6.50444839       13.0089         1    0.0003
Full         243.48381
Reduced      249.988259
Goodness Of Fit
Statistic   ChiSquare   DF    Prob>ChiSq
Pearson     401.1706    398   0.4460
Deviance    486.9676    398   0.0015

36 Logistic Regression
Single Continuous Predictor Variable – GPA cont.
Effect Tests
Source   DF   L-R ChiSquare   Prob>ChiSq
GPA      1    13.008897       0.0003
Parameter Estimates
Term        Estimate    Std Error   L-R ChiSquare   Prob>ChiSq   Lower CL    Upper CL
Intercept   -4.357587   1.0353175   19.117873       <.0001       -6.433355   -2.367383
GPA         1.0511087   0.2988695   13.008897       0.0003       0.4742176   1.6479411
Interpretation of the Parameter Estimate:
Exp{1.0511087} = 2.86 = odds ratio between the odds at x+1 and the odds at x, for all x.
The ratio of the odds of being admitted for a person with a 3.0 GPA versus a person with a 2.0 GPA is 2.86; equivalently, the odds for the person with the 3.0 are 2.86 times the odds for the person with the 2.0.

37 Logistic Regression
Single Categorical Predictor Variable – Top Notch
Generalized Linear Model Fit
Response: Admit, Modeling P(Admit=0)
Distribution: Binomial, Link: Logit
Observations (or Sum Wgts) = 400
Whole Model Test
Model        -LogLikelihood   L-R ChiSquare   DF   Prob>ChiSq
Difference   3.5398469        7.0797          1    0.0078
Full         246.448412
Reduced      249.988259
Goodness Of Fit
Statistic   ChiSquare   DF    Prob>ChiSq
Pearson     400.0000    398   0.4624
Deviance    492.8968    398   0.0008

38 Logistic Regression
Single Categorical Predictor Variable – Top Notch cont.
Effect Tests
Source     DF   L-R ChiSquare   Prob>ChiSq
TOPNOTCH   1    7.0796939       0.0078
Parameter Estimates
Term          Estimate    Std Error   L-R ChiSquare   Prob>ChiSq   Lower CL    Upper CL
Intercept     -0.525855   0.138217    14.446085       0.0001       -0.799265   -0.255667
TOPNOTCH[0]   -0.371705   0.138217    7.0796938       0.0078       -0.642635   -0.099011
Interpretation of the Parameter Estimate:
Exp{2*(-0.371705)} = 0.4755 = odds ratio between the odds of admittance for a student from a less prestigious university and the odds of admittance for a student from a more prestigious university (the factor of 2 comes from effect coding).
The odds of being admitted from a less prestigious university are 0.48 times the odds of being admitted from a more prestigious university.

39 Logistic Regression
Variable Selection – Likelihood Ratio Test
Consider the model with GPA, GRE, and Top Notch as predictor variables.
Generalized Linear Model Fit
Response: Admit, Modeling P(Admit=0)
Distribution: Binomial, Link: Logit
Observations (or Sum Wgts) = 400
Whole Model Test
Model        -LogLikelihood   L-R ChiSquare   DF   Prob>ChiSq
Difference   10.9234504       21.8469         3    <.0001
Full         239.064808
Reduced      249.988259
Goodness Of Fit
Statistic   ChiSquare   DF    Prob>ChiSq
Pearson     396.9196    396   0.4775
Deviance    478.1296    396   0.0029

40 Logistic Regression
Variable Selection – Likelihood Ratio Test cont.
Effect Tests
Source     DF   L-R ChiSquare   Prob>ChiSq
TOPNOTCH   1    2.2143635       0.1367
GPA        1    4.2909753       0.0383
GRE        1    5.4555484       0.0195
Parameter Estimates
Term          Estimate    Std Error   L-R ChiSquare   Prob>ChiSq   Lower CL    Upper CL
Intercept     -4.382202   1.1352224   15.917859       <.0001       -6.657167   -2.197805
TOPNOTCH[0]   -0.218612   0.1459266   2.2143635       0.1367       -0.503583   0.070142
GPA           0.6675556   0.3252593   4.2909753       0.0383       0.0356956   1.3133755
GRE           0.0024768   0.0010702   5.4555484       0.0195       0.0003962   0.0046006

41 Logistic Regression
Model Selection – Forward
Stepwise Fit
Response: Admit
Stepwise Regression Control
Prob to Enter   0.250
Prob to Leave   0.100
Direction: Forward
Rules:
Current Estimates
-LogLikelihood   RSquare
239.06481        0.0437

42 Logistic Regression
Model Selection – Forward cont.
Parameter       Estimate     nDF   Wald/Score ChiSq   "Sig Prob"
Intercept[1]    -4.3821986   1     0                  1.0000
GRE             0.00247683   1     5.356022           0.0207
GPA             0.66755511   1     4.212258           0.0401
TOPNOTCH{1-0}   0.21861181   1     2.244286           0.1341
Step History
Step   Parameter       Action    L-R ChiSquare   "Sig Prob"   RSquare   p
1      GRE             Entered   13.92038        0.0002       0.0278    2
2      GPA             Entered   5.712157        0.0168       0.0393    3
3      TOPNOTCH{1-0}   Entered   2.214363        0.1367       0.0437    4

43 Logistic Regression
Model Selection – Backward
Start by entering all variables into the model.
Stepwise Fit
Response: Admit
Stepwise Regression Control
Prob to Enter   0.250
Prob to Leave   0.100
Direction: Backward
Rules: Combine

44 Logistic Regression
Model Selection – Backward cont.
Current Estimates
-LogLikelihood   RSquare
240.17199        0.0393
Parameter       Estimate     nDF   Wald/Score ChiSq   "Sig Prob"
Intercept[1]    -4.9493751   1     0                  1.0000
GRE             0.00269068   1     6.473978           0.0109
GPA             0.75468641   1     5.576461           0.0182
TOPNOTCH{1-0}   0            1     2.259729           0.1328
Step History
Step   Parameter       Action    L-R ChiSquare   "Sig Prob"   RSquare   p
1      TOPNOTCH{1-0}   Removed   2.214363        0.1367       0.0393    3

45 Logistic Regression
Variable Selection – Stepwise
Stepwise Fit
Response: Admit
Stepwise Regression Control
Prob to Enter   0.250
Prob to Leave   0.250
Direction: Mixed
Rules: Combine
Current Estimates
-LogLikelihood   RSquare
239.06481        0.0437

46 Logistic Regression
Variable Selection – Stepwise cont.
Parameter       Estimate     nDF   Wald/Score ChiSq   "Sig Prob"
Intercept[1]    -4.3821986   1     0                  1.0000
GRE             0.00247683   1     5.356022           0.0207
GPA             0.66755511   1     4.212258           0.0401
TOPNOTCH{1-0}   0.21861181   1     2.244286           0.1341
Step History
Step   Parameter       Action    L-R ChiSquare   "Sig Prob"   RSquare   p
1      GRE             Entered   13.92038        0.0002       0.0278    2
2      GPA             Entered   5.712157        0.0168       0.0393    3
3      TOPNOTCH{1-0}   Entered   2.214363        0.1367       0.0437    4

47 Logistic Regression
Summary
Introduction to the Logistic Regression Model
Interpretation of the Parameter Estimates – β and the Odds Ratio
Variable Significance – Likelihood Ratio Test
Model Selection – Forward, Backward, Stepwise

48 Logistic Regression Questions/Comments

49 Poisson Regression
Consider a count response variable: the number of occurrences in a given time frame, with outcomes 0, 1, 2, ....
Examples:
Number of penalties during a football game
Number of customers who shop at a store on a given day
Number of car accidents at an intersection

50 Poisson Regression
Poisson Regression Example Data Set
Response Variable – Number of Days Absent (integer count)
Predictor Variables:
Gender – 1 if Female, 2 if Male
Ethnicity – 6 ethnic categories
School – 1 if School 1, 2 if School 2
Math Test Score – continuous
Language Test Score – continuous
Bilingual Status – 4 bilingual categories (levels 0-3 in the output below)

51 Poisson Regression
First 10 Observations from the Poisson Regression Example Data Set
    GENDER  ethnicity  school.1.or.2  ctbs.math.nce  ctbs.lang.nce  bilingual.status  number.days.absent
1   2       4          1              56.988830      42.45086       2                 4
2   2       4          1              37.094160      46.82059       2                 4
3   1       4          1              32.275460      43.56657       2                 2
4   1       4          1              29.056720      43.56657       2                 3
5   1       4          1              6.748048       27.24847       3                 3
6   1       4          1              61.654280      48.41482       0                 13
7   1       4          1              56.988830      40.73543       2                 11
8   2       4          1              10.390490      15.35938       2                 7
9   2       4          1              50.527950      52.11514       2                 10
10  2       6          1              49.472050      42.45086       0                 9

52 Poisson Regression
Consider the model
μ_i = x_i β
where Y_i = response for observation i, x_i = 1×(p+1) vector of covariates for observation i, p = number of covariates, and μ_i = expected number of events given x_i.
GLM with Poisson random component and identity link g(μ) = μ.
Issue: predicted values range from -∞ to +∞, but a count mean must be nonnegative.

53 Poisson Regression
Consider the Poisson log-linear model
log(μ_i) = x_i β, i.e. μ_i = exp(x_i β)
GLM with Poisson random component and log link g(μ) = log(μ).
Predicted response values fall between 0 and +∞.
In the case of a single predictor, an increase of one unit in x multiplies μ by exp(β).

54 Poisson Regression
Consider the Poisson log-linear model and the Poisson linear model. A graph of the predicted values from the two models:

55 Poisson Regression
[Figure: predicted values from the log-linear and linear Poisson models]

56 Poisson Regression
Single Continuous Predictor Variable – Math Score
> fitline <- glm(number.days.absent ~ ctbs.math.nce, data = poisson_data, family = poisson(link = log))
> summary(fitline)
Call:
glm(formula = number.days.absent ~ ctbs.math.nce, family = poisson(link = log), data = poisson_data)
Deviance Residuals:
    Min       1Q   Median       3Q      Max
-4.4451  -2.5583  -1.0842   0.6647  12.4431
Coefficients:
               Estimate Std. Error z value Pr(>|z|)
(Intercept)    2.302100   0.062776  36.671   <2e-16 ***
ctbs.math.nce -0.011568   0.001294  -8.939   <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

57 Poisson Regression
Single Continuous Predictor Variable – Math Score cont.
(Dispersion parameter for poisson family taken to be 1)
Null deviance: 2409.8 on 315 degrees of freedom
Residual deviance: 2330.6 on 314 degrees of freedom
AIC: 3196
Number of Fisher Scoring iterations: 6
Interpretation of the parameter estimate:
Exp{-0.011568} = 0.98 = multiplicative effect on the expected number of days absent for an increase of 1 in the math score.
Fabricated example – if a student with a math score of 50 is expected to miss 5 days, then a student with a math score of 51 is expected to miss 5*0.98 = 4.9 days.

58 Poisson Regression
Single Categorical Predictor Variable – Gender
> fitline <- glm(number.days.absent ~ factor(GENDER), data = poisson_data, family = poisson(link = log))
> summary(fitline)
Call:
glm(formula = number.days.absent ~ factor(GENDER), family = poisson(link = log), data = poisson_data)
Deviance Residuals:
   Min      1Q  Median      3Q     Max
-3.660  -2.755  -1.128   0.902   9.738
Coefficients:
                Estimate Std. Error z value Pr(>|z|)
(Intercept)      1.90174    0.03036  62.644  < 2e-16 ***
factor(GENDER)2 -0.31729    0.04747  -6.684 2.32e-11 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

59 Poisson Regression
Single Categorical Predictor Variable – Gender cont.
(Dispersion parameter for poisson family taken to be 1)
Null deviance: 2409.8 on 315 degrees of freedom
Residual deviance: 2364.5 on 314 degrees of freedom
AIC: 3229.9
Number of Fisher Scoring iterations: 5
Important note: the function factor(), applied to a categorical variable, uses dummy coding.
Interpretation of the parameter estimate:
Exp{-0.31729} = 0.7289 = multiplicative effect on the expected number of days absent of being male rather than female.
If a female student is expected to miss X days, then a male student is expected to miss 0.7289*X days.

60 Poisson Regression
Variable Selection – Likelihood Ratio Test
Model with all variables
> fitline <- glm(number.days.absent ~ factor(GENDER) + factor(school.1.or.2) + ctbs.math.nce + ctbs.lang.nce + factor(bilingual.status) + factor(ethnicity), data = poisson_data, family = poisson(link = log))
> summary(fitline)
Call:
glm(formula = number.days.absent ~ factor(GENDER) + factor(school.1.or.2) + ctbs.math.nce + ctbs.lang.nce + factor(bilingual.status) + factor(ethnicity), family = poisson(link = log), data = poisson_data)
Deviance Residuals:
    Min       1Q   Median       3Q      Max
-4.5222  -2.1863  -0.9622   0.7454  10.4077

61 Poisson Regression
Variable Selection – Likelihood Ratio Test
Model with all variables cont.
Coefficients:
                            Estimate Std. Error z value Pr(>|z|)
(Intercept)                 2.972325   0.424645   7.000 2.57e-12 ***
factor(GENDER)2            -0.401980   0.048954  -8.211  < 2e-16 ***
factor(school.1.or.2)2     -0.582321   0.070717  -8.235  < 2e-16 ***
ctbs.math.nce              -0.001043   0.001845  -0.565  0.57181
ctbs.lang.nce              -0.003048   0.002003  -1.521  0.12822
factor(bilingual.status)1  -0.344696   0.083754  -4.116 3.86e-05 ***
factor(bilingual.status)2  -0.282194   0.070846  -3.983 6.80e-05 ***
factor(bilingual.status)3  -0.053406   0.081850  -0.652  0.51409
factor(ethnicity)2         -0.131202   0.420704  -0.312  0.75515
factor(ethnicity)3         -0.434061   0.418013  -1.038  0.29909
factor(ethnicity)4         -0.326230   0.419158  -0.778  0.43639
factor(ethnicity)5         -0.876270   0.416398  -2.104  0.03534 *
factor(ethnicity)6         -1.188835   0.457470  -2.599  0.00936 **
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

62 Poisson Regression
Variable Selection – Likelihood Ratio Test
Model with all variables cont.
(Dispersion parameter for poisson family taken to be 1)
Null deviance: 2409.8 on 315 degrees of freedom
Residual deviance: 1909.2 on 303 degrees of freedom
AIC: 2796.6
Number of Fisher Scoring iterations: 6

63 Poisson Regression
Variable Selection – Likelihood Ratio Test
Model with all variables except Ethnicity
> fitline <- glm(number.days.absent ~ factor(GENDER) + factor(school.1.or.2) + ctbs.math.nce + ctbs.lang.nce + factor(bilingual.status), data = poisson_data, family = poisson(link = log))
> summary(fitline)
Call:
glm(formula = number.days.absent ~ factor(GENDER) + factor(school.1.or.2) + ctbs.math.nce + ctbs.lang.nce + factor(bilingual.status), family = poisson(link = log), data = poisson_data)
Deviance Residuals:
    Min       1Q   Median       3Q      Max
-4.6955  -2.3130  -0.9115   0.7527  11.4247

64 Poisson Regression
Variable Selection – Likelihood Ratio Test
Model with all variables except Ethnicity cont.
Coefficients:
                             Estimate Std. Error z value Pr(>|z|)
(Intercept)                 2.5741133  0.0838754  30.690  < 2e-16 ***
factor(GENDER)2            -0.4212841  0.0484383  -8.697  < 2e-16 ***
factor(school.1.or.2)2     -0.8242109  0.0570241 -14.454  < 2e-16 ***
ctbs.math.nce               0.0008193  0.0018278   0.448  0.65398
ctbs.lang.nce              -0.0050753  0.0019380  -2.619  0.00882 **
factor(bilingual.status)1  -0.3080131  0.0762534  -4.039 5.36e-05 ***
factor(bilingual.status)2  -0.1815997  0.0581877  -3.121  0.00180 **
factor(bilingual.status)3   0.0363656  0.0686396   0.530  0.59625
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

65 Poisson Regression
Variable Selection – Likelihood Ratio Test
Model with all variables except Ethnicity cont.
(Dispersion parameter for poisson family taken to be 1)
Null deviance: 2409.8 on 315 degrees of freedom
Residual deviance: 1984.1 on 308 degrees of freedom
AIC: 2861.5
Number of Fisher Scoring iterations: 6

66 Poisson Regression
Variable Selection – Likelihood Ratio Test
Model 1 with all variables – Deviance = 1909.2 with df = 303
Model 2 without Ethnicity – Deviance = 1984.1 with df = 308
Likelihood Ratio Test statistic = Deviance(Model 2) - Deviance(Model 1) = 1984.1 - 1909.2 = 74.9
The statistic is chi-squared with 308 - 303 = 5 degrees of freedom; p-value < .0001.
There is significant evidence to conclude that ethnicity is a significant predictor variable.
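
The same p-value can be computed directly in R from the two deviances reported above:
lrt <- 1984.1 - 1909.2                            # difference in deviances = 74.9
pchisq(lrt, df = 308 - 303, lower.tail = FALSE)   # p-value, approx. 1e-14, i.e. < .0001
# With both fitted models in hand: anova(fit2, fit1, test = "Chisq") gives the same test.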

67 Poisson Regression
Model Selection
Forward Selection
> fitline <- glm(number.days.absent ~ 1, data = data1, family = poisson(link = log))
> step(fitline, scope = list(upper = ~factor(GENDER) + factor(school.1.or.2) + ctbs.math.nce + ctbs.lang.nce + factor(bilingual.status) + factor(ethnicity), lower = ~1), direction = "forward")
Start: AIC=3273.22
number.days.absent ~ 1
                           Df Deviance    AIC
+ factor(school.1.or.2)     1   2103.7 2969.1
+ factor(ethnicity)         5   2095.9 2969.3
+ ctbs.lang.nce             1   2311.7 3177.0
+ ctbs.math.nce             1   2330.6 3196.0
+ factor(bilingual.status)  3   2339.2 3208.6
+ factor(GENDER)            1   2364.5 3229.9
<none>                          2409.8 3273.2

68 Poisson Regression
Model Selection
Forward Selection cont.
Step: AIC=2969.12
number.days.absent ~ factor(school.1.or.2)
                           Df Deviance    AIC
+ factor(ethnicity)         5   2018.7 2894.1
+ factor(GENDER)            1   2029.3 2896.7
+ factor(bilingual.status)  3   2066.0 2937.4
+ ctbs.lang.nce             1   2092.7 2960.1
+ ctbs.math.nce             1   2096.7 2964.1
<none>                          2103.7 2969.1

69 Poisson Regression
Model Selection
Forward Selection cont.
Step: AIC=2894.07
number.days.absent ~ factor(school.1.or.2) + factor(ethnicity)
                           Df Deviance    AIC
+ factor(GENDER)            1   1951.3 2828.7
+ factor(bilingual.status)  3   1981.6 2863.0
+ ctbs.math.nce             1   2011.1 2888.5
+ ctbs.lang.nce             1   2012.5 2889.9
<none>                          2018.7 2894.1
Step: AIC=2828.67
number.days.absent ~ factor(school.1.or.2) + factor(ethnicity) + factor(GENDER)
                           Df Deviance    AIC
+ factor(bilingual.status)  3   1915.3 2798.8
+ ctbs.lang.nce             1   1938.5 2817.8
+ ctbs.math.nce             1   1942.3 2821.7
<none>                          1951.3 2828.7

70 Poisson Regression
Model Selection
Forward Selection cont.
Step: AIC=2798.75
number.days.absent ~ factor(school.1.or.2) + factor(ethnicity) + factor(GENDER) + factor(bilingual.status)
                Df Deviance    AIC
+ ctbs.lang.nce  1   1909.5 2794.9
+ ctbs.math.nce  1   1911.5 2796.9
<none>               1915.3 2798.8
Step: AIC=2794.89
number.days.absent ~ factor(school.1.or.2) + factor(ethnicity) + factor(GENDER) + factor(bilingual.status) + ctbs.lang.nce
                Df Deviance    AIC
<none>               1909.5 2794.9
+ ctbs.math.nce  1   1909.2 2796.6

71 Poisson Regression
Model Selection
Forward Selection cont.
Call: glm(formula = number.days.absent ~ factor(school.1.or.2) + factor(ethnicity) + factor(GENDER) + factor(bilingual.status) + ctbs.lang.nce, family = poisson(link = log), data = data1)
Coefficients:
              (Intercept)   factor(school.1.or.2)2       factor(ethnicity)2       factor(ethnicity)3       factor(ethnicity)4
                 2.948689                -0.586678                -0.126806                -0.423376                -0.313360
       factor(ethnicity)5       factor(ethnicity)6          factor(GENDER)2  factor(bilingual.status)1  factor(bilingual.status)2
                -0.862743                -1.175574                -0.404215                  -0.343907                  -0.284027
factor(bilingual.status)3            ctbs.lang.nce
                -0.051558                -0.003763
Degrees of Freedom: 315 Total (i.e. Null); 304 Residual
Null Deviance: 2410
Residual Deviance: 1909   AIC: 2795

72 Poisson Regression
Model Selection cont.
Backward Selection
> fitline <- glm(number.days.absent ~ factor(GENDER) + factor(school.1.or.2) + ctbs.math.nce + ctbs.lang.nce + factor(bilingual.status) + factor(ethnicity), data = poisson_data, family = poisson(link = log))
> backwards <- step(fitline, direction = "backward")
Start: AIC=2796.57
number.days.absent ~ factor(GENDER) + factor(school.1.or.2) + ctbs.math.nce + ctbs.lang.nce + factor(bilingual.status) + factor(ethnicity)
                           Df Deviance    AIC
- ctbs.math.nce             1   1909.5 2794.9
<none>                          1909.2 2796.6
- ctbs.lang.nce             1   1911.5 2796.9
- factor(bilingual.status)  3   1937.8 2819.2
- factor(ethnicity)         5   1984.1 2861.5
- factor(GENDER)            1   1977.8 2863.2
- factor(school.1.or.2)     1   1983.6 2869.0

73 Poisson Regression
Model Selection cont.
Backward Selection cont.
Step: AIC=2794.89
number.days.absent ~ factor(GENDER) + factor(school.1.or.2) + ctbs.lang.nce + factor(bilingual.status) + factor(ethnicity)
                           Df Deviance    AIC
<none>                          1909.5 2794.9
- ctbs.lang.nce             1   1915.3 2798.8
- factor(bilingual.status)  3   1938.5 2817.8
- factor(ethnicity)         5   1984.3 2859.7
- factor(GENDER)            1   1979.4 2862.8
- factor(school.1.or.2)     1   1986.5 2869.9

74 Poisson Regression
Model Selection cont.
Stepwise Selection
> fitline <- glm(number.days.absent ~ 1, data = data1, family = poisson(link = log))
> step(fitline, scope = list(upper = ~factor(GENDER) + factor(school.1.or.2) + ctbs.math.nce + ctbs.lang.nce + factor(bilingual.status) + factor(ethnicity), lower = ~1), direction = "both")
Start: AIC=3273.22
number.days.absent ~ 1
                           Df Deviance    AIC
+ factor(school.1.or.2)     1   2103.7 2969.1
+ factor(ethnicity)         5   2095.9 2969.3
+ ctbs.lang.nce             1   2311.7 3177.0
+ ctbs.math.nce             1   2330.6 3196.0
+ factor(bilingual.status)  3   2339.2 3208.6
+ factor(GENDER)            1   2364.5 3229.9
<none>                          2409.8 3273.2

75 Poisson Regression
Model Selection cont.
Stepwise Selection cont.
Step: AIC=2969.12
number.days.absent ~ factor(school.1.or.2)
                           Df Deviance    AIC
+ factor(ethnicity)         5   2018.7 2894.1
+ factor(GENDER)            1   2029.3 2896.7
+ factor(bilingual.status)  3   2066.0 2937.4
+ ctbs.lang.nce             1   2092.7 2960.1
+ ctbs.math.nce             1   2096.7 2964.1
<none>                          2103.7 2969.1
- factor(school.1.or.2)     1   2409.8 3273.2

76 Poisson Regression
Model Selection cont.
Stepwise Selection cont.
Step: AIC=2894.07
number.days.absent ~ factor(school.1.or.2) + factor(ethnicity)
                           Df Deviance    AIC
+ factor(GENDER)            1   1951.3 2828.7
+ factor(bilingual.status)  3   1981.6 2863.0
+ ctbs.math.nce             1   2011.1 2888.5
+ ctbs.lang.nce             1   2012.5 2889.9
<none>                          2018.7 2894.1
- factor(ethnicity)         5   2103.7 2969.1
- factor(school.1.or.2)     1   2095.9 2969.3

77 Poisson Regression
Model Selection cont.
Stepwise Selection cont.
Step: AIC=2828.67
number.days.absent ~ factor(school.1.or.2) + factor(ethnicity) + factor(GENDER)
                           Df Deviance    AIC
+ factor(bilingual.status)  3   1915.3 2798.8
+ ctbs.lang.nce             1   1938.5 2817.8
+ ctbs.math.nce             1   1942.3 2821.7
<none>                          1951.3 2828.7
- factor(GENDER)            1   2018.7 2894.1
- factor(ethnicity)         5   2029.3 2896.7
- factor(school.1.or.2)     1   2050.5 2925.9

78 Poisson Regression
Model Selection cont.
Stepwise Selection cont.
Step: AIC=2798.75
number.days.absent ~ factor(school.1.or.2) + factor(ethnicity) + factor(GENDER) + factor(bilingual.status)
                           Df Deviance    AIC
+ ctbs.lang.nce             1   1909.5 2794.9
+ ctbs.math.nce             1   1911.5 2796.9
<none>                          1915.3 2798.8
- factor(bilingual.status)  3   1951.3 2828.7
- factor(GENDER)            1   1981.6 2863.0
- factor(ethnicity)         5   1993.4 2866.8
- factor(school.1.or.2)     1   2003.4 2884.8

79 Poisson Regression
Model Selection cont.
Stepwise Selection cont.
Step: AIC=2794.89
number.days.absent ~ factor(school.1.or.2) + factor(ethnicity) + factor(GENDER) + factor(bilingual.status) + ctbs.lang.nce
                           Df Deviance    AIC
<none>                          1909.5 2794.9
+ ctbs.math.nce             1   1909.2 2796.6
- ctbs.lang.nce             1   1915.3 2798.8
- factor(bilingual.status)  3   1938.5 2817.8
- factor(ethnicity)         5   1984.3 2859.7
- factor(GENDER)            1   1979.4 2862.8
- factor(school.1.or.2)     1   1986.5 2869.9

80 Poisson Regression
Model Selection cont.
Stepwise Selection cont.
Call: glm(formula = number.days.absent ~ factor(school.1.or.2) + factor(ethnicity) + factor(GENDER) + factor(bilingual.status) + ctbs.lang.nce, family = poisson(link = log), data = data1)
Coefficients:
              (Intercept)   factor(school.1.or.2)2       factor(ethnicity)2       factor(ethnicity)3       factor(ethnicity)4
                 2.948689                -0.586678                -0.126806                -0.423376                -0.313360
       factor(ethnicity)5       factor(ethnicity)6          factor(GENDER)2  factor(bilingual.status)1  factor(bilingual.status)2
                -0.862743                -1.175574                -0.404215                  -0.343907                  -0.284027
factor(bilingual.status)3            ctbs.lang.nce
                -0.051558                -0.003763
Degrees of Freedom: 315 Total (i.e. Null); 304 Residual
Null Deviance: 2410
Residual Deviance: 1909   AIC: 2795

81 Poisson Regression
Let's look back at the Poisson log-linear model. Taking the sample mean and sample standard deviation of the response for intervals of math scores:
Math Score   Sample Mean   Sample Standard Deviation
0-20         11.66666667   10.64397095
20-40        6.453333333   6.595029523
40-60        5.270072993   7.382913152
60-80        4.324675325   5.434881392
80-100       9.666666667   14.50861813
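
A sketch of how this table can be computed in R, assuming the course data frame poisson_data used in the fits above:
# Bin math scores and summarize the response within each bin.
bins <- cut(poisson_data$ctbs.math.nce, breaks = c(0, 20, 40, 60, 80, 100))
tapply(poisson_data$number.days.absent, bins, mean)   # sample means by interval
tapply(poisson_data$number.days.absent, bins, sd)     # sample standard deviations by interval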

82 Poisson Regression
Overdispersion for Poisson Regression Models
For Y_i ~ Poisson(λ_i), E[Y_i] = Var[Y_i] = λ_i.
Here the variance of the response is much larger than the mean; this extra variance is known as overdispersion.
Consequences: parameter estimates are still consistent, but standard errors are inconsistent.
Remedy: negative binomial model (a sketch follows below).
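
A sketch of the negative binomial remedy using glm.nb() from the MASS package (shipped with R), refitting the stepwise-selected model on the assumed course data frame poisson_data:
library(MASS)
fitnb <- glm.nb(number.days.absent ~ factor(GENDER) + factor(school.1.or.2) +
                  ctbs.lang.nce + factor(bilingual.status) + factor(ethnicity),
                data = poisson_data)
summary(fitnb)   # estimates theta; the NB variance is Var[Y] = mu + mu^2/theta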

83 Poisson Regression
Summary
Introduction to the Poisson Regression Model
Interpretation of β
Variable Significance – Likelihood Ratio Test
Model Selection – Forward, Backward, Stepwise
Overdispersion

84 Poisson Regression Questions/Comments

