
1
1 Discrete and Categorical Data William N. Evans Department of Economics/MPRC University of Maryland

2
2 Part I Introduction

3 The workhorse statistical model in the social sciences is the multivariate regression model
Ordinary least squares (OLS)
y i = β 0 + x 1i β 1 + x 2i β 2 + … + x ki β k + ε i
y i = x i β + ε i

4 Linear model
y i = α + βx i + ε i
α and β are "population" values – they represent the true relationship between x and y
Unfortunately, these values are unknown
The job of the researcher is to estimate these values
Notice that if we differentiate y with respect to x, we obtain dy/dx = β

5 β represents how much y will change for a fixed change in x
–Increase in income for more education
–Change in crime or bankruptcy when slots are legalized
–Increase in test score if you study more

6
6 Put some concreteness on the problem State of Maryland budget problems –Drop in revenues –Expensive k-12 school spending initiatives Short-term solution – raise tax on cigarettes by 34 cents/pack Problem – a tax hike will reduce consumption of taxable product Question for state – as taxes are raised, how much will cigarette consumption fall?

7 Simple model: y i = α + βx i + ε i
Suppose y is a state's per capita consumption of cigarettes
x represents taxes on cigarettes
Question – how much will y fall if x is increased by 34 cents/pack?
Problem – many reasons why people smoke – cost is but one of them

8 Data
–(Y) State per capita cigarette consumption for the years 1980-1997
–(X) Tax (state + federal) in real cents per pack
–"Scatter plot" of the data
–Negative covariance between the variables: when x is above its mean, y is more likely to be below its mean; when x is below its mean, y is more likely to be above its mean
Goal: pick values of α and β that "best fit" the data
–Define best fit in a moment

9 Notation
True model: y i = α + βx i + ε i
We observe data points (y i, x i )
The parameters α and β are unknown
The actual error (ε i ) is unknown
Estimated model: (a, b) are estimates for the parameters (α, β)
e i is an estimate of ε i, where e i = y i – a – bx i
How do you estimate a and b?

10 Objective: minimize the sum of squared errors
Min Σ i e i 2 = Σ i (y i – a – bx i ) 2
Minimize the sum of squared errors (SSE)
Treat (+) and (–) errors equally
–Over- or under-predicting by "5" is the same magnitude of error
–"Quadratic form"
–The optimal values for a and b are those that make the 1st derivatives equal zero
–Functions reach min or max values when derivatives are zero
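Setting those derivatives to zero gives the familiar closed-form solutions b = Σ i (x i – x̄)(y i – ȳ) / Σ i (x i – x̄) 2 and a = ȳ – bx̄. A minimal Python sketch of the calculation (the data here are made up for illustration, not the cigarette data):

```python
# Closed-form OLS for y = a + b*x, obtained by setting the
# derivatives of the sum of squared errors to zero.
def ols(x, y):
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    # slope: sum (xi - xbar)(yi - ybar) / sum (xi - xbar)^2
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
    a = ybar - b * xbar  # intercept from the first-order condition
    return a, b

# Example: data generated from y = 2 + 3x with no error,
# so OLS should recover a = 2 and b = 3 exactly.
a, b = ols([0, 1, 2, 3], [2, 5, 8, 11])
```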


13 The model has a lot of nice features
–Statistical properties are easy to establish
–Optimal estimates are easy to obtain
–Parameter estimates are easy to interpret
–Model maximizes prediction: if you minimize SSE you maximize R 2
The model does well as a first-order approximation to lots of problems

14 Discrete and Qualitative Data
The OLS model works well when y is a continuous variable
–Income, wages, test scores, weight, GDP
Does not have as many nice properties when y is not continuous
Example: doctor visits
–Integer values
–Low counts for most people
–Mass of observations at zero

15 Downside of forcing non-standard outcomes into the OLS world?
Can predict outside the allowable range
–e.g., negative MD visits
Does not describe the data generating process well
–e.g., mass of observations at zero
Violates many properties of OLS
–e.g., heteroskedasticity

16 This talk
Look at situations when the data generating process does not lend itself well to OLS models
Mathematically describe the data generating process
Show how we use a different optimization procedure to obtain estimates
Describe the statistical properties

17 Show how to interpret the parameters
Illustrate how to estimate the models with the popular program STATA

18 Types of data generating processes we will consider
Dichotomous events (yes or no)
–1=yes, 0=no
–Graduate high school? Work? Are obese? Smoke?
Ordinal data
–Self-reported health (poor, fair, good, excellent)
–Strongly disagree, disagree, agree, strongly agree

19 Count data
–Doctor visits, lost workdays, fatality counts
Duration data
–Time to failure, time to death, time to re-employment

20 Recommended Textbooks
Jeffrey Wooldridge, "Econometric Analysis of Cross Section and Panel Data"
–Lots of insight and mathematical/statistical detail
–Very good examples
William Greene, "Econometric Analysis"
–More topics
–Somewhat dated examples

21
21 Course web page www.bsos.umd.edu/econ/evans/jpsm.html Contains –These notes –All STATA programs and data sets –A couple of “Introduction to STATA” handouts –Links to some useful web sites

22 STATA Resources – Discrete Outcomes
"Regression Models for Categorical Dependent Variables Using STATA"
–J. Scott Long and Jeremy Freese
Available for sale from the STATA website for $52 (www.stata.com)
Post-estimation subroutines that translate results
–Do not need to buy the book to use the subroutines

23 In the STATA command line, type
net search spost
Will give you a list of available programs to download
One is spostado, from http://www.indiana.edu/~jslsoc/stata
Click on the link and install the files

24
24 Part II A brief introduction to STATA

25
25 STATA Very fast, convenient, well-documented, cheap and flexible statistical package Excellent for cross-section/panel data projects, not as great for time series but getting better Not as easy to manipulate large data sets from flat files as SAS I usually clean data in SAS, estimate models in STATA

26
26 Key characteristic of STATA –All data must be loaded into RAM –Computations are very fast –But, size of the project is limited by available memory Results can be generated two different ways –Command line –Write a program, (*.do) then submit from the command line

27 Sample program to get you started
cps87_or.do
Program gets you to the point where you can
–Load data into memory
–Construct new variables
–Get simple statistics
–Run a basic regression
–Store the results on disk

28 Data (cps87_or.dta)
Random sample of data from the 1987 Current Population Survey outgoing rotation group
Sample selection
–Males
–Ages 21-64
–Working 30+ hours/week
19,906 observations

29 Major caveat
Hardest thing to learn/do: get data from some other source into a STATA data set
We skip over that part
All the data sets are loaded into a STATA data file that can be called by saying:
use data-file-name

30 Housekeeping at the top of the program
* this line defines the semicolon as the;
* end of line delimiter;
# delimit ;
* set memory for 10 meg;
set memory 10m;
* write results to a log file;
* the replace option writes over old;
* log files;
log using cps87_or.log, replace;
* open stata data set;
use c:\bill\stata\cps87_or;
* list variables and labels in data set;
desc;

31
31 ------------------------------------------------------------------------------ > - storage display value variable name type format label variable label ------------------------------------------------------------------------------ > - age float %9.0g age in years race float %9.0g 1=white, non-hisp, 2=place, n.h, 3=hisp educ float %9.0g years of education unionm float %9.0g 1=union member, 2=otherwise smsa float %9.0g 1=live in 19 largest smsa, 2=other smsa, 3=non smsa region float %9.0g 1=east, 2=midwest, 3=south, 4=west earnwke float %9.0g usual weekly earnings ------------------------------------------------------------------------------

32 Constructing new variables
Use the 'gen' command to generate new variables
Syntax
–gen new-variable-name = math statement
Easily construct new variables via
–Algebraic operations
–Math/trig functions (ln, exp, etc.)
–Logical operators (when true, =1; when false, =0)

33 From program
* generate new variables;
* lines 1-2 illustrate basic math functions;
* lines 3-4 illustrate logical operators;
* line 5 illustrates the OR statement;
* line 6 illustrates the AND statement;
* after you construct new variables, compress the data again;
gen age2=age*age;
gen earnwkl=ln(earnwke);
gen union=unionm==1;
gen topcode=earnwke==999;
gen nonwhite=((race==2)|(race==3));
gen big_ne=((region==1)&(smsa==1));

34
34 Getting basic statistics desc -- describes variables in the data set sum – gets summary statistics tab – produces frequencies (tables) of discrete variables

35 From program
* get descriptive statistics;
sum;
* get detailed descriptives for continuous variables;
sum earnwke, detail;
* get frequencies of discrete variables;
tabulate unionm;
tabulate race;
* get two-way table of frequencies;
tabulate region smsa, row column cell;

36 Results from sum
Variable |   Obs       Mean    Std. Dev.   Min   Max
---------+-------------------------------------------
age      | 19906   37.96619    11.15348     21    64
race     | 19906   1.199136     .525493      1     3
educ     | 19906   13.16126    2.795234      0    18
unionm   | 19906   1.769065    .4214418      1     2
smsa     | 19906   1.908369    .7955814      1     3
---------+-------------------------------------------

37 Detailed summary
usual weekly earnings
-------------------------------------------------------------
      Percentiles   Smallest
 1%       128           60
 5%       178           60
10%       210           60       Obs          19906
25%       300           63       Sum of Wgt.  19906
50%       449                    Mean         488.264
         Largest                 Std. Dev.    236.4713
75%       615          999
90%       865          999       Variance     55918.7
95%       999          999       Skewness     .668646
99%       999          999       Kurtosis     2.632356

38 Results for tab
1=union     |
member,     |
2=otherwise |  Freq.   Percent    Cum.
------------+-----------------------------------
          1 |  4,597     23.09    23.09
          2 | 15,309     76.91   100.00
------------+-----------------------------------
      Total | 19,906    100.00

39 Two-way table
1=east,    | 1=live in 19 largest smsa,
2=midwest, | 2=other smsa, 3=non smsa
3=south,   |
4=west     |       1        2        3 |   Total
-----------+---------------------------+----------
         1 |   2,806    1,349      842 |   4,997
           |   56.15    27.00    16.85 |  100.00
           |   38.46    18.89    15.39 |   25.10
           |   14.10     6.78     4.23 |   25.10
-----------+---------------------------+----------
         2 |   1,501    1,742    1,592 |   4,835
           |   31.04    36.03    32.93 |  100.00
           |   20.58    24.40    29.10 |   24.29
           |    7.54     8.75     8.00 |   24.29
-----------+---------------------------+----------
         3 |   1,501    2,542    1,904 |   5,947
           |   25.24    42.74    32.02 |  100.00
           |   20.58    35.60    34.80 |   29.88
           |    7.54    12.77     9.56 |   29.88
-----------+---------------------------+----------
         4 |   1,487    1,507    1,133 |   4,127
           |   36.03    36.52    27.45 |  100.00
           |   20.38    21.11    20.71 |   20.73
           |    7.47     7.57     5.69 |   20.73
-----------+---------------------------+----------
     Total |   7,295    7,140    5,471 |  19,906
           |   36.65    35.87    27.48 |  100.00
           |  100.00   100.00   100.00 |  100.00
           |   36.65    35.87    27.48 |  100.00

40 Running a regression
Syntax:
reg dependent-variable independent-variables
Example from program:
* run simple regression;
reg earnwkl age age2 educ nonwhite union;

41
      Source |       SS       df       MS              Number of obs =   19906
-------------+------------------------------           F(  5, 19900) = 1775.70
       Model |  1616.39963     5  323.279927           Prob > F      =  0.0000
    Residual |  3622.93905 19900  .182057239           R-squared     =  0.3085
-------------+------------------------------           Adj R-squared =  0.3083
       Total |  5239.33869 19905  .263217216           Root MSE      =  .42668

------------------------------------------------------------------------------
     earnwkl |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
         age |   .0679808   .0020033    33.93   0.000     .0640542    .0719075
        age2 |  -.0006778   .0000245   -27.69   0.000    -.0007258   -.0006299
        educ |    .069219   .0011256    61.50   0.000     .0670127    .0714252
    nonwhite |  -.1716133   .0089118   -19.26   0.000    -.1890812   -.1541453
       union |   .1301547   .0072923    17.85   0.000     .1158613    .1444481
       _cons |   3.630805   .0394126    92.12   0.000     3.553553    3.708057
------------------------------------------------------------------------------

42 Analysis of variance
R 2 = .3085
–Variables explain 31% of the variation in log weekly earnings
F(5, 19900)
–Tests the hypothesis that all covariates (except the constant) are jointly zero

43 Interpret results
Y = β 0 + β 1 X i + ε i
dY/dX = β 1
But in this case Y = ln(W), where W is weekly wages
dln(W)/dX = (dW/W)/dX = β 1
–Percentage change in wages given a change in x

44
44 For each additional year of education, wages increase by 6.9% Non whites earn 17.2% less than whites Union members earn 13% more than nonunion members
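Strictly speaking, a log-wage coefficient b implies an exact percent change of exp(b) – 1; the rule of thumb above treats b itself as the percent change, which is accurate only for small b. A quick Python check using the coefficients from the regression:

```python
import math

# Exact percent change implied by a log-wage coefficient: exp(b) - 1.
# For small coefficients this is close to the coefficient itself.
def pct_change(b):
    return math.exp(b) - 1.0

educ_pct  = pct_change(0.0692)    # ~ +7.2%, close to the 6.9% rule of thumb
union_pct = pct_change(0.1302)    # ~ +13.9%
nonwh_pct = pct_change(-0.1716)   # ~ -15.8%, vs the -17.2% rule of thumb
```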

45
45 Part III Some notes about probability distributions

46 Continuous Distributions
Random variables with an infinite number of possible values
Examples – units of measure (time, weight, distance)
Many discrete outcomes can be treated as continuous, e.g., SAT scores

47 How to describe a continuous random variable
The Probability Density Function (PDF)
The PDF for a random variable x is defined as f(x), where
f(x) ≥ 0
∫ f(x)dx = 1
Calculus review: the integral of a function gives the "area under the curve"


49 Cumulative Distribution Function (CDF)
Suppose x is a "measure" like distance or time, with 0 ≤ x ≤ 4
We may be interested in Pr(x ≤ a)

50
50 CDF What if we consider all values?

51 Properties of CDF
Note that Pr(x ≤ b) + Pr(x > b) = 1
Pr(x > b) = 1 – Pr(x ≤ b)
Many times, it is easier to work with complements

52
52 General notation for continuous distributions The PDF is described by lower case such as f(x) The CDF is defined as upper case such as F(a)

53
53 Standard Normal Distribution Most frequently used continuous distribution Symmetric “bell-shaped” distribution As we will show, the normal has useful properties Many variables we observe in the real world look normally distributed. Can translate normal into ‘standard normal’

54
54 Examples of variables that look normally distributed IQ scores SAT scores Heights of females Log income Average gestation (weeks of pregnancy) As we will show in a few weeks – sample means are normally distributed!!!

55 Standard Normal Distribution
PDF: φ(z) = (1/√(2π)) exp(–z 2 /2), for –∞ < z < ∞

56 Notation
φ(z) is the standard normal PDF evaluated at z
Φ[a] = Pr(z ≤ a)


58 Standard Normal
Notice that:
–Normal is symmetric: φ(a) = φ(–a)
–Normal is "unimodal"
–Median = mean
–Area under curve = 1
–Almost all of the area is between (–3, 3)
Evaluations of the CDF are done with
–Statistical functions (Excel, SAS, etc.)
–Tables

59 Standard Normal CDF
Pr(z ≤ –0.98) = Φ[–0.98] = 0.1635


61 Pr(z ≤ 1.41) = Φ[1.41] = 0.9207


63 Pr(z > 1.17) = 1 – Pr(z ≤ 1.17) = 1 – Φ[1.17] = 1 – 0.8790 = 0.1210


65 Pr(0.1 ≤ z ≤ 1.9) = Pr(z ≤ 1.9) – Pr(z ≤ 0.1) = Φ(1.9) – Φ(0.1) = 0.9713 – 0.5398 = 0.4315


69 Important Properties of Normal Distribution
Pr(z ≤ A) = Φ[A]
Pr(z > A) = 1 – Φ[A]
Pr(z ≤ –A) = Φ[–A]
Pr(z > –A) = 1 – Φ[–A] = Φ[A]
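The table lookups in the examples above can be reproduced in code. A small Python sketch using the identity Φ(z) = ½(1 + erf(z/√2)) and the standard library's math.erf:

```python
import math

# Standard normal CDF via the error function:
# Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

p1 = Phi(-0.98)           # ~ 0.1635, matching the table lookup above
p2 = Phi(1.41)            # ~ 0.9207
p3 = 1.0 - Phi(1.17)      # ~ 0.1210, using the complement rule
p4 = Phi(1.9) - Phi(0.1)  # ~ 0.4315, an interval probability
```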

70
70 Section IV Maximum likelihood estimation

71 Maximum likelihood estimation
Observe n independent outcomes, all drawn from the same distribution
(y 1, y 2, y 3 … y n )
y i is drawn from f(y i ; θ), where θ is an unknown parameter for the PDF f
Recall the definition of independence: if a and b are independent, Pr(a and b) = Pr(a)Pr(b)

72 Because all the draws are independent, the probability that these particular n values of y would be drawn at random is called the 'likelihood function' and it equals
L = Pr(y 1 )Pr(y 2 )…Pr(y n )
L = f(y 1 ; θ)f(y 2 ; θ)…f(y n ; θ)

73 MLE: pick the value of θ that best represents the chance these n values of y would have been generated randomly
To maximize L, maximize a monotonic function of L, such as ln(L)
Recall ln(abcd) = ln(a) + ln(b) + ln(c) + ln(d)

74 Max ℒ = ln(L) = ln[f(y 1 ; θ)] + ln[f(y 2 ; θ)] + … + ln[f(y n ; θ)] = Σ i ln[f(y i ; θ)]
Pick θ so that ℒ is maximized
dℒ/dθ = 0

75 [figure: the log likelihood ℒ plotted as a function of θ, with candidate values θ 1 and θ 2 ]

76 Example: Poisson
Suppose y measures 'counts', such as doctor visits
y i is drawn from a Poisson distribution
f(y i ; λ) = e –λ λ^y i / y i !  for λ > 0
E[y i ] = Var[y i ] = λ

77 Given n observations (y 1, y 2, y 3 … y n ), pick the value of λ that maximizes ℒ
Max ℒ = Σ i ln[f(y i ; λ)]
= Σ i ln[e –λ λ^y i / y i !]
= Σ i [–λ + y i ln(λ) – ln(y i !)]
= –nλ + ln(λ) Σ i y i – Σ i ln(y i !)

78 ℒ = –nλ + ln(λ) Σ i y i – Σ i ln(y i !)
dℒ/dλ = –n + (1/λ) Σ i y i = 0
Solve for λ:
λ = Σ i y i / n = ȳ = the sample mean of y
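We can confirm numerically that the sample mean maximizes the Poisson log likelihood. A small Python sketch with made-up count data (not from the lecture):

```python
import math

# Poisson log likelihood: L(lam) = sum_i [-lam + y_i*ln(lam) - ln(y_i!)]
# math.lgamma(y+1) gives ln(y!) without overflow.
def pois_loglik(lam, y):
    return sum(-lam + yi * math.log(lam) - math.lgamma(yi + 1) for yi in y)

y = [0, 1, 1, 2, 3, 5]        # made-up counts, e.g. doctor visits
lam_hat = sum(y) / len(y)     # closed-form MLE: the sample mean (= 2.0)

# The likelihood at the sample mean beats nearby candidate values
better_than_neighbors = all(
    pois_loglik(lam_hat, y) > pois_loglik(lam, y)
    for lam in (lam_hat - 0.5, lam_hat + 0.5)
)
```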

79 In most cases, however, we cannot find a 'closed form' solution for the parameter in ln[f(y i ; θ)]
Must 'search' over all possible solutions
How does the search work?
Start with a candidate value of θ
Calculate dℒ/dθ

80
80 If d L /dθ > 0, increasing θ will increase L so we increase θ some If d L /dθ < 0, decreasing θ will increase L so we decrease θ some Keep changing θ until d L /dθ = 0 How far you ‘step’ when you change θ is determined by a number of different factors
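The search rule above — step up when dℒ/dθ > 0, step down when dℒ/dθ < 0 — can be sketched in Python for the Poisson case, where the score is dℒ/dλ = –n + Σ i y i /λ. A fixed step size is used here for simplicity; real optimizers adapt the step:

```python
# One-parameter 'search' for the Poisson MLE by following the sign
# of the score dL/dlam = -n + sum(y)/lam, with a fixed step size.
def mle_search(y, lam=1.0, step=0.01, iters=5000):
    n = len(y)
    for _ in range(iters):
        score = -n + sum(y) / lam   # derivative of the log likelihood
        if abs(score) < 1e-8:
            break                   # dL/dlam ~ 0: we are at the maximum
        # move lam in the direction that increases the likelihood
        lam += step if score > 0 else -step
    return lam

y = [0, 1, 1, 2, 3, 5]
lam_hat = mle_search(y)             # should end near the sample mean, 2.0
```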

81 [figure: at θ 1, dℒ/dθ > 0, so increase θ]

82 [figure: at θ 3, dℒ/dθ < 0, so decrease θ]

83 Properties of MLE estimates
Sometimes called efficient estimation: no estimator can have a smaller variance than the one obtained by MLE
Parameter estimates are distributed as a normal distribution when sample sizes are large
Therefore, if we divide a parameter by its standard error, it should be normally distributed with mean zero and variance 1 if the null (=0) is correct

84
84 Section 5 Dichotomous outcomes

85
85 Dichotomous Data Suppose data is discrete but there are only 2 outcomes Examples –Graduate high school or not –Patient dies or not –Working or not –Smoker or not In data, y i =1 if yes, y i =0 if no

86 How to model the data generating process?
There are only two outcomes
Research question: what factors impact whether the event occurs?
To answer, we will model the probability the outcome occurs
Pr(Y i =1) when y i =1, or Pr(Y i =0) = 1 – Pr(Y i =1) when y i =0

87 Think of the problem from an MLE perspective
Likelihood for the i'th observation
L i = Pr(Y i =1)^y i [1 – Pr(Y i =1)]^(1–y i )
When y i =1, the only relevant part is Pr(Y i =1)
When y i =0, the only relevant part is [1 – Pr(Y i =1)]

88 ℒ = Σ i ln[L i ] = Σ i {y i ln[Pr(y i =1)] + (1–y i ) ln[Pr(y i =0)]}
Notice that up to this point, the model is generic
The log likelihood function will be determined by the assumptions concerning how we determine Pr(y i =1)

89 Modeling the probability
There is some process (biological, social, decision theoretic, etc.) that determines the outcome y
Some of the variables impacting the outcome are observed, some are not
Requires that we model how these factors impact the probabilities
Model from a 'latent variable' perspective

90 Consider a woman's decision to work
y i * = the person's net benefit to work
Two components of y i *
–Characteristics that we can measure: education, age, income of spouse, prices of child care
–Some we cannot measure: how much you like spending time with your kids, how much you like/hate your job

91
91 We aggregate these two components into one equation y i * = β 0 + x 1i β 1 + x 2i β 2 +… x ki β k + ε i = x i β + ε i x i β (measurable characteristics but with uncertain weights) ε i random unmeasured characteristics Decision rule: person will work if y i * > 0 (if net benefits are positive) y i =1 if y i *>0 y i =0 if y i * ≤0

92
92 y i =1 if y i *>0 y i * = x i β + ε i > 0 only if ε i > - x i β y i =0 if y i * ≤0 y i * = x i β + ε i ≤ 0 only if ε i ≤ - x i β

93
93 How to interpret ε? When we look at certain people, we have expectations about whether y should equal 1 or 0 These expectations do not always hold true The error ε represents deviations from what we expect Go back to the work example, suppose x i β is ‘big.’ We observe a woman with: –High wages –Low husband’s income –Low cost of child care

94 We would expect this person to work, UNLESS there is some unmeasured variable that counteracts this
For example:
–Suppose a mom really likes spending time with her kids, or she hates her job
–The unmeasured component ε i is then a big negative number

95 If we observe someone working, then ε i must have been large enough:
y i =1 if ε i > –x i β
Consider the opposite. If we observe someone NOT working, then ε i must have been small (or a big negative number), since
y i =0 if ε i ≤ –x i β

96
96 The Probabilities The estimation procedure used is determined by the assumed distribution of ε What is the probability we observe someone with y=1? –Use definition of the CDF –Pr(y i =1) = Pr(y i * >0) = Pr(ε i > - x i β) = 1 – F(-x i β)

97
97 What is the probability we observe someone with y=0? –Use definition of the CDF –Pr(y i =0) = Pr(y i * ≤ 0) = Pr(ε i ≤ - x i β) = F(-x i β) Two standard models: ε is either –normal or –logistic

98
98 Normal (probit) Model ε is distributed as a standard normal –Mean zero –Variance 1 Evaluate probability (y=1) –Pr(y i =1) = Pr(ε i > - x i β) = 1 – Ф(-x i β) –Given symmetry: 1 – Ф(-x i β) = Ф(x i β) Evaluate probability (y=0) –Pr(y i =0) = Pr(ε i ≤ - x i β) = Ф(-x i β) –Given symmetry: Ф(-x i β) = 1 - Ф(x i β)

99 Summary
–Pr(y i =1) = Ф(x i β)
–Pr(y i =0) = 1 – Ф(x i β)
Notice that Ф(a) is increasing in a
Therefore, if one of the x's increases the probability of observing y, we would expect the coefficient on that variable to be (+)

100 The standard normal assumption (variance=1) is not critical
In practice, the variance may not equal 1, but given the math of the problem, we cannot identify the variance
It is absorbed into the parameter estimates

101
101 Logit CDF: F(a) = exp(a)/(1+exp(a)) –Symmetric, unimodal distribution –Looks a lot like the normal –Incredibly easy to evaluate the CDF and PDF –Mean of zero, variance > 1 (more variance than normal) Evaluate probability (y=1) –Pr(y i =1) = Pr(ε i > - x i β) = 1 – F(-x i β) –Given symmetry: 1 – F(-x i β) = F(x i β) –F(x i β) = exp(x i β)/(1+exp(x i β))

102
102 Evaluate probability (y=0) –Pr(y i =0) = Pr(ε i ≤ - x i β) = F(-x i β) –Given symmetry: F(-x i β) = 1 - F(x i β) –1 - F(x i β) = 1 /(1+exp(x i β)) When ε i is a logistic distribution –Pr(y i =1) = exp(x i β)/(1+exp(x i β)) –Pr(y i =0) = 1/(1+exp(x i β))
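The logistic CDF and the two probabilities above are easy to verify in code. A minimal Python sketch (the index value xb is hypothetical):

```python
import math

# Logistic CDF: F(a) = exp(a) / (1 + exp(a))
def logit_cdf(a):
    return math.exp(a) / (1.0 + math.exp(a))

# With logistic errors: Pr(y=1) = F(x*b) and Pr(y=0) = 1/(1+exp(x*b))
xb = 0.5                          # hypothetical index value x_i * beta
p1 = logit_cdf(xb)                # Pr(y=1)
p0 = 1.0 / (1.0 + math.exp(xb))   # Pr(y=0)

# Symmetry property used in the text: 1 - F(-a) = F(a)
symmetric = abs((1.0 - logit_cdf(-xb)) - logit_cdf(xb)) < 1e-12
```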

103 Example: Workplace smoking bans
Smoking supplements to the 1991 and 1993 National Health Interview Survey
Asked all respondents whether they currently smoke
Asked workers about workplace tobacco policies
Sample: workers
Key variables: current smoking and whether they faced a workplace ban

104 Data: workplace1.dta
Sample program: workplace1.do
Results: workplace1.log

105 Description of variables in data
. desc;
variable name  storage type  display format  variable label
-------------  ------------  --------------  ----------------------------
smoker         byte          %9.0g           is current smoking
worka          byte          %9.0g           has workplace smoking bans
age            byte          %9.0g           age in years
male           byte          %9.0g           male
black          byte          %9.0g           black
hispanic       byte          %9.0g           hispanic
incomel        float         %9.0g           log income
hsgrad         byte          %9.0g           is hs graduate
somecol        byte          %9.0g           has some college
college        float         %9.0g

106 Summary statistics
sum;
Variable |   Obs       Mean     Std. Dev.       Min        Max
---------+-----------------------------------------------------
smoker   | 16258    .25163      .433963          0          1
worka    | 16258    .6851396    .4644745         0          1
age      | 16258    38.54742    11.96189        18         87
male     | 16258    .3947595    .488814          0          1
black    | 16258    .1119449    .3153083        0          1
hispanic | 16258    .0607086    .2388023        0          1
incomel  | 16258    10.42097    .7624525   6.214608   11.22524
hsgrad   | 16258    .3355271    .4721889        0          1
somecol  | 16258    .2685447    .4432161        0          1
college  | 16258    .3293763    .4700012        0          1

107
107 Running a probit probit smoker age incomel male black hispanic hsgrad somecol college worka; The first variable after ‘probit’ is the discrete outcome, the rest of the variables are the independent variables Includes a constant as a default

108
108 Running a logit logit smoker age incomel male black hispanic hsgrad somecol college worka; Same as probit, just change the first word

109 Running a linear probability model
reg smoker age incomel male black hispanic hsgrad somecol college worka, robust;
Simple regression
Standard errors are incorrect (heteroskedasticity)
The robust option produces standard errors valid under an arbitrary form of heteroskedasticity

110 Probit Results
Probit estimates                                  Number of obs =  16258
                                                  LR chi2(9)    =  819.44
                                                  Prob > chi2   =  0.0000
Log likelihood = -8761.7208                       Pseudo R2     =  0.0447

------------------------------------------------------------------------------
      smoker |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
         age |  -.0012684   .0009316    -1.36   0.173    -.0030943    .0005574
     incomel |   -.092812   .0151496    -6.13   0.000    -.1225047   -.0631193
        male |   .0533213   .0229297     2.33   0.020     .0083799    .0982627
       black |  -.1060518    .034918    -3.04   0.002      -.17449   -.0376137
    hispanic |  -.2281468   .0475128    -4.80   0.000    -.3212701   -.1350235
      hsgrad |  -.1748765   .0436392    -4.01   0.000    -.2604078   -.0893453
     somecol |   -.363869   .0451757    -8.05   0.000    -.4524118   -.2753262
     college |  -.7689528   .0466418   -16.49   0.000     -.860369   -.6775366
       worka |  -.2093287   .0231425    -9.05   0.000    -.2546873   -.1639702
       _cons |    .870543    .154056     5.65   0.000     .5685989    1.172487
------------------------------------------------------------------------------

111
111 How to measure fit? Regression (OLS) –minimize sum of squared errors –Or, maximize R 2 –The model is designed to maximize predictive capacity Not the case with Probit/Logit –MLE models pick distribution parameters so as best describe the data generating process –May or may not ‘predict’ the outcome well

112 Pseudo R 2
LL k – log likelihood with all variables
LL 1 – log likelihood with only a constant
0 > LL k > LL 1, so |LL k | < |LL 1 |
Pseudo R 2 = 1 – |LL k /LL 1 |
Bounded between 0 and 1
Not anything like an R 2 from a regression

113 Predicting Y
Let b be the estimated value of β
For any candidate vector of x i, we can predict probabilities P i
P i = Ф(x i b)
Once you have P i, pick a threshold value T so that you predict
Y p = 1 if P i > T
Y p = 0 if P i ≤ T
Then compare with the actual outcomes: the fraction correctly predicted
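A minimal Python sketch of this threshold rule (the probabilities and outcomes below are hypothetical, not from the smoking data):

```python
# Turn predicted probabilities into 0/1 predictions with threshold T,
# then compute the fraction correctly predicted.
def fraction_correct(probs, actual, T=0.5):
    pred = [1 if p > T else 0 for p in probs]
    return sum(int(p == a) for p, a in zip(pred, actual)) / len(actual)

# Hypothetical fitted probabilities and observed outcomes
probs  = [0.9, 0.8, 0.3, 0.2, 0.6]
actual = [1,   1,   0,   1,   0]
acc = fraction_correct(probs, actual)   # 3 of 5 predictions match here
```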

114 Question: what value to pick for T?
Can pick 0.5
–Intuitive: more likely to engage in the activity than to not engage in it
–However, when the sample mean of y is small, this criterion does a poor job of predicting Y i =1
–When the sample mean of y is close to 1, this criterion does a poor job of predicting Y i =0

115
* predict probability of smoking;
predict pred_prob_smoke;
* get detailed descriptive data about predicted prob;
sum pred_prob, detail;
* predict binary outcome with 50% cutoff;
gen pred_smoke1=pred_prob_smoke>=.5;
label variable pred_smoke1 "predicted smoking, 50% cutoff";
* compare actual and predicted values;
tab smoker pred_smoke1, row col cell;

116
. sum pred_prob, detail;
Pr(smoker)
-------------------------------------------------------------
      Percentiles    Smallest
 1%   .0959301      .0615221
 5%   .1155022      .0622963
10%   .1237434      .0633929      Obs          16258
25%   .1620851      .0733495      Sum of Wgt.  16258
50%   .2569962                    Mean         .2516653
         Largest                  Std. Dev.    .0960007
75%   .3187975      .5619798
90%   .3795704      .5655878      Variance     .0092161
95%   .4039573      .5684112      Skewness     .1520254
99%   .4672697      .6203823      Kurtosis     2.149247

117 Notice three things
–The sample mean of the predicted probabilities is close to the sample mean outcome
–99% of the probabilities are less than .5
–Should predict few smokers if we use a 50% cutoff

118
           | predicted smoking,
is current |    50% cutoff
   smoking |        0          1 |    Total
-----------+---------------------+----------
         0 |   12,153         14 |   12,167
           |    99.88       0.12 |   100.00
           |    74.93      35.90 |    74.84
           |    74.75       0.09 |    74.84
-----------+---------------------+----------
         1 |    4,066         25 |    4,091
           |    99.39       0.61 |   100.00
           |    25.07      64.10 |    25.16
           |    25.01       0.15 |    25.16
-----------+---------------------+----------
     Total |   16,219         39 |   16,258
           |    99.76       0.24 |   100.00
           |   100.00     100.00 |   100.00
           |    99.76       0.24 |   100.00

119
119 Check on-diagonal elements. The last number in each 2x2 element is the fraction in the cell The model correctly predicts 74.75 + 0.15 = 74.90% of the obs It only predicts a small fraction of smokers

120 Do not be amazed by the 75 percent correct prediction
If you said everyone has a p̄ chance of smoking, where p̄ is the sample mean (a case of no covariates), you would be correct Max[p̄, (1–p̄)] percent of the time

121 In this case, 25.16% smoke
If everyone had the same chance of smoking, we would assign everyone Pr(y=1) = .2516
We would then predict that no one smokes, and would be correct for the 1 – .2516 = .7484 share of people who do not smoke

122
122 Key points about prediction MLE models are not designed to maximize prediction Should not be surprised they do not predict well In this case, not particularly good measures of predictive capacity

123
123 Translating coefficients in probit: Continuous Covariates Pr(y i =1) = Φ[β 0 + x 1i β 1 + x 2i β 2 +… x ki β k ] Suppose that x 1i is a continuous variable d Pr(y i =1) /d x 1i = ? What is the change in the probability of an event give a change in x 1i?

124 Marginal Effect
dPr(y i =1)/dx 1i = β 1 φ[β 0 + x 1i β 1 + x 2i β 2 + … + x ki β k ]
Notice two things: the marginal effect is proportional to β 1, and it is a function of the other parameters and the values of x
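A small Python sketch of this formula (the value of β 1 and the index x i β are hypothetical), showing that the effect is largest when the index is near zero and shrinks in the tails of the normal:

```python
import math

# Probit marginal effect for a continuous covariate:
# dPr(y=1)/dx1 = beta1 * phi(x*beta), where phi is the standard normal PDF
def normal_pdf(z):
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def probit_marginal_effect(beta1, xb):
    return beta1 * normal_pdf(xb)

# Hypothetical beta1 = 0.5: the effect depends on where xb falls
me_center = probit_marginal_effect(0.5, 0.0)   # largest at xb = 0
me_tail   = probit_marginal_effect(0.5, 2.5)   # much smaller in the tail
```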

125
125 Translating Coefficients: Discrete Covariates Pr(y i =1) = Φ[β 0 + x 1i β 1 + x 2i β 2 +… x ki β k ] Suppose that x 2i is a dummy variable (1 if yes, 0 if no) Marginal effect makes no sense, cannot change x 2i by a little amount. It is either 1 or 0. Redefine the variable of interest. Compare outcomes with and without x 2i

126
126 y 1 = Pr(y i =1 | x 2i =1) = Φ[β 0 + x 1i β 1 + β 2 + x 3i β 3 +… ] y 0 = Pr(y i =1 | x 2i =0) = Φ[β 0 + x 1i β 1 + x 3i β 3 … ] Marginal effect = y 1 – y 0. Difference in probabilities with and without x 2i?

127 In STATA
Marginal effects for continuous variables, and
Changes in probabilities for dichotomous variables
STATA picks sample means for the X's

128 STATA command for Marginal Effects
mfx compute;
Must come after the estimation command, while the estimates are still active in the program

129 Marginal effects after probit
y = Pr(smoker) (predict) = .24093439
------------------------------------------------------------------------------
variable |    dy/dx    Std. Err.      z    P>|z|  [    95% C.I.   ]        X
---------+--------------------------------------------------------------------
     age | -.0003951    .00029     -1.36  0.173  -.000964   .000174   38.5474
 incomel | -.0289139    .00472     -6.13  0.000   -.03816  -.019668    10.421
    male*|  .0166757    .0072       2.32  0.021   .002568   .030783    .39476
   black*| -.0320621    .01023     -3.13  0.002  -.052111  -.012013   .111945
hispanic*| -.0658551    .01259     -5.23  0.000  -.090536  -.041174   .060709
  hsgrad*|  -.053335    .01302     -4.10  0.000   -.07885   -.02782   .335527
 somecol*| -.1062358    .01228     -8.65  0.000  -.130308  -.082164   .268545
 college*| -.2149199    .01146    -18.76  0.000  -.237378  -.192462   .329376
   worka*| -.0668959    .00756     -8.84  0.000   -.08172  -.052072    .68514
------------------------------------------------------------------------------
(*) dy/dx is for discrete change of dummy variable from 0 to 1

130 Interpret results
A 10% increase in income will reduce smoking by about 0.3 percentage points (the dy/dx of –.029 is for a one-unit change in log income)
A 10-year increase in age will decrease smoking rates by .4 percentage points
Those with a college degree are 21.5 percentage points less likely to smoke
Those that face a workplace smoking ban have a 6.7 percentage point lower probability of smoking

131
131 Do not confuse percentage point and percent differences –A 6.7 percentage point drop is 29% of the sample mean of 24 percent. –Blacks have smoking rates that are 3.2 percentage points lower than others, which is 13 percent of the sample mean

132 Comparing Marginal Effects
Variable      LP        Probit    Logit
age          -0.00040  -0.00048
incomel      -0.0289   -0.0287   -0.0276
male          0.0167    0.0168    0.0172
black        -0.0321   -0.0357   -0.0342
hispanic     -0.0658   -0.0706   -0.0602
hsgrad       -0.0533   -0.0661   -0.0514
college      -0.2149   -0.2406   -0.2121
worka        -0.0669   -0.0661   -0.0658

133
133 Marginal effects for specific characteristics Can generate marginal effects for a specific x prchange, x(age=40 black=0 hispanic=0 hsgrad=0 somecol=0 worka=0); If an x is not specified, STATA will use the sample mean (e.g., log income in this case) Make sure when you specify a particular dummy variable (=1) you set the rest to zero

134 probit: Changes in Predicted Probabilities for smoker
           min->max     0->1    -+1/2   -+sd/2  MargEfct
age         -0.0323  -0.0005  -0.0005  -0.0056   -0.0005
incomel     -0.1795  -0.0320  -0.0344  -0.0263   -0.0345
male         0.0198   0.0198   0.0198   0.0097    0.0198
black       -0.0385  -0.0385  -0.0394  -0.0124   -0.0394
hispanic    -0.0804  -0.0804  -0.0845  -0.0202   -0.0847
hsgrad      -0.0625  -0.0625  -0.0648  -0.0306   -0.0649
somecol     -0.1235  -0.1235  -0.1344  -0.0598   -0.1351
college     -0.2644  -0.2644  -0.2795  -0.1335   -0.2854
worka       -0.0742  -0.0742  -0.0776  -0.0361   -0.0777

135
135 Testing significance of individual parameters
In large samples, MLE estimates are normally distributed
Null hypothesis: β j = 0
If the null is true and the sample is large, the estimate of β j is distributed as a normal with variance σ j ². Using notes from before, if we divide the estimate by its standard deviation, we get a standard normal

136
136 β j /se(β j ) should be N(0,1)
β j /se(β j ) = z-score
95% of the distribution of a N(0,1) is between -1.96 and 1.96
Reject the null if |z-score| > 1.96
Only age is statistically insignificant (cannot reject the null)
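As a quick check (an illustrative sketch, not in the original slides), the z-score for age can be recomputed from the probit marginal-effects output earlier (dy/dx = -.0003951, Std. Err. = .00029):

```python
# Recompute the z-score for age and compare it to the 1.96 critical value.
dydx = -0.0003951   # marginal effect of age, from the output above
se = 0.00029        # its standard error

z = dydx / se
print(round(z, 2))    # -1.36, matching the Stata output
print(abs(z) > 1.96)  # False: cannot reject the null that the effect is zero
```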

137
137 When will results differ?
Normal and logit CDFs look:
–Similar in the midpoint of the distribution
–Different in the tails
You obtain more observations in the tails of the distribution when
–Sample sizes are large
–The mean of y approaches 1 or 0
These situations will produce more differences in estimates

138
138 Some nice properties of the Logit
Outcome, y=1 or 0
Treatment, x=1 or 0
Other covariates, z
Context,
–y = whether a baby is born with a low birth weight
–x = whether the mom smoked or not during pregnancy

139
139 Risk ratio
RR = Prob(y=1|x=1)/Prob(y=1|x=0)
The ratio of the probability of an event when x is present to when it is not
How much does smoking elevate the chance your child will be a low weight birth?

140
140 Let Y yx be the probability y=1 or 0 given x=1 or 0 Think of the risk ratio the following way Y 11 is the probability Y=1 when X=1 Y 10 is the probability Y=1 when X=0 Y 11 = RR*Y 10

141
141 Odds Ratio OR=A/B = [Y 11 /Y 01 ]/[Y 10 /Y 00 ] A = [Pr(Y=1|X=1)/Pr(Y=0|X=1)] = odds of Y occurring if you are a smoker B = [Pr(Y=1|X=0)/Pr(Y=0|X=0)] = odds of y happening if you are not a smoker What are the relative odds of Y happening if you do or do not experience X

142
142 Suppose Pr(Y i =1) = F(β o + β 1 X i + β 2 Z) and F is the logistic function Can show that OR = exp(β 1 ) = e β1 This number is typically reported by most statistical packages

143
143 Details
Y 11 = exp(β 0 + β 1 + β 2 Z)/(1 + exp(β 0 + β 1 + β 2 Z))
Y 10 = exp(β 0 + β 2 Z)/(1 + exp(β 0 + β 2 Z))
Y 01 = 1/(1 + exp(β 0 + β 1 + β 2 Z))
Y 00 = 1/(1 + exp(β 0 + β 2 Z))
[Y 11 /Y 01 ] = exp(β 0 + β 1 + β 2 Z)
[Y 10 /Y 00 ] = exp(β 0 + β 2 Z)
OR = A/B = [Y 11 /Y 01 ]/[Y 10 /Y 00 ] = exp(β 0 + β 1 + β 2 Z)/exp(β 0 + β 2 Z) = exp(β 1 )
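The algebra above can be verified numerically. This is an illustrative sketch: the coefficient values b0, b1, b2 and the covariate Z are made up, not estimates from the text.

```python
import math

# Verify numerically that the odds ratio from a logistic model equals
# exp(b1), as derived above. All coefficient values are hypothetical.
b0, b1, b2, Z = -2.0, 0.7, 0.3, 1.5

def prob(x):
    """Pr(Y=1 | X=x) under the logistic model."""
    xb = b0 + b1 * x + b2 * Z
    return math.exp(xb) / (1 + math.exp(xb))

odds_ratio = (prob(1) / (1 - prob(1))) / (prob(0) / (1 - prob(0)))
print(round(odds_ratio, 6) == round(math.exp(b1), 6))  # True
```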

144
144 Suppose Y is rare, close to 0
–Pr(Y=0|X=1) and Pr(Y=0|X=0) are both close to 1, so they cancel
Therefore, when Pr(Y=1) is close to 0
–Odds Ratio ≈ Risk Ratio
Why is this nice?

145
145 Population attributable risk
Let π be the fraction of the population with X=1 (e.g., the smoking rate)
Average outcome in the population ȳ = (1-π)Y 10 + πY 11 = (1-π)Y 10 + π(RR)Y 10
Average outcomes are a weighted average of outcomes for X=0 and X=1
What would the average outcome be in the absence of X (e.g., reduce smoking rates to 0)?
Y a = Y 10

146
146 Population Attributable Risk
PAR
Fraction of outcome attributed to X
The difference between the current rate and the rate that would exist without X, divided by the current rate
PAR = (ȳ – Y a )/ȳ = π(RR – 1)/[(1-π) + πRR]

147
147 Example: Maternal Smoking and Low Weight Births
6% of births are low weight
–< 2500 grams (about 5.5 lbs)
–Average birth is 3300 grams (about 7.3 lbs)
Maternal smoking during pregnancy has been identified as a key cofactor
–13% of mothers smoke
–This number was falling about 1 percentage point per year during the 1980s/90s
–Doubles the chance of a low weight birth

148
148 Natality detail data Census of all births (4 million/year) Annual files starting in the 60s Information about –Baby (birth weight, length, date, sex, plurality, birth injuries) –Demographics (age, race, marital, educ of mom) –Birth (who delivered, method of delivery) –Health of mom (smoke/drank during preg, weight gain)

149
149 Smoking not available from CA or NY
~3 million usable observations
I pulled a 0.5% random sample from 1995
About 12,500 obs
Variables: birth weight (grams), smoked, married, 4-level race, 5-level education, mother's age at birth

150
150

              storage  display  value
variable name   type    format  label  variable label
-------------------------------------------------------------------------------
birthw           int    %9.0g          birth weight in grams
smoked          byte    %9.0g          =1 if mom smoked during pregnancy
age             byte    %9.0g          moms age at birth
married         byte    %9.0g          =1 if married
race4           byte    %9.0g          1=white,2=black,3=asian,4=other
educ5           byte    %9.0g          1=0-8, 2=9-11, 3=12, 4=13-15, 5=16+
visits          byte    %9.0g          prenatal visits
-------------------------------------------------------------------------------

151
151

 dummy     | =1 if mom smoked
 variable, | during pregnancy
 =1 if     |
 BW<2500   |
 grams     |      0        1 |   Total
-----------+-----------------+--------
         0 | 11,626    1,745 |  13,371
           |  86.95    13.05 |  100.00
           |  94.64    89.72 |   93.96
           |  81.70    12.26 |   93.96
-----------+-----------------+--------
         1 |    659      200 |     859
           |  76.72    23.28 |  100.00
           |   5.36    10.28 |    6.04
           |   4.63     1.41 |    6.04
-----------+-----------------+--------
     Total | 12,285    1,945 |  14,230
           |  86.33    13.67 |  100.00
           | 100.00   100.00 |  100.00
           |  86.33    13.67 |  100.00

152
152 Notice a few things –13.7% of women smoke –6% have low weight birth Pr(LBW | Smoke) =10.28% Pr(LBW |~ Smoke) = 5.36% RR = Pr(LBW | Smoke)/ Pr(LBW |~ Smoke) = 0.1028/0.0536 = 1.92
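The risk ratio can be recomputed directly from the cell counts in the cross-tab; a quick sketch for checking the slide's arithmetic:

```python
# Recompute the risk ratio from the cross-tab cell counts above.
p_lbw_smoke = 200 / 1945       # Pr(LBW | smoker)
p_lbw_nosmoke = 659 / 12285    # Pr(LBW | non-smoker)

rr = p_lbw_smoke / p_lbw_nosmoke
print(round(p_lbw_smoke, 4), round(p_lbw_nosmoke, 4), round(rr, 2))
# 0.1028 0.0536 1.92
```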

153
153 Logit results

Log likelihood = -3136.9912                        Pseudo R2 = 0.0330
------------------------------------------------------------------------------
       lowbw |     Coef.   Std. Err.      z    P>|z|    [95% Conf. Interval]
-------------+----------------------------------------------------------------
      smoked |  .6740651   .0897869    7.51   0.000     .4980861   .8500441
         age |  .0080537   .006791     1.19   0.236    -.0052564   .0213638
     married | -.3954044   .0882471   -4.48   0.000    -.5683654  -.2224433
   _Ieduc5_2 | -.1949335   .1626502   -1.20   0.231    -.5137221   .1238551
   _Ieduc5_3 | -.1925099   .1543239   -1.25   0.212    -.4949791   .1099594
   _Ieduc5_4 | -.4057382   .1676759   -2.42   0.016    -.7343769  -.0770994
   _Ieduc5_5 | -.3569715   .1780322   -2.01   0.045    -.7059081  -.0080349
   _Irace4_2 |  .7072894   .0875125    8.08   0.000     .5357681   .8788107
   _Irace4_3 |  .386623    .307062     1.26   0.208    -.2152075   .9884535
   _Irace4_4 |  .3095536   .2047899    1.51   0.131    -.0918271   .7109344
       _cons | -2.755971   .2104916  -13.09   0.000    -3.168527  -2.343415
------------------------------------------------------------------------------

154
154 Odds Ratios Smoked –exp(0.674) = 1.96 –Smokers are twice as likely to have a low weight birth _Irace4_2 (Blacks) –exp(0.707) = 2.02 –Blacks are twice as likely to have a low weight birth

155
155 Asking for odds ratios
Syntax: logistic y x1 x2;
In this case:
xi: logistic lowbw smoked age married i.educ5 i.race4;

156
156

Log likelihood = -3136.9912                        Pseudo R2 = 0.0330
------------------------------------------------------------------------------
       lowbw | Odds Ratio   Std. Err.      z    P>|z|    [95% Conf. Interval]
-------------+----------------------------------------------------------------
      smoked |  1.962198    .1761796    7.51   0.000     1.645569   2.33975
         age |  1.008086    .0068459    1.19   0.236     .9947574   1.021594
     married |  .6734077    .0594262   -4.48   0.000     .5664506   .8005604
   _Ieduc5_2 |  .8228894    .1338431   -1.20   0.231     .5982646   1.131852
   _Ieduc5_3 |  .8248862    .1272996   -1.25   0.212     .6095837   1.116233
   _Ieduc5_4 |  .6664847    .1117534   -2.42   0.016     .4798043   .9257979
   _Ieduc5_5 |  .6997924    .1245856   -2.01   0.045     .4936601   .9919973
   _Irace4_2 |  2.028485    .1775178    8.08   0.000     1.70876    2.408034
   _Irace4_3 |  1.472001    .4519957    1.26   0.208     .8063741   2.687076
   _Irace4_4 |  1.362817    .2790911    1.51   0.131     .9122628   2.035893
------------------------------------------------------------------------------

157
157 PAR
PAR = π(RR – 1)/[(1-π) + πRR]
π = 0.137, RR = 1.96
PAR = 0.116
11.6% of low weight births attributed to maternal smoking
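Plugging the slide's numbers into the PAR formula (with π the smoking rate) as a quick check:

```python
# PAR = pi*(RR - 1) / ((1 - pi) + pi*RR), with pi = smoking rate.
pi_smoke, rr = 0.137, 1.96

par = pi_smoke * (rr - 1) / ((1 - pi_smoke) + pi_smoke * rr)
print(round(par, 3))  # 0.116
```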

158
158 Hypothesis Testing in MLE models
MLE estimates are asymptotically normally distributed, one of the properties of MLE
Therefore, standard t-tests of hypotheses will work as long as samples are 'large'
What 'large' means is open to question
What to do when samples are 'small' – we will table that question for a moment

159
159 Testing a linear combination of parameters
Suppose you have a probit model
Φ[β 0 + x 1i β 1 + x 2i β 2 + x 3i β 3 + … ]
Test a linear combination of parameters
Simplest example: test whether a subset of parameters is zero
β 1 = β 2 = β 3 = β 4 = 0
To fix the discussion
N observations
K parameters
J restrictions (count the equals signs, J=4)

160
160 Wald Test Based on the fact that the parameters are distributed asymptotically normal Probability theory review –Suppose you have m draws from a standard normal distribution (z i ) –M = z 1 2 + z 2 2 + …. z m 2 –M is distributed as a Chi-square with m degrees of freedom

161
161 Wald test constructs a ‘quadratic form’ suggested by the test you want to perform This combination, because it contains squares of the true parameters, should, if the hypothesis is true, be distributed as a Chi square with J degrees of freedom. If the test statistic is ‘large’, relative to the degrees of freedom of the test, we reject, because there is a low probability we would have drawn that value at random from the distribution

162
162 Reading critical values from a table
All stats books will report the 'percentiles' of a chi-square
–Vertical axis (degrees of freedom)
–Horizontal axis (percentiles)
–Entry is the value below which that percentile of the distribution falls

163
163 Example: Suppose 4 restrictions
95% of a chi-square distribution with 4 degrees of freedom falls below 9.488. So there is only a 5% chance a number drawn at random will exceed 9.488
If your test statistic is below, you cannot reject the null
If your test statistic is above, reject the null
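The 9.488 critical value can be checked without a table. For even degrees of freedom the chi-square upper tail has a closed form; this sketch uses only the standard library:

```python
import math

def chi2_upper_tail(x, df):
    """P(X > x) for a chi-square with even df:
    exp(-x/2) * sum_{k=0}^{df/2 - 1} (x/2)**k / k!"""
    assert df % 2 == 0, "closed form used here requires even df"
    return math.exp(-x / 2) * sum((x / 2) ** k / math.factorial(k)
                                  for k in range(df // 2))

# With 4 restrictions (df = 4), 9.488 leaves 5% in the upper tail
print(round(chi2_upper_tail(9.488, 4), 3))  # 0.05
```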

164
164 Chi-square

165
165 Wald test in STATA Default test in MLE models Easy to do. Look at program test hsgrad somecol college Does not estimate the ‘restricted’ model ‘Lower power’ than other tests, i.e., high chance of false negative

166
166 . test hsgrad somecol college;

( 1)  hsgrad = 0
( 2)  somecol = 0
( 3)  college = 0

          chi2(  3) =  504.78
        Prob > chi2 =   0.0000

167
167 Notice the higher value of the test statistic. There is a low chance that a variable drawn at random from a chi-square with three degrees of freedom will be this large. Reject the null

168
168 -2 Log likelihood test

* how to run the same tests with a -2 log like test;
* estimate the unrestricted model and save the estimates;
* in urmodel;
probit smoker age incomel male black hispanic hsgrad somecol college worka;
estimates store urmodel;
* estimate the restricted model. save results in rmodel;
probit smoker age incomel male black hispanic worka;
estimates store rmodel;
lrtest urmodel rmodel;

169
169 I prefer -2 log likelihood test –Estimates the restricted and unrestricted model –Therefore, has more power than a Wald test In most cases, they give the same ‘decision’ (reject/not reject)

170
170 Section VI Categorical Data

171
171 Ordered Probit
Many discrete outcomes are responses to questions that have a natural ordering but no quantitative interpretation:
Examples:
–Self reported health status (excellent, very good, good, fair, poor)
–Do you agree with the following statement: Strongly agree, agree, disagree, strongly disagree

172
172 Can use the same type of model as in the previous section to analyze these outcomes Another ‘latent variable’ model Key to the model: there is a monotonic ordering of the qualitative responses

173
173 Self reported health status Excellent, very good, good, fair, poor Coded as 1, 2, 3, 4, 5 on National Health Interview Survey We will code as 5,4,3,2,1 (easier to think of this way) Asked on every major health survey Important predictor of health outcomes, e.g. mortality Key question: what predicts health status?

174
174 Important to note – the numbers 1-5 mean nothing in terms of their value, just an ordering to show you the lowest to highest The example below is easily adapted to include categorical variables with any number of outcomes

175
175 Model
y i * = latent index of reported health
The latent index measures your own scale of health. As y i * crosses successive threshold values you report poor, then fair, then good, then very good, then excellent health

176
176 y i = (1,2,3,4,5) for (poor, fair, good, very good, excellent)
Interval decision rule
y i =1 if y i * ≤ u 1
y i =2 if u 1 < y i * ≤ u 2
y i =3 if u 2 < y i * ≤ u 3
y i =4 if u 3 < y i * ≤ u 4
y i =5 if y i * > u 4
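The interval decision rule translates directly into code. A minimal sketch with hypothetical thresholds (u1 < u2 < u3 < u4; the numbers are made up):

```python
# Map the latent index y* into the observed category 1-5 using the
# interval decision rule above. Threshold values are hypothetical.
def category(y_star, u=(0.5, 1.3, 2.2, 3.1)):
    u1, u2, u3, u4 = u
    if y_star <= u1:
        return 1
    if y_star <= u2:
        return 2
    if y_star <= u3:
        return 3
    if y_star <= u4:
        return 4
    return 5

print([category(y) for y in (-1.0, 1.0, 2.0, 3.0, 4.0)])  # [1, 2, 3, 4, 5]
```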

177
177 As with logit and probit models, we will assume y i * is a function of observed and unobserved variables y i * = β 0 + x 1i β 1 + x 2i β 2 …. x ki β k + ε i y i * = x i β + ε i

178
178 The threshold values (u 1, u 2, u 3, u 4 ) are unknown. We do not know the value of the index necessary to push you from very good to excellent. In theory, the threshold values are different for everyone Computer will not only estimate the β’s, but also the thresholds – average across people

179
179 As with probit and logit, the model will be determined by the assumed distribution of ε
In practice, most people pick the normal, generating an 'ordered probit' (I have no idea why)
We will generate the math for the probit version

180
180 Probabilities
Let's do the end points, Pr(y i =1) and Pr(y i =5), first
Pr(y i =1) = Pr(y i * ≤ u 1 ) = Pr(x i β + ε i ≤ u 1 ) = Pr(ε i ≤ u 1 - x i β) = Φ[u 1 - x i β] = 1 - Φ[x i β – u 1 ]

181
181 Pr(y i =5) = Pr(y i * > u 4 ) = Pr(x i β +ε i > u 4 ) =Pr(ε i > u 4 - x i β) = 1 - Φ[u 4 - x i β] = Φ[x i β – u 4 ]

182
182 Sample calculation for y=3
Pr(y i =3) = Pr(u 2 < y i * ≤ u 3 ) = Pr(y i * ≤ u 3 ) – Pr(y i * ≤ u 2 )
= Pr(x i β + ε i ≤ u 3 ) – Pr(x i β + ε i ≤ u 2 )
= Pr(ε i ≤ u 3 - x i β) - Pr(ε i ≤ u 2 - x i β)
= Φ[u 3 - x i β] - Φ[u 2 - x i β]
= 1 - Φ[x i β - u 3 ] – (1 - Φ[x i β - u 2 ])
= Φ[x i β - u 2 ] - Φ[x i β - u 3 ]

183
183 Summary Pr(y i =1) = 1- Φ[x i β – u 1 ] Pr(y i =2) = Φ[x i β – u 1 ] - Φ[x i β – u 2 ] Pr(y i =3) = Φ[x i β – u 2 ] - Φ[x i β – u 3 ] Pr(y i =4) = Φ[x i β – u 3 ] - Φ[x i β – u 4 ] Pr(y i =5) = Φ[x i β – u 4 ]

184
184 Likelihood function
There are 5 possible choices for each person
Only 1 is observed
L = Σ i ln[Pr(y i = k)], where k is the outcome observed for person i

185
185 Programming example Cancer control supplement to 1994 National Health Interview Survey Question: what observed characteristics predict self reported health (1-5 scale) 1=poor, 5=excellent Key covariates: income, education, age, current and former smoking status Programs sr_health_status.do,.dta,.log

186
186 desc;

male        byte   %9.0g   =1 if male
age         byte   %9.0g   age in years
educ        byte   %9.0g   years of education
smoke       byte   %9.0g   current smoker
smoke5      byte   %9.0g   smoked in past 5 years
black       float  %9.0g   =1 if respondent is black
othrace     float  %9.0g   =1 if other race (white is ref)
sr_health   float  %9.0g   1-5 self reported health, 5=excel, 1=poor
famincl     float  %9.0g   log family income

187
187 tab sr_health;

1-5 self reported |
health, 5=excel,  |
1=poor            |   Freq.   Percent     Cum.
------------------+---------------------------
                1 |     342      2.65     2.65
                2 |     991      7.68    10.33
                3 |   3,068     23.78    34.12
                4 |   3,855     29.88    64.00
                5 |   4,644     36.00   100.00
------------------+---------------------------
            Total |  12,900    100.00

188
188 In STATA oprobit sr_health male age educ famincl black othrace smoke smoke5;

189
189 Ordered probit estimates

Number of obs = 12900
LR chi2(8) = 2379.61
Prob > chi2 = 0.0000
Log likelihood = -16401.987                        Pseudo R2 = 0.0676
------------------------------------------------------------------------------
   sr_health |     Coef.   Std. Err.      z    P>|z|    [95% Conf. Interval]
-------------+----------------------------------------------------------------
        male |  .1281241   .0195747    6.55   0.000     .0897583   .1664899
         age | -.0202308   .0008499  -23.80   0.000    -.0218966   -.018565
        educ |  .0827086   .0038547   21.46   0.000     .0751535   .0902637
     famincl |  .2398957   .0112206   21.38   0.000     .2179037   .2618878
       black | -.221508    .029528    -7.50   0.000    -.2793818  -.1636341
     othrace | -.2425083   .0480047   -5.05   0.000    -.3365958  -.1484208
       smoke | -.2086096   .0219779   -9.49   0.000    -.2516855  -.1655337
      smoke5 | -.1529619   .0357995   -4.27   0.000    -.2231277  -.0827961
-------------+----------------------------------------------------------------
       _cut1 |  .4858634   .113179          (Ancillary parameters)
       _cut2 |  1.269036   .11282
       _cut3 |  2.247251   .1138171
       _cut4 |  3.094606   .1145781
------------------------------------------------------------------------------

190
190 Interpret coefficients Marginal effects/changes in probabilities are now a function of 2 things –Point of expansion (x’s) –Frame of reference for outcome (y) STATA –Picks mean values for x’s –You pick the value of y

191
191 Continuous x’s Consider y=5 d Pr(y i =5)/dx i = d Φ[x i β – u 4 ]/dx i = βφ[x i β – u 4 ] Consider y=3 d Pr(y i =3)/dx i = βφ[x i β – u 3 ] - βφ[x i β – u 4 ]

192
192 Discrete X’s x i β = β 0 + x 1i β 1 + x 2i β 2 …. x ki β k –X 2i is yes or no (1 or 0) ΔPr(y i =5) = Φ[β 0 + x 1i β 1 + β 2 + x 3i β 3 +.. x ki β k ] - Φ[β 0 + x 1i β 1 + x 3i β 3 …. x ki β k ] Change in the probabilities when x 2i =1 and x 2i =0

193
193 Ask for marginal effects mfx compute, predict(outcome(5));

194
194 mfx compute, predict(outcome(5));

Marginal effects after oprobit
y = Pr(sr_health==5) (predict, outcome(5))
  = .34103717
------------------------------------------------------------------------------
variable |    dy/dx    Std. Err.     z     P>|z|  [    95% C.I.   ]      X
---------+--------------------------------------------------------------------
   male* |  .0471251    .00722      6.53   0.000   .03298    .06127   .438062
     age | -.0074214    .00031    -23.77   0.000  -.008033  -.00681   39.8412
    educ |  .0303405    .00142     21.42   0.000   .027565  .033116   13.2402
 famincl |  .0880025    .00412     21.37   0.000   .07993   .096075   10.2131
  black* | -.0781411    .00996     -7.84   0.000  -.097665 -.058617   .124264
othrace* | -.0843227    .01567     -5.38   0.000  -.115043 -.053602    .04124
  smoke* | -.0749785    .00773     -9.71   0.000  -.09012  -.059837   .289147
 smoke5* | -.0545062    .01235     -4.41   0.000  -.078719 -.030294   .081395
------------------------------------------------------------------------------
(*) dy/dx is for discrete change of dummy variable from 0 to 1

195
195 Interpret the results Males are 4.7 percentage points more likely to report excellent Each year of age decreases chance of reporting excellent by 0.7 percentage points Current smokers are 7.5 percentage points less likely to report excellent health

196
196 Minor notes about estimation
Wald tests/-2 log likelihood tests are done the exact same way as in PROBIT and LOGIT
Tests of individual parameters are done the same way (z-score)

197
197 Use PRCHANGE to calculate marginal effect for a specific person prchange, x(age=40 black=0 othrace=0 smoke=0 smoke5=0 educ=16); –When a variable is NOT specified (famincl), STATA takes the sample mean.

198
198 PRCHANGE will produce results for all outcomes

male   Avg|Chg|          1            2            3            4           5
0->1   .0203868  -.0020257  -.00886671  -.02677558  -.01329902   .05096698

199
199 age

           Avg|Chg|          1           2           3           4
Min->Max  .13358317   .0184785   .06797072   .17686112   .07064757
-+1/2     .00321942  .00032518   .00141642   .00424452   .00206241
-+sd/2    .03728014  .00382077   .01648743   .04910323    .0237889
MargEfct  .00321947  .00032515   .00141639   .00424462   .00206252

200
200 Section VII Count Data Models

201
201 Introduction
Many outcomes of interest are integer counts
–Doctor visits
–Lost workdays
–Cigarettes smoked per day
–Missed school days
OLS models can easily handle some integer models

202
202 Example –SAT scores are essentially integer values –Few at ‘tails’ –Distribution is fairly continuous –OLS models well In contrast, suppose –High fraction of zeros –Small positive values

203
203 OLS models will –Predict negative values –Do a poor job of predicting the mass of observations at zero Example –Dr visits in past year, Medicare patients(65+) –1987 National Medical Expenditure Survey –Top code (for now) at 10 –17% have no visits

204
204

 visits |   Freq.   Percent     Cum.
--------+---------------------------
      0 |    915     17.18    17.18
      1 |    601     11.28    28.46
      2 |    533     10.01    38.46
      3 |    503      9.44    47.91
      4 |    450      8.45    56.35
      5 |    391      7.34    63.69
      6 |    319      5.99    69.68
      7 |    258      4.84    74.53
      8 |    216      4.05    78.58
      9 |    192      3.60    82.19
     10 |    949     17.81   100.00
--------+---------------------------
  Total |  5,327    100.00

205
205 Poisson Model
y i is drawn from a Poisson distribution
The Poisson parameter varies across observations
f(y i ; λ i ) = e^(-λ i ) λ i ^(y i ) / y i !  for λ i > 0
E[y i ] = Var[y i ] = λ i = f(x i , β)
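The equality of mean and variance can be checked numerically from the pmf; an illustrative sketch using the sample mean of 5.6 from the visits data:

```python
import math

# Poisson pmf f(y; lam) = exp(-lam) * lam**y / y!
def poisson_pmf(y, lam):
    return math.exp(-lam) * lam ** y / math.factorial(y)

lam = 5.6
support = range(60)  # truncation error is negligible this far into the tail
mean = sum(y * poisson_pmf(y, lam) for y in support)
var = sum((y - mean) ** 2 * poisson_pmf(y, lam) for y in support)
print(round(mean, 6), round(var, 6))  # both 5.6: E[y] = Var[y] = lam
```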

206
206 λ i must be positive at all times Therefore, we CANNOT let λ i = x i β Let λ i = exp(x i β) ln(λ i ) = (x i β)

207
207 d ln(λ i )/dx i = β Remember that d ln(λ i ) = dλ i /λ i Interpret β as the percentage change in mean outcomes for a change in x

208
208 Problems with Poisson
Variance grows with the mean
–E[y i ] = Var[y i ] = λ i = f(x i , β)
Most data sets have overdispersion, where the variance grows faster than the mean
In the dr. visits sample, the mean is 5.6 and s = 6.7
Imposing Mean = Var is a severe restriction, and it tends to understate standard errors

209
209 Negative Binomial Model Where γ i = exp(x i β) and δ ≥ 0 E[y i ] = δγ i = δexp(x i β) Var[y i ] = δ (1+δ) γ i Var[y i ]/ E[y i ] = (1+δ)

210
210 δ must always be ≥ 0 In this case, the variance grows faster than the mean If δ=0, the model collapses into the Poisson Always estimate negative binomial If you cannot reject the null that δ=0, report the Poisson estimates

211
211 Notice that ln(E[y i ]) = ln(δ) + ln(γ i ), so d ln(E[y i ]) /dx i = β Parameters have the same interpretation as in the Poisson model

212
212 In STATA POISSON estimates a MLE model for poisson –Syntax POISSON y independent variables NBREG estimates MLE negative binomial –Syntax NBREG y independent variables

213
213 Interpret results for Poisson
Those with a CHRONIC condition have 50% more mean MD visits
Those in EXCELlent health have 78% fewer MD visits
BLACKS have 33% fewer visits than whites
Income elasticity is 0.021; a 10% increase in income generates a 0.2% increase in visits

214
214 Negative Binomial
Interpret results the same way as Poisson
Look at the coefficient/standard error on delta
Ho: delta = 0 (Poisson model is correct)
In this case, delta = 5.21 with a standard error of 0.15, so we easily reject the null
Var/Mean = 1 + delta = 6.21; the Poisson is mis-specified, and you should expect artificially small standard errors in the wrong model

215
215 Selected Results, Count Models
Parameter (Standard Error)

Variable    Poisson          Negative Binomial
Age65       0.214 (0.026)    0.103 (0.055)
Age70       0.787 (0.026)    0.204 (0.054)
Chronic     0.500 (0.014)    0.509 (0.029)
Excel      -0.784 (0.031)   -0.527 (0.059)
Ln(Inc)     0.021 (0.007)    0.038 (0.016)
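The slides read the Poisson coefficients directly as percent changes (since d ln E[y]/dx = β); for dummy variables the exact proportional change is exp(β) − 1, which diverges from β when β is not small. A quick check with two coefficients from the table above:

```python
import math

beta_chronic = 0.500   # Poisson coefficient on Chronic, from the table above
beta_excel = -0.784    # Poisson coefficient on Excel

print(round(math.exp(beta_chronic) - 1, 3))  # 0.649: exact change, vs. the 0.50 approximation
print(round(math.exp(beta_excel) - 1, 3))    # -0.543: exact change, vs. the -0.78 approximation
```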

216
216 Section VIII Duration Data

217
217 Introduction Sometimes we have data on length of time of a particular event or ‘spells’ –Time until death –Time on unemployment –Time to complete a PhD Techniques we will discuss were originally used to examine lifespan of objects like light bulbs or machines. These models are often referred to as “time to failure”

218
218 Notation
T is a random variable that indicates duration (time until death, finding a new job, etc.)
t is the realization of that variable
f(t) is a PDF that describes the process that determines the time to failure
The CDF F(t) represents the probability an event will happen by time t

219
219 F(t) represents the probability that the event happens by ‘t’. What is the probability a person will die on or before the 65 th birthday?

220
220 Survivor function, what is the chance you live past (t) S(t) = 1 – F(t) If 10% of a cohort dies by their 65 th birthday, 90% will die sometime after their 65 th birthday

221
221 Hazard function, h(t) What is the probability the spell will end at time t, given that it has already lasted t What is the chance you find a new job in month 12 given that you’ve been unemployed for 12 months already

222
222 PDF, CDF (Failure function), survivor function and hazard function are all related λ(t) = f(t)/S(t) = f(t)/(1-F(t)) We focus on the ‘hazard’ rate because its relationship to time indicates ‘duration dependence’

223
223 Example: suppose the longer someone is out of work, the lower the chance they will exit unemployment – ‘damaged goods’ This is an example of duration dependence, the probability of exiting a state of the world is a function of the length

224
224 Mathematically d λ(t) /dt = 0 then there is no duration dep. d λ(t) /dt > 0 there is + duration dependence the probability the spell will end increases with time d λ(t) /dt < 0 there is – duration dependence the probability the spell will end decreases over time

225
225 Your choice is to pick a functional form for f(t) that allows +, -, or no duration dependence

226
226 Different Functional Forms
Exponential
–λ(t) = λ
–Hazard is the same over time, a 'memoryless' process
Weibull
–F(t) = 1 – exp(-γt^ρ) where ρ, γ > 0
–λ(t) = ργt^(ρ-1)
–if ρ > 1, increasing hazard
–if ρ < 1, decreasing hazard
–if ρ = 1, exponential
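The three duration-dependence cases for the Weibull hazard λ(t) = ργt^(ρ−1) can be illustrated directly (a sketch; the γ and t values are arbitrary):

```python
# Weibull hazard: lam(t) = rho * gamma * t**(rho - 1).
def weibull_hazard(t, rho, gamma=1.0):
    return rho * gamma * t ** (rho - 1)

print(weibull_hazard(2, 1.5) > weibull_hazard(1, 1.5))   # True: rho > 1, increasing hazard
print(weibull_hazard(2, 0.5) < weibull_hazard(1, 0.5))   # True: rho < 1, decreasing hazard
print(weibull_hazard(2, 1.0) == weibull_hazard(1, 1.0))  # True: rho = 1, exponential case
```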

227
227 Others: Lognormal, log-logistic, Gompertz. Little more difficult – can examine when you get comfortable with Weibull

228
228 A note about most data sets Most data sets have ‘censored’ spells –Follow people over time –All will eventually die, but some do not in your period of analysis –Incomplete spells or censored data Must build into the log likelihood function

229
229 Let t i be the duration we observe for all people
Some people die, and we observe that they lived until period t i
Others are observed for t i periods, but do not die (censored)
Let d i = 1 if the data is a complete spell
d i = 0 if incomplete

230
230 Recall that f(s) is the PDF for someone who dies at period s
F(t) is the probability you die by t
1-F(t) = the probability you die after t

231
231 If d i =1 then we observe f(t i ), someone who died in period t i If d i =0 then someone lived past period t i and the probability of that is [1-F(t i )] L = Σ i {d i ln[f(t i )] + (1-d i )ln[1-F(t i )]}
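For the exponential case (f(t) = λe^(−λt), 1 − F(t) = e^(−λt)) the censored log likelihood above can be sketched directly; the spell data and λ below are made up for illustration:

```python
import math

# Censored log likelihood: d_i = 1 for complete spells (failure observed),
# d_i = 0 for censored spells (still alive at t_i).
def exp_log_likelihood(spells, lam):
    ll = 0.0
    for t, d in spells:
        if d == 1:
            ll += math.log(lam) - lam * t   # ln f(t) = ln(lam) - lam*t
        else:
            ll += -lam * t                  # ln[1 - F(t)] = -lam*t
    return ll

spells = [(12, 1), (60, 0), (33, 1), (60, 0)]  # hypothetical (months, died) pairs
print(round(exp_log_likelihood(spells, 0.02), 3))  # -11.124
```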

232
232 Introducing covariates Look at exponential λ(t)= λ Allow this to vary across people λ i (t)= λ i But like Poisson, λ i is always positive Let λ i = exp(β 0 + x 1i β 1 + x 2i β 2 …. x ki β k )

233
233 In the Weibull λ(t) = αγt α-1 Allow it to vary across people λ i (t) = αγ i t α-1 γ i = exp(β 0 + x 1i β 1 + x 2i β 2 …. x ki β k )

234
234 Interpreting Coefficients
This is the same for both Weibull and Exponential
In the Weibull, λ(t i ) = αγ i t^(α-1)
Suppose x 1i is a dummy variable
When x 1i = 1, then γ i1 = exp(β 0 + β 1 + x 2i β 2 + … + x ki β k )
When x 1i = 0, then γ i0 = exp(β 0 + x 2i β 2 + … + x ki β k )

235
235 When you construct the ratio γ i1 /γ i0, all the other parameters cancel, so
(αγ i1 t^(α-1) – αγ i0 t^(α-1)) / (αγ i0 t^(α-1)) = e^β 1 – 1
This is the percentage change in the hazard when x 1i turns from 0 to 1. STATA prints out e^β 1 ; just subtract 1
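A numerical check of the cancellation argument (all parameter values below are hypothetical): the proportional change in the Weibull hazard when the dummy flips is exp(β1) − 1, for any t:

```python
import math

b0, b1, b2, x2, alpha = -2.0, 0.4, 0.1, 3.0, 1.2   # hypothetical values

def hazard(t, x1):
    g = math.exp(b0 + b1 * x1 + b2 * x2)   # gamma_i
    return alpha * g * t ** (alpha - 1)

t = 7.0
pct_change = (hazard(t, 1) - hazard(t, 0)) / hazard(t, 0)
print(round(pct_change, 6) == round(math.exp(b1) - 1, 6))  # True, independent of t
```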

236
236 Suppose x 2i is continuous
Suppose we increase x 2i by 1 unit
γ i1 = exp(β 0 + x 1i β 1 + x 2i β 2 + … + x ki β k )
γ i2 = exp(β 0 + x 1i β 1 + (x 2i + 1)β 2 + … + x ki β k )
Can show that
(αγ i2 t^(α-1) – αγ i1 t^(α-1)) / (αγ i1 t^(α-1)) = exp(β 2 ) – 1
Percentage change in the hazard for a 1 unit increase in x 2i

237
237 NHIS Multiple Cause of Death
NHIS
–annual survey of 60K households
–Data on individuals
–Self-reported health, MD visits, lost workdays, etc.
MCOD
–Linked NHIS respondents from 1986-1994 to the National Death Index through Dec 31, 1995
–Identified whether a respondent died and of what cause

238
238 Our sample –Males, 50-70, who were married at the time of the survey –1987-1989 surveys –Give everyone 5 years (60 months) of followup

239
239 Key Variables
max_mths: maximum months in the survey
diedin5: respondent died during the 5 years of followup
Note: if diedin5=0, then max_mths=60
diedin5 identifies whether the data is censored or not

240
240

   Variable |   Obs        Mean    Std. Dev.    Min    Max
------------+---------------------------------------------
  age_s_yrs | 26654    59.42586    5.962435      50     70
   max_mths | 26654    56.49077    11.15384       0     60
      black | 26654    .0928566    .2902368       0      1
   hispanic | 26654    .0454716    .20834         0      1
     income | 26654    3.592181    1.327325       1      5
------------+---------------------------------------------
       educ | 26654    2.766677    .961846        1      4
    diedin5 | 26654    .1226082    .3279931       0      1

241
241 Duration Data in STATA Need to identify which is the duration data stset length, failure(failvar) Length=duration variable Failvar=1 when durations end in failure, =0 for censored values If all data is uncensored, omit failure(failvar)

242
242 In our case stset max_mths, failure(diedin5)

243
243 Getting Kaplan-Meier Curves Tabular presentation of results sts list Graphical presentation sts graph Results by subgroup sts graph, by(educ)

244
244

245
245

246
246 MLE of duration model with Covariates Basic syntax streg covariates, d(distribution) streg age_s_yrs black hispanic _Ie* _Ii*, d(weibull); In this model, STATA will print out exp(β) If you want the coefficients, add ‘nohr’ option (no hazard ratio)

247
247 Weibull coefficients

No. of subjects = 26631                      Number of obs = 26631
No. of failures = 3245
Time at risk = 1505705
LR chi2(10) = 595.74
Log likelihood = -12425.055                  Prob > chi2 = 0.0000
------------------------------------------------------------------------------
          _t |     Coef.   Std. Err.      z    P>|z|    [95% Conf. Interval]
-------------+----------------------------------------------------------------
   age_s_yrs |  .0452588   .0031592   14.33   0.000     .0390669   .0514508
       black |  .4770152   .0511122    9.33   0.000     .3768371   .5771932
    hispanic |  .1333552   .082156     1.62   0.105    -.0276676    .294378
    _Ieduc_2 |  .0093353   .0591918    0.16   0.875    -.1066786   .1253492
    _Ieduc_3 | -.072163    .0503131   -1.43   0.151    -.1707748   .0264488
    _Ieduc_4 | -.1301173   .0657131   -1.98   0.048    -.2589126  -.0013221
  _Iincome_2 | -.1867752   .0650604   -2.87   0.004    -.3142914  -.0592591
  _Iincome_3 | -.3268927   .0688635   -4.75   0.000    -.4618627  -.1919227
  _Iincome_4 | -.5166137   .0769202   -6.72   0.000    -.6673747  -.3658528
  _Iincome_5 | -.5425447   .0722025   -7.51   0.000     -.684059  -.4010303
       _cons | -9.201724   .2266475  -40.60   0.000    -9.645945  -8.757503
-------------+----------------------------------------------------------------
       /ln_p |  .1585315   .0172241    9.20   0.000     .1247729   .1922901
-------------+----------------------------------------------------------------
           p |  1.171789   .020183                      1.132891   1.212022
         1/p |  .8533961   .014699                      .8250675   .8826974
------------------------------------------------------------------------------

248
248 The sign of the parameters is informative
–Hazard increasing in age
–Blacks, hispanics have higher mortality rates
–Hazard decreases with income and education
The parameter ρ = 1.17
–Check the 95% confidence interval (1.13, 1.21). Can reject the null ρ=1 (exponential)
–Hazard is increasing over time

249
249 Hazard ratios

No. of subjects = 26631                      Number of obs = 26631
No. of failures = 3245
Time at risk = 1505705
LR chi2(10) = 595.74
Log likelihood = -12425.055                  Prob > chi2 = 0.0000
------------------------------------------------------------------------------
          _t | Haz. Ratio   Std. Err.      z    P>|z|    [95% Conf. Interval]
-------------+----------------------------------------------------------------
   age_s_yrs |  1.046299    .0033055   14.33   0.000     1.03984    1.052797
       black |  1.611258    .082355     9.33   0.000     1.457667   1.781032
    hispanic |  1.142656    .093876     1.62   0.105     .9727116   1.342291
    _Ieduc_2 |  1.009379    .059747     0.16   0.875     .8988145   1.133544
    _Ieduc_3 |  .9303792    .0468103   -1.43   0.151     .8430114   1.026802
    _Ieduc_4 |  .8779924    .0576956   -1.98   0.048     .7718905   .9986788
  _Iincome_2 |  .8296302    .0539761   -2.87   0.004     .7303062   .9424625
  _Iincome_3 |  .7211611    .0496617   -4.75   0.000     .6301089   .8253706
  _Iincome_4 |  .5965372    .0458858   -6.72   0.000     .5130537   .6936049
  _Iincome_5 |  .5812672    .041969    -7.51   0.000     .5045648   .6696297
-------------+----------------------------------------------------------------
       /ln_p |  .1585315    .0172241    9.20   0.000     .1247729   .1922901
-------------+----------------------------------------------------------------
           p |  1.171789    .020183                      1.132891   1.212022
         1/p |  .8533961    .014699                      .8250675   .8826974
------------------------------------------------------------------------------

250
250 Interpret coefficients
Age: every year the hazard increases by 4.6%
Blacks: have a 61% greater hazard than whites
Hispanics: 14% greater hazard than non-hispanics
Educ 2, 3, 4 are 9-11, 12-15 and 16+ years of school

251
251 Educ 3: those with 12-15 years of educ have.93 – 1 = -0.07 or a 7% lower hazard than those with <9 years of school Educ 4: those with a college degree have 0.88 – 1 = -0.12 or a 12% lower hazard than those with <9 years of school

252
252 Income 2-5 are dummies for people with $10-$20K, $20-$30K, $30-$40K, >$40K Income 2: Those with $10-$20K have 0.83 – 1 = -0.17 or a 17% lower hazard than those with income <$10K Income 5, those with >$40K in income have a 0.58 – 1 = -0.42 or a 42% lower hazard than those with income <$10K
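The interpretations above can be reproduced from the coefficient table: STATA's hazard ratio is exp(coef), and exp(coef) − 1 is the percent change in the hazard. A check using three coefficients from the Weibull output:

```python
import math

# Coefficients taken from the Weibull output above.
coefs = {"age_s_yrs": 0.0452588, "black": 0.4770152, "_Iincome_5": -0.5425447}

for name, b in coefs.items():
    hr = math.exp(b)
    print(name, round(hr, 3), round(hr - 1, 3))
# age_s_yrs:  1.046, +4.6% per year of age
# black:      1.611, 61% greater hazard
# _Iincome_5: 0.581, about 42% lower hazard
```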

253
253 Topics not covered Time varying covariates Competing risk models –Die from multiple causes Cox proportional hazard model –Heterogeneity in baseline hazard
