Advanced Panel Data Techniques


1 Advanced Panel Data Techniques
Econometrics Advanced Panel Data Techniques

2 Advanced Panel Data Topics
Fixed Effects estimation
STATA stuff: xtreg
Autocorrelation/Cluster correction
But first! Review of heteroskedasticity
Probably won't get to details: Random Effects estimation, Hausman test, other kinds of panel data

3 Panel Data with two periods
Notation: yit = β0 + δ0d2t + β1xit1 + … + βkxitk + ai + uit
Subscripts: i indexes the person (firm, etc.), t the period, and the third subscript the variable #. d2t is a dummy for t = 2 (an intercept shift).
ai = "person effect" (etc.): all unobserved influences which are fixed for a person over time (e.g., "ability"); it has no "t" subscript.
uit = "idiosyncratic error"
vit = ai + uit = composite error; ai is its time-constant component.

4 Fixed Effects Estimation
Two periods of data. The population model is
yit = β0 + δ0d2t + β1xit1 + … + βkxitk + ai + uit
ai is unknown, but we could estimate it: estimate âi by including a dummy variable for each individual, i!
For example, in a dataset with 46 cities each observed in two years (1982, 1987), we would have 45 city dummies (each equal to one for only two observations), and d2t is a dummy for the later year (e.g., 1987).

5 Example data layout (46 cities × 2 years): columns crmrte, unem, d87, dcity1, dcity2, …, dcity45. Each city contributes one row for 1982 (d87 = 0) and one for 1987 (d87 = 1), and dcityj = 1 only in city j's two rows. Sample values of (crmrte, unem): (73.3, 14.9), (63.7, 7.7), (169.3, 9.1), (164.5, 2.4), (96.1, 11.3), (120.0, 3.9), (116.3, 5.3), (169.5, 4.6), (70.8, 6.9), (72.5, 6.2).

6 Fixed Effects Estimation
We are essentially estimating:
yit = β0 + δ0d2t + β1xit1 + … + βkxitk + a1d(i=1) + a2d(i=2) + … + a45d(i=45) + uit
But, for short, we just write
yit = β0 + δ0d2t + β1xit1 + … + βkxitk + ai + uit
The estimated âi are the slope coefficients on these dummy variables; these are called "fixed effects."
The dummies control for anything – including unobservable characteristics – about an individual which is fixed over time.

7 More on fixed effects… In two-period data (only), including fixed effects is equivalent to differencing. That is, either way should get you exactly the same slope estimates. Can see this by differencing the equations for predicted values:
Period 2: ŷi2 = b0 + d0∙1 + b1xi21 + … + bkxi2k + âi
Period 1: ŷi1 = b0 + d0∙0 + b1xi11 + … + bkxi1k + âi
Difference: Δŷi = d0 + b1Δxi1 + … + bkΔxik
The intercept in differences is the same as the coefficient on the year dummy.
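A minimal numerical sketch of this equivalence (made-up data, not from the slides; the model here omits the year dummy for simplicity, so the differenced regression has no intercept):

```python
# With T = 2, the first-difference slope equals the fixed-effects
# ("within") slope. Tiny noise-free panel, so both are exact.

beta = 1.5  # true slope

# Each person: a fixed effect a_i and x in periods 1 and 2
people = {
    "A": {"a": 2.0, "x": [1.0, 4.0]},
    "B": {"a": -1.0, "x": [2.0, 3.0]},
    "C": {"a": 0.5, "x": [0.0, 5.0]},
}
# y_it = beta * x_it + a_i
panel = {p: [(x, beta * x + v["a"]) for x in v["x"]] for p, v in people.items()}

# First-difference slope: regress dy on dx (through the origin)
dx = [obs[1][0] - obs[0][0] for obs in panel.values()]
dy = [obs[1][1] - obs[0][1] for obs in panel.values()]
fd_slope = sum(a * b for a, b in zip(dx, dy)) / sum(a * a for a in dx)

# Within slope: demean x and y by each person's mean, then pool
num = den = 0.0
for obs in panel.values():
    xbar = sum(x for x, _ in obs) / 2
    ybar = sum(y for _, y in obs) / 2
    for x, y in obs:
        num += (x - xbar) * (y - ybar)
        den += (x - xbar) ** 2
within_slope = num / den

print(fd_slope, within_slope)  # both 1.5: the person effects drop out
```

Because the data are noise-free, both estimators return the true slope exactly; with noise they would still be numerically identical to each other in two-period data.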

8 Fixed Effects In STATA: three ways
In STATA, can estimate fixed effects by creating dummies for every individual and including them in your regression, e.g.: tab city, gen(citydummy)
The "areg" command does the same, without showing the dummy coefficients (the "fixed effects") [we don't usually care anyway!]. The "a" stands for "absorb" the fixed effects. Syntax: areg y x yrdummies, absorb(city), where the absorb() variable identifies the cross-sectional units.
Or xtreg …, fe (below).

9 Fixed effects regression
. areg crmrte unem d87, absorb(area) robust

Linear regression, absorbing indicators    Number of obs =
                                           F(2, 44)      =
                                           Prob > F      =
                                           R-squared     =
                                           Adj R-squared =
                                           Root MSE      =

             |             Robust
      crmrte |   Coef.   Std. Err.    t    P>|t|    [95% Conf. Interval]
        unem |
         d87 |
       _cons |
        area |  absorbed    (46 categories)

10 First difference regression (c = "change" = Δ)
. reg ccrmrte cunem, robust

Linear regression    Number of obs =
                     F(1, 44)      =
                     Prob > F      =
                     R-squared     =
                     Root MSE      =

             |             Robust
     ccrmrte |   Coef.   Std. Err.    t    P>|t|    [95% Conf. Interval]
       cunem |
       _cons |

But notice: it looks like, weakly, the crime rate went up between 1982 and 1987.
Same as the coefficients on unem and d87 in the fixed effects estimates!

11 Fixed effects vs. first differences
First difference and fixed effects (f.e.) are equivalent only when there are exactly two periods of data. When there are more than two periods of data, f.e. is equivalent to "demeaning" the data.
Fixed effects model: yit = β0 + β1xit1 + … + βkxitk + ai + uit
Individuals' means over t: ȳi = β0 + β1x̄i1 + … + βkx̄ik + ai + ūi
Difference the two equations: the ai drops out.
Take some time to explain what you're doing here

12 Fixed Effects vs. First Differences
Textbook writes this as ÿit = β1ẍit1 + … + βkẍitk + üit, i.e. where ÿit = yit − ȳi, ẍitj = xitj − x̄ij, etc.
Also known as the "within" estimator. Idea: only using variation "within" individuals (or other cross-sectional units) over time, and not the variation "between" individuals.
The "between" estimator, in contrast, uses just the means, and none of the variation over time: ȳi = β0 + β1x̄i1 + … + βkx̄ik + ai + ūi
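A small illustrative sketch of within vs. between (hypothetical numbers, not the textbook's data): when ai is correlated with x, the between estimator absorbs the fixed effect into its slope, while the within estimator recovers the true slope.

```python
# Within vs. between estimators on a noise-free 3-person, 3-period panel.
beta = 2.0
# (a_i, x over T = 3 periods); a_i rises with the person's mean x,
# so estimators that use between-person variation pick up the bias
people = [
    (0.0, [1.0, 2.0, 3.0]),
    (3.0, [4.0, 5.0, 6.0]),
    (6.0, [7.0, 8.0, 9.0]),
]
panel = [[(x, beta * x + a) for x in xs] for a, xs in people]

# Within estimator: pooled regression on deviations from person means
num = den = 0.0
for obs in panel:
    xbar = sum(x for x, _ in obs) / len(obs)
    ybar = sum(y for _, y in obs) / len(obs)
    for x, y in obs:
        num += (x - xbar) * (y - ybar)
        den += (x - xbar) ** 2
within = num / den

# Between estimator: regress person means of y on person means of x
xbars = [sum(x for x, _ in obs) / len(obs) for obs in panel]
ybars = [sum(y for _, y in obs) / len(obs) for obs in panel]
mx, my = sum(xbars) / 3, sum(ybars) / 3
between = sum((xb - mx) * (yb - my) for xb, yb in zip(xbars, ybars)) \
    / sum((xb - mx) ** 2 for xb in xbars)

print(within, between)  # within = 2.0 (true beta); between = 3.0 (biased)
```

Here ai increases one-for-one with the person's mean x, so the between slope is β + 1; demeaning removes ai and returns β exactly.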

13 With T>2… First Differences vs Fixed Effects
F.E. estimation is more common than differencing – probably because it's easier to do (no "differencing" required), not necessarily because it's better.
Advantages: fixed effects is easily implemented for "unbalanced" panels (not all individuals are observed in all periods). Also works for pooled cross-sections: you don't need to see the same 'individuals' over time, just, e.g., the same cities.
Fixed effects estimation is more efficient than differencing if there is no autocorrelation in the uit's. Intuition: first differences (estimating in changes) removes more of the individual variation over time than fixed effects does.

14 Aside: fixed effects using “xtreg” command
In STATA, there is a powerful set of commands, beginning with "xt," which allow you to carry out many panel techniques (fixed effects, random effects – below).
Step 0 in using these commands: tell STATA the names of the variables that contain the cross-sectional unit of observation ("X" – e.g., city, person, firm) and the time-series (period) unit of observation ("T" – e.g., year).
Command is: xtset xsecvar timevar

15 xtset: “area” is the cross section unit variable
. xtset area year, delta(5)
panel variable: area (strongly balanced)
time variable: year, 82 to 87
delta: 5 units
"area" is the cross-section unit variable; "year" is the year variable. delta(5) is an option that tells STATA that a one-unit change in time is 5 years (in this case, 82 to 87). After that, can run xtreg…

16 . xtreg crmrte unem d87, fe

Fixed-effects (within) regression    Number of obs      =  92
Group variable: area                 Number of groups   =  46
R-sq: within  =                      Obs per group: min =  2
      between =                                     avg =  2.0
      overall =                                     max =  2
                                     F(2,44)            =  5.37
corr(u_i, Xb) =                      Prob > F           =

      crmrte |   Coef.   Std. Err.    t    P>|t|    [95% Conf. Interval]
        unem |
         d87 |
       _cons |
     sigma_u |
     sigma_e |
         rho |   (fraction of variance due to u_i)
F test that all u_i=0: F(45, 44) = 7.97    Prob > F =

Could discuss all the stuff on this slide

17 First Differences
After you run the xtset command, you can also do first differences with a "D." in front of any variable (note: reg, not xtreg):

. reg D.crmrte D.unem

    Source |    SS    df    MS       Number of obs =
                                     F(1, 44)      =
     Model |                         Prob > F      =
  Residual |                         R-squared     =
                                     Adj R-squared =
     Total |                         Root MSE      =

    D.crmrte |   Coef.   Std. Err.    t    P>|t|    [95% Conf. Interval]
        unem |
         D1. |
       _cons |

18 Autocorrelation
vit = ai + uit – the composite error term
Yit = β0 + β1Xit1 + … + ai + uit
The model implies autocorrelation = errors correlated across periods: ai is perfectly correlated over time, for example (the uit's may also have autocorrelation). We continue to assume no error correlation between different individuals.
Consequences similar to heteroskedasticity: OLS calculates biased standard errors, and OLS is inefficient (not "BLUE").
Heteroskedasticity review/clicker questions before this…

19 Aside: se formula derived… (bivariate case); N = #people; T = # of time periods
OLS standard errors are calculated assuming homoskedasticity and no autocorrelation → Cov(different v's) = 0, including across other time periods (if T > 2). This vastly simplifies the formula.

20 1. "cluster" correction
To correct OLS se's for the possibility of correlation of errors over time, "cluster."
Clustering on the "person" variable (i) adds the error interaction terms from above to the se calculation, with v̂it = the residual for person i in period t.
This provides consistent se's if you have a large # of people to average over (N big). The true se formula has cov(vi1,vi2); averaged over a large # of people, the distinction is not important.
"cluster" also does what "robust" does.

21 "Cluster"
In STATA: reg crmrte lpolpc, cluster(area) – cluster on the cross-sectional unit with autocorrelation across periods.
Usually makes standard errors larger, because cov(vi1,vi2) > 0.
Other intuition: OLS treats every observation as independent information about the relationship. With autocorrelation, that's not true – observations are not independent. Estimates are effectively based on less information, so they should be less precise.
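A sketch of the clustering calculation for a bivariate regression (hypothetical data, not from the slides): sum the "scores" x̃·û within each cluster before squaring, so the within-cluster covariance of residuals enters the variance estimate.

```python
# Robust vs. cluster-robust variance of an OLS slope.
from collections import defaultdict

beta = 1.0
# (cluster-level error s_g, x values); both observations in a cluster
# share s_g, creating positive within-cluster error correlation
clusters = [
    ( 1.0, [0.0, 1.0]),
    (-1.0, [2.0, 3.0]),
    ( 0.8, [4.0, 5.0]),
    (-0.8, [6.0, 7.0]),
]
obs = [(g, x, beta * x + s) for g, (s, xs) in enumerate(clusters) for x in xs]

n = len(obs)
xbar = sum(x for _, x, _ in obs) / n
ybar = sum(y for _, _, y in obs) / n
sxx = sum((x - xbar) ** 2 for _, x, _ in obs)
b = sum((x - xbar) * (y - ybar) for _, x, y in obs) / sxx

# per-observation "scores" x~ * u^ (demeaned x times residual)
scores = [(g, (x - xbar) * (y - ybar - b * (x - xbar))) for g, x, y in obs]

# heteroskedasticity-robust variance: square each score separately
v_robust = sum(s ** 2 for _, s in scores) / sxx ** 2

# cluster-robust variance: sum scores within each cluster, then square,
# which keeps the within-cluster cross terms
by_cluster = defaultdict(float)
for g, s in scores:
    by_cluster[g] += s
v_cluster = sum(s ** 2 for s in by_cluster.values()) / sxx ** 2

print(v_cluster > v_robust)  # True: positive within-cluster correlation
```

With positively correlated residuals inside clusters, the cross terms are positive, so the clustered variance (and hence the standard error) is larger, matching the slide's intuition.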

22 With and without “cluster” correction
. reg crmrte lpolpc d87, robust noheader

             |             Robust
      crmrte |   Coef.   Std. Err.    t    P>|t|    [95% Conf. Interval]
      lpolpc |
         d87 |
       _cons |

. reg crmrte lpolpc d87, cluster(area) noheader
(Std. Err. adjusted for 46 clusters in area)
      lpolpc |
         d87 |
       _cons |

Coefficient estimates exactly the same – both are OLS. Standard errors are larger.

23 2. Efficiency Correction = “Random Effects”
"Random effects" is a data transformation to get rid of autocorrelation in the errors. Like WLS or feasible GLS, random effects transforms the data to produce an error without autocorrelation. The transformed data meet the Gauss-Markov assumptions, and so the estimates are efficient: if the other Gauss-Markov assumptions hold, random effects will be unbiased and "BLUE."
Important: random effects assumes ** ai is uncorrelated with the x's **. If not, random effects estimates are biased (o.v. bias!) and inconsistent.

24 What is Random Effects? Define vit = ai + uit
"Quasi-demean" the data by the factor θ = 1 − [σu² / (σu² + Tσa²)]½
Interpretation: the fraction of error variance due to factors fixed across periods. If σu² = 0, then θ = 1.
Quasi-demeaning by this factor gets rid of the error correlation across periods: (can show) the new composite error has no correlation across periods. Not obvious why this gets rid of the autocorrelation, but it does.
Slopes are theoretically the same (though OLS estimates may not be if errors are correlated with the X's).
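A quick sketch of the quasi-demeaning weight (using Wooldridge's textbook formula; sigma_a2 and sigma_u2 denote the variances of ai and uit, T the number of periods):

```python
import math

def theta(sigma_a2, sigma_u2, T):
    # quasi-demeaning weight: subtract theta * (person mean) from each obs
    return 1 - math.sqrt(sigma_u2 / (sigma_u2 + T * sigma_a2))

# No fixed component -> theta = 0: random effects collapses to pooled OLS
print(theta(0.0, 1.0, 2))   # 0.0
# Error dominated by the fixed component -> theta near 1 (close to FE)
print(theta(100.0, 1.0, 2))
```

The weight rises monotonically with the share of the error variance due to ai, which is the weighted-average interpretation on the next slide.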

25 What is Random Effects? Also can be interpreted as a weighted average of OLS and fixed effects:
If θ = 1 (error entirely fixed), then random effects = regular demeaning = fixed effects.
If θ = 0 (error entirely unfixed – no autocorrelation), then random effects = pure OLS, ignoring the panel structure.
Not obvious why this gets rid of the autocorrelation, but it does.

26 Example: panel of firms from 1987-89
              storage  display  value
variable name  type    format   label   variable label
year           int     %9.0g            1987, 1988, or 1989
fcode          float   %9.0g            firm code number
employ         int     %9.0g            # employees at plant
sales          float   %9.0g            annual sales, $
avgsal         float   %9.0g            average employee salary
scrap          float   %9.0g            scrap rate (per 100 items)
rework         float   %9.0g            rework rate (per 100 items)
tothrs         int     %9.0g            total hours training
union          byte    %9.0g            =1 if unionized
grant          byte    %9.0g            =1 if received grant
d89            byte    %9.0g            =1 if year = 1989
d88            byte    %9.0g            =1 if year = 1988
hrsemp         float   %9.0g            tothrs/totrain
. xtset fcode year    ← "fcode" is the cross-section variable, "year" the time variable

27 More on xtreg in STATA
After doing "xtset," "xtreg" works the same as "reg" but can do panel techniques:
Random effects (default): xtreg y x1…, re
Fixed effects: xtreg y x1 x2…, fe
xtreg can handle all the other stuff that we have used in "reg": robust, cluster, weights, etc.

28 . xtreg lscrap hrsemp lsales tothrs union d89 d88, re
. xtset fcode year
. xtreg lscrap hrsemp lsales tothrs union d89 d88, re

Random-effects GLS regression    Number of obs      =
Group variable: fcode            Number of groups   =
R-sq: within  =                  Obs per group: min =
      between =                                 avg =
      overall =                                 max =
Random effects u_i ~ Gaussian    Wald chi2(6)       =
corr(u_i, X) = 0 (assumed)       Prob > chi2        =

      lscrap |   Coef.   Std. Err.    z    P>|z|    [95% Conf. Interval]
      hrsemp |
      lsales |
      tothrs |
       union |
         d89 |
         d88 |
       _cons |

Could describe these things. u_i represents ai in our model!

29 Fixed vs. Random Effects
Always remember: random effects is estimated assuming ai is uncorrelated with the X's.
Unlike fixed effects, r.e. does not remove any omitted variables bias; it is just more "efficient" assuming there is no omitted variables bias. Put another way: random effects can only reduce standard errors, not bias.
The ai assumption is testable! If it holds, fixed effects estimates should be statistically indistinguishable from r.e. estimates.

30 Hausman test Hausman test intuition:
H0: cov(ai,xit) = 0; estimate with random effects, since it's the most efficient under this assumption. Then estimate with fixed effects, and if the coefficient estimates are significantly different, reject the null.
IMPORTANT: as always, failure to reject the null ≠ there is no bias in random effects. We never "accept the null" (we just lack sufficient evidence to reject it). For example, both random effects and fixed effects could be biased by a similar amount, or the standard errors could just be big.
Can skip these slides if out of time

31 Hausman test
More broadly, Hausman tests are specification tests comparing two estimators where one estimator (here, random effects) is efficient if the null hypothesis is true (cov[ai,xi] = 0), and the other estimator (here, fixed effects) is consistent if the null hypothesis is false.
Related to the latter, an important caveat on this test and all Hausman tests: we must assume (without being able to test!) that there is no omitted variables bias in the alternative (fixed effects) estimator. Reasoning: we need an unbiased estimate of the "true" slopes. If that is not true, the Hausman test tells us nothing.
Can skip these slides if out of time
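The test logic can be sketched for a single coefficient (hypothetical numbers, not from the slides; Stata's hausman command does the multivariate version below). Under H0 the statistic is chi-squared with 1 degree of freedom, and the formula requires se_fe > se_re (FE is the less efficient estimator under H0):

```python
import math

def hausman_1d(b_fe, se_fe, b_re, se_re):
    # H = (difference in estimates)^2 / (difference in variances)
    h = (b_fe - b_re) ** 2 / (se_fe ** 2 - se_re ** 2)
    # chi2(1) upper-tail probability via the complementary error function
    p = math.erfc(math.sqrt(h / 2))
    return h, p

# hypothetical FE and RE estimates of one slope
h, p = hausman_1d(b_fe=-0.05, se_fe=0.02, b_re=-0.02, se_re=0.01)
print(h, p)  # h = 3.0, p ≈ 0.083: fail to reject H0 at the 5% level
```

The denominator uses the *difference* of variances because, under H0, RE is efficient, so cov(b_fe − b_re, b_re) = 0 and Var(b_fe − b_re) = Var(b_fe) − Var(b_re).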

32 Hausman test in STATA
How do you compare your fixed effects and random effects estimates? Steps:
Save your fixed effects and random effects estimates using the "estimates store" command (example below).
Feed them to the "hausman" command, which calculates standard errors on the differences in the whole list of coefficient estimates and tests whether they are jointly significantly different.

33 Hausman test example: Does job training reduce “scrap” (error) rates: re or fe?
. xtset fcode year
. xtreg lscrap hrsemp lsales tothrs union d89 d88, re

Random-effects GLS regression    Number of obs      =
Group variable: fcode            Number of groups   =
R-sq: within  =                  Obs per group: min =
      between =                                 avg =
      overall =                                 max =
Random effects u_i ~ Gaussian    Wald chi2(6)       =
corr(u_i, X) = 0 (assumed)       Prob > chi2        =

      lscrap |   Coef.   Std. Err.    z    P>|z|    [95% Conf. Interval]
      hrsemp |
      lsales |
      tothrs |
       union |
         d89 |
         d88 |
       _cons |

34 Estimates store After any regression command, can save the estimates for later. Purposes: Look at the results again later (some regressions take a long time to estimate) - “estimates replay” Feed to another command (like Hausman). Here, after estimating by random effects, type: . estimates store reff -- stores ests in “reff” (More generally, estimates store anyname)

35 Fixed effects: . xtreg lscrap hrsemp lsales tothrs union d89 d88, fe
Fixed-effects (within) regression    Number of obs      =
Group variable: fcode                Number of groups   =
R-sq: within  =                      Obs per group: min =
      between =                                     avg =
      overall =                                     max =
                                     F(5,83)            =
corr(u_i, Xb) =                      Prob > F           =

      lscrap |   Coef.   Std. Err.    t    P>|t|    [95% Conf. Interval]
      hrsemp |
      lsales |
      tothrs |
       union |  (dropped)
         d89 |
         d88 |
       _cons |

. estimates store feff

Why does "union" get dropped?

36 Hausman test STATA command
Syntax: hausman consistent_est efficient_est:

. hausman feff reff

                  ---- Coefficients ----
             |    (b)      (B)       (b-B)     sqrt(diag(V_b-V_B))
             |   feff     reff     Difference        S.E.
      hrsemp |
      lsales |
      tothrs |
         d89 |
         d88 |

b = consistent under Ho and Ha; obtained from xtreg
B = inconsistent under Ha, efficient under Ho; obtained from xtreg
Test: Ho: difference in coefficients not systematic
      chi2(5) = (b-B)'[(V_b-V_B)^(-1)](b-B) =
      Prob>chi2 =

Shows the coefficient estimates from the two methods, the difference, and the standard error on the difference. Large p-value: fail to reject H0.

37 Fixed Effects or Random?
My view: Don't use random effects.
Random effects is just an efficiency correction, and the key assumption that fixed unobservables are uncorrelated with the x's is almost always implausible.
My mantra: if you are worried about efficiency, you just don't have enough data.
Bottom line: just correct the standard errors using "cluster" and forget about efficiency. (Analog of my view on heteroskedasticity.)

38 Autocorrelation/heteroskedasticity: Problems and solutions
Problem: OLS SE's biased.
  Heteroskedasticity: "robust" produces consistent SE's.
  Autocorrelation: "cluster" produces consistent SE's.
Problem: OLS inefficient – could get smaller SE's from the same data.
  Heteroskedasticity: weighted least squares (or "feasible GLS") – not a good idea except in cases when you know the form of heteroskedasticity (prone to manipulation).
  Autocorrelation: random effects – not a good idea: requires the implausible assumption that there is no o.v. bias from fixed unobservables.

39 Other Uses of Panel Methods
It's possible to think of models where there is an unobserved fixed effect, even if we do not have true panel data.
A common example: observe different members of the same family (but not necessarily over time), or individual plants of larger firms, etc. We think there is an unobserved family effect.
Can estimate a "family fixed effect" model. Examples: difference siblings, twins, etc.
Skip to this slide if you are running out of time.

