
1 Functional Form, Scaling and Use of Dummy Variables. Copyright © 2006 Pearson Addison-Wesley. All rights reserved.

2 Scaling the Data. Ŷ = 40.76 + 0.1283X, where Y = consumption in $ and X = income in $. Interpret the equation: an extra dollar of income raises predicted consumption by about 12.8 cents.

3 Suppose we change the units of measurement of income (X) from $1 to $100 units: we have scaled the data. The choice of scale does not affect the measurement of the underlying relationship, but it does affect the interpretation of the coefficients.

4 Now the equation becomes Ŷ = 40.77 + 12.83X*, where X* = X/100. All we did was divide income by 100, so the coefficient on income becomes 100 times larger: Ŷ = 40.77 + (100 × 0.1283)(X/100).

5 Scaling X alone changes the slope coefficient and changes the standard error of that coefficient by the same factor, so the t-ratio is unaffected. All other regression statistics are unchanged.

6 Suppose we change the measurement of Y but not X. All coefficients must change for the equation to remain valid. E.g., if consumption is measured in cents instead of dollars: 100Ŷ = (100 × 40.77) + (100 × 0.1283)X, so Ŷ* = 4077 + 12.83X.

7 Changing the scale of Y alone → all coefficients must change, and the standard errors of the coefficients scale accordingly, so the t-ratios and R² are unchanged.

8 If X and Y are changed by the same factor → no change in the slope estimate, but the estimated intercept will change. t-ratios and R² are unaffected.
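The scaling rules above are easy to verify numerically. A minimal sketch (the data and coefficients are simulated for illustration; the use of numpy/statsmodels is my own choice, not part of the original slides):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
income = rng.uniform(200, 1000, 200)                 # X: income in $
cons = 40 + 0.13 * income + rng.normal(0, 5, 200)    # Y: consumption in $

def slope_stats(y, x):
    res = sm.OLS(y, sm.add_constant(x)).fit()
    return res.params[1], res.bse[1], res.tvalues[1], res.rsquared

# Original units, X rescaled to $100 units, Y rescaled to cents
for label, y, x in [("original", cons, income),
                    ("X in $100s", cons, income / 100),
                    ("Y in cents", 100 * cons, income)]:
    b, se, t, r2 = slope_stats(y, x)
    print(f"{label:12s} slope={b:10.4f} se={se:8.4f} t={t:6.2f} R2={r2:.3f}")
# The slope and its standard error scale by the same factor,
# so the t-ratio and R² never change.
```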

9 Consider the following regressions: yi = β0 + β1xi + εi and Yi = β0* + β1*Xi + εi, where yi is measured in inches, Yi in feet (1 ft = 12 inches), xi in cm, and Xi in inches (1 inch = 2.54 cm).

10 If the estimated β0 = 10, what is the estimated β0*? If the estimated β1* = 22, what is the estimated β1?
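A worked version of this exercise (my own sketch of the algebra, using the unit conversions defined on the previous slide):

```latex
% y in inches, Y in feet: Y = y/12.   x in cm, X in inches: x = 2.54X.
Y = \frac{y}{12}
  = \frac{\beta_0}{12} + \frac{\beta_1}{12}(2.54X)
  \;\Rightarrow\;
  \beta_0^{*} = \frac{\beta_0}{12}, \qquad
  \beta_1^{*} = \frac{2.54}{12}\,\beta_1 .
```

So an estimated β0 = 10 gives β0* = 10/12 ≈ 0.83, and an estimated β1* = 22 gives β1 = (12 × 22)/2.54 ≈ 103.9.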

11 Dummy Variables. Used to capture qualitative explanatory variables: any event that has only two possible outcomes, e.g. race, gender, geographic region of residence, etc.

12 Use of Intercept Dummy. The most common use of dummy variables: it modifies the intercept parameter of the regression model. E.g., let's test the "location, location, location" model of real estate. Suppose we take into account location near, say, a university or a golf course.

13 Pt = β0 + β1St + β2Dt + εt, where St = square footage and Dt is a dummy variable indicating whether the characteristic is present: D = 1 if the property is in a desirable neighborhood, 0 if not.

14 The effect of the dummy variable is best seen by examining E(Pt). If the model is specified correctly, E(εt) = 0, so E(Pt) = (β0 + β2) + β1St when D = 1, and E(Pt) = β0 + β1St when D = 0.

15 β2 is the location premium in this case: the difference between the price of a house in a desirable area and one in a less desirable area, all else held constant. The dummy variable captures a shift in the intercept caused by a qualitative variable → Dt is an intercept dummy variable.

16 Dt is treated like any other explanatory variable: you can construct a confidence interval for β2, and you can test whether β2 is significantly different from zero. In such a test, if you fail to reject H0, there is no detectable difference between the two categories.
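A sketch of how this model could be estimated (the data here are simulated, and the statsmodels calls are my own illustration, not from the slides):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({"sqft": rng.uniform(800, 3000, n),
                   "D": rng.integers(0, 2, n)})          # 1 = desirable area
df["price"] = 20000 + 60 * df.sqft + 15000 * df.D + rng.normal(0, 8000, n)

res = smf.ols("price ~ sqft + D", data=df).fit()
print(res.params)                 # coefficient on D estimates the premium β2
print(res.conf_int().loc["D"])    # confidence interval for β2
print(res.pvalues["D"])           # test of H0: β2 = 0
```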

17 Application of Intercept Dummy Variables: WAGES = β0 + β1EXP + β2RACE + β3SEX + εt, where RACE = 1 if white, 0 if non-white; SEX = 1 if male, 0 if female.

18 WAGES = 40,000 + 1487EXP + 1102RACE + 1082SEX. Mean salary for a black female: 40,000 + 1487EXP. Mean salary for a white female: 41,102 + 1487EXP.

19 Mean salary for an Asian male (RACE = 0, SEX = 1): 41,082 + 1487EXP. Mean salary for a white male: 42,184 + 1487EXP. Which penalty is bigger here: being female or being non-white?

20 Determining the number of dummies to use: if there are h categories, use h - 1 dummies. The category left out defines the reference group. If you use all h dummies you fall into the dummy variable trap.

21 Slope Dummy Variables. Allow for a different slope in the relationship. Use an interaction variable between the actual variable and a dummy variable, e.g. Pt = β0 + β1SQFTt + β2(SQFTt × Dt) + εt, with D = 1 in a desirable area, 0 otherwise.

22 This captures the effect of location and size on the price of a house: E(Pt) = β0 + (β1 + β2)SQFTt if D = 1, and E(Pt) = β0 + β1SQFTt if D = 0 → in the desirable area the price per square foot is β1 + β2, while it is β1 elsewhere. If we believe that a house's location affects both the intercept and the slope, then the model is

23 Pt = β0 + β1SQFTt + β2(SQFTt × Dt) + β3Dt + εt
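A sketch of this full intercept-plus-slope-dummy model (again with simulated data; the formula interface and variable names are my own assumptions):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({"sqft": rng.uniform(800, 3000, n),
                   "D": rng.integers(0, 2, n)})
# Location shifts both the intercept (β3) and the per-sq-ft slope (β2)
df["price"] = (20000 + 5000 * df.D
               + (60 + 10 * df.D) * df.sqft
               + rng.normal(0, 8000, n))

res = smf.ols("price ~ sqft + sqft:D + D", data=df).fit()
print(res.params)
# Implied slope for houses in the desirable area: β1 + β2
print(res.params["sqft"] + res.params["sqft:D"])
```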

24 Dummies for Multiple Categories. We can use dummy variables to control for something with multiple categories. Suppose everyone in your data is either a HS dropout, a HS grad only, or a college grad. To compare HS and college grads to HS dropouts, include 2 dummy variables: hsgrad = 1 if HS grad only, 0 otherwise; and colgrad = 1 if college grad, 0 otherwise.

25 Multiple Categories (cont.) Any categorical variable can be turned into a set of dummy variables. Because the base group is represented by the intercept, if there are n categories there should be n - 1 dummy variables. If there are a lot of categories, it may make sense to group some together. Example: top 10 ranking, 11-25, etc.
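One common way to build such dummies (a sketch using pandas, which is my own choice of tool; the category names follow the slides):

```python
import pandas as pd

edu = pd.Series(["dropout", "hsgrad", "colgrad", "hsgrad", "dropout"])
# List "dropout" first so it becomes the omitted base group,
# then create n - 1 = 2 dummies.
edu = pd.Categorical(edu, categories=["dropout", "hsgrad", "colgrad"])
dummies = pd.get_dummies(edu, drop_first=True)
print(dummies)   # columns: hsgrad, colgrad; dropout is the reference group
```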

26 Interactions Among Dummies. Interacting dummy variables is like subdividing the group. Example: have dummies for male, as well as hsgrad and colgrad. Add male*hsgrad and male*colgrad, for a total of 5 dummy variables → 6 categories. The base group is female HS dropouts; hsgrad is for female HS grads, colgrad is for female college grads, and the interactions pick up male HS grads and male college grads.

27 More on Dummy Interactions. Formally, the model is y = β0 + δ1male + δ2hsgrad + δ3colgrad + δ4male*hsgrad + δ5male*colgrad + β1x + u. Then, for example: If male = 0, hsgrad = 0 and colgrad = 0: y = β0 + β1x + u. If male = 0, hsgrad = 1 and colgrad = 0: y = β0 + δ2hsgrad + β1x + u. If male = 1, hsgrad = 0 and colgrad = 1: y = β0 + δ1male + δ3colgrad + δ5male*colgrad + β1x + u.
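The same model in estimable form (a sketch; the data are simulated and the patsy formula syntax is my own illustration):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 500
cat = rng.integers(0, 3, n)        # 0 = dropout, 1 = HS grad, 2 = college grad
df = pd.DataFrame({"male": rng.integers(0, 2, n),
                   "hsgrad": (cat == 1).astype(int),
                   "colgrad": (cat == 2).astype(int),
                   "x": rng.normal(10, 3, n)})
df["y"] = (5 + 2 * df.male + 3 * df.hsgrad + 6 * df.colgrad
           + 1 * df.male * df.hsgrad + 2 * df.male * df.colgrad
           + 0.5 * df.x + rng.normal(0, 1, n))

res = smf.ols("y ~ male + hsgrad + colgrad"
              " + male:hsgrad + male:colgrad + x", data=df).fit()
print(res.params)   # the intercept is the female HS dropout base group
```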

28 Other Interactions with Dummies. We can also interact a dummy variable, d, with a continuous variable, x: y = β0 + δ1d + β1x + δ2(d*x) + u. If d = 0, then y = β0 + β1x + u. If d = 1, then y = (β0 + δ1) + (β1 + δ2)x + u. The interaction is interpreted as a change in the slope.

29 [Figure: two regression lines in the (x, y) plane, y = β0 + β1x for the d = 0 group and y = (β0 + δ1) + (β1 + δ2)x for the d = 1 group, illustrating the case δ1 > 0 and δ2 < 0.]

30 Multicollinearity. Omitted variable bias is a problem when the omitted variable is an explanator of Y and is correlated with X1. Including the omitted variable in a multiple regression solves the problem: the multiple regression finds the coefficient on X1 holding X2 fixed.

31 Multicollinearity (cont.) Multiple regression finds the coefficient on X1 holding X2 fixed. To estimate β1, OLS must be able to separate the variation in X1 from the variation in X2. Is this always possible?

32 Multicollinearity (cont.) To strip out the bias caused by the correlation between X1 and X2, OLS imposes a restriction that, in essence, removes those parts of X1 that are correlated with X2. If X1 is highly correlated with X2, OLS doesn't have much left-over variation to work with. If X1 is perfectly correlated with X2, OLS has nothing left.
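This "partialling out" can be checked numerically. A sketch of the Frisch-Waugh result with made-up data (the slides themselves do not name or use this procedure):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1000
x2 = rng.normal(size=n)
x1 = 0.8 * x2 + rng.normal(size=n)        # X1 correlated with X2
y = 1 + 2 * x1 - 3 * x2 + rng.normal(size=n)

# Full multiple regression of y on X1 and X2
full = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()

# Remove the part of X1 explained by X2, then regress y on the residual
resid_x1 = sm.OLS(x1, sm.add_constant(x2)).fit().resid
partial = sm.OLS(y, sm.add_constant(resid_x1)).fit()

print(full.params[1], partial.params[1])  # the two slopes on X1 coincide
```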

33 Multicollinearity (cont.) Suppose X2 is simply a function of X1. For some silly reason, we want to estimate the returns to an extra year of education AND the returns to an extra month of education, so we include two variables: one recording the number of years of education and one recording the number of months of education.

34 Multicollinearity (cont.) Here months of education = 12 × years of education, so one explanator is an exact linear function of the other.

35 Let’s look at this problem in terms of our unbiasedness conditions. No weights can do both these jobs!

36 Multicollinearity (cont.) Bottom line: you CANNOT include variables that are perfectly correlated with each other (and nearly perfect correlation isn't good either). You CANNOT include a group of variables that are a linear combination of each other, and you CANNOT include a group of variables that sum to 1 together with a constant.

37 Multicollinearity (cont.) Multicollinearity is easy to fix: simply omit one of the troublesome variables, or find more data for which your variables are not multicollinear. The latter isn't possible if your variables are weighted sums of each other by definition.

38 Checking Understanding. You have a cross-section of workers from 1999. Which of the following variable sets would lead to multicollinearity? 1. A constant, year of birth, age. 2. A constant, year of birth, years since they finished high school. 3. A constant, year of birth, years since they started working for their current employer.

39 Checking Understanding (cont.) 1. A constant, year of birth, and age will be a problem: age = 1999 - year of birth, so these variables are multicollinear (or nearly multicollinear, which is almost as bad).

40 Checking Understanding (cont.) 2. A constant, year of birth, and years since high school PROBABLY suffers from ALMOST perfect multicollinearity. Most Americans graduate from high school around age 18. If that holds in your data, then years since HS ≈ 1999 - year of birth - 18, a linear function of the constant and year of birth.

41 Checking Understanding (cont.) 3. A constant, year of birth, and years with the current employer is very unlikely to be a problem: there is usually ample variation in the ages at which different workers begin their employment with a particular firm.

42 Multicollinearity: two or more of the explanatory variables are highly related (correlated). Some collinearity always exists; the question is how much there can be before it becomes a problem. Two cases: perfect multicollinearity and imperfect multicollinearity.
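Perfect multicollinearity can be seen directly in the design matrix. A minimal numpy sketch of the years/months example from slide 33 (the data are invented):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100
years = rng.integers(10, 20, n).astype(float)   # years of education
months = 12 * years                             # perfectly collinear
X = np.column_stack([np.ones(n), years, months])

# X has 3 columns but only rank 2, so X'X is singular and the
# OLS normal equations have no unique solution.
print(np.linalg.matrix_rank(X))                 # prints 2
```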

43 Using the Ballantine

44 Detecting Multicollinearity. 1. Check the simple correlation coefficients (r): if |r| > 0.8, multicollinearity may be a problem. 2. Perform a t-test on the correlation coefficient.

45 3. Check the Variance Inflation Factors (VIF) or the Tolerance (TOL): run a regression of each X on the other Xs, and calculate the VIF for each estimated coefficient as VIF_i = 1/(1 - R²_i), where R²_i comes from regressing X_i on the other explanatory variables.

46 The higher the VIF, the more severe the multicollinearity problem. If the VIF is greater than 5, there might be a problem (a threshold that is arbitrarily chosen).

47 Tolerance: TOL = 1 - R²_i, with 0 < TOL < 1. If TOL is close to zero, multicollinearity is severe. You can use either VIF or TOL.
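A sketch of the VIF check in practice (simulated data; statsmodels' variance_inflation_factor is my choice of tool, not something the slides prescribe):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(6)
n = 200
x1 = rng.normal(size=n)
x2 = 0.95 * x1 + 0.3 * rng.normal(size=n)   # nearly collinear with x1
x3 = rng.normal(size=n)
X = sm.add_constant(pd.DataFrame({"x1": x1, "x2": x2, "x3": x3}))

for i, name in enumerate(X.columns):
    print(name, variance_inflation_factor(X.values, i))
# x1 and x2 should show VIFs well above the rule-of-thumb cutoff of 5.
```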

48 Effects of Multicollinearity: 1. OLS estimates are still unbiased. 2. Standard errors of the estimated coefficients are inflated. 3. t-statistics will be small. 4. Estimates are sensitive to small changes, whether from dropping a variable or adding a few more observations.

49 With multicollinearity, you may fail to reject H0 in every individual t-test and yet reject H0 in the joint F-test.

50 Dealing with Multicollinearity. 1. Ignore it. Do this if the multicollinearity is not causing any problems: if the t-statistics are insignificant and the estimates unreliable, do something; otherwise, do nothing.

51 2. Drop a variable. If two variables are significantly related, drop one of them (it is redundant). 3. Increase the sample size. The larger the sample size, the more precise the estimates.

52 Review. Perfect multicollinearity occurs when two or more of your explanators are jointly perfectly correlated; that is, you can write one of your explanators as a linear function of the others.

53 Review (cont.) OLS breaks down with perfect (or even near-perfect) multicollinearity. Multicollinearity most frequently occurs when you want to include: – time, age, and birth-year effects – a dummy variable for each category, plus a constant

54 Review (cont.) Dummy variables (also called binary variables) take on only the values 0 or 1. Dummy variables let you estimate separate intercepts and slopes for different groups. To avoid multicollinearity while including a constant, you need to omit the dummy variable for one group (e.g. males or non-Hispanic whites). You want to pick one of the larger groups to omit.

