
1 The Use of Dummy Variables

2 In the examples so far the independent variables have been continuous numerical variables. Suppose that some of the independent variables are categorical. Dummy variables are artificially defined variables designed to convert a model including categorical independent variables into the standard multiple regression model.
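
As an illustration only (the slides themselves work with SPSS later on), here is a minimal Python sketch of turning a categorical variable into 0/1 dummy columns; the data frame and column names are made up for this sketch.

import pandas as pd

# A small, made-up data set with one categorical predictor ("treatment")
# and one continuous predictor ("x").
df = pd.DataFrame({
    "treatment": ["A", "A", "B", "B", "C", "C"],
    "x": [2.0, 4.0, 2.0, 4.0, 2.0, 4.0],
    "y": [3.1, 4.0, 2.8, 4.4, 3.5, 5.2],
})

# Expand the categorical variable into 0/1 dummy columns.  drop_first=True keeps
# only k - 1 dummies so the design matrix is not singular when an intercept
# is also included.
dummies = pd.get_dummies(df["treatment"], prefix="T", drop_first=True, dtype=int)
design = pd.concat([df[["x"]], dummies], axis=1)
print(design)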

3 Example: Comparison of Slopes of k Regression Lines with Common Intercept

4 Situation: k treatments or k populations are being compared. For each of the k treatments we have measured both Y (the response variable) and X (an independent variable). Y is assumed to be linearly related to X, with the slope dependent on treatment (population) while the intercept is the same for each treatment.

5 The Model:
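
A sketch of the model being described, reconstructed from the previous slide (the symbols are this sketch's own): observation j under treatment i satisfies

\[ Y_{ij} = \beta_0 + \beta_i X_{ij} + \varepsilon_{ij}, \qquad i = 1, \dots, k, \]

with a common intercept β0, a treatment-specific slope βi, and errors εij usually taken to be independent N(0, σ²).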

6 This model can be artificially put into the form of the Multiple Regression model by the use of dummy variables to handle the categorical independent variable, Treatments. Dummy variables are variables that are artificially defined.

7 In this case we define a new variable for each category of the categorical variable. That is, we will define Xi for each category of treatments as follows:

8 Then the model can be written as follows. The Complete Model:
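
A sketch of the dummy definition and the complete model in this notation:

\[ X_i = \begin{cases} X & \text{if the observation receives treatment } i \\ 0 & \text{otherwise} \end{cases} \qquad i = 1, \dots, k, \]

\[ Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_k X_k + \varepsilon, \]

where β0 is the common intercept and βi is the slope for treatment i. Since exactly one of the Xi is nonzero for each observation, X1 + X2 + … + Xk = X, which is the variable used in the reduced model below.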

9 In this case, Dependent Variable: Y; Independent Variables: X1, X2, ..., Xk.

10 In the above situation we would likely be interested in testing the equality of the slopes, namely the Null Hypothesis H0: β1 = β2 = ... = βk (q = k – 1 restrictions).

11 The Reduced Model: Dependent Variable: Y; Independent Variable: X = X1 + X2 + ... + Xk.

12 Example: In the following example we are measuring Yield (Y) as it depends on the amount (X) of a pesticide. Again we will assume that the dependence of Y on X is linear. (I should point out that the concepts used in this discussion can easily be adapted to the non-linear situation.)

13 Suppose that the experiment is going to be repeated for three brands of pesticide: A, B and C. The quantity, X, of pesticide in this experiment was set at three different levels: 2 units/hectare, 4 units/hectare and 8 units/hectare. Four test plots were randomly assigned to each of the nine combinations of brand and level of pesticide.

14 Note that we would expect a common intercept for each brand of pesticide, since when the amount of pesticide, X, is zero the three brands of pesticide would be equivalent.

15 The data for this experiment is given in the following table:

Brand   X = 2    X = 4    X = 8
A       29.63    28.16    28.45
        31.87    33.48    37.21
        28.02    28.13    35.06
        35.24    28.25    33.99
B       32.95    29.55    44.38
        24.74    34.97    38.78
        23.38    36.35    34.92
        32.08    38.38    27.45
C       28.68    33.79    46.26
        28.70    43.95    50.77
        22.67    36.89    50.21
        30.02    33.56    44.14

16

17 The data as it would appear in a data file. The variables X1, X2 and X3 are the "dummy" variables.

Pesticide   X (Amount)   X1   X2   X3   Y
A           2            2    0    0    29.63
A           2            2    0    0    31.87
A           2            2    0    0    28.02
A           2            2    0    0    35.24
B           2            0    2    0    32.95
B           2            0    2    0    24.74
B           2            0    2    0    23.38
B           2            0    2    0    32.08
C           2            0    0    2    28.68
C           2            0    0    2    28.70
C           2            0    0    2    22.67
C           2            0    0    2    30.02
A           4            4    0    0    28.16
A           4            4    0    0    33.48
A           4            4    0    0    28.13
A           4            4    0    0    28.25
B           4            0    4    0    29.55
B           4            0    4    0    34.97
B           4            0    4    0    36.35
B           4            0    4    0    38.38
C           4            0    0    4    33.79
C           4            0    0    4    43.95
C           4            0    0    4    36.89
C           4            0    0    4    33.56
A           8            8    0    0    28.45
A           8            8    0    0    37.21
A           8            8    0    0    35.06
A           8            8    0    0    33.99
B           8            0    8    0    44.38
B           8            0    8    0    38.78
B           8            0    8    0    34.92
B           8            0    8    0    27.45
C           8            0    0    8    46.26
C           8            0    0    8    50.77
C           8            0    0    8    50.21
C           8            0    0    8    44.14
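
As a hedged sketch (not part of the original slides), the slope dummies above can be constructed and the complete model fitted by ordinary least squares in Python with numpy; the variable names are illustrative. The coefficients should reproduce, up to rounding, the complete-model output on the next slide (intercept ≈ 26.24; slopes ≈ 0.98, 1.42, 2.60 for brands A, B, C).

import numpy as np

# Brand, amount X, and yield Y, in the same order as the data file above.
brand = (["A"]*4 + ["B"]*4 + ["C"]*4) * 3
X = np.array([2]*12 + [4]*12 + [8]*12, dtype=float)
y = np.array([29.63, 31.87, 28.02, 35.24, 32.95, 24.74, 23.38, 32.08,
              28.68, 28.70, 22.67, 30.02,
              28.16, 33.48, 28.13, 28.25, 29.55, 34.97, 36.35, 38.38,
              33.79, 43.95, 36.89, 33.56,
              28.45, 37.21, 35.06, 33.99, 44.38, 38.78, 34.92, 27.45,
              46.26, 50.77, 50.21, 44.14])

# Slope dummies: Xi equals X for its own brand and 0 otherwise.
brand = np.array(brand)
x1 = np.where(brand == "A", X, 0.0)
x2 = np.where(brand == "B", X, 0.0)
x3 = np.where(brand == "C", X, 0.0)

# Design matrix with a single common intercept column.
D = np.column_stack([np.ones_like(X), x1, x2, x3])
coef, rss, rank, _ = np.linalg.lstsq(D, y, rcond=None)
print("intercept, slope A, slope B, slope C:", coef)
print("residual sum of squares:", rss)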

18 Fitting the complete model:

ANOVA
             df    SS             MS            F             Significance F
Regression   3     1095.815813    365.2719378   18.33114788   4.19538E-07
Residual     32    637.6415754    19.92629923
Total        35    1733.457389

             Coefficients
Intercept    26.24166667
X1           0.981388889
X2           1.422638889
X3           2.602400794

19 Fitting the reduced model:

ANOVA
             df    SS             MS            F             Significance F
Regression   1     623.8232508    623.8232508   19.11439978   0.000110172
Residual     34    1109.634138    32.63629818
Total        35    1733.457389

             Coefficients
Intercept    26.24166667
X            1.668809524

20 The ANOVA Table for testing the equality of slopes

                    df    SS             MS            F             Significance F
Common slope zero   1     623.8232508    623.8232508   31.3065283    3.51448E-06
Slope comparison    2     471.9925627    235.9962813   11.84345766   0.000141367
Residual            32    637.6415754    19.92629923
Total               35    1733.457389
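
A hedged sketch of the full-versus-reduced F test behind the "Slope comparison" line, using the residual sums of squares reported in the two fits above (reduced model: 1109.634 on 34 df; complete model: 637.642 on 32 df); scipy is assumed to be available.

from scipy.stats import f

def extra_ss_f_test(rss_reduced, df_reduced, rss_complete, df_complete):
    """F test for dropping the extra terms of the complete model."""
    q = df_reduced - df_complete                       # number of restrictions
    f_stat = ((rss_reduced - rss_complete) / q) / (rss_complete / df_complete)
    p_value = f.sf(f_stat, q, df_complete)             # upper-tail probability
    return f_stat, p_value

# Values taken from the ANOVA tables above (pesticide example).
print(extra_ss_f_test(1109.634138, 34, 637.6415754, 32))
# Expected to match the "Slope comparison" line: F ≈ 11.84, p ≈ 0.00014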

21 Example: Comparison of Intercepts of k Regression Lines with a Common Slope (One-way Analysis of Covariance)

22 Situation: k treatments or k populations are being compared. For each of the k treatments we have measured both Y (the response variable) and X (an independent variable). Y is assumed to be linearly related to X, with the intercept dependent on treatment (population) while the slope is the same for each treatment. Y is called the response variable, while X is called the covariate.

23 The Model:

24 Equivalent Forms of the Model: 1) 2)
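
A sketch of two equivalent parameterizations, consistent with the description on slide 22 (the δi notation is an assumption of this sketch, not necessarily the symbols on the slide):

\[ \text{1)}\quad Y_{ij} = \beta_{0i} + \beta X_{ij} + \varepsilon_{ij}, \qquad i = 1, \dots, k \]

\[ \text{2)}\quad Y_{ij} = \beta_0 + \delta_i + \beta X_{ij} + \varepsilon_{ij}, \qquad \delta_k = 0, \]

where β0 = β0k is the intercept of the reference treatment and δi = β0i − β0 is the shift in intercept for treatment i.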

25 This model can be artificially put into the form of the Multiple Regression model by the use of dummy variables to handle the categorical independent variable Treatments.

26 In this case we define a new variable for each category of the categorical variable. That is, we will define Xi for categories i = 1, 2, …, (k – 1) of treatments as follows:

27 Then the model can be written as follows. The Complete Model:
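
A sketch of the dummy coding and the resulting complete model, continuing the notation assumed above:

\[ X_i = \begin{cases} 1 & \text{if the observation comes from treatment } i \\ 0 & \text{otherwise} \end{cases} \qquad i = 1, \dots, k - 1, \]

\[ Y = \beta_0 + \delta_1 X_1 + \delta_2 X_2 + \cdots + \delta_{k-1} X_{k-1} + \beta X + \varepsilon, \]

where β0 is the intercept for treatment k, δi is the shift in intercept for treatment i, and β is the common slope.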

28 In this case, Dependent Variable: Y; Independent Variables: X1, X2, ..., Xk-1, X.

29 In the above situation we would likely be interested in testing the equality of the intercepts, namely the Null Hypothesis H0: δ1 = δ2 = ... = δk-1 = 0 (q = k – 1 restrictions).

30 The Reduced Model: Dependent Variable: Y Independent Variable: X

31 Example: In the following example we are interested in comparing the effects of five workbooks (A, B, C, D, E) on the performance of students in Mathematics. For each workbook, 15 students are selected (a total of n = 15 × 5 = 75). Each student is given a pretest (pretest score ≡ X) and a final test (final score ≡ Y). The data is given on the following slide.

32 The data and the Model

33 Graphical display of data

34 Some comments:
1. The linear relationship between Y (Final Score) and X (Pretest Score) models the differing aptitudes for mathematics.
2. The shifting up and down of this linear relationship measures the effect of workbooks on the final score Y.

35 The Model:
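
With five workbooks and E as the reference category (slide 45 below confirms that X1 = … = X4 = 0 corresponds to workbook E), the model specializes, under the same assumed notation, to

\[ Y = \beta_0 + \delta_1 X_1 + \delta_2 X_2 + \delta_3 X_3 + \delta_4 X_4 + \beta X + \varepsilon, \]

where X1, …, X4 are 0/1 indicators for workbooks A, B, C, D, X is the pretest score, β0 is the intercept for workbook E and β is the common slope.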

36 The data as it would appear in a data file.

37 The data as it would appear in a data file with the dummy variables (X1, X2, X3, X4) added.

38 Here is the data file in SPSS with the dummy variables (X1, X2, X3, X4) added. These can be added within SPSS.

39 Fitting the complete model. The dependent variable is the final score, Y. The independent variables are the Pre-score X and the four dummy variables X1, X2, X3, X4.
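
A hedged Python sketch of this fit using statsmodels' formula interface; the file name and the column names (final, pretest, workbook) are hypothetical. Note that the formula's C() term builds the k − 1 = 4 dummies automatically, but by default it picks its own reference category, so the individual dummy coefficients may be parameterized differently from the slides even though the fitted model and the F test are the same.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data file with columns: final (Y), pretest (X), workbook (A..E).
df = pd.read_csv("workbooks.csv")

# Complete model: common slope for pretest, workbook-specific intercepts.
complete = smf.ols("final ~ pretest + C(workbook)", data=df).fit()
print(complete.summary())

# Reduced model for testing equality of intercepts: drop the workbook dummies.
reduced = smf.ols("final ~ pretest", data=df).fit()
print(reduced.ssr, complete.ssr)             # residual sums of squares for the F test
print(complete.compare_f_test(reduced))      # (F statistic, p-value, df difference)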

40 The Output

41 The Output - continued

42 The interpretation of the coefficients: the common slope

43 The interpretation of the coefficients: the intercept for workbook E

44 The interpretation of the coefficients: the changes in the intercept when we change from workbook E to other workbooks.

45
1. When the workbook is E, then X1 = 0, …, X4 = 0, and the model can be written as in the sketch below.
2. When the workbook is A, then X1 = 1 and X2 = X3 = X4 = 0, and hence δ1 is the change in the intercept when we change from workbook E to workbook A.
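
Under the notation assumed in the earlier sketches (β0 the intercept for workbook E, δi the intercept shifts, β the common slope), the two cases read

\[ \text{Workbook E:}\quad Y = \beta_0 + \beta X + \varepsilon, \]
\[ \text{Workbook A:}\quad Y = (\beta_0 + \delta_1) + \beta X + \varepsilon, \]

so δ1 is the difference between the intercepts for workbooks A and E.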

46 Testing for the equality of the intercepts: the reduced model. The only independent variable is X (the pre-score).

47 Fitting the reduced model. The dependent variable is the final score, Y. The only independent variable is the Pre-score X.

48 The Output for the reduced model. Lower R².

49 The Output - continued. Increased R.S.S. (residual sum of squares).

50 The F Test
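
The F test used on the following slides is the standard comparison of a complete model with a reduced model; with q restrictions, n observations and p parameters in the complete model,

\[ F = \frac{\left(\mathrm{RSS}_{\mathrm{reduced}} - \mathrm{RSS}_{\mathrm{complete}}\right)/q}{\mathrm{RSS}_{\mathrm{complete}}/(n - p)}, \]

which under H0 has an F distribution with q and n − p degrees of freedom (n − p is the residual degrees of freedom of the complete model).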

51 The Reduced model The Complete model

52 The F test

53 Testing for zero slope: the reduced model. The independent variables are X1, X2, X3, X4 (the dummies).

54 The Reduced model The Complete model

55 The F test

56 The Analysis of Covariance. This analysis can also be performed by using a package that can perform Analysis of Covariance (ANACOVA). The package sets up the dummy variables automatically.

57 Here is the data file in SPSS. The Dummy variables are no longer needed.

58 In SPSS, to perform ANACOVA you select from the menu – Analysis -> General Linear Model -> Univariate

59 This dialog box will appear

60 You now select:
1. The dependent variable Y (Final Score)
2. The Fixed Factor (the categorical independent variable – workbook)
3. The covariate (the continuous independent variable – pretest score)

61 The output: the ANOVA table. Compare this with the previously computed table.

62 The output: the ANOVA table. This is the sum of squares in the numerator when we attempt to test if the slope is zero (while allowing the intercepts to be different).

63 Another application of the use of dummy variables: the dependent variable, Y, is linearly related to X, but the slope changes at one or several known values of X (nodes).

64 The model: a piecewise linear relation in Y versus X, with a different slope on each segment between the nodes x1, x2, …, xk.

65 Now define Etc.

66 Then the model can be written
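
A sketch of one common coding, consistent with the reduced model written on slide 74 (which uses X = X1 + X2); for a single node at X = c (c = 10 in the example that follows), the segment variables and model can be taken as

\[ X_1 = \min(X, c), \qquad X_2 = \max(X - c, 0), \]
\[ Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \varepsilon, \]

so β1 is the slope before the node and β2 the slope after it; the fitted line is continuous at X = c because X1 + X2 = X. With several known nodes, one such variable is defined for each segment.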

67 An Example. In this example we are measuring Y at time X. Y is growing linearly with time. At time X = 10, an additive is added to the process which may change the rate of growth. The data:

68 Graph

69 Now define the dummy variables

70 The data as it appears in SPSS – x1, x2 are the dummy variables

71 We now regress y on x1 and x2.
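
A hedged Python sketch of this regression on simulated data (the slide's own data values are not used here; the simulated y is for illustration only), with the node at X = 10 as in the example.

import numpy as np

# Simulated data: slope 1.0 before the node at X = 10, slope 2.0 after it, plus noise.
rng = np.random.default_rng(0)
x = np.arange(1, 21, dtype=float)
y = 5 + 1.0 * np.minimum(x, 10) + 2.0 * np.maximum(x - 10, 0) + rng.normal(0, 1, x.size)

# Dummy variables for the two segments (node at X = 10).
x1 = np.minimum(x, 10.0)          # equals X up to the node, then stays at 10
x2 = np.maximum(x - 10.0, 0.0)    # 0 up to the node, then X - 10

# Regress y on x1 and x2 (with an intercept), as on the slide.
D = np.column_stack([np.ones_like(x), x1, x2])
coef, rss, _, _ = np.linalg.lstsq(D, y, rcond=None)
print("intercept, slope before node, slope after node:", coef)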

72 The Output

73 Graph

74 Testing for no change in slope. Here we want to test H0: β1 = β2 vs HA: β1 ≠ β2. The reduced model is Y = β0 + β1(X1 + X2) + ε = β0 + β1X + ε.

75 Fitting the reduced model. We now regress y on x.

76 The Output

77 Graph – fitting a common slope

78 The test for the equality of slopes

