
Published by Brooke Smailes. Modified over 2 years ago.

1
Conditional Test Statistics

2
Suppose that we are considering two log-linear models and that Model 2 is a special case of Model 1; that is, the parameters of Model 2 are a subset of the parameters of Model 1. Also assume that Model 1 has been shown to fit the data adequately.

3
In this case one is interested in testing whether the differences in the expected frequencies between Model 1 and Model 2 are simply due to random variation. The likelihood ratio chi-square statistic that achieves this goal is G²(2|1) = G²(2) − G²(1), which is compared with a chi-square distribution whose degrees of freedom equal the difference in degrees of freedom of the two models.
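As a minimal sketch of this conditional test: for an even number of degrees of freedom the chi-square survival function has a closed form, so no statistics library is needed. The two model fits used here are taken from the G² table later in the deck; the function names are my own.

```python
import math

def chi2_sf_even_df(x, df):
    """Chi-square survival function P(X > x), closed form valid for even df:
    exp(-x/2) * sum_{i=0}^{df/2 - 1} (x/2)^i / i!"""
    assert df > 0 and df % 2 == 0
    k = df // 2
    return math.exp(-x / 2) * sum((x / 2) ** i / math.factorial(i) for i in range(k))

def conditional_g2_test(g2_reduced, df_reduced, g2_full, df_full):
    """Conditional likelihood ratio test of the reduced model (Model 2)
    against the full model (Model 1): G2(2|1) = G2(2) - G2(1), referred to
    a chi-square distribution on df(2) - df(1) degrees of freedom."""
    g2_diff = g2_reduced - g2_full
    df_diff = df_reduced - df_full
    return g2_diff, df_diff, chi2_sf_even_df(g2_diff, df_diff)

# Values from the model table later in the deck:
# reduced model [1][24][34]: G2 = 18.0 on 16 df
# full model [13][24][34]:   G2 = 11.9 on 14 df
g2_diff, df_diff, p = conditional_g2_test(18.0, 16, 11.9, 14)
```

Here G²(2|1) = 6.1 on 2 degrees of freedom, so the extra [13] term in the full model gives a significant improvement at the 5% level.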

4
Example

5
Goodness-of-fit tests for the all-k-factor models. Conditional tests for zero k-factor interactions.

6
Conclusions
1. The four-factor interaction is not significant: G²(3|4) = 0.7 (p = 0.705).
2. The all-three-factor model provides an adequate fit: G²(3) = 0.7 (p = 0.705).
3. The three-factor interactions are not significantly different from 0: G²(2|3) = 9.2 (p = 0.239).
4. The all-two-factor model provides an adequate fit: G²(2) = 9.9 (p = 0.359).
5. There are significant two-factor interactions: G²(1|2) = 33.0 (p = 0.00083).
Conclude that the model should contain main effects and some two-factor interactions.
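The G² statistics quoted above are likelihood ratio goodness-of-fit statistics comparing observed cell counts with the counts fitted under a model. A minimal sketch of the computation, using a hypothetical 2x2 table fitted under independence (the counts are invented for illustration, not from the deck's data):

```python
import math

def likelihood_ratio_g2(observed, fitted):
    """G2 = 2 * sum over cells of obs * ln(obs / fitted); cells with a zero
    observed count contribute nothing to the sum."""
    return 2 * sum(o * math.log(o / f) for o, f in zip(observed, fitted) if o > 0)

# Hypothetical 2x2 table flattened row by row.  Under independence, the
# fitted counts are (row total * column total) / n, with margins 60/40 and
# 50/50 over n = 100.
observed = [35, 25, 15, 25]
fitted = [30.0, 30.0, 20.0, 20.0]
g2 = likelihood_ratio_g2(observed, fitted)
```

For these counts G² is about 4.2, which would be compared with a chi-square distribution on 1 degree of freedom for a 2x2 independence test.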

7
There may also be a natural sequence of progressively more complicated models that one might want to examine. In the laundry detergent example the variables are:
1. Softness of laundry water used
2. Previous use of Brand M
3. Temperature of laundry water used
4. Preference for Brand X over Brand M

8
A natural order for increasingly complex models to consider might be:
1. [1][2][3][4] — the all-main-effects model: independence amongst all four variables.
2. [1][3][24] — since previous use of Brand M may be highly related to preference for Brand M, first add the 2-4 interaction.
3. [1][34][24] — Brand M is recommended for hot water, so add the 3-4 interaction second.
4. [13][34][24] — Brand M is also recommended for soft laundry, so add the 1-3 interaction third.
5. [13][234]
6. [134][234]
Finally, models 5 and 6 add some possible three-factor interactions.

9
Likelihood ratio G² for various models:

Model                  d.f.   G²
[1][3][24]              17   22.4
[1][24][34]             16   18.0
[13][24][34]            14   11.9
[13][23][24][34]        13   11.2
[12][13][23][24][34]    11   10.1
[1][234]                14   14.5
[134][24]               10   12.2
[13][234]               12    8.4
[24][34][123]            9    8.4
[123][234]               8    5.6
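Nested models in this table can be compared by differencing their G² values and degrees of freedom. A sketch using the tabulated values along one nested sequence (the ordering follows the natural sequence suggested earlier in the deck):

```python
# (model, df, G2) triples from the table, ordered from most restrictive
# to least restrictive along one nested sequence.
sequence = [
    ("[1][3][24]", 17, 22.4),
    ("[1][24][34]", 16, 18.0),
    ("[13][24][34]", 14, 11.9),
    ("[13][234]", 12, 8.4),
]

# Each adjacent pair gives a conditional statistic G2(2|1) = G2(2) - G2(1)
# on df(2) - df(1) degrees of freedom.
comparisons = []
for (m2, df2, g2_2), (m1, df1, g2_1) in zip(sequence, sequence[1:]):
    comparisons.append((f"{m2} vs {m1}", round(g2_2 - g2_1, 1), df2 - df1))
```

The successive conditional statistics are 4.4 on 1 df, 6.1 on 2 df, and 3.5 on 2 df, which is how one decides where along the sequence additional terms stop paying for their degrees of freedom.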

11
Stepwise selection procedures: forward selection and backward elimination.

12
Forward selection: starting with a model that underfits the data, log-linear parameters that are not in the model are added step by step until a model that does fit is achieved. At each step the most significant of the candidate parameters is added to the model. To determine the significance of an added parameter we use the statistic G²(2|1) = G²(2) − G²(1), where Model 1 contains the parameter and Model 2 does not.

13
Backward elimination: starting with a model that overfits the data, log-linear parameters that are in the model are deleted step by step until a model is reached that continues to fit the data and has the smallest number of significant parameters. At each step the least significant parameter is deleted from the model. To determine the significance of a deleted parameter we use the statistic G²(2|1) = G²(2) − G²(1), where Model 1 contains the parameter and Model 2 does not.
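A sketch of one step of the forward procedure just described. The function name and the candidate-selection rule (largest conditional G² per degree of freedom as a simple stand-in for "most significant") are my own; the first candidate's fit is taken from the model table, while the second candidate's values are invented for illustration.

```python
def forward_step(current_g2, current_df, candidates):
    """One step of forward selection.

    `candidates` maps a candidate parameter name to the (G2, df) of the
    model obtained by adding that parameter.  The conditional statistic for
    each candidate is G2(2|1) = current_g2 - new_g2 on current_df - new_df
    degrees of freedom; we add the candidate with the largest statistic per
    degree of freedom.
    """
    def score(item):
        _, (new_g2, new_df) = item
        return (current_g2 - new_g2) / (current_df - new_df)

    name, (new_g2, new_df) = max(candidates.items(), key=score)
    return name, current_g2 - new_g2, current_df - new_df

# Starting from [1][3][24] (G2 = 22.4 on 17 df) with two candidate terms:
# [34] from the model table, and a hypothetical [13] fit for comparison.
best, g2_cond, df_cond = forward_step(
    22.4, 17, {"[34]": (18.0, 16), "[13]": (19.5, 16)}
)
```

Here [34] wins with G²(2|1) = 4.4 on 1 df; a full implementation would repeat this step, checking the conditional p-value each time, until no candidate is significant. Backward elimination is the mirror image, deleting the term with the smallest conditional statistic.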

15
K = Knowledge, N = Newspaper, R = Radio, S = Reading, L = Lectures

18
Continuing after 10 steps

19
The final step

20
The best model was found at the previous step: [LN][KLS][KR][KN][LR][NR][NS]

21
Logit Models
To date we have not worried about whether any of the variables were dependent or independent variables. The logit model is used when we have a single binary dependent variable.

23
The variables 1.Type of seedling (T) a.Longleaf seedling b.Slash seedling 2.Depth of planting (D) a.Too low. b.Too high 3.Mortality (M) (the dependent variable) a.Dead b.Alive

24
The Log-linear Model. Note: m_ij1 = # dead when T = i and D = j; m_ij2 = # alive when T = i and D = j; m_ij1 / m_ij2 = mortality ratio when T = i and D = j.

25
Hence, since log m_ij1 and log m_ij2 share every log-linear term not involving the dependent variable M, only the terms involving M survive when we form log(m_ij1 / m_ij2) = log m_ij1 − log m_ij2.

26
The logit model: log(m_ij1 / m_ij2) = w + w_i(T) + w_j(D), where each logit parameter w is twice the corresponding log-linear parameter involving M (with the usual sum-to-zero constraints, the M terms at its two levels are negatives of each other, so their difference doubles).

27
Thus corresponding to a log-linear model there is a logit model predicting the log ratio of the expected frequencies of the two categories of the dependent variable. Also, (k+1)-factor interactions with the dependent variable in the log-linear model determine k-factor interactions in the logit model: k + 1 = 1 gives the constant term in the logit model; k + 1 = 2 gives main effects in the logit model.
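A sketch of the correspondence for the mortality example, with binary dependent variable M and sum-to-zero constraints. The symbols follow standard log-linear notation, since the slides' original displayed equations were not preserved:

```latex
\log m_{ijk} = u + u_{T(i)} + u_{D(j)} + u_{M(k)}
             + u_{TD(ij)} + u_{TM(ik)} + u_{DM(jk)}
% With the constraints u_{M(1)} = -u_{M(2)}, u_{TM(i1)} = -u_{TM(i2)},
% u_{DM(j1)} = -u_{DM(j2)}, subtracting the k=2 line from the k=1 line:
\log \frac{m_{ij1}}{m_{ij2}} = 2u_{M(1)} + 2u_{TM(i1)} + 2u_{DM(j1)}
```

The doubling of each surviving term is exactly the "logit parameters = 2 × log-linear parameters" rule quoted later in the deck.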

28
1 = Depth, 2 = Mort, 3 = Type

29
Log-Linear parameters for Model: [TM][TD][DM]

30
Logit Model for predicting Mortality

32
The best model found by forward selection was [LN][KLS][KR][KN][LR][NR][NS]. To fit a logit model to predict K (Knowledge) we need to fit a log-linear model with the important interactions with K, namely [LNRS][KLS][KR][KN]. The logit model will contain: main effects for L (Lectures), N (Newspapers), R (Radio), and S (Reading); and a two-factor interaction effect for L and S.

33
The logit parameters for the model [LNSR][KLS][KR][KN] (multiplicative effects are given in brackets; logit parameters = 2 × log-linear parameters):

Constant term: -0.226 (0.798)

Main effects on Knowledge:
Lectures:   Lect   0.268 (1.307);  None  -0.268 (0.765)
Newspaper:  News   0.324 (1.383);  None  -0.324 (0.723)
Reading:    Solid  0.340 (1.405);  Not   -0.340 (0.712)
Radio:      Radio  0.150 (1.162);  None  -0.150 (0.861)

The two-factor interaction effect of Reading and Lectures on Knowledge
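As a sketch of how these parameters combine (additively on the logit scale, multiplicatively on the odds scale), consider a respondent exposed to lectures, newspapers, solid reading, and radio. The Reading-by-Lectures interaction term is deliberately omitted here because its value is not listed above, so this is an approximation using the constant and main effects only:

```python
import math

# Logit-scale parameters from the table: the constant plus the four
# main-effect levels for a fully exposed respondent.  The L-by-S
# interaction term is omitted (its value is not given in the text).
terms = {
    "constant": -0.226,
    "lectures": 0.268,
    "newspaper": 0.324,
    "reading": 0.340,
    "radio": 0.150,
}

logit = sum(terms.values())   # log-odds of knowledge
odds = math.exp(logit)        # equals the product of the bracketed
                              # multiplicative effects in the table
prob = odds / (1 + odds)      # predicted probability of knowledge
```

The additive total is 0.856, matching the product of the multiplicative effects 0.798 × 1.307 × 1.383 × 1.405 × 1.162 ≈ 2.35, for a predicted probability of knowledge of about 0.70 before the interaction adjustment.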
