Liability Threshold Models


1 Liability Threshold Models
Frühling Rijsdijk & Kate Morley Twin Workshop, Boulder Tuesday March 4th 2008

2 Aims
Introduce model fitting to categorical data
Define liability and describe assumptions of the liability model
Show how heritability of liability can be estimated from categorical twin data
Selection
Practical exercise

3 Ordinal data
Measuring instrument discriminates between two or a few ordered categories: absence (0) or presence (1) of a disorder, or a score on a questionnaire item, e.g. 0-1 or 0-4. Yesterday we learned how to analyse continuous variables. We now deal with the situation where, instead of making measurements on a continuous scale, we can discriminate between only a small number of ordered categories with our measuring instrument. For example, the data may be the presence or absence of a disease, or the responses to a single item on a questionnaire. In such cases the data take the form of counts, i.e. the number of individuals within each category of response.

4 A single ordinal variable
Assumptions: (1) Underlying normal distribution of liability (2) The liability distribution has 1 or more thresholds (cut-offs) So if our trait of interest is categorical, it is useful to make some assumptions to help us analyse it. Firstly, that the ordered categories reflect an imprecise measurement of an underlying normal distribution of liability. Secondly, the liability distribution is assumed to have one or more thresholds (cut-offs) to discriminate between the categories.

5 The standard Normal distribution
Liability is a latent variable; the scale is arbitrary, so the distribution is assumed to be the Standard Normal Distribution (SND) or z-distribution: mean (μ) = 0 and SD (σ) = 1. z-values are the number of SDs away from the mean, and the area under the curve translates directly to probabilities. Since the liability is a theoretical construct, a latent (unmeasured) variable, its scale is arbitrary, and it is customary to assume that the liability has a standard normal distribution (SND) or z-distribution, which has a mean of 0 and a standard deviation (SD) of 1. The z-distribution is described mathematically by the standard normal probability density function (φ, phi), which is a bell-shaped curve. The z-values are the number of SDs away from the mean. The z-distribution is used because of its convenience: we can interpret the area under the curve between two z-values as a probability, e.g. the area between -1 and +1 SD around the mean corresponds to 68%; between -2 and +2 SD, to 95%.

6 Standard Normal Cumulative Probability in right-hand tail
(For negative z-values, areas are found by symmetry.)

Area = P(z >= z_T) = integral from z_T to infinity of φ(z; μ=0, σ²=1) dz

(Table of right-tail areas for selected z-values.)

Mathematically, the area under the curve can be worked out using integral calculus. This is the mathematical notation for integration. First: the integral sign, with the limits enclosing the area we are after (here from z_T to infinity, i.e. the shaded region to the right of z_T). Second: the function of the liability distribution, the PDF (probability density function), indicated by the symbol φ; between brackets we indicate its distributional characteristics, mean = 0, variance = 1. Because we are thinking of the area under the curve as probabilities, the total area under the curve equals 1, and the probability of being to the right of the mean (zero) is 0.5. Given any threshold (z-value), it is possible to determine the probability between the threshold and infinity, or between the threshold and minus infinity (the mirror image, calculated by subtracting the right-tail area from 1). However, we do not need to worry about integrating the SND ourselves: it has already been done by statisticians and we can look up these values in standard statistical tables. Values for negative z-values are found by symmetry.

7 Example: single dichotomous variable
It is possible to find a z-value (threshold) such that the area above it exactly matches the observed proportion of the sample, e.g. in a sample of 1000 individuals where 120 have met the criteria for a disorder (12%), the z-value is about 1.2. We can do the opposite of looking up an area: find a z-value (threshold) on the SND such that the proportion above the threshold exactly matches the observed proportion of the sample. Thus, if from a sample of 1000 individuals, 120 subjects meet the criteria for a specific disorder (12%), the threshold z-value (cut-off) is found to be about 1.2. In other words, the threshold value is estimated from the observed counts of 120 affected and 880 unaffected individuals. So we can work backwards to calculate a threshold of liability associated with the disorder, just by using the proportions of affected versus unaffected people in our sample. (Figure: SND with threshold at 1.2; Unaffected (0): 880, Affected (1): 120.)
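This threshold lookup can be reproduced directly; a minimal Python sketch (not part of the original Mx materials), using the standard library's inverse normal CDF:

```python
from statistics import NormalDist

def liability_threshold(n_affected, n_total):
    """z-value whose right-tail area equals the observed prevalence."""
    prevalence = n_affected / n_total
    return NormalDist().inv_cdf(1 - prevalence)

z = liability_threshold(120, 1000)  # 12% affected
print(round(z, 2))                  # ~1.17, i.e. roughly the 1.2 on the slide
```

The exact value for 12% is about 1.175; the slide rounds it to 1.2.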

8 Two ordinal variables: Data from twin pairs
Contingency Table with 4 observed cells:
cell a: pairs concordant for unaffected
cell d: pairs concordant for affected
cells b/c: pairs discordant for the disorder

             Twin 2
Twin 1       0     1
   0         a     b
   1         c     d

What happens when we have categorical data for twins? When the measured trait is dichotomous, i.e. a disorder is either present or not, we can partition our observations into pairs concordant for not having the disorder (a), pairs concordant for the disorder (d), and discordant pairs in which one is affected and one is unaffected (b and c). These frequencies are summarized in a 2x2 contingency table (0 = unaffected, 1 = affected).

9 Joint Liability Model for twin pairs
Assumed to follow a bivariate normal distribution, where both traits have a mean of 0 and standard deviation of 1, but the correlation between them is unknown. The shape of a bivariate normal distribution is determined by the correlation between the traits When we have twin pair data, the assumption now is that the joint distribution of liabilities of twin pairs follows a bivariate normal distribution, where both traits have a mean of 0 and standard deviation 1, but the correlation between them is unknown. The shape of such a bivariate normal distribution is determined by the correlation.

10 Bivariate Normal
These graphs represent bivariate normal distributions of z-scores of pairs of twins (one twin on each of the two axes). The left-hand graph (r = .00) shows no correlation: there is a 'hill' shape because most people have middling scores. When the correlation is high (r = .90), a spike is seen, because one twin scoring high means the co-twin is also likely to score high.

11 Bivariate Normal (R=0.6) partitioned at threshold 1.4 (z-value) on both liabilities
Top right: cell 'a', where both twins are free from the disorder (both score below the threshold z-score)
Bottom left: discordant twins, where one scores above the threshold and the other below it (cells b and c)
Bottom right: concordant for having the disorder (cell d)

12 Expected proportions
By numerical integration of the bivariate normal over two dimensions, the liabilities for twin 1 and twin 2, e.g. the probability that both twins are affected:

P(both affected) = integral from T1 to infinity, integral from T2 to infinity, of Φ(L1, L2; 0, Σ) dL1 dL2

As with the SND, the expected proportion under the curve between any two ranges of values of the two traits can be calculated by means of numerical integration. Φ is the bivariate normal probability density function, L1 and L2 are the liabilities of twin 1 and twin 2 with means 0, and Σ is the correlation matrix of the two liabilities. T1 is the threshold (z-value) on L1, T2 is the threshold (z-value) on L2.

13 Expected Proportions of the BN, for R = 0.6, T1 = 1.4, T2 = 1.4

               Liab 2: 0    Liab 2: 1
Liab 1: 0        .87          .05
Liab 1: 1        .05          .03

Since the BN distribution is a known mathematical distribution, for each correlation (R) and any two thresholds on the liabilities we know what the expected proportions are in each cell. This slide shows the example of a BN with a correlation of .60 and both thresholds having a z-value of 1.4. The point is that from simple cell counts or cell proportions in our data, using this standard distribution as an assumption, we can derive correlations between two liabilities (see next slide). There are programmed mathematical subroutines that can do these calculations; Mx uses one of them.
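These expected proportions can be checked by brute-force numerical integration of the bivariate normal density; a Python sketch (an illustration, not the subroutine Mx actually uses):

```python
from math import exp, pi, sqrt
from statistics import NormalDist

def bn_density(x, y, r):
    """Bivariate standard normal density with correlation r."""
    det = 1.0 - r * r
    q = (x * x - 2.0 * r * x * y + y * y) / det
    return exp(-q / 2.0) / (2.0 * pi * sqrt(det))

def p_both_above(t, r, upper=8.0, n=400):
    """P(L1 > t, L2 > t) by midpoint-rule double integration."""
    h = (upper - t) / n
    total = 0.0
    for i in range(n):
        x = t + (i + 0.5) * h
        for j in range(n):
            y = t + (j + 0.5) * h
            total += bn_density(x, y, r)
    return total * h * h

t, r = 1.4, 0.6
q = 1 - NormalDist().cdf(t)   # marginal tail area above the threshold
d = p_both_above(t, r)        # concordant affected, ~.03
b = q - d                     # each discordant cell, ~.05
a = 1 - 2 * q + d             # concordant unaffected, ~.87
```

The remaining cells follow by simple algebra from the marginal tail area, so only one double integral is needed.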

14 How can we estimate correlations from CT?
The correlation (shape) of the BN and the two thresholds determine the relative proportions of observations in the 4 cells of the CT. Conversely, the sample proportions in the 4 cells can be used to estimate the correlation and the thresholds.

             Twin 2
Twin 1       0     1
   0         a     b
   1         c     d

This figure shows the correlated dimensions for twin pair data, with the liabilities of twin 1 and twin 2 on the axes. The correlation (contour) and the two thresholds (cut-offs on the liability distributions) determine the relative proportions (a, b, c and d) of observations in the 4 cells of the CT. Conversely, the sample proportions in the 4 cells can be used to estimate the correlation and the thresholds.
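The reverse step can be sketched too: fix the thresholds from the marginal proportions, then search for the correlation whose implied concordant-affected proportion matches the data. A hypothetical Python illustration (the cell counts below are made up; Mx does this by full maximum likelihood rather than this moment-matching shortcut):

```python
from math import exp, pi, sqrt
from statistics import NormalDist

def bn_density(x, y, r):
    det = 1.0 - r * r
    q = (x * x - 2.0 * r * x * y + y * y) / det
    return exp(-q / 2.0) / (2.0 * pi * sqrt(det))

def p_both_above(t1, t2, r, upper=8.0, n=150):
    """P(L1 > t1, L2 > t2) by midpoint-rule double integration."""
    hx, hy = (upper - t1) / n, (upper - t2) / n
    total = 0.0
    for i in range(n):
        x = t1 + (i + 0.5) * hx
        for j in range(n):
            y = t2 + (j + 0.5) * hy
            total += bn_density(x, y, r)
    return total * hx * hy

# Hypothetical 2x2 counts: a=870, b=50, c=50, d=30 (N=1000)
a_n, b_n, c_n, d_n = 870, 50, 50, 30
N = a_n + b_n + c_n + d_n

# Thresholds from the marginal proportions of affected twins
t1 = NormalDist().inv_cdf(1 - (c_n + d_n) / N)
t2 = NormalDist().inv_cdf(1 - (b_n + d_n) / N)

# Bisection on r: P(both above) increases monotonically with r
lo, hi = 0.0, 0.95
for _ in range(30):
    mid = (lo + hi) / 2
    if p_both_above(t1, t2, mid) < d_n / N:
        lo = mid
    else:
        hi = mid
r_hat = (lo + hi) / 2
```

With these counts the recovered tetrachoric correlation comes out close to the 0.6 used in the earlier expected-proportions example.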

15 The classical Twin Model
Estimate the correlation in liability separately for MZ and DZ pairs. A variance decomposition (A, C, E) can then be applied to the liabilities, giving an estimate of the heritability of the liability. When data on MZ and DZ twin pairs are available, we can of course estimate a correlation in liability for each type of twins. However, we can also go further by fitting a model for the liability that would explain these MZ and DZ correlations. We can decompose the liability correlation into A, C and E, as we do for continuous traits, where the correlations in liability are determined by a path model. This leads to an estimate of the heritability of the liability.
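As a quick back-of-envelope version of that decomposition (Falconer's formulas, not the maximum-likelihood model fit the lecture goes on to describe), the liability correlations can be turned into rough variance components; a Python sketch using the MZ/DZ correlations that appear later in this practical:

```python
def falconer_ace(r_mz, r_dz):
    """Rough variance components from twin correlations (Falconer)."""
    a2 = 2 * (r_mz - r_dz)   # additive genetic variance
    c2 = 2 * r_dz - r_mz     # common (shared) environmental variance
    e2 = 1 - r_mz            # unique environmental variance
    return a2, c2, e2

# Liability correlations from the saturated model fitted later: rMZ=.81, rDZ=.55
a2, c2, e2 = falconer_ace(0.81, 0.55)
print(round(a2, 2), round(c2, 2), round(e2, 2))  # 0.52 0.29 0.19
```

These hand estimates are only a sanity check; the ML fit constrains the components to be non-negative and gives confidence intervals.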

16 ACE Liability Model
(Path diagram: A, C and E influences on the latent liability of each twin, with the A factors correlated 1 for MZ / 0.5 for DZ pairs; a variance constraint; and a threshold model relating each liability to the observed Unaffected/Affected categories for Twin 1 and Twin 2.)
This is not a proper path diagram, in that the path tracing rules do not apply for the arrows below the latent liability circles. It shows the underlying assumptions of liability, thresholds etc., and shows that what we start from is just counts for pairs of twins.

17 Summary
To estimate correlations for categorical traits (counts) we make assumptions about the joint distribution of the data (bivariate normal)
The relative proportions of observations in the cells of the Contingency Table are translated into proportions under the BN
The most likely thresholds and correlations are estimated
Genetic/environmental variance components are estimated based on these correlations derived from MZ and DZ data
So that's the theory behind how ordinal analysis works. Questions? Now I'll describe some tips on how to do it for different types of data.

18 How can we fit ordinal data in Mx?
1. Summary statistics: CT
Mx has a built-in fit function for the maximum likelihood analysis of 2-way Contingency Tables; analyses are limited to only two variables, with 2 or more ordered classes.
Mx input lines:
CTable 2 2 (tetrachoric correlation)
CT 3 3, CT 3 2 (polychoric correlations)

19 Fit function
Mx calculates twice the log-likelihood of the observed frequency data under the model using:
- the observed frequency in each cell
- the expected proportion in each cell (numerical integration of the BN)
Mx also calculates the log-likelihood of the observed frequency data themselves. An approximate χ² statistic is then computed by taking the difference in these two likelihoods. (Equations given in the Mx manual, pp. 91-92.)
Mx works out the proportions in each cell and the most likely shape of the distribution, that is, the correlation and thresholds.

20 Test of assumption
For a 2x2 CT with 1 estimated threshold on each liability, the χ² statistic is always 0: 3 observed statistics, 3 parameters, df = 0 (it is always possible to find a correlation and 2 thresholds to perfectly explain the proportions in each cell). There is no goodness-of-fit test of the normal distribution assumption. This problem is resolved if the CT is at least 2x3 (i.e. more than 2 categories on at least one liability). A significant χ² then reflects departure from normality. Once it has estimated the correlation and thresholds, Mx works out the expected proportions in each cell; these will be slightly different from the observed statistics. (2x2 table: 4 observed cells O1-O4; 2x3 table: 6 observed cells O1-O6.)

21 How can we fit ordinal data in Mx?
2. Raw data analyses
Advantages over CT:
- multivariate
- handles missing data
- moderator variables (for covariates, e.g. age)
Mx input line: ORD File=...dat
Analogous to continuous data, the maximum likelihood raw data approach for ordinal variables provides a natural method of handling missing data. In theory, if data are MCAR or MAR, this procedure will provide unbiased ML estimates if the assumptions of multivariate normality hold. The raw data approach also provides a natural way to exploit moderator variables, using definition variables.

22 Fit function
The likelihood for a vector of observed ordinal responses is computed as the expected proportion in the corresponding cell of the multivariate distribution. The log-likelihood of the model is the sum of the log-likelihoods of all vectors of observations. This value depends on the number of observations and isn't very interpretable in itself (as with continuous data), so we compare it with the likelihood of other models, or of a saturated (correlation) model, to get a χ² model-fit index. (Equations given in the Mx manual, pp. 89-90.)

23 Raw Ordinal Data File
Variables (all ordinal): Zyg respons1 respons2
Example data line: 1 0 0
NOTE: the smallest category should always be 0!

24 SORT! Sorting speeds up computation time
If row i = row i+1, the likelihood is not recalculated. In e.g. the bivariate, 2-category case, there are only 4 possible vectors of observations: 1 1, 0 1, 1 0, 0 0, and therefore only 4 integrals for Mx to calculate if the data file is sorted. It's very useful to sort the data into groupings that correspond to cells in the CT, e.g. all the concordant unaffected twins first, then pairs where twin 1 is affected and twin 2 unaffected, etc. This is because if a row of data is the same as the row above, Mx does not have to calculate the likelihood again.
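The saving from sorting can be mimicked in a few lines; a Python sketch (with illustrative data) that collapses sorted response vectors into distinct patterns with counts, so each likelihood would be evaluated once and weighted by its count:

```python
from itertools import groupby

# Illustrative raw ordinal rows: (twin1, twin2) responses
rows = [(1, 1), (0, 0), (0, 1), (0, 0), (1, 0), (0, 0), (1, 1)]

# Sorting puts identical vectors next to each other; groupby then yields
# one (pattern, count) pair per distinct vector of observations.
patterns = [(key, len(list(grp))) for key, grp in groupby(sorted(rows))]
print(patterns)  # [((0, 0), 3), ((0, 1), 1), ((1, 0), 1), ((1, 1), 2)]
```

Only four integrals would then be needed for these seven rows, exactly the trick Mx exploits on sorted raw ordinal data.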

25 Selection

26 Selection For rare disorders (e.g. Schizophrenia), selecting a random population sample of twins will lead to the vast majority of pairs being unaffected. A more efficient design is to ascertain twin pairs through a register of affected individuals.

27 Types of ascertainment
Double (complete) ascertainment; single ascertainment.
Double ascertainment: this refers to the case where all affected twins in a community sample are registered and selected as probands. In this case, twin pairs where both members are affected will be "doubly ascertained", because both members of such pairs are probands (each having the other as an affected co-twin). No concordant unaffected pairs (cell a of the CT) are observed.
Single ascertainment: if the register is limited, the chance of it containing both members of a twin pair is close to zero, so all twin pairs have just one proband. The first column of the contingency table (cells a and c) is not observed, since unaffected individuals cannot be probands.
Between complete and single ascertainment there is multiple incomplete ascertainment: depending on the size of the register, the probability of ascertaining more than one proband in the same family increases, and a proportion of twin pairs will be ascertained more than once. It is possible to express the extent of incomplete ascertainment by means of an ascertainment parameter.

28 Ascertainment Correction
When using ascertained samples, the likelihood function needs to be corrected: omission of certain classes from observation leads to an increase in the likelihood of observing the remaining individuals, so re-normalization is needed. Mx corrects for incomplete ascertainment by dividing the likelihood by the proportion of the population remaining after ascertainment. It is possible to analyse CTs from ascertained data in Mx as before by simply substituting a -1 for the cells that are not observed.
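The re-normalization itself is simple arithmetic; a Python sketch for the complete-ascertainment case, using the illustrative cell proportions from the earlier BN slide (a = .87):

```python
# Model-implied cell proportions (a, b, c, d) for the full population
p = {"a": 0.87, "b": 0.05, "c": 0.05, "d": 0.03}

# Under complete ascertainment, concordant-unaffected pairs (cell a) are
# never observed, so each observed cell's likelihood contribution is
# divided by the proportion of the population remaining: 1 - p["a"].
remaining = 1 - p["a"]
corrected = {cell: p[cell] / remaining for cell in ("b", "c", "d")}
print(corrected)  # proportions over the observable cells, summing to 1
```

After the division, the expected proportions again sum to 1 over the cells that can actually be observed, which is what makes the likelihood well defined.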

29 Ascertainment Correction in Mx: univariate
Complete ascertainment: CTable 2 2  -1 b c d
Single ascertainment:   CTable 2 2  -1 b -1 d
CTs from ascertained data can be analysed in Mx by simply substituting a -1 for the missing cells. Thresholds need to be fixed according to the population prevalence of the disorder, e.g. schizophrenia (1%): z-value = 2.33.

30 Ascertainment Correction in Mx: multivariate
Write own fit function. Adjustment of the Likelihood function is accomplished by specifying a user-defined fit function that adjusts for the missing cells (proportions) of the distribution. A twin study of genetic relationships between psychotic symptoms. Cardno, Rijsdijk, Sham, Murray, McGuffin, Am J Psychiatry. 2002, 159(4):539-45

31 Practical

32 Sample and Measures
Australian Twin Registry data (QIMR): large sample of adult twins + family members
Self-report questionnaire: non-smoker, ex-smoker, current smoker; age of smoking onset
Today using MZMs (785 pairs) and DZMs (536 pairs)

33 Variable: age at smoking onset, including non-smokers Ordered as:
Non-smokers / late onset / early onset

34 Practical Exercise
Analysis of age of onset data:
- Estimate thresholds
- Estimate correlations
- Fit univariate model
(Observed 3x3 counts for MZM and DZM pairs, Twin 1 by Twin 2, shown on slide.)

35 Mx Threshold Specification: Binary
Threshold matrix: T Full 1 2 Free
One column per twin: T(1,1) = threshold for twin 1, T(1,2) = threshold for twin 2.
Mx Threshold Model: Thresholds T /

36 Mx Threshold Specification: 3+ Cat.
Threshold matrix: T Full 2 2 Free
Columns: twin 1, twin 2. Row 1 = 1st threshold; row 2 = threshold increment.
Mx Threshold Model: Thresholds L*T /
Premultiplying by the lower triangular matrix L sums the rows, so the 2nd threshold equals the 1st threshold plus the increment (figure: thresholds at 1.2 and 2.2 on the liability scale).

39 polycor_smk.mx
#define nvarx2 2 ! Number of variables per pair
#define nthresh 2 ! Number of thresholds = number of categories - 1
#ngroups 2
G1: Data and model for MZM correlation
DAta NInput_vars=3 Missing=.
Ordinal File=smk_prac.ord ! Ordinal data file
Labels zyg ageon_t1 ageon_t2
SELECT IF zyg = 2
SELECT ageon_t1 ageon_t2 /
Begin Matrices;
R STAN nvarx2 nvarx2 FREE ! Correlation matrix
T FULL nthresh nvarx2 FREE ! thresh tw1, thresh tw2
L Lower nthresh nthresh ! Sums threshold increments
End matrices;
Value 1 L 1 1 to L nthresh nthresh ! initialize L

43 COV R / ! Predicted correlation matrix for MZ pairs
Thresholds L*T / ! Threshold model
Bound T 1 1 T 1 2
Bound T 2 1 T 2 2 ! Ensures +ve threshold increment
Start 0.2 T 1 1 T 1 2 ! Starting value for 1st thresholds
Start 0.2 T 2 1 T 2 2 ! Starting value for increment
Start .6 R 2 1 ! Starting value for correlation
Option RS
Option func=1.E-10 ! Function precision less than usual
END

46 ! Test equality of thresholds between Twin 1 and Twin 2
EQ T T !constrain TH1 equal across Tw1 and Tw2 MZM EQ T T !constrain TH2 equal across Tw1 and Tw2 MZM EQ T T !constrain TH1 equal across Tw1 and Tw2 DZM EQ T T !constrain TH2 equal across Tw1 and Tw2 DZM End Get cor.mxs ! Test equality of thresholds between MZM & DZM EQ T T T T !TH1 equal across all males EQ T T T T !TH2 equal across all males

47 Estimates: Saturated Model
Saturated: -2LL = 5128.185, df = 3055

                 MZ                  DZ
             Twin 1  Twin 2      Twin 1  Twin 2
Threshold 1   0.09    0.12        0.03    0.05
Threshold 2   0.31    0.33        0.24    0.26
Correlation       0.81                0.55

(Mx output: MATRIX R and MATRIX T, the matrix of expected thresholds for AGEON_T1 and AGEON_T2.)


49 Exercise I
Fit saturated model:
- Estimate thresholds
- Estimate polychoric correlations
Test equality of thresholds
Examine differences in threshold and correlation estimates for saturated model and sub-models
Examine correlations: what model should we fit?
Raw ORD File: smk_prac.ord
Script: polychor_smk.mx
Location: Faculty\Fruhling\Categorical_Data

50 Estimates: Sub-models
(Table to complete: for each sub-model, χ², df, P, and the MZ/DZ estimates of Threshold 1, Threshold 2 and Correlation for Twin 1 and Twin 2.)
Raw ORD File: smk_prac.ord
Script: polychor_smk.mx
Location: Faculty\Fruhling\Categorical_Data

51 Estimates: Sub-models

Sub-model 1 (thresholds equal across twins): χ² = 0.77, df = 4, P = 0.94
               MZ      DZ
Threshold 1    0.10    0.04
Threshold 2    0.32    0.25
Correlation    0.81    0.55

Sub-model 2 (thresholds also equal across zygosity): χ² = 2.44, df = 6, P = 0.88
Threshold 1    0.07
Threshold 2    0.29

52 ACEcat_smk.mx
#define nvar 1 ! number of variables per twin
#define nvarx2 2 ! number of variables per pair
#define nthresh 2 ! number of thresholds = number of categories - 1
#ngroups 4 ! number of groups in script
G1: Parameters for the Genetic model
Calculation
Begin Matrices;
X Low nvar nvar FREE ! Additive genetic path coefficient
Y Low nvar nvar FREE ! Common environmental path coefficient
Z Low nvar nvar FREE ! Unique environmental path coefficient
End matrices;
Begin Algebra;
A = X*X' ; ! Additive genetic variance (path X squared)
C = Y*Y' ; ! Common environmental variance (path Y squared)
E = Z*Z' ; ! Unique environmental variance (path Z squared)
End Algebra;
start .6 X 1 1 Y 1 1 Z 1 1 ! starting values for X, Y, Z
Interval A 1 1 C 1 1 E 1 1 ! requests the 95% CI for h2, c2, e2
End

53 G2: Data and model for MZ pairs
DAta NInput_vars=3 Missing=.
Ordinal File=prac_smk.ord
Labels zyg ageon_t1 ageon_t2
SELECT IF zyg = 2
SELECT ageon_t1 ageon_t2 /
Matrices = group 1
T FULL nthresh nvarx2 FREE ! Thresholds for twin 1 and twin 2
L Lower nthresh nthresh
COV ! Predicted covariance matrix for MZs
( A + C + E | A + C _
  A + C | A + C + E ) /
Thresholds L*T / ! Threshold model
Bound T 1 1 T 1 2
Bound T 2 1 T 2 2 ! Ensures +ve threshold increment
Start 0.1 T 1 1 T 1 2 ! Starting values for the 1st thresholds
Start 0.2 T 2 1 T 2 2 ! Starting values for increment
Option rs
End

54 G3: Data and model for DZ pairs
DAta NInput_vars=4 Missing=.
Ordinal File=prac_smk.ord
Labels zyg ageon_t1 ageon_t2
SELECT IF zyg = 4
SELECT ageon_t1 ageon_t2 /
Matrices = group 1
T FULL nthresh nvarx2 FREE ! Thresholds for twin 1 and twin 2
L Lower nthresh nthresh
H FULL 1 1 ! 0.5
COVARIANCE ! Predicted covariance matrix for DZs
( A + C + E | H@A + C _
  H@A + C | A + C + E ) /
Thresholds L*T / ! Threshold model
Bound T 1 1 T 1 2
Bound T 2 1 T 2 2 ! Ensures +ve threshold increment
Start 0.1 T 1 1 T 1 2 ! Starting values for the 1st thresholds
Start 0.2 T 2 1 T 2 2 ! Starting values for increment
Option rs
End

55 Constraint groups and degrees of freedom
G4: Constrain variances of observed variables to 1
CONSTRAINT
Matrices = Group 1
I UNIT 1 1
CO A+C+E = I / ! constrains the total variance to equal 1
Option func=1.E-10
End
As the total variance is constrained to unity, we can estimate one variance component from the other two, giving us one less independent parameter: A + C + E = 1, therefore E = 1 - A - C. So each constraint group adds a degree of freedom to the model.

56 Exercise II
Fit ACE model:
- What does the threshold model look like? Change it to reflect the findings from Exercise I
- What are the VC estimates?
Raw ORD File: smk_prac.ord
Script: ACEcat_smk.mx
Location: Faculty\Fruhling\Categorical_Data

57 Results

Model       -2LL       df     χ²     df   P
Saturated   5128.185   3055
ACE         5130.628   3061   2.443  6    0.88

3 confidence intervals requested in group 1.
(Mx CI output: Matrix, Element, Int, Estimate, Lower, Upper, Lfail, Ufail for A, C and E; estimates shown on slide.)
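The χ² and p-value in this table follow directly from the two -2LL values; a Python sketch of the likelihood-ratio test, using a closed-form chi-square survival function that holds for even degrees of freedom (as here, df = 6):

```python
from math import exp

def chi2_sf_even_df(x, df):
    """P(chi2_df >= x); closed form exp(-x/2) * sum((x/2)^k / k!) for even df."""
    assert df % 2 == 0 and df > 0
    m = df // 2
    term, total = 1.0, 1.0
    for k in range(1, m):
        term *= (x / 2) / k
        total += term
    return exp(-x / 2) * total

lrt = 5130.628 - 5128.185  # difference in -2LL between ACE and saturated
df = 3061 - 3055           # difference in degrees of freedom
p = chi2_sf_even_df(lrt, df)
print(round(lrt, 3), df, round(p, 2))  # 2.443 6 0.87 (the slide reports 0.88)
```

The large p-value means the more parsimonious ACE model fits no worse than the saturated model; the 0.87 vs 0.88 difference is just rounding.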

