
1 Mixture modelling of continuous variables

2 Mixture modelling

So far we have dealt with mixture modelling for a selection of binary or ordinal variables collected at a single time point (cross-sectional) or longitudinally across time. The simplest example of a mixture model consists of a single continuous manifest variable. The multivariate extension of this simple model is known as Latent Profile Analysis.

3 Single continuous variable

An underlying latent grouping might present itself as a multi-modal distribution for the continuous variable. [Figure: distribution of height]

4 Single continuous variable

[Figure: distribution of height, female component highlighted]

5 Single continuous variable

[Figure: distribution of height, male component highlighted]

6 Single continuous variable

But the distance between modes may be small or even non-existent. This depends on the variation in the item being measured, and also on the sample in which the measurement is taken (e.g. a clinical or general-population sample).

7 Single continuous variable

[Figure] Taken from: Muthén, B. (2001). Latent variable mixture modeling. In G. A. Marcoulides & R. E. Schumacker (Eds.), New Developments and Techniques in Structural Equation Modeling (pp. 1-33). Lawrence Erlbaum Associates.

8 Single continuous variable

[Second figure from the same source]

9 Single continuous variable We assume that the manifest variable is normally distributed within each latent class

10 GHQ Example

Data:
  File is "ego_ghq12_id.dta.dat" ;
Define:
  sumodd = ghq01 + ghq03 + ghq05 + ghq07 + ghq09 + ghq11;
  sumeven = ghq02 + ghq04 + ghq06 + ghq08 + ghq10 + ghq12;
  ghq_sum = sumodd + sumeven;
Variable:
  Names are ghq01 ghq02 ghq03 ghq04 ghq05 ghq06
            ghq07 ghq08 ghq09 ghq10 ghq11 ghq12 f1 id;
  Missing are all (-9999) ;
  usevariables = ghq_sum;
Analysis:
  Type = basic ;
Plot:
  type is plot3;

Here we derive a single sum-score from the 12 ordinal GHQ items. The syntax shows that variables can be created in the Define statement which are not then used in the final model.
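The same sum-score derivation can be sketched in Python with pandas. This is an illustrative stand-in for the Mplus Define statement, using a tiny made-up data frame (the column names ghq01–ghq12 mirror the syntax above; the values are not the real data):

```python
import pandas as pd

# Hypothetical data frame with the 12 ordinal GHQ items as columns,
# mirroring the Names list in the Mplus Variable command.
df = pd.DataFrame({f"ghq{i:02d}": [0, 1, 2, 3] for i in range(1, 13)})

odd_items = [f"ghq{i:02d}" for i in range(1, 13, 2)]   # ghq01, ghq03, ...
even_items = [f"ghq{i:02d}" for i in range(2, 13, 2)]  # ghq02, ghq04, ...

# As in the Define statement: two intermediate sums, then the total.
df["sumodd"] = df[odd_items].sum(axis=1)
df["sumeven"] = df[even_items].sum(axis=1)
df["ghq_sum"] = df["sumodd"] + df["sumeven"]
```

As in the Mplus syntax, the intermediate variables sumodd and sumeven are created but need not enter any subsequent model.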

11 Examine the distribution of the scale The scale appears unimodal, although there is a long upper tail

12 Examine the distribution of the scale By changing the default number of histogram bins we see secondary modes appearing
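The sensitivity of a histogram to the number of bins can be sketched with numpy on simulated data (a made-up stand-in for the scale score, not the GHQ sample):

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated skewed score: a large low-scoring group plus a smaller
# high-scoring group (purely illustrative).
x = np.concatenate([rng.normal(8, 2, 800), rng.normal(20, 2, 200)])

# Few, wide bins smooth over finer structure in the distribution.
counts_coarse, edges_coarse = np.histogram(x, bins=5)
# Many, narrow bins can reveal secondary modes hidden at the default.
counts_fine, edges_fine = np.histogram(x, bins=40)
```

Plotting both (e.g. with matplotlib's bar) shows how the apparent modality of the same data changes with the bin width alone.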

13 Fit a 2-class mixture

Variable:
  classes = c(2);
Analysis:
  type = mixture ;
  proc = 2 (starts);
  starts = ;
  stiterations = 20;
  stscale = 15;
Model:
  %overall%
  %c#1%
  [ghq_sum];
  ghq_sum (equal_var);
  %c#2%
  [ghq_sum];
  ghq_sum (equal_var);

14 Fit a 2-class mixture (continued)

In the Model command, the symbols %c#1% refer to the first latent class. Means are referred to using square brackets, so [ghq_sum]; is the class-specific mean; variances are written without brackets. Giving the two variance lines the same label in parentheses, (equal_var), constrains the variances to be equal across the classes, while the means are freely estimated.
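An analogous model can be sketched outside Mplus with scikit-learn's GaussianMixture, where covariance_type="tied" shares one variance across classes, matching the (equal_var) constraint. The data here are simulated (not the GHQ sample):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
# Simulated univariate score: two latent classes with a common SD of 2,
# a stand-in for the GHQ sum-score.
x = np.concatenate([rng.normal(5, 2, 700),
                    rng.normal(18, 2, 300)]).reshape(-1, 1)

# covariance_type="tied" = one pooled variance for both classes,
# the analogue of labelling both variances (equal_var) in Mplus.
gmm = GaussianMixture(n_components=2, covariance_type="tied",
                      random_state=0).fit(x)

means = sorted(gmm.means_.ravel())  # class means, freely estimated
```

With well-separated simulated classes, the fitted means land close to the generating values (roughly 5 and 18 here).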

15 Model results

TESTS OF MODEL FIT
Loglikelihood
  H0 Value
  H0 Scaling Correction Factor for MLR
Information Criteria
  Number of Free Parameters: 4
  Akaike (AIC)
  Bayesian (BIC)
  Sample-Size Adjusted BIC  (n* = (n + 2) / 24)

16 Model results

FINAL CLASS COUNTS AND PROPORTIONS FOR THE LATENT CLASSES BASED ON THE ESTIMATED MODEL
CLASSIFICATION QUALITY
  Entropy
CLASSIFICATION OF INDIVIDUALS BASED ON THEIR MOST LIKELY LATENT CLASS MEMBERSHIP
Average Latent Class Probabilities for Most Likely Latent Class Membership (Row) by Latent Class (Column)

Entropy is high. A smaller class of 18% has emerged, consistent with the expected behaviour of the GHQ in this sample from primary care. Class-specific entropies are both good.

17 Model results

MODEL RESULTS
                 Estimate   S.E.   Est./S.E.   Two-Tailed P-Value
Latent Class 1
  Means      GHQ_SUM
  Variances  GHQ_SUM
Latent Class 2
  Means      GHQ_SUM
  Variances  GHQ_SUM
Categorical Latent Variables
  Means      C#

Huge separation in means, since SD = 4.3 (i.e. sqrt(18.88)).

18 Examine within-class distributions

19 [Figure slide: within-class distributions]

20 What have we done?

We have effectively run a t-test backwards. Rather than starting from a manifest binary variable, assuming equality of variances and testing for equality of means, we have derived a latent binary variable based on the assumption of a difference in means (still with equal variances).
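The two directions can be sketched side by side on simulated data: the forward t-test uses known group labels, while the equal-variance mixture recovers a latent grouping from the pooled distribution alone (scipy and scikit-learn as illustrative stand-ins):

```python
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Forward direction: group labels observed, test for a mean difference.
g0 = rng.normal(5, 2, 500)
g1 = rng.normal(15, 2, 500)
t, p = stats.ttest_ind(g0, g1)  # equal-variance t-test by default

# Backward direction: pool the data, discard the labels, and let a
# tied-variance two-class mixture recover a latent grouping.
x = np.concatenate([g0, g1]).reshape(-1, 1)
gmm = GaussianMixture(n_components=2, covariance_type="tied",
                      random_state=0).fit(x)
labels = gmm.predict(x)

# Agreement with the true groups, allowing for label switching.
truth = np.repeat([0, 1], 500)
agree = max(np.mean(labels == truth), np.mean(labels != truth))
```

With this degree of separation the mixture recovers nearly all of the true group memberships without ever seeing the labels.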

21 What next?

The bulk of the sample now falls into a class whose GHQ distribution is more symmetric than that of the sample as a whole. There appear to be additional modes within the smaller class. The 'optimal' number of classes can be assessed in the usual way using aBIC, entropy and the bootstrap LRT. In the univariate case residual correlations are not an issue, but when moving to a multivariate example these too will need to be assessed.
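Comparing information criteria across class counts can be sketched with scikit-learn, again on simulated data (sklearn exposes BIC but not aBIC or the bootstrap LRT, so this is only a partial stand-in for the Mplus workflow):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Simulated clearly two-class data (not the GHQ sample).
x = np.concatenate([rng.normal(5, 2, 700),
                    rng.normal(18, 2, 300)]).reshape(-1, 1)

# Fit 1- to 4-class mixtures and compare BIC (lower is better).
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(x).bic(x)
        for k in range(1, 5)}
best_k = min(bics, key=bics.get)
```

On genuinely two-class data the BIC for two classes should fall well below that for one class, mirroring the comparison the slides make with (a)BIC.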

22 What next?

As before, posterior probabilities can be exported and modelled in a weighted regression analysis. A logistic regression analysis using a latent binary variable derived from the data is likely to be far more informative than a linear regression analysis using the manifest continuous variable.
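Extracting the posterior class-membership probabilities, the quantities that would be exported for a weighted analysis, can be sketched with scikit-learn on simulated data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Simulated stand-in for the GHQ sum-score.
x = np.concatenate([rng.normal(5, 2, 700),
                    rng.normal(18, 2, 300)]).reshape(-1, 1)
gmm = GaussianMixture(n_components=2, covariance_type="tied",
                      random_state=0).fit(x)

# Posterior class-membership probabilities for each observation;
# each row sums to 1 and could serve as weights downstream.
post = gmm.predict_proba(x)
# Most-likely (modal) class assignment for each observation.
modal_class = gmm.predict(x)
```

The posterior matrix, rather than the hard modal assignment, is what carries the classification uncertainty into a subsequent weighted regression.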

23 What if we had not constrained the variances?

Variable:
  classes = c(2);
Analysis:
  type = mixture ;
  proc = 2 (starts);
  starts = ;
  stiterations = 20;
  stscale = 15;
Model:
  %overall%
  %c#1%
  [ghq_sum];
  ghq_sum ;   ! (equal_var);
  %c#2%
  [ghq_sum];
  ghq_sum ;   ! (equal_var);

The (equal_var) labels are commented out with !, so each class now gets its own freely estimated variance.

24 Model results

TESTS OF MODEL FIT
Loglikelihood
  H0 Value
  H0 Scaling Correction Factor for MLR
Information Criteria
  Number of Free Parameters: 5
  Akaike (AIC)
  Bayesian (BIC)
  Sample-Size Adjusted BIC
FINAL CLASS COUNTS AND PROPORTIONS FOR THE LATENT CLASSES BASED ON THE ESTIMATED MODEL
CLASSIFICATION QUALITY
  Entropy

Entropy is poor. The classes are more equal in size. The BIC is lower!
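The tied-versus-free-variance comparison can be sketched with scikit-learn, where covariance_type="tied" corresponds to the equal-variance model and "full" to class-specific variances; the entropy formula below is the usual normalised classification entropy (1 = perfect separation, 0 = none). The data are simulated:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# Skewed, unimodal-looking data: one broad class plus a long upper
# tail, loosely mimicking the GHQ sum-score (simulated, not real).
x = np.concatenate([rng.normal(3, 2, 850),
                    rng.normal(8, 5, 150)]).reshape(-1, 1)

results = {}
for cov in ("tied", "full"):  # tied = equal variances, full = free
    gmm = GaussianMixture(n_components=2, covariance_type=cov,
                          random_state=0).fit(x)
    post = gmm.predict_proba(x)
    # Normalised classification entropy for a 2-class model.
    h = -(post * np.log(np.clip(post, 1e-12, None))).sum()
    entropy = 1 - h / (len(x) * np.log(2))
    results[cov] = (gmm.bic(x), entropy)
```

Comparing the two (BIC, entropy) pairs reproduces the trade-off on the slide: the better-fitting model by an information criterion need not be the one with the cleaner classification.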

25 Means are closer, variances differ greatly

MODEL RESULTS
                 Estimate   S.E.   Est./S.E.   Two-Tailed P-Value
Latent Class 1
  Means      GHQ_SUM
  Variances  GHQ_SUM
Latent Class 2
  Means      GHQ_SUM
  Variances  GHQ_SUM

26 Distribution far from symmetric

27 [Figure slide: fitted class distributions]

