
1 Applying Finite Mixture Models Presenter: Geoff McLachlan

2

3 Topics: • Introduction • Application of the EM algorithm • Examples of normal mixtures • Robust mixture modeling • Number of components in a mixture model • Mixtures of nonnormal components • Mixture models for failure-time data • Mixture software

4 1.1 Flexible Method of Modeling • Astronomy • Biology • Genetics • Medicine • Psychiatry • Economics • Engineering • Marketing

5 1.2 Initial Approach to Mixture Analysis • Classic paper of Pearson (1894)

6 Figure 1: Plot of forehead to body length data on 1000 crabs and of the fitted one- component (dashed line) and two-component (solid line) normal mixture models.

7 1.3 Basic Definition We let Y_1, ..., Y_n denote a random sample of size n, where Y_j is a p-dimensional random vector with probability density function

f(y_j) = Σ_{i=1}^{g} π_i f_i(y_j),   (1)

where the f_i(y_j) are densities and the π_i are nonnegative quantities that sum to one.

8 1.4 Interpretation of Mixture Models An obvious way of generating a random vector Y_j with the g-component mixture density f(y_j), given by (1), is as follows. Let Z_j be a categorical random variable taking on the values 1, ..., g with probabilities π_1, ..., π_g, respectively, and suppose that the conditional density of Y_j given Z_j = i is f_i(y_j) (i = 1, ..., g). Then the unconditional density of Y_j (that is, its marginal density) is given by f(y_j).
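As an illustration of this two-stage mechanism, the following sketch (my own, with illustrative parameter values rather than any taken from the presentation) first draws the component label Z_j and then draws Y_j from the corresponding normal component.

```python
# Illustrative sketch: generating a sample from a two-component univariate normal
# mixture via the latent component label Z_j.  Parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
pi = np.array([0.8, 0.2])        # mixing proportions pi_1, ..., pi_g
mu = np.array([0.0, 2.0])        # component means
sigma = np.array([1.0, 1.0])     # component standard deviations

n = 50
z = rng.choice(len(pi), size=n, p=pi)   # Z_j ~ categorical(pi_1, ..., pi_g)
y = rng.normal(mu[z], sigma[z])         # Y_j | Z_j = i drawn from the ith component
```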

9 1.5 Shapes of Some Univariate Normal Mixtures Consider the two-component normal mixture density

f(y) = π_1 φ(y; μ_1, σ²) + π_2 φ(y; μ_2, σ²),   (5)

where

φ(y; μ, σ²) = (2πσ²)^{-1/2} exp{ -(y - μ)² / (2σ²) }   (6)

denotes the univariate normal density with mean μ and variance σ².

10 Figure 2: Plot of a mixture density of two univariate normal components in equal proportions with common variance σ² = 1, for separations Δ = 1, 2, 3, 4.

11 Figure 3: Plot of a mixture density of two univariate normal components in proportions 0.75 and 0.25 with common variance σ² = 1, for separations Δ = 1, 2, 3, 4.

12

13

14 1.6 Parametric Formulation of Mixture Model In many applications, the component densities f_i(y_j) are specified to belong to some parametric family. In this case, the component densities are specified as f_i(y_j; θ_i), where θ_i is the vector of unknown parameters in the postulated form for the ith component density in the mixture. The mixture density f(y_j) can then be written as

15 1.6 cont.

f(y_j; Ψ) = Σ_{i=1}^{g} π_i f_i(y_j; θ_i),   (7)

where the vector Ψ containing all the parameters in the mixture model can be written as

Ψ = (π_1, ..., π_{g-1}, ξ^T)^T,   (8)

where ξ is the vector containing all the parameters in θ_1, ..., θ_g known a priori to be distinct.

16 1.7 Identifiability of Mixture Distributions In general, a parametric family of densities f(y_j; Ψ) is identifiable if distinct values of the parameter Ψ determine distinct members of the family of densities {f(y_j; Ψ): Ψ ∈ Ω}, where Ω is the specified parameter space; that is,

f(y_j; Ψ) = f(y_j; Ψ*)   (11)

17 1.7 cont. if and only if Ψ = Ψ*. Identifiability for mixture distributions is defined slightly differently. To see why this is necessary, suppose that f(y_j; Ψ) has two component densities, say, f_i(y; θ_i) and f_h(y; θ_h), that belong to the same parametric family. Then (11) will still hold when the component labels i and h are interchanged in Ψ. (12)

18 1.8 Estimation of Mixture Distributions • In the 1960s, the fitting of finite mixture models by maximum likelihood had been studied in a number of papers, including the seminal papers by Day (1969) and Wolfe (1965, 1967, 1970). • However, it was the publication of the seminal paper of Dempster, Laird, and Rubin (1977) on the EM algorithm that greatly stimulated interest in the use of finite mixture distributions to model heterogeneous data.

19 1.8 Cont. This is because the fitting of mixture models by maximum likelihood is a classic example of a problem that is simplified considerably by the EM algorithm's conceptual unification of maximum likelihood (ML) estimation from data that can be viewed as being incomplete.

20 1.9 Mixture Likelihood Approach to Clustering Suppose that the purpose of fitting the finite mixture model (7) is to cluster an observed random sample y 1,…,y n into g components. This problem can be viewed as wishing to infer the associated component labels z 1,…,z n of these feature data vectors. That is, we wish to infer the z j on the basis of the feature data y j.

21 1.9 Cont. After we fit the g-component mixture model to obtain the estimate Ψ̂ of the vector of unknown parameters in the mixture model, we can give a probabilistic clustering of the n feature observations y_1, ..., y_n in terms of their fitted posterior probabilities of component membership. For each y_j, the g probabilities τ_1(y_j; Ψ̂), ..., τ_g(y_j; Ψ̂) give the estimated posterior probabilities that this observation belongs to the first, second, ..., and gth component, respectively, of the mixture (j = 1, ..., n).

22 1.9 Cont. We can give an outright or hard clustering of these data by assigning each y_j to the component of the mixture to which it has the highest posterior probability of belonging. That is, we estimate the component-label vector z_j by ẑ_j, where

ẑ_ij = 1 if τ_i(y_j; Ψ̂) ≥ τ_h(y_j; Ψ̂) for all h = 1, ..., g, and ẑ_ij = 0 otherwise,   (14)

for i = 1, ..., g; j = 1, ..., n.
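As a sketch of how the fitted posterior probabilities and the hard clustering (14) might be computed for a univariate normal mixture (the function names and fitted values are assumptions of mine, not taken from the presentation):

```python
# Posterior probabilities tau_i(y_j; Psi_hat) and the implied hard clustering.
import numpy as np
from scipy.stats import norm

def posterior_probs(y, pi_hat, mu_hat, sigma2_hat):
    # component densities evaluated at each observation, shape (n, g)
    dens = norm.pdf(y[:, None], loc=mu_hat[None, :], scale=np.sqrt(sigma2_hat)[None, :])
    num = pi_hat[None, :] * dens
    return num / num.sum(axis=1, keepdims=True)

def hard_clustering(tau):
    # assign each y_j to the component with the highest posterior probability (14)
    return tau.argmax(axis=1)
```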

23 1.10 Testing for the Number of Components In some applications of mixture models, there is sufficient a priori information for the number of components g in the mixture model to be specified with no uncertainty. For example, this would be the case where the components correspond to externally existing groups in which the feature vector is known to be normally distributed.

24 1.10 Cont. However, on many occasions, the number of components has to be inferred from the data, along with the parameters in the component densities. If, say, a mixture model is being used to describe the distribution of some data, the number of components in the final version of the model may be of interest beyond matters of a technical or computational nature.

25 2. Application of EM algorithm 2.1 Estimation of Mixing Proportions Suppose that the density of the random vector Y_j has the g-component mixture form

f(y_j; Ψ) = Σ_{i=1}^{g} π_i f_i(y_j),   (15)

where Ψ = (π_1, ..., π_{g-1})^T is the vector containing the unknown parameters, namely the g - 1 mixing proportions π_1, ..., π_{g-1}, since π_g = 1 - Σ_{i=1}^{g-1} π_i.

26 2.1 cont. In order to pose this problem as an incomplete-data one, we now introduce as the unobservable or missing data the vector

z = (z_1^T, ..., z_n^T)^T,   (18)

where z_j is the g-dimensional vector of zero-one indicator variables as defined above. If these z_ij were observable, then the MLE of π_i would simply be given by

π̂_i = Σ_{j=1}^{n} z_ij / n.   (19)

27 2.1 Cont. The EM algorithm handles the addition of the unobservable data to the problem by working with Q(Ψ; Ψ^(k)), which is the current conditional expectation of the complete-data log likelihood given the observed data. On defining the complete-data vector x as

x = (y^T, z^T)^T,   (20)

28 2.1 Cont. the complete-data log likelihood for Ψ has the multinomial form

log L_c(Ψ) = Σ_{i=1}^{g} Σ_{j=1}^{n} z_ij log π_i + C,   (21)

where C does not depend on Ψ.

29 2.1 Cont. As (21) is linear in the unobservable data z_ij, the E-step (on the (k+1)th iteration) simply requires the calculation of the current conditional expectation of Z_ij given the observed data y, where Z_ij is the random variable corresponding to z_ij. Now

E_{Ψ^(k)}(Z_ij | y) = pr_{Ψ^(k)}(Z_ij = 1 | y) = τ_i(y_j; Ψ^(k)),   (22)

30 2.1 Cont. where, by Bayes' theorem,

τ_i(y_j; Ψ^(k)) = π_i^(k) f_i(y_j) / Σ_{h=1}^{g} π_h^(k) f_h(y_j)   (23)

for i = 1, ..., g; j = 1, ..., n. The quantity τ_i(y_j; Ψ^(k)) is the posterior probability that the jth member of the sample with observed value y_j belongs to the ith component of the mixture.

31 2.1 Cont. The M-step on the (k+1)th iteration simply requires replacing each z_ij by τ_i(y_j; Ψ^(k)) in (19) to give

π_i^(k+1) = Σ_{j=1}^{n} τ_i(y_j; Ψ^(k)) / n   (24)

for i = 1, ..., g.
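A minimal sketch of this EM scheme for the mixing proportions, assuming the component densities f_i are completely specified; all function and variable names are mine.

```python
# EM iterations (23)-(24) for the mixing proportions with known component densities.
import numpy as np

def em_mixing_proportions(dens, pi0, max_iter=500, tol=1e-8):
    """dens: (n, g) array of f_i(y_j); pi0: initial mixing proportions (length g)."""
    pi = np.asarray(pi0, dtype=float)
    for _ in range(max_iter):
        num = pi[None, :] * dens                      # E-step, numerator of (23)
        tau = num / num.sum(axis=1, keepdims=True)    # posterior probabilities
        pi_new = tau.mean(axis=0)                     # M-step (24)
        if np.max(np.abs(pi_new - pi)) < tol:
            return pi_new
        pi = pi_new
    return pi
```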

32 2.2 Example 2.1: Synthetic Data Set 1 We generated a random sample of n = 50 observations y_1, ..., y_n from a mixture of two univariate normal densities with means μ_1 = 0 and μ_2 = 2 and common variance σ² = 1, in proportions π_1 = 0.8 and π_2 = 0.2.

33 Table 1: Results of the EM algorithm for the example on estimation of mixing proportions (estimates by iteration k).

34 2.3 Univariate Normal Component Densities The normal mixture model to be fitted is thus

f(y_j; Ψ) = Σ_{i=1}^{g} π_i φ(y_j; μ_i, σ²),   (28)

where Ψ = (π_1, ..., π_{g-1}, μ_1, ..., μ_g, σ²)^T.

35 2.3 Cont. The complete-data log likelihood function for Ψ is given by (21), but where now the term that previously did not depend on the parameters becomes

Σ_{i=1}^{g} Σ_{j=1}^{n} z_ij log φ(y_j; μ_i, σ²).

36 2.3 Cont. The E-step is the same as before, requiring the calculation of (23). The M-step now requires the computation of not only (24), but also the values μ_i^(k+1) and σ^(k+1)2 that, along with π_i^(k+1), maximize Q(Ψ; Ψ^(k)). Now

μ̂_i = Σ_{j=1}^{n} z_ij y_j / Σ_{j=1}^{n} z_ij   (29)

and

σ̂² = Σ_{i=1}^{g} Σ_{j=1}^{n} z_ij (y_j - μ̂_i)² / n   (30)

are the MLEs of μ_i and σ², respectively, if the z_ij were observable.

37 2.3 Cont. As log L_c(Ψ) is linear in the z_ij, it follows that the z_ij in (29) and (30) are replaced by their current conditional expectations, which here are the current estimates τ_i(y_j; Ψ^(k)) of the posterior probabilities of membership of the components of the mixture, given by

τ_i(y_j; Ψ^(k)) = π_i^(k) φ(y_j; μ_i^(k), σ^(k)2) / Σ_{h=1}^{g} π_h^(k) φ(y_j; μ_h^(k), σ^(k)2).

38 2.3 Cont. This yields

μ_i^(k+1) = Σ_{j=1}^{n} τ_i(y_j; Ψ^(k)) y_j / Σ_{j=1}^{n} τ_i(y_j; Ψ^(k))   (31)

and

σ^(k+1)2 = Σ_{i=1}^{g} Σ_{j=1}^{n} τ_i(y_j; Ψ^(k)) (y_j - μ_i^(k+1))² / n,   (32)

and π_i^(k+1) is given by (24).
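The following sketch collects (23), (24), (31), and (32) into one EM iteration for a univariate normal mixture with common variance; the function name and argument layout are assumptions of mine.

```python
# One EM iteration for a univariate normal mixture with common variance sigma^2.
import numpy as np
from scipy.stats import norm

def em_step_univariate_normal(y, pi, mu, sigma2):
    # E-step (23): posterior probabilities tau_ij under the current parameter values
    dens = norm.pdf(y[:, None], loc=mu[None, :], scale=np.sqrt(sigma2))
    tau = pi[None, :] * dens
    tau /= tau.sum(axis=1, keepdims=True)
    # M-step: mixing proportions (24), means (31), common variance (32)
    pi_new = tau.mean(axis=0)
    mu_new = (tau * y[:, None]).sum(axis=0) / tau.sum(axis=0)
    sigma2_new = (tau * (y[:, None] - mu_new[None, :]) ** 2).sum() / len(y)
    return pi_new, mu_new, sigma2_new
```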

39 2.4 Multivariate Component Densities In the case of multivariate normal components with a common covariance matrix Σ, the corresponding M-step updates are

μ_i^(k+1) = Σ_{j=1}^{n} τ_i(y_j; Ψ^(k)) y_j / Σ_{j=1}^{n} τ_i(y_j; Ψ^(k))   (34)

and

Σ^(k+1) = Σ_{i=1}^{g} Σ_{j=1}^{n} τ_i(y_j; Ψ^(k)) (y_j - μ_i^(k+1))(y_j - μ_i^(k+1))^T / n.   (35)

40 2.4 Cont. In the case of normal components with arbitrary covariance matrices, equation (35) is replaced by

Σ_i^(k+1) = Σ_{j=1}^{n} τ_i(y_j; Ψ^(k)) (y_j - μ_i^(k+1))(y_j - μ_i^(k+1))^T / Σ_{j=1}^{n} τ_i(y_j; Ψ^(k)).   (36)

41 2.5 Starting Values for EM Algorithm The EM algorithm is started from some initial value Ψ^(0) of Ψ, so in practice we have to specify a value for Ψ^(0). An alternative approach is to perform the first E-step by specifying a value τ_j^(0) for τ(y_j; Ψ) for each j (j = 1, ..., n), where τ(y_j; Ψ) is the vector containing the g posterior probabilities of component membership for y_j.

42 2.5 Cont. The latter is usually undertaken by setting τ_j^(0) = z_j^(0) for j = 1, ..., n, where z^(0) = (z_1^(0)T, ..., z_n^(0)T)^T defines an initial partition of the data into g groups. For example, an ad hoc way of initially partitioning the data in the case of, say, a mixture of g = 2 normal components with the same covariance matrices would be to plot the data for selections of two of the p variables, and then draw a line that divides the bivariate data into two groups that have a scatter that appears normal.

43 2.5 Cont. For higher dimensional data, an initial value z^(0) for z might be obtained through the use of some clustering algorithm, such as k-means or, say, a hierarchical procedure if n is not too large. Another way of specifying an initial partition z^(0) of the data is to randomly divide the data into g groups corresponding to the g components of the mixture model.
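One way this initialization might be coded, using k-means to form z^(0) and then converting the resulting partition into starting values; this is a sketch under my own naming, not the presenter's code.

```python
# k-means initial partition z(0) converted into starting values for EM.
import numpy as np
from sklearn.cluster import KMeans

def initial_values_from_kmeans(Y, g, seed=0):
    """Y: (n, p) data matrix; returns initial mixing proportions, means, covariances."""
    labels = KMeans(n_clusters=g, n_init=10, random_state=seed).fit_predict(Y)
    pi0 = np.array([np.mean(labels == i) for i in range(g)])
    mu0 = np.array([Y[labels == i].mean(axis=0) for i in range(g)])
    cov0 = np.array([np.cov(Y[labels == i], rowvar=False) for i in range(g)])
    return pi0, mu0, cov0
```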

44 2.6 Example 2.2: Synthetic Data Set 2

45 2.7 Example 2.3: Synthetic Data Set 3

46 Table: True values, initial values, and EM estimates of the component means μ_1, μ_2, μ_3 (true values (0, -2)^T, (0, 0)^T, and (0, 2)^T) and of the mixing proportions.

47 Figure 7

48 Figure 8

49 2.8 Provision of Standard Errors One way of obtaining standard errors of the estimates of the parameters in a mixture model is to approximate the covariance matrix of the MLE Ψ̂ by the inverse of the observed information matrix, which is given by the negative of the Hessian matrix of the log likelihood evaluated at the MLE. It is important to emphasize that estimates of the covariance matrix of the MLE based on the expected or observed information matrices are guaranteed to be valid inferentially only asymptotically.

50 2.8 Cont. • In particular for mixture models, it is well known that the sample size n has to be very large before the asymptotic theory of maximum likelihood applies. • Hence we shall now consider a resampling approach, the bootstrap, to this problem. • Standard error estimation of Ψ̂ may be implemented according to the bootstrap as follows.

51 2.8 Cont. Step 1: A new set of data, y*, called the bootstrap sample, is generated according to F̂, an estimate of the distribution function of Y formed from the original observed data y. That is, in the case where y contains the observed values of a random sample of size n, y* consists of the observed values of the random sample

Y_1*, ..., Y_n* ~ i.i.d. F̂,   (40)

52 2.8 Cont. where the estimate F̂ (now denoting the distribution function of a single observation Y_j) is held fixed at its observed value. Step 2: The EM algorithm is applied to the bootstrap observed data y* to compute the MLE for this data set, Ψ̂*.

53 2.8 Cont. Step 3: The bootstrap covariance matrix of Ψ̂* is given by

Cov*(Ψ̂*) = E*[{Ψ̂* - E*(Ψ̂*)}{Ψ̂* - E*(Ψ̂*)}^T],   (41)

where E* denotes expectation over the bootstrap distribution specified by F̂.

54 2.8 Cont. The bootstrap covariance matrix can be approximated by Monte Carlo methods. Steps 1 and 2 are repeated independently a number of times (say, B) to give B independent realizations of Ψ̂*, denoted by Ψ̂_1*, ..., Ψ̂_B*.

55 2.8 Cont. Then (41) can be approximated by the sample covariance matrix of these B bootstrap replications to give

Cov*(Ψ̂*) ≈ Σ_{b=1}^{B} (Ψ̂_b* - Ψ̄*)(Ψ̂_b* - Ψ̄*)^T / (B - 1),   (42)

where

Ψ̄* = Σ_{b=1}^{B} Ψ̂_b* / B.   (43)
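A hedged sketch of Steps 1 to 3 of this bootstrap procedure; `simulate_from_fit` and `fit_em` stand for user-supplied routines (a sampler from the fitted mixture and the EM fitting procedure, respectively) and are not part of the presentation.

```python
# Parametric bootstrap approximation (42) to the covariance matrix of the MLE.
import numpy as np

def bootstrap_covariance(psi_hat, n, simulate_from_fit, fit_em, B=100, seed=0):
    rng = np.random.default_rng(seed)
    reps = []
    for _ in range(B):
        y_star = simulate_from_fit(psi_hat, n, rng)   # Step 1: bootstrap sample
        reps.append(fit_em(y_star, start=psi_hat))    # Step 2: EM on bootstrap data
    reps = np.asarray(reps)                           # shape (B, number of parameters)
    return np.cov(reps, rowvar=False, ddof=1)         # Step 3: sample covariance (42)
```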

56 3 Examples of Normal Mixtures 3.1 Basic Model in Genetics

57 3.2 Example 3.1: PTC Sensitivity Data We report in Table 3, the results of Jones and McLachlan (1991) who fitted a mixture of three normal components to data on phenylthiocarbamide (PTC) sensitivity for three groups of people.

58 Table 3: Fit of Mixture Model to Three Data Sets (standard errors in parentheses).
Parameter   Data Set 1     Data Set 2     Data Set 3
p_A         0.572 (.027)   0.626 (.025)   0.520 (.026)
μ_1         2.49 (.15)     1.62 (.14)     1.49 (.09)
μ_2         9.09 (.18)     8.09 (.15)     7.47 (.47)
μ_3         (.28)          8.63 (.50)     9.08 (.08)
σ_1²        1.34 (.29)     1.44 (.28)     0.34 (.09)
σ_2²        2.07 (.39)     1.19 (.22)     6.23 (2.06)
σ_3²        0.57 (.33)     0.10 (.18)     0.48 (.10)
Test statistics: -2 log λ (σ_2² = σ_3²) and -2 log λ (HWE).

59 3.3 Example 3.2: Screening for Hemochromatosis We consider the case study of McLaren et al. (1998) on screening for hemochromatosis.

60 3.3 Cont. Studies have suggested that mean transferrin saturation values for heterozygotes are higher than among unaffected subjects, but lower than homozygotes. Since the distribution of transferrin saturation is known to be well approximated by a single normal distribution in unaffected subjects, the physiologic models used in the study of McLaren et al. (1998) were a single normal component and a mixture of two normal components.

61 Table 4: Transferrin saturation results expressed as mean percentage ± SD, by sex, for asymptomatic individuals (postulated unaffected, postulated heterozygotes) and for individuals identified by pedigree analysis (known heterozygotes, known homozygotes).

62 Figure 9: Plot of the densities of the mixture of two normal heteroscedastic components fitted to some transferrin values on asymptomatic Australians.

63 3.4 Example 3.3: Crab Data Figure 10: Plot of Crab Data

64 3.4 Cont. Progress of fit to Crab Data

65 Figure 11: Contours of the fitted component densities on the 2nd and 3rd variates for the blue crab data set.

66 3.5 Choice of Local Maximizer The choice of root of the likelihood equation in the case of homoscedastic components is straightforward in the sense that the MLE exists as the global maximizer of the likelihood function. The situation is less straightforward in the case of heteroscedastic components as the likelihood function is unbounded.

67 3.5 Cont. But assuming the univariate result of Hathaway (1985) extends to the case of multivariate normal components, the constrained global maximizer is consistent provided the true value of the parameter vector Ψ belongs to the parameter space constrained so that the component generalized variances are not too disparate; for example,

min_{i≠h} |Σ_i| / |Σ_h| ≥ C > 0.   (46)

68 3.5 Cont If we wish to proceed in the heteroscedastic case by the prior imposition of a constraint of the form (46), then there is the problem of how small the lower bound C must be to ensure that the constrained parameter space contains the true value of the parameter vector Ψ.

69 3.5 Cont Therefore, to avoid having to specify a value for C beforehand, we prefer where possible to fit the normal mixture without any constraints on the component covariance matrices Σ_i. It thus means we have to be careful to check that the EM algorithm has actually converged and is not on its way to a singularity, which exists since the likelihood is unbounded for unequal component-covariance matrices.

70 3.5 Cont Even if we can be sure that the EM algorithm has converged to a local maximizer, we have to be sure that it is not a spurious solution that deserves to be discarded. After these checks, we can take the MLE of Ψ to be the root of the likelihood equation corresponding to the largest of the remaining local maxima located.

71 3.6 Choice of Model for Component-Covariance Matrices A normal mixture model without restrictions on the component-covariance matrices may be viewed as too general for many situations in practice. At the same time, though, we are reluctant to impose the homoscedastic condition Σ_i = Σ (i = 1, ..., g), as we have noted in our analyses that the imposition of the constraint of equal component-covariance matrices can have a marked effect on the resulting estimates and the implied clustering. This was illustrated in Example 3.3.

72 3.7 Spurious Local Maximizers In practice, consideration has to be given to the problem of relatively large local maxima that occur as a consequence of a fitted component having a very small (but nonzero) variance for univariate data or generalized variance (the determinant of the covariance matrix) for multivariate data.

73 3.7 Cont. Such a component corresponds to a cluster containing a few data points either relatively close together or almost lying in a lower dimensional subspace in the case of multivariate data. There is thus a need to monitor the relative size of the fitted mixing proportions and of the component variances for univariate observations, or of the generalized component variances for multivariate data, in an attempt to identify these spurious local maximizers.

74 3.8 Example 3.4: Synthetic Data Set 4 Table 5: Local maximizers for Synthetic Data Set 4 (log L and parameter estimates for fits 1, 2, and 3 (binned)).

75 Figure 12: Histogram of Synthetic Data Set 4 for fit Ψ̂_2 of the normal mixture density.

76 Figure 13: Histogram of Synthetic Data Set 4 for fit Ψ̂_1 of the normal mixture density.

77 Figure 14: Histogram of Synthetic Data Set 4 for fit Ψ̂_3 of the normal mixture density.

78 3.9 Example 3.5: Galaxy Data Set Figure 15: Plot of fitted six-component normal mixture density for the galaxy data set

79 Table 6: A six-component normal mixture solution for the galaxy data set (component i, mixing proportion π_i, mean μ_i, and variance σ_i²).

80 4.1 Mixtures of t Distributions The density of the ith component of a mixture of multivariate t distributions is

f_i(y_j; μ_i, Σ_i, ν_i) = Γ((ν_i + p)/2) |Σ_i|^{-1/2} / [ (πν_i)^{p/2} Γ(ν_i/2) {1 + δ(y_j, μ_i; Σ_i)/ν_i}^{(ν_i + p)/2} ],   (49)

where

δ(y_j, μ_i; Σ_i) = (y_j - μ_i)^T Σ_i^{-1} (y_j - μ_i)   (50)

is the Mahalanobis squared distance between y_j and μ_i, and ν_i denotes the degrees of freedom.

81 4.2 ML Estimation where, in addition to the posterior probabilities τ_ij^(k), the E-step introduces the weights

u_ij^(k) = (ν_i^(k) + p) / (ν_i^(k) + δ(y_j, μ_i^(k); Σ_i^(k))).

The updated estimates of μ_i and Σ_i (i = 1, ..., g) are given by

μ_i^(k+1) = Σ_{j=1}^{n} τ_ij^(k) u_ij^(k) y_j / Σ_{j=1}^{n} τ_ij^(k) u_ij^(k)   (53)

82 4.2 Cont. and

Σ_i^(k+1) = Σ_{j=1}^{n} τ_ij^(k) u_ij^(k) (y_j - μ_i^(k+1))(y_j - μ_i^(k+1))^T / Σ_{j=1}^{n} τ_ij^(k).   (54)

83 4.2 Cont. It follows that ν_i^(k+1) is a solution of the equation

-ψ(ν/2) + log(ν/2) + 1 + (1/n_i^(k)) Σ_{j=1}^{n} τ_ij^(k) (log u_ij^(k) - u_ij^(k)) + ψ((ν_i^(k) + p)/2) - log((ν_i^(k) + p)/2) = 0,   (55)

where n_i^(k) = Σ_{j=1}^{n} τ_ij^(k) and ψ(·) is the digamma function.
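The robustness of the t mixture comes from the weights u_ij, which downweight observations that are far (in Mahalanobis distance) from a component mean. A small sketch of how these weights might be computed; the notation and function name are mine.

```python
# Weights u_ij = (nu_i + p) / (nu_i + delta(y_j, mu_i; Sigma_i)) for component i.
import numpy as np

def t_weights(Y, mu_i, Sigma_i, nu_i):
    p = Y.shape[1]
    diff = Y - mu_i
    # Mahalanobis squared distances delta(y_j, mu_i; Sigma_i) for all observations
    delta = np.einsum('nj,jk,nk->n', diff, np.linalg.inv(Sigma_i), diff)
    return (nu_i + p) / (nu_i + delta)
```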

84 Example 4.1:Noisy Data Set A sample of 100 points was simulated from

85 Example 4.1:Noisy Data Set To this simulated sample 50 noise points were added from a uniform distribution over the range - 10 to 10 on each variate.

86 4.1 True Solution

87 4.1 Normal Mixture Solution

88 4.1 Normal + Uniform Solution

89 4.1 t Mixture Solution

90 4.1 Comparison of Results True Solution Normal Mixture Solution Normal + Uniform Mixture t Mixture Solution

91 5. Number of Components in a Mixture Model Testing for the number of components g in a mixture is an important but very difficult problem which has not been completely resolved. We have seen that finite mixture distributions are employed in the modeling of data with two main purposes in mind.

92 5.1 Cont • One is to provide an appealing semiparametric framework in which to model unknown distributional shapes, as an alternative to, say, the kernel density method. • The other is to use the mixture model to provide a model-based clustering. In both situations, there is the question of how many components to include in the mixture.

93 5.1 Cont. In the former situation of density estimation, the commonly used criteria of AIC and BIC would appear to be adequate for choosing the number of components g for a suitable density estimate.

94 5.2 Order of a Mixture Model A mixture density with g components might be empirically indistinguishable from one with either fewer than g components or more than g components. It is therefore sensible in practice to approach the question of the number of components in a mixture model in terms of an assessment of the smallest number of components in the mixture compatible with the data.

95 5.2 Cont. To this end, the true order g_0 of the g-component mixture model is defined to be the smallest value of g such that all the components f_i(y; θ_i) are different and all the associated mixing proportions π_i are nonzero. (57)

96 5.3 Adjusting for Effect of Skewness on LRT We now consider the effect of skewness on hypothesis tests for the number of components in normal mixture models. The Box-Cox (1964) transformation can be employed initially in an attempt to obtain normal components. Hence to model some univariate data y 1,…,y n by a two- component mixture distribution, the density of Y j is taken to be

97 5.3 Cont.

f(y_j; Ψ) = { π_1 φ(y_j^(λ); μ_1, σ_1²) + π_2 φ(y_j^(λ); μ_2, σ_2²) } y_j^{λ-1},   (58)

where

y_j^(λ) = (y_j^λ - 1)/λ for λ ≠ 0, and y_j^(λ) = log y_j for λ = 0,   (59)

98 5.3 Cont. and where the last term on the RHS of (58) corresponds to the Jacobian of the transformation from y_j to y_j^(λ). Gutierrez et al. (1995) adopted this mixture model of transformed normal components in an attempt to identify the number of underlying physical phenomena behind tomato root initiation. The observation y_j corresponds to the inverse proportion of the jth lateral root which expresses GUS (j = 1, ..., 40).

99 Figure 20: Kernel density estimate and normal Q-Q plot of the y j. From Gutierrez et al. (1995).

100 Figure 21: Kernel density estimate and normal Q-Q plot of the y_j^{-1}. From Gutierrez et al. (1995).

101 5.4 Example 5.1: 1872 Hidalgo Stamp Issue of Mexico Izenman and Sommer (1988) considered the modeling of the distribution of stamp thickness for the printing of a given stamp issue from different types of paper. Their main concern was the application of the nonparametric approach to identify components by the resulting placement of modes in the density estimate.

102 5.4 Cont. The specific example of a philatelic mixture, the 1872 Hidalgo issue of Mexico, was used as a particularly graphic demonstration of the combination of a statistical investigation and extensive historical data to reach conclusions regarding the mixture components.

103 Figures 22-25: Fitted normal mixture densities for the stamp data with g = 3 (Izenman & Sommer), g = 7 (Basford et al.), and g = 4 (equal variances) components.

104 Figures 26-28: Fitted normal mixture densities for the stamp data with g = 8, g = 5, and g = 7 components (equal variances).

105 Table 7: Value of the log likelihood for g = 1 to 9 normal components, under unrestricted variances and equal variances.

106 5.6 Likelihood Ratio Test Statistic An obvious way of approaching the problem of testing for the smallest value of the number of components in a mixture model is to use the LRTS, -2 log λ. Suppose we wish to test the null hypothesis

H_0: g = g_0   (60)

versus

H_1: g = g_1   (61)

for some g_1 > g_0.

107 5.6 Cont. Usually, g 1 =g 0 +1 in practice as it is common to keep adding components until the increase in the log likelihood starts to fall away as g exceeds some threshold. The value of this threshold is often taken to be the g 0 in H 0. Of course it can happen that the log likelihood may fall away for some intermediate values of g only to increase sharply at some larger value of g, as in Example 5.1.

108 5.6 Cont. We let Ψ̂_i denote the MLE of Ψ calculated under H_i (i = 0, 1). Then the evidence against H_0 will be strong if λ is sufficiently small, or equivalently, if -2 log λ is sufficiently large, where

-2 log λ = 2{ log L(Ψ̂_1) - log L(Ψ̂_0) }.   (62)

109 5.7 Bootstrapping the LRTS McLachlan (1987) proposed a resampling approach to the assessment of the P-value of the LRTS in testing for a specified value of g 0. (63)
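A hedged sketch of this resampling assessment; `fit_mixture` (returning an object with the maximized log likelihood) and `simulate` (drawing a sample of size n from the fitted g_0-component model) are assumed user-supplied routines, not part of the presentation.

```python
# Bootstrap assessment of the P-value of the LRTS for g0 versus g1 components.
import numpy as np

def bootstrap_lrt_pvalue(y, g0, g1, fit_mixture, simulate, B=99, seed=0):
    rng = np.random.default_rng(seed)
    fit0, fit1 = fit_mixture(y, g0), fit_mixture(y, g1)
    lrts_obs = 2.0 * (fit1.loglik - fit0.loglik)
    lrts_boot = []
    for _ in range(B):
        y_star = simulate(fit0, len(y), rng)           # data generated under H0
        lrts_boot.append(2.0 * (fit_mixture(y_star, g1).loglik -
                                fit_mixture(y_star, g0).loglik))
    # P-value: proportion of bootstrap replicates at least as large as observed
    return (1 + np.sum(np.asarray(lrts_boot) >= lrts_obs)) / (B + 1)
```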

110 5.8 Application to Three Real Data Sets Figure 29a: Acidity data set.

111 Figure 29b: Enzyme data set.

112 Figure 29c: Galaxy data set.

113 Table 9: P-values using the bootstrap LRT of g versus g + 1 components for the acidity, enzyme, and galaxy data sets.

114 5.9 Akaike's Information Criterion • Akaike's Information Criterion (AIC) selects the model that minimizes

-2 log L(Ψ̂) + 2d,   (65)

where d is equal to the number of parameters in the model.

115 5.10 Bayesian Information Criterion • The Bayesian information criterion (BIC) of Schwarz (1978) is given by

log L(Ψ̂) - (1/2) d log n   (66)

as the penalized log likelihood to be maximized in model selection, including the present situation for the number of components g in a mixture model.
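In practice both criteria can be computed from the sequence of fitted models; the sketch below (my own, assuming the maximized log likelihoods and parameter counts are already available) expresses BIC in the equivalent "-2 log L + d log n" form that is minimized.

```python
# AIC (65) and BIC over a range of g; choose the g minimizing the criterion.
import numpy as np

def aic_bic(logliks, d, n):
    """logliks, d: arrays indexed by g (maximized log likelihood, parameter count)."""
    logliks, d = np.asarray(logliks, float), np.asarray(d, float)
    aic = -2.0 * logliks + 2.0 * d
    bic = -2.0 * logliks + d * np.log(n)   # -2 times the penalized log likelihood (66)
    return aic, bic
```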

116 5.11 Integrated Classification Likelihood Criterion where (67)

117 6. Mixtures of Nonnormal Components We first consider the case of mixed feature variables, where some are continuous and some are categorical. We shall outline the use of the location model for the component densities, as in Jorgensen and Hunt (1996), Lawrence and Krzanowski (1996), and Hunt and Jorgensen (1999).

118 6.1 Cont. The ML fitting of commonly used components, such as the binomial and Poisson, can be undertaken within the framework of a mixture of generalized linear models (GLMs). This mixture model also has the capacity to handle the regression case, where the random variable Y j for the j th entity is allowed to depend on the value x j of a vector x of covariates. If the first element of x is taken to be one, then we can specialize this model to the nonregression situation by setting all but the first element in the vector of regression coefficients to zero.

119 6.1 Cont One common use of mixture models with discrete data is to handle overdispersion in count data. For example, in medical research, data are often collected in the form of counts, corresponding to the number of times that a particular event of interest occurs. Because of their simplicity, one-parameter distributions for which the variance is determined by the mean are often used, at least in the first instance to model such data. Familiar examples are the Poisson and binomial distributions, which are members of the one-parameter exponential family.

120 6.1 Cont However, there are many situations where these models are inappropriate, in the sense that the mean-variance relationship implied by the one-parameter distribution being fitted is not valid. In most of these situations the observed sample variance is larger than that predicted by inserting the sample mean into the mean-variance relationship; this phenomenon is called overdispersion.

121 6.1 Cont These phenomena are also observed with the fitting of regression models, where the mean (say, of the Poisson or the binomial distribution) is modeled as a function of some covariates. If this overdispersion is not taken into account, then using these models may lead to biased estimates of the parameters and consequently incorrect inferences about the parameters.

122 6.1 Cont Concerning mixtures for multivariate discrete data, a common application arises in latent class analyses, in which the feature variables (or response variables in a regression context) are taken to be independent in the component distributions. This latter assumption allows mixture models in the context of a latent class analysis to be fitted within the above framework of mixtures of GLMs.

123 6.2 Mixed Continuous and Categorical Variables We consider now the problem of fitting a mixture model

f(y_j; Ψ) = Σ_{i=1}^{g} π_i f_i(y_j; θ_i)   (70)

to some data y = (y_1^T, ..., y_n^T)^T, where some of the feature variables are categorical.

124 6.2 Cont. The simplest way to model the component densities of these mixed feature variables is to proceed on the basis that the categorical variables are independent of each other and of the continuous feature variables, which are taken to have, say, a multivariate normal distribution. Although this seems a crude way in which to proceed, it often does well in practice as a way of clustering mixed feature data. In the case where there are data of known origin available, this procedure is known as the naive Bayes classifier.

125 6.2 Cont. We can refine this approach by adopting the location model. Suppose that p_1 of the p feature variables in Y_j are categorical, where the qth categorical variable takes on m_q distinct values (q = 1, ..., p_1). Then there are m = ∏_{q=1}^{p_1} m_q distinct patterns of these p_1 categorical variables. With the location model, the p_1 categorical variables are replaced by a single multinomial random variable Y_j^(1) with m cells; that is, (Y_j^(1))_s = 1 if the realizations of the p_1 categorical variables in Y_j correspond to the sth pattern.

126 6.2 Cont. Any associations between the original categorical variables are converted into relationships among the resulting multinomial cell probabilities. The location model assumes further that, conditional on (y_j^(1))_s = 1 and membership of the ith component of the mixture model, the distribution of the p - p_1 continuous feature variables is normal with mean μ_is and covariance matrix Σ_i, which is the same for all cells s.

127 6.2 Cont. The intent in MULTIMIX is to divide the feature vector into as many subvectors as possible that can be taken to be independently distributed. The extreme form would be to take all p feature variables to be independent; correlation structure is then included only where necessary.

128 6.3 Example 6.1: Prostate Cancer Data • To illustrate the approach adopted in MULTIMIX, we report in some detail a case study of Hunt and Jorgensen (1999). • They considered the clustering of patients on the basis of pretrial covariates alone for some prostate cancer clinical trial data. This data set was obtained from a randomized clinical trial comparing four treatments for n = 506 patients with prostatic cancer grouped on clinical criteria into Stages 3 and 4 of the disease.

129 6.3 Cont. As reported by Byar and Green (1980), Stage 3 represents local extension of the disease without evidence of distant metastasis, while Stage 4 represents distant metastasis as evidenced by elevated acid phosphatase, X-ray evidence, or both.

130 Table 10: Pretreatment Covariates (number of levels given for categorical variables).
Covariate                                   Abbreviation   Levels
Age
Weight index                                WtI
Performance rating                          PF             4
Cardiovascular disease history              HX             2
Systolic blood pressure                     SBP
Diastolic blood pressure                    DBP
Electrocardiogram code                      EKG            7
Serum hemoglobin                            HG
Size of primary tumor                       SZ
Index of tumor stage and histologic grade   SG
Serum prostatic acid phosphatase            AP
Bone metastases                             BM             2

131 Table 11: Models and Fits (number of parameters d and log likelihood for each model).
Model   Variable Groups
[ind]   (all variables treated as independent)
[2]     {SBP, DBP}
[3,2]   {BM, WtI, HG}, {SBP, DBP}
[5]     {BM, WtI, HG, SBP, DBP}
[9]     Complement of {PF, HX, EKG}

132 Table 12: Clusters and outcomes (alive, prostate death, cardiovascular death, other death) for treated and untreated patients, cross-classified by cluster (1 and 2) and stage (3 and 4).

133 6.4 Generalized Linear Model With the generalized linear model (GLM) approach originally proposed by Nelder and Wedderburn (1972), the log density of the (univariate) variable Y_j has the form

log f(y_j; θ_j, κ) = m_j { y_j θ_j - b(θ_j) } / κ + c(y_j; κ),   (71)

where θ_j is the natural or canonical parameter, κ is the dispersion parameter, and m_j is the prior weight.

134 6.4 Cont. The mean and variance of Y_j are given by

μ_j = b'(θ_j)   and   var(Y_j) = κ b''(θ_j) / m_j,

respectively, where the prime denotes differentiation with respect to θ_j.

135 6.4 Cont. In a GLM, it is assumed that

η_j = h(μ_j) = β^T x_j,

where x_j is a vector of covariates or explanatory variables on the jth response y_j, β is a vector of unknown parameters, and h(·) is a monotonic function known as the link function. If the dispersion parameter κ is known, then the distribution (71) is a member of the (regular) exponential family with natural or canonical parameter θ_j.

136 6.4 Cont. The variance of Y_j is the product of two terms, the dispersion parameter κ and the variance function b''(θ_j), which is usually written in the form τ(μ_j). So-called natural or canonical links occur when θ_j = η_j, which gives respectively the log and logit functions for the Poisson and binomial distributions.

137 6.4 Cont. The likelihood equation for β can be expressed as

Σ_{j=1}^{n} m_j w(μ_j) (y_j - μ_j) η'(μ_j) x_j = 0,   (73)

where η'(μ_j) = dη_j/dμ_j and w(μ_j) is the weight function defined by

w(μ_j) = { η'(μ_j)² τ(μ_j) }^{-1}.

It can be seen that, for fixed κ, the likelihood equation for β does not depend on κ.

138 6.4 Cont. The likelihood equation (73) can be solved iteratively using Fisher's method of scoring, which for a GLM is equivalent to using iteratively reweighted least squares (IRLS); see Nelder and Wedderburn (1972). On the (k+1)th iteration, we form the adjusted response variable as

ỹ_j^(k) = η_j^(k) + (y_j - μ_j^(k)) η'(μ_j^(k)).   (74)

139 6.4 Cont. These n adjusted responses are then regressed on the covariates x_1, ..., x_n using weights m_1 w(μ_1^(k)), ..., m_n w(μ_n^(k)). This produces an updated estimate β^(k+1) for β, and hence updated estimates η_j^(k+1) for the η_j, for use in the right-hand side of (74) to update the adjusted responses, and so on. This process is repeated until changes in the estimates are sufficiently small.
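A compact sketch of IRLS for the special case of a Poisson GLM with log link (for which h'(μ) = 1/μ and w(μ) = μ); this is generic textbook IRLS written by me, not code from the presentation.

```python
# IRLS for a Poisson log-linear GLM with prior weights m_j.
import numpy as np

def irls_poisson(X, y, m=None, max_iter=25, tol=1e-8):
    n, q = X.shape
    m = np.ones(n) if m is None else np.asarray(m, float)
    beta = np.zeros(q)
    for _ in range(max_iter):
        eta = X @ beta
        mu = np.exp(eta)                         # inverse of the log link
        z = eta + (y - mu) / mu                  # adjusted response (74)
        W = m * mu                               # IRLS weights m_j * w(mu_j)
        beta_new = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta
```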

140 6.5 Mixtures of GLMs For a mixture of g component distributions of GLMs in proportions π_1, ..., π_g, we have that the density of the jth response variable Y_j is given by

f(y_j; Ψ) = Σ_{i=1}^{g} π_i f_i(y_j; θ_ij, κ_i),   (75)

where, for a fixed dispersion parameter κ_i,

log f_i(y_j; θ_ij, κ_i) = m_j { y_j θ_ij - b_i(θ_ij) } / κ_i + c_i(y_j; κ_i)   (76)

141 6.5 Cont. for the ith component GLM, we let μ_ij be the mean of Y_j, h_i(μ_ij) the link function, and η_ij = β_i^T x_j the linear predictor (i = 1, ..., g). A common model for expressing the ith mixing proportion π_i as a function of x is the logistic. Under this model, we have π_i = π_i(x_j; α) corresponding to the jth observation y_j with covariate vector x_j,

142 6.5 Cont. where

π_i(x_j; α) = exp(w_i^T x_j) / { 1 + Σ_{h=1}^{g-1} exp(w_h^T x_j) }   (i = 1, ..., g - 1),   (78)

π_g(x_j; α) = 1 / { 1 + Σ_{h=1}^{g-1} exp(w_h^T x_j) }, and α = (w_1^T, ..., w_{g-1}^T)^T contains the logistic regression coefficients. The first element of x_j is usually taken to be one, so that the first element of each w_i is an intercept. The EM algorithm of Dempster et al. (1977) can be applied to obtain the MLE of Ψ as in the case of a finite mixture of arbitrary distributions.

143 6.5 Cont. As the E-step is essentially the same as for arbitrary component densities, we move straight to the M-step. If the β_1, ..., β_g have no elements in common a priori, then (82) reduces to solving

Σ_{j=1}^{n} τ_i(y_j; Ψ^(k)) ∂ log f_i(y_j; θ_ij, κ_i) / ∂β_i = 0   (83)

separately for each β_i to produce β_i^(k+1) (i = 1, ..., g).
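For Poisson regression components, the M-step can therefore be carried out by one weighted GLM fit per component, with observation j given weight τ_ij; the sketch below (again under my own naming) reuses the irls_poisson routine sketched earlier, passing the posterior probabilities as prior weights.

```python
# M-step for a mixture of Poisson regressions without covariate-dependent proportions.
# Requires the irls_poisson routine from the earlier IRLS sketch.
import numpy as np

def m_step_poisson_mixture(X, y, tau):
    """X: (n, q) covariates; y: (n,) counts; tau: (n, g) posterior probabilities."""
    g = tau.shape[1]
    pi_new = tau.mean(axis=0)                              # update mixing proportions
    betas = np.array([irls_poisson(X, y, m=tau[:, i]) for i in range(g)])
    return pi_new, betas
```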

144 6.6 A General ML Analysis of Overdispersion in a GLM In an extension to a GLM for overdispersion, a random effect U j can be introduced additively into a GLM on the same scale as the linear predictor, as proposed by Aitkin (1996). This extension in a two-level variance component GLM has been considered recently by Aitkin (1999).

145 6.6 Cont. For an unobservable random effect u_j for the jth response on the same scale as the linear predictor, we have that

η_j = β^T x_j + σ u_j,

where u_j is a realization of a random variable U_j distributed N(0,1) independently of the jth response Y_j (j = 1, ..., n).

146 6.6 Cont. The (marginal) log likelihood is thus

log L = Σ_{j=1}^{n} log ∫ f(y_j | u_j) φ(u_j; 0, 1) du_j.   (84)

The integral (84) does not exist in closed form except for a normally distributed response y_j. Following the development in Anderson and Hinde (1988), Aitkin (1996, 1999) suggested that it be approximated by Gaussian quadrature, whereby the integral over the normal distribution of U is replaced by a finite sum of g Gaussian quadrature mass-points u_i with masses π_i; the u_i and π_i are given in standard references, for example, Abramowitz and Stegun (1964).
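A small sketch of how the quadrature mass points u_i and masses π_i for a standard normal random effect might be obtained, using numpy's Gauss-Hermite rule rescaled for the N(0,1) weight; the function names are mine.

```python
# Gauss-Hermite mass points and masses for approximating the integral in (84).
import numpy as np

def gaussian_quadrature_masses(g):
    nodes, weights = np.polynomial.hermite.hermgauss(g)
    u = np.sqrt(2.0) * nodes            # mass points u_1, ..., u_g
    pi = weights / np.sqrt(np.pi)       # masses pi_1, ..., pi_g (sum to one)
    return u, pi

def approx_marginal_density(density_given_u, g=10):
    """density_given_u(u) -> f(y_j | u); returns the g-point quadrature approximation."""
    u, pi = gaussian_quadrature_masses(g)
    return float(np.sum(pi * np.array([density_given_u(ui) for ui in u])))
```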

147 6.6 Cont. The log likelihood so approximated thus has the form of that for a g-component mixture model, where the masses π_1, ..., π_g correspond to the (known) mixing proportions, and the corresponding mass points u_1, ..., u_g to the (known) parameter values. The linear predictor for the jth response in the ith component of the mixture is

η_ij = β^T x_j + σ u_i.

148 6.6 Cont. Hence in this formulation, u_i is an intercept term. Aitkin (1996, 1999) suggested treating the masses π_1, ..., π_g as g unknown mixing proportions and the mass points u_1, ..., u_g as g unknown values of a parameter. This g-component mixture model is then fitted using the EM algorithm, as described above.

149 6.6 Cont. In this framework, since now u_i is also unknown, we can drop the scale parameter σ and define the linear predictor for the jth response in the ith component of the mixture as

η_ij = β^T x_j + u_i.

Thus u_i acts as an intercept parameter for the ith component. One of the u_i parameters will be aliased with the intercept term β_0; alternatively, the intercept can be removed from the model.

150 6.7 Example 6.2: Fabric Faults Data We report the analysis by Aitkin (1996), who fitted a Poisson mixture regression model to a data set on some fabric faults.

151 Table 14: Results of fitting mixtures of Poisson regression models for g = 1, 2, 3 components (estimates of β_0, β_1, the mixing proportions π_1, π_2, π_3, and the deviance, with standard errors in parentheses).

152 7.1 Mixture Models for Failure-Time Data It is only in relatively recent times that the potential of finite mixture models has started to be exploited in survival and reliability analyses.

153 7.1 Cont. In the analysis of failure-time data, it is often necessary to consider different types of failure. For simplicity of exposition, we shall consider the case of g = 2 different types of failure or causes, but the results extend to an arbitrary number g. An item is taken to have failed with the occurrence of the first failure from either cause, and we observe the time T to failure and the type of failure i (i = 1, 2). In the case where the study terminates before failure occurs, T is the censoring time and the censoring indicator δ is set equal to zero to indicate that the failure time is right-censored.

154 7.1 Cont. The traditional approach to the modeling of the distribution of failure time in the case of competing risks is to postulate the existence of so-called latent failure times, T 1 and T 2, corresponding to the two causes and to proceed with the modeling of T=min (T 1,T 2 ) on the basis that the two causes are independent of each other.

155 7.1 Cont. An alternative approach is to adopt a two-component mixture model, whereby the survival function of T is modeled as

S(t; x) = π_1(x) S_1(t; x) + π_2(x) S_2(t; x),   (85)

where the ith component survival function S_i(t; x) denotes the conditional survival function given that failure is due to the ith cause, and π_i(x) is the probability of failure from the ith cause (i = 1, 2).

156 7.1 Cont. Here x is a vector of covariates associated with the item. It is common to assume that the mixing proportions π_i(x) have the logistic form

π_1(x) = exp(a + b^T x) / { 1 + exp(a + b^T x) },   π_2(x) = 1 - π_1(x),   (86)

where α = (a, b^T)^T is the vector of logistic regression coefficients.

157 7.2 ML Estimation for Mixtures of Survival Functions The log likelihood for Ψ that can be formed from the observed data y is given by

log L(Ψ) = Σ_{j=1}^{n} [ Σ_{i=1}^{2} I_{[i]}(δ_j) log{ π_i(x_j) f_i(t_j; x_j) } + I_{[0]}(δ_j) log S(t_j; x_j) ],   (88)

where I_{[h]}(δ_j) is the indicator function that equals one if δ_j = h (h = 0, 1, 2).
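A hedged sketch of evaluating this log likelihood; `pi1_fun`, `dens`, and `surv` stand for user-supplied functions giving π_1(x), the component densities f_i(t; x), and the component survival functions S_i(t; x), and are assumptions of this sketch rather than routines from the presentation.

```python
# Log likelihood (88) for the two-component mixture of survival functions (85)
# with right censoring (delta_j = 0 censored; delta_j = 1 or 2 failure from cause i).
import numpy as np

def mixture_survival_loglik(t, delta, x, params, pi1_fun, dens, surv):
    loglik = 0.0
    for tj, dj, xj in zip(t, delta, x):
        p1 = pi1_fun(xj, params)
        p2 = 1.0 - p1
        if dj == 0:   # censored: contribution from the mixture survival function (85)
            loglik += np.log(p1 * surv(1, tj, xj, params) + p2 * surv(2, tj, xj, params))
        else:         # observed failure from cause dj
            pi_i = p1 if dj == 1 else p2
            loglik += np.log(pi_i * dens(dj, tj, xj, params))
    return loglik
```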

158 7.3 Example 7.1: Heart-Valve Data To illustrate the application of mixture models for competing risks in practice, we consider the problem studied in Ng et al. (1999). They considered the use of the two- component mixture model (85) to estimate the probability that a patient aged x years would undergo a rereplacement operation after having his/her native aortic valve replaced by a xenograft prosthesis.

159 7.3 Cont. At the time of the initial replacement operation, the surgeon has the choice of using either a mechanical valve or a biologic valve, such as a xenograft (made from porcine valve tissue) or an allograft (human donor valve). Modern-day mechanical valves are very reliable, but a patient must take blood-thinning drugs for the rest of his/her life to avoid thromboembolic events.

160 7.3 Cont. On the other hand, biologic valves have a finite working life, and so have to be replaced if the patient lives for a sufficiently long time after the initial replacement operation. Thus inferences about the probability that a patient of a given age will need to undergo a rereplacement operation can be used to assist a heart surgeon in deciding on the type of valve to be used in view of the patient's age.

161 Figure 30: Estimated probability of reoperation at a given age of patient.

162 Figure 31: Conditional probability of reoperation (xenograft valve) for specified age of patient.

163 7.4 Conditional Probability of a Reoperation As a patient can avoid a reoperation by dying first, it is relevant to consider the conditional probability of a reoperation within a specified time t after the initial operation given that the patient does not die without a reoperation during this period.

166 7.5 Long-term Survivor Model In some situations where the aim is to estimate the survival distribution for a particular type of failure, a certain fraction of the population, say  1, may never experience this type of failure. It is characterized by the overall survival curve being leveled at a nonzero probability. In some applications, the surviving fractions are said to be “cured.’’

167 7.5 Cont. It is assumed that an entity or individual has probability  2 =1-  1 of failing from the cause of interest and probability  1 of never experiencing failure from this cause. In the usual framework for this problem, it is assumed further that the entity cannot fail from any other cause during the course of the study (that is, during follow-up). We let T be the random variable denoting the time to failure, where T=  denotes the event that the individual will not fail from the cause of interest. The probability of this latter event is  1.

168 7.5 Cont. The unconditional survival function of T can then be expressed as

S(t) = π_1 + π_2 S_2(t),   (92)

where S_2(t) denotes the conditional survival function for failure from the cause of interest. The mixture model (92), with the first component having mass one at T = ∞, can be regarded as a nonstandard mixture model.

169 8. Mixture Software 8.1 EMMIX • McLachlan, Peel, Adams, and Basford

170 Other Software • AUTOCLASS • BINOMIX • C.A.MAN • MCLUST/EMCLUST • MGT • MIX • MIXBIN

171 Other Software (cont.) • Program for Gompertz Mixtures • MPLUS • MULTIMIX • NORMIX • SNOB • Software for Flexible Bayesian Modeling and Markov Chain Sampling

