
1 Advanced analysis: Classification, clustering and other multivariate methods. Statistics for Microarray Data Analysis – Lecture 4. The Fields Institute for Research in Mathematical Sciences, May 25, 2002.

2 Cluster and discriminant analysis
These techniques group, or equivalently classify, observational units on the basis of measurements. They differ according to their aims, which in turn depend on the availability of a pre-existing basis for the grouping. In cluster analysis, there are no predefined groups or labels for the observations, while discriminant analysis is based on the existence of such groups or labels.
Alternative terminology:
– Computer science: unsupervised and supervised learning.
– Microarray literature: class discovery and class prediction.

3 Tumor classification
A reliable and precise classification of tumors is essential for successful diagnosis and treatment of cancer. Current methods for classifying human malignancies rely on a variety of morphological, clinical, and molecular variables. In spite of recent progress, there are still uncertainties in diagnosis. Also, it is likely that the existing classes are heterogeneous. DNA microarrays may be used to characterize the molecular variations among tumors by monitoring gene expression on a genomic scale. This may lead to a more reliable classification of tumors.

4 Tumor classification, cont.
There are three main types of statistical problems associated with tumor classification:
1. The identification of new/unknown tumor classes using gene expression profiles – cluster analysis;
2. The classification of malignancies into known classes – discriminant analysis;
3. The identification of “marker” genes that characterize the different tumor classes – variable selection.
These issues are relevant to many other questions, e.g. characterizing/classifying neurons or the toxicity of chemicals administered to cells or model animals.

5 Clustering microarray data
We can cluster genes (rows), mRNA samples (cols), or both at once. Clustering leads to readily interpretable figures. Clustering strengthens the signal when averages are taken within clusters of genes (Eisen). Clustering can be helpful for identifying patterns in time or space; our main experience is in this area. Clustering is useful, perhaps essential, when seeking new subclasses of cell samples (tumors, etc.).

6 Recap: cDNA gene expression data
Data on p genes for n mRNA samples. x_ij = expression level of gene i in mRNA sample j = log(Red intensity / Green intensity).

Gene   sample1  sample2  sample3  sample4  sample5  …
1       0.46     0.30     0.80     1.51     0.90    …
2      -0.10     0.49     0.24     0.06     0.46    …
3       0.15     0.74     0.04     0.10     0.20    …
4      -0.45    -1.03    -0.79    -0.56    -0.32    …
5      -0.06     1.06     1.35     1.09    -1.09    …

y_j = tumor class for sample j.
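
To make the definition of x_ij concrete, here is a minimal Python sketch (not from the lecture) that builds such a log-ratio matrix from made-up two-channel intensities; the array sizes, the random values, and the choice of base-2 logarithm are all assumptions for illustration.

```python
import numpy as np

# Hypothetical two-channel intensities for 5 genes x 5 samples
# (values made up for illustration; real data come from image analysis).
rng = np.random.default_rng(0)
red = rng.lognormal(mean=8.0, sigma=1.0, size=(5, 5))    # Cy5 intensities
green = rng.lognormal(mean=8.0, sigma=1.0, size=(5, 5))  # Cy3 intensities

# x_ij = log(red / green): positive values mean higher expression in the red channel.
x = np.log2(red / green)
print(np.round(x, 2))
```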

7 The problem of finding patterns
In many situations we have a set of hybridizations with temporal or spatial aspects to them, and seek genes exhibiting patterns. An early example was the yeast cell cycle data, a relatively easy case, as the nature of the pattern of interest was clear. Harder examples include developmental time series, or the current case study (the mouse olfactory bulb). Here genes are “on” or “off”, or “come on” and “go off” separately and jointly, according to their own programs. In a sense every gene has its own pattern, though we expect some sharing of patterns. But most importantly, we usually don’t know the patterns a priori. Our problem seems to be distinguishing interesting or real patterns from meaningless variation, at the level of the gene.

8 The problem, continued
Current solutions to our problem include the various forms of principal component analysis (PCA), e.g. singular value decomposition (SVD) and other descriptive multivariate techniques, and, of course, cluster analysis. These all seem to be based on the idea that we are mostly interested in patterns shared by enough genes to stand out. Can we do better? How can contextual information be used to guide our search? Have I even posed a meaningful question? Few answers here.

9 Strategies to find patterns
Two principal approaches (that I can see):
1a. Find the genes whose expression fits specific, predefined patterns.
1b. Find the genes whose expression follows the pattern of a predefined gene or set of genes.
2. Carry out some kind of exploratory analysis to see what expression patterns emerge.
I plan to briefly touch on 1a, and then discuss approach 2.

10 Some comments on strategy 2
In essence we are using the large-scale hybridizations as a screen. Some reasonable fraction of genes with presumptive ‘interesting patterns’ will have these checked by in situ hybridizations or something similar. In essence we want to select genes based on our analysis in a way that gives a good hit rate (low false positive rate), certainly compared to random selection or checking out ‘candidate’ genes. Some patterns are ‘stronger’ than others, and we would like to see the full range of ‘real’ patterns in our data, including real but weak ones. A strategy we tried early on was simply to rank genes on variation across the set of hybridizations. The problem was that the highly ranked genes tended to give the same few patterns: we didn’t see the full range.

11 Strategy 1a: identifying genes with specific, predefined patterns

12 Finding early and late response genes in a short time series
Which genes increase or decrease like the function x²? [Figure: M plotted against time at time points 1/2, 1, 4, 24; the corresponding values of x² are 1/4, 1, 16, 576, giving the pattern vector u.]

13 Doing this in the vector world
For each gene, we have a vector y of expression estimates at the different time points. Project y onto the space spanned by the vector u (the values of x² at our time points); C is the scalar product of y with u. [Figure: projection of y onto the direction of u.]

14 Use a QQ-plot of C to identify genes with “significant” projections along u.
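
A small Python sketch of this step, assuming made-up expression estimates: each gene's profile y is projected onto the pattern vector u built from x² at the slide's time points, and the projections C are examined with a normal QQ-plot. The normalization of u and the use of scipy's probplot are my additions, not part of the slides.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Time points from the slide and the pattern vector u = x^2 at those times.
times = np.array([0.5, 1.0, 4.0, 24.0])
u = times ** 2                        # [0.25, 1, 16, 576]
u = u / np.linalg.norm(u)             # normalize so C is a scalar projection

# Hypothetical expression estimates: 1000 genes x 4 time points (illustration only).
rng = np.random.default_rng(1)
Y = rng.normal(size=(1000, 4))

# C = scalar product of each gene's profile with u.
C = Y @ u

# Normal QQ-plot of C; genes in the extreme tails have "significant" projections.
stats.probplot(C, dist="norm", plot=plt)
plt.title("QQ-plot of projections C onto u")
plt.show()
```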

15 Strategy 2: pattern discovery

16 Clustering microarray data
We can cluster cell samples (cols), e.g. neuron cells, for identification (profiles). Also, we might want to estimate the number of different neuron cell types in a set of samples, based on gene expression levels. We can cluster genes (rows), e.g. using large numbers of yeast experiments, to identify groups of co-regulated genes. We can cluster genes (rows) to reduce redundancy (cf. variable selection) in predictive models.

17 Row & column clusters
Taken from Nature, February 2000: A. Alizadeh et al., “Distinct types of diffuse large B-cell lymphoma identified by gene expression profiling.”

18 Discovering tumor subclasses

19 Basic principles of clustering
Aim: to group observations that are “similar” based on predefined criteria.
Issues: Which genes / arrays to use? Which similarity or dissimilarity measure? Which clustering algorithm?
It is advisable to reduce the number of genes from the full set to some more manageable number before clustering. The basis for this reduction is usually quite context specific; see later example.

20 There are many measures of (dis)similarity
Euclidean distance; Mahalanobis distance; Manhattan metric; Minkowski metric; Canberra metric; one minus correlation; etc.
And: there are many methods of clustering…
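
As an illustration (not from the slides), most of these dissimilarities are available in scipy; the sketch below computes a few of them between made-up gene profiles. The Mahalanobis example uses a pseudo-inverse covariance because with so few profiles the sample covariance is singular.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Hypothetical expression profiles for 4 genes across 6 arrays.
rng = np.random.default_rng(2)
X = rng.normal(size=(4, 6))

# A few of the dissimilarity measures listed above ("cityblock" = Manhattan,
# "correlation" = one minus correlation).
for metric in ["euclidean", "cityblock", "minkowski", "canberra", "correlation"]:
    D = squareform(pdist(X, metric=metric))
    print(metric, "\n", np.round(D, 2))

# Mahalanobis distance needs an inverse covariance matrix over the 6 coordinates;
# with only 4 profiles the estimate is singular, hence the pseudo-inverse.
VI = np.linalg.pinv(np.cov(X.T))
D_mahal = squareform(pdist(X, metric="mahalanobis", VI=VI))
print("mahalanobis\n", np.round(D_mahal, 2))
```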

21 Partitioning methods
Partition the data into a prespecified number k of mutually exclusive and exhaustive groups. Iteratively reallocate the observations to clusters until some criterion is met, e.g. minimize within-cluster sums of squares.
Examples:
– k-means, self-organizing maps (SOM), PAM, etc.;
– Fuzzy: needs a stochastic model, e.g. Gaussian mixtures.
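
A minimal k-means sketch in Python with scikit-learn, on a made-up expression matrix; the data, k = 4, and all parameter choices are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical expression matrix: 100 genes x 10 arrays (illustration only).
rng = np.random.default_rng(3)
X = rng.normal(size=(100, 10))

# Partition the genes into a prespecified number k of clusters; k-means
# iteratively reallocates observations to minimize within-cluster sums of squares.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print(km.labels_[:10])   # cluster assignment for the first 10 genes
print(km.inertia_)       # total within-cluster sum of squares
```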

22 Hierarchical methods
Hierarchical clustering methods produce a tree or dendrogram. They avoid specifying how many clusters are appropriate by providing a partition for each k, obtained by cutting the tree at some level.
The tree can be built in two distinct ways:
– bottom-up: agglomerative clustering;
– top-down: divisive clustering.

23 Agglomerative methods
Start with n clusters. At each step, merge the two closest clusters using a measure of between-cluster dissimilarity, which reflects the shape of the clusters.
Between-cluster dissimilarity measures:
– Unweighted Pair Group Method with Arithmetic mean (UPGMA): average of pairwise dissimilarities.
– Single-link: minimum of pairwise dissimilarities.
– Complete-link: maximum of pairwise dissimilarities.
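
The same three between-cluster dissimilarities are available in scipy's hierarchical clustering routines; the sketch below (made-up data, arbitrary cut at 4 clusters) is purely illustrative.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(4)
X = rng.normal(size=(30, 8))         # hypothetical: 30 genes x 8 arrays

d = pdist(X, metric="correlation")   # one minus correlation, as above

# Between-cluster dissimilarities from the slide:
#   "average" = UPGMA, "single" = single-link, "complete" = complete-link.
for method in ["average", "single", "complete"]:
    Z = linkage(d, method=method)
    labels = fcluster(Z, t=4, criterion="maxclust")  # cut the tree into 4 clusters
    print(method, np.bincount(labels))
```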

24 Divisive methods
Start with only one cluster. At each step, split clusters into two parts.
Advantages: obtains the main structure of the data, i.e. focuses on the upper levels of the dendrogram.
Disadvantages: computational difficulties when considering all possible divisions into two groups.

25 Partitioning vs. hierarchical
Partitioning:
– Advantages: optimal for certain criteria.
– Disadvantages: need an initial k; often require long computation times.
Hierarchical:
– Advantages: faster computation.
– Disadvantages: rigid; cannot correct later for erroneous decisions made earlier.

26 Three generic clustering problems
Three important tasks (which are generic) are:
1. Estimating the number of clusters;
2. Assigning each observation to a cluster;
3. Assessing the strength/confidence of cluster assignments for individual observations.
Not equally important in every problem. We now return to our mouse olfactory bulb experiment.

27 Recall: we have sets of contrasts (two forms) across the bulb as our patterns. [Figure: example contrasts for gene #15,228.]

28 Patterns and a metric on them
Aim: to identify spatial patterns of differential gene expression. For every gene, we can define a magnitude for its expression profile. Simply ranking genes according to the size of this magnitude produced a list dominated by similar patterns.
Sub-aim: to identify genes with different gene expression patterns. The slight novelty here is that we are clustering on estimated contrasts from the data, not the log ratios themselves.

29 Clustering genes based on their similarity in expression patterns
– Start with sets of genes exhibiting some minimal level of differential expression across the bulb: the “top 100” from each comparison × 15 comparisons = 635 genes chosen here.
– Carry out hierarchical clustering of the 635 genes, building a dendrogram using Mahalanobis distance and Ward agglomeration.
– Now consider all 635 clusters of >2 genes in the tree. Singles are added separately.
– Measure the heterogeneity h of a cluster by calculating the 15 SDs across the cluster of each of the pairwise effects, and taking the largest.
– Choose a score s (see plots) and take all maximal disjoint clusters with h < s. Here we used s = 0.46 and obtained 16 clusters.
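
Below is a rough Python sketch of this procedure under simplifying assumptions: the 635 × 15 matrix of estimated contrasts is simulated, Ward linkage on Euclidean distance stands in for the Mahalanobis-based dissimilarity, and "maximal disjoint clusters with h < s" is approximated by scanning tree cuts; none of this reproduces the actual analysis.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Stand-in for the 635 x 15 matrix of estimated pairwise effects (made up).
rng = np.random.default_rng(5)
contrasts = rng.normal(scale=0.3, size=(635, 15))

Z = linkage(contrasts, method="ward")   # Euclidean/Ward, a simplification

def heterogeneity(members):
    """h = largest of the 15 SDs of the effects across the cluster's genes."""
    return contrasts[members].std(axis=0, ddof=1).max()

# Approximate "maximal disjoint clusters with h < s" by taking the coarsest
# tree cut whose clusters of two or more genes all satisfy h < s.
s = 0.46
for k in range(2, 200):
    labels = fcluster(Z, t=k, criterion="maxclust")
    clusters = [np.where(labels == c)[0] for c in np.unique(labels)]
    if all(heterogeneity(m) < s for m in clusters if len(m) >= 2):
        print(f"cut into {k} clusters: all have h < {s}")
        break
```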

30 Plots guiding choice of clusters of genes. [Figures: number of clusters (patterns) and number of genes plotted against cluster heterogeneity h (the max of the 15 SDs).]

31 [Figure: the 15 pairwise (p/w) effects across bulb regions for each gene. Red: genes chosen; blue: controls.]

32 16 clusters systematically arranged (6-point representation). [Figure.]

33 One cluster (6-point representation). [Figure. Thick red line: average; dashed blue: ±2h.]

34 We visualize gene expression patterns across regions in our data set, and find a set of genes that show different spatial patterns.

35 Discrimination

36 Discrimination
A predictor or classifier for K tumor classes partitions the space X of gene expression profiles into K disjoint subsets, A_1, ..., A_K, such that for a sample with expression profile x = (x_1, ..., x_p) ∈ A_k the predicted class is k. Predictors are built from past experience, i.e., from observations which are known to belong to certain classes. Such observations comprise the learning set L = (x_1, y_1), ..., (x_n, y_n). A classifier built from a learning set L is denoted by C(·, L): X → {1, 2, ..., K}, with the predicted class for observation x being C(x, L).

37 Fisher linear discriminant analysis
First applied in 1935 by M. Barnard at the suggestion of R. A. Fisher (1936), Fisher linear discriminant analysis (FLDA) consists of:
i. finding linear combinations xa of the gene expression profile x = (x_1, ..., x_p) with large ratios of between-groups to within-groups sums of squares – the discriminant variables;
ii. predicting the class of an observation x by the class whose mean vector is closest to x in terms of the discriminant variables.
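
For illustration, scikit-learn's LinearDiscriminantAnalysis can play the role of FLDA here: it produces discriminant variables via transform and classifies by proximity to class means in that space. The data below are made up, and sklearn's formulation (a Gaussian model with a common covariance) is closely related to, but not literally, Fisher's between/within sums-of-squares derivation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical learning set: 60 samples x 30 genes, 3 tumor classes.
rng = np.random.default_rng(6)
X = rng.normal(size=(60, 30))
y = rng.integers(0, 3, size=60)

# Fit, project onto the discriminant variables, and predict by closest class mean.
flda = LinearDiscriminantAnalysis().fit(X, y)
scores = flda.transform(X)      # at most K - 1 = 2 discriminant variables
pred = flda.predict(X[:5])
print(scores.shape, pred)
```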

38 FLDA [Figure.]

39 Maximum likelihood discriminant rules
When the class conditional densities pr(x|y=k) are known, the maximum likelihood (ML) discriminant rule predicts the class of an observation x by C(x) = argmax_k pr(x|y=k). For multivariate normal class densities, i.e., for x|y=k ~ N(μ_k, Σ_k), this is (in general) a quadratic rule. In practice, population mean vectors and covariance matrices are estimated by the corresponding sample quantities.

40 ML discriminant rules - special cases
1. Linear discriminant analysis, LDA. When the class densities have the same covariance matrix, Σ_k = Σ, the discriminant rule is based on the square of the Mahalanobis distance and is linear: C(x) = argmin_k (x − μ_k)' Σ⁻¹ (x − μ_k).
2. Diagonal linear discriminant analysis, DLDA. In this simplest case, the class densities have the same diagonal covariance matrix Σ = diag(σ_1², …, σ_p²), and the rule reduces to C(x) = argmin_k ∑_j (x_j − μ_kj)² / σ_j².
Note: the weighted gene voting of Golub et al. (1999) is a minor variant of DLDA for two classes (wrong variance calculation).
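
DLDA is simple enough to write out directly; the following minimal sketch implements the diagonal rule above with pooled per-gene variances (a toy data set of my own, not from the lecture).

```python
import numpy as np

# Minimal DLDA sketch: same diagonal covariance for all classes, so
# C(x) = argmin_k sum_j (x_j - mu_kj)^2 / sigma_j^2.
def dlda_fit(X, y):
    classes = np.unique(y)
    means = np.array([X[y == k].mean(axis=0) for k in classes])
    # pooled within-class per-gene variances (degrees of freedom n - K)
    resid = np.concatenate([X[y == k] - X[y == k].mean(axis=0) for k in classes])
    var = resid.var(axis=0, ddof=len(classes))
    return classes, means, var

def dlda_predict(x, classes, means, var):
    d2 = ((x - means) ** 2 / var).sum(axis=1)   # one distance per class
    return classes[np.argmin(d2)]

# Tiny made-up example: 6 samples, 4 genes, 2 classes.
X = np.array([[1.0, 0.0, 2.0, 1.0], [1.2, 0.1, 1.8, 0.9], [0.8, -0.1, 2.1, 1.1],
              [-1.0, 1.0, 0.0, -1.0], [-0.9, 1.2, 0.2, -1.1], [-1.1, 0.8, -0.1, -0.9]])
y = np.array([0, 0, 0, 1, 1, 1])
classes, means, var = dlda_fit(X, y)
print(dlda_predict(np.array([1.0, 0.0, 2.0, 1.0]), classes, means, var))  # -> 0
```

Despite its simplicity, DLDA is among the best performers in the comparisons reported later in these slides.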

41 Nearest neighbor rule
Nearest neighbor methods are based on a measure of distance between observations, such as the Euclidean distance or one minus the correlation between two gene expression profiles. The k-nearest neighbor rule, due to Fix and Hodges (1951), classifies an observation x as follows:
i. find the k observations in the learning set that are closest to x;
ii. predict the class of x by majority vote, i.e., choose the class that is most common among those k observations.
The number of neighbors k can be chosen by cross-validation.
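
A scikit-learn sketch of the k-NN rule with k chosen by cross-validation, on made-up data; Euclidean distance is used here, though one minus correlation (as mentioned above) is an equally valid choice.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(7)
X = rng.normal(size=(72, 40))      # hypothetical 72 samples x 40 genes
y = rng.integers(0, 3, size=72)    # 3 classes (labels made up)

# Choose the number of neighbors k by cross-validation, as suggested above.
grid = GridSearchCV(KNeighborsClassifier(metric="euclidean"),
                    {"n_neighbors": [1, 3, 5, 7, 9]}, cv=5)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 2))
```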

42 Nearest neighbor rule [Figure.]

43 Classification trees
Binary tree structured classifiers are constructed by repeated splits of subsets (nodes) of the measurement space X into two descendant subsets, starting with X itself. Each terminal subset is assigned a class label and the resulting partition of X corresponds to the classifier. Three main aspects of tree construction are:
i. the selection of the splits;
ii. the decision to declare a node terminal or to continue splitting;
iii. the assignment of each terminal node to a class.
Different tree classifiers use different approaches to these three issues. Here, we use CART: Classification And Regression Trees, Breiman et al. (1984).
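
scikit-learn's DecisionTreeClassifier is a CART-style tree and serves for a quick illustration; the data are made up, and max_depth=3 is an arbitrary stopping rule standing in for CART's pruning.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(8)
X = rng.normal(size=(72, 40))      # hypothetical samples x genes
y = rng.integers(0, 3, size=72)

# CART-style binary splits chosen greedily; stopping controlled by max_depth.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=[f"gene_{j}" for j in range(40)]))
```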

44 Classification trees [Figure.]

45 Aggregating predictors
Breiman (1996, 1998) found that gains in accuracy could be obtained by aggregating predictors built from perturbed versions of the learning set. In classification, the multiple versions of the predictor are aggregated by voting. Let C(·, L_b) denote the classifier built from the b-th perturbed learning set L_b, and let w_b denote the weight given to predictions made by this classifier. The predicted class for an observation x is given by argmax_k ∑_b w_b I(C(x, L_b) = k).

46 Aggregating predictors
1. Bagging: bootstrap samples of the same size as the original learning set.
– non-parametric bootstrap, Breiman (1996);
– convex pseudo-data, Breiman (1998).
2. Boosting: Freund and Schapire (1997), Breiman (1998). The data are resampled adaptively so that the weights in the resampling are increased for those cases most often misclassified. The aggregation of predictors is done by weighted voting.
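
Both schemes have off-the-shelf counterparts in scikit-learn, shown in the sketch below on made-up data: BaggingClassifier for the non-parametric bootstrap (convex pseudo-data has no stock implementation) and AdaBoostClassifier as a stand-in for boosting. The estimator= parameter name assumes a recent scikit-learn release (older versions call it base_estimator).

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(9)
X = rng.normal(size=(72, 40))      # hypothetical samples x genes
y = rng.integers(0, 2, size=72)

base = DecisionTreeClassifier(max_depth=3, random_state=0)

# 1. Bagging: bootstrap learning sets, aggregation by (unweighted) voting.
bag = BaggingClassifier(estimator=base, n_estimators=50, random_state=0).fit(X, y)

# 2. Boosting: adaptive resampling/reweighting plus weighted voting
#    (AdaBoost here, in the spirit of Freund and Schapire 1997).
boost = AdaBoostClassifier(estimator=base, n_estimators=50, random_state=0).fit(X, y)

print(bag.predict(X[:3]), boost.predict(X[:3]))
```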

47 Prediction votes
For aggregated classifiers, prediction votes assessing the strength of a prediction may be defined for each observation. The prediction vote (PV) for an observation x is defined to be PV(x) = max_k ∑_b w_b I(C(x, L_b) = k) / ∑_b w_b. When the perturbed learning sets are given equal weights, i.e., w_b = 1, the prediction vote is simply the proportion of votes for the “winning” class, regardless of whether it is correct or not. Prediction votes belong to [0, 1].
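
With equal weights, the prediction vote is just the fraction of perturbed-set classifiers voting for the winning class; here is a self-contained sketch (made-up data, bagged trees as the aggregated classifier).

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(10)
X = rng.normal(size=(72, 40))      # made-up samples x genes
y = rng.integers(0, 2, size=72)    # two made-up classes

bag = BaggingClassifier(estimator=DecisionTreeClassifier(max_depth=3),
                        n_estimators=50, random_state=0).fit(X, y)

# Collect the hard votes of the 50 bootstrap classifiers; PV(x) is then the
# proportion of votes for the winning class (w_b = 1 for all b).
votes = np.stack([est.predict(X[:, feats])
                  for est, feats in zip(bag.estimators_, bag.estimators_features_)])
pv = np.array([np.bincount(votes[:, i].astype(int)).max() / votes.shape[0]
               for i in range(X.shape[0])])
print(np.round(pv[:10], 2))        # prediction votes for the first 10 observations
```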

48 Other prediction methods
Support vector machines (SVMs); neural networks; Bayesian regression methods.

49 Datasets
Leukemia data, Golub et al. (1999): n = 72 tumor mRNA samples; 3 known classes (B-cell ALL, T-cell ALL, AML); p = 6,817 genes.
Lymphoma data, Alizadeh et al. (2000): n = 81 tumor mRNA samples; 3 known classes (CLL, FL, DLBCL); p = 4,682 genes.
NCI 60 data, Ross et al. (2000): n = 64 cell line mRNA samples; 9 known classes; p = 5,244 genes.

50 Data pre-processing
Image analysis and normalization: beyond our control for these data.
Imputation: k-nearest neighbor imputation, where genes are “neighbors” and the similarity measure between two genes is the correlation in their expression profiles.
Standardization: standardize observations (arrays) to have mean 0 and variance 1 across variables (genes).
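
A sketch of the imputation and standardization steps with scikit-learn on made-up data. One caveat: KNNImputer measures distance between rows with a nan-aware Euclidean metric, so imputing on the transposed matrix makes genes the neighbours as on the slide, but the correlation-based similarity is not reproduced exactly.

```python
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(11)
X = rng.normal(size=(72, 200))              # samples x genes, made up
X[rng.random(X.shape) < 0.02] = np.nan      # sprinkle in missing values

# k-nearest-neighbour imputation with genes as the "neighbours" (rows of X.T).
X_imp = KNNImputer(n_neighbors=5).fit_transform(X.T).T

# Standardize each array (row) to mean 0 and variance 1 across genes.
X_std = (X_imp - X_imp.mean(axis=1, keepdims=True)) / X_imp.std(axis=1, keepdims=True)
print(np.round(X_std.mean(axis=1)[:3], 6), np.round(X_std.std(axis=1)[:3], 6))
```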

51 Comparison of predictors: study design
The original datasets are repeatedly randomly divided into a learning set and a test set, comprising respectively 2/3 and 1/3 of the data. For each of N = 150 runs:
– Select a subset of p genes from the learning set based on their ratio of between-group to within-group sums of squares, BSS/WSS (p = 50 for lymphoma, p = 40 for leukemia, p = 30 for NCI 60).
– Build the different predictors using the learning sets with p genes.
– Apply the predictors to the observations in the test set to obtain test set error rates.
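
The BSS/WSS gene-selection step is easy to code directly; here is a minimal version on made-up data (the function name and all sizes are mine).

```python
import numpy as np

def bss_wss(X, y):
    """Ratio of between-group to within-group sums of squares for each gene.
    X: samples x genes, y: class labels."""
    overall = X.mean(axis=0)
    bss = np.zeros(X.shape[1])
    wss = np.zeros(X.shape[1])
    for k in np.unique(y):
        Xk = X[y == k]
        bss += len(Xk) * (Xk.mean(axis=0) - overall) ** 2
        wss += ((Xk - Xk.mean(axis=0)) ** 2).sum(axis=0)
    return bss / wss

# Hypothetical learning set: 48 samples x 500 genes, 3 classes (made up).
rng = np.random.default_rng(12)
X = rng.normal(size=(48, 500))
y = rng.integers(0, 3, size=48)

ratios = bss_wss(X, y)
top = np.argsort(ratios)[::-1][:40]   # keep the p = 40 top-ranked genes
X_selected = X[:, top]
print(X_selected.shape)
```

In the actual study this selection is redone inside each of the 150 learning/test splits, using the learning set only.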

52 Lymphoma data, 3 classes: test set error rates; N = 150 LS/TS runs. [Figure.]

53 Leukemia data, 2 classes: test set error rates; 150 LS/TS runs. [Figure.]

54 Leukemia data, 3 classes: test set error rates; 150 LS/TS runs. [Figure.]

55 NCI 60 data: test set error rates; 150 LS/TS runs. [Figure.]

56 Leukemia data: image of the correlation matrix for the 72 mRNA samples, based on the top 40 genes for the 3-class BSS/WSS. [Figure.]

57 Prediction votes. Leukemia data, 2 classes. Top panels: % correct predictions; bottom panels: prediction votes/strengths. [Figure.]

58 Results
In the main comparison, the nearest neighbor classifier and DLDA had the smallest error rates, while FLDA had the highest error rates. Aggregation improved the performance of CART classifiers, the largest gains being with boosting and bagging with convex pseudo-data. For the lymphoma and leukemia datasets, increasing the number of variables to p = 200 didn’t greatly affect the performance of the various classifiers; there was an improvement for the NCI 60 dataset. A more careful selection of a small number of genes (p = 10) improved the performance of FLDA dramatically.

59 Discussion
“Diagonal” LDA: ignoring correlation between genes helped here. Unlike classification trees and nearest neighbors, LDA is unable to take gene interactions into account.
Although nearest neighbors are simple and intuitive classifiers, their main limitation is that they give very little insight into the mechanisms underlying the class distinctions.
Classification trees are capable of handling and revealing interactions between variables.
A useful by-product of aggregated classifiers: prediction votes.
Variable selection: a crude criterion such as BSS/WSS may not identify the genes that discriminate between all the classes and may not reveal interactions between genes.

60 Closing remarks on discrimination
This is a very active area of research now. Much larger data sets are becoming available. SVMs and other kernel methods are becoming the “methods of choice”, outperforming the ones we have discussed here. The first reference in this area is: M. P. S. Brown, W. N. Grundy, D. Lin, N. Cristianini, C. Sugnet, M. Ares, D. Haussler (2000). Knowledge-based analysis of microarray gene expression data by using support vector machines. PNAS 97: 262–267.

61 Acknowledgments
The clustering section is based on lecture 3 of the Temple short course. The olfactory bulb data were provided by Dave Lin. The discriminant analysis section is based on S. Dudoit and J. Fridlyand (Bioconductor short course, lecture 5), http://www.bioconductor.org

