
SocalBSI 2008: Clustering Microarray Datasets. Sagar Damle, Ph.D. Candidate, Caltech.


1 SocalBSI 2008: Clustering Microarray Datasets Sagar Damle, Ph.D. Candidate, Caltech  Distance Metrics: measuring similarity using the Euclidean and correlation distance metrics  Principal Components Analysis: reducing the dimensionality of microarray data  Clustering Algorithms:  K-means  Self-Organizing Maps (SOM)  Hierarchical Clustering

2 MATRIX(genes, conditions) = expression dataset. Rows are genes; columns are conditions (timepoints or tissues). The matrix entries run from x11 in the upper left to xmn in the lower right. The first gene vector (first row) = (x11, x12, x13, x14, …, x1n); the leftmost condition vector (first column) = (x11, x21, x31, …, xm1).
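The gene-by-condition layout above can be sketched in numpy; the expression values here are hypothetical, chosen only to illustrate row and column slicing:

```python
import numpy as np

# Hypothetical expression matrix: rows = genes, columns = conditions
X = np.array([
    [2.1, 0.5, 1.3, 0.8],   # gene 1
    [0.2, 1.9, 0.4, 2.2],   # gene 2
    [2.0, 0.6, 1.1, 0.9],   # gene 3
])

gene_vector = X[0, :]        # first gene across all conditions (a row)
condition_vector = X[:, 0]   # leftmost condition across all genes (a column)

print(gene_vector)       # [2.1 0.5 1.3 0.8]
print(condition_vector)  # [2.1 0.2 2. ]
```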

3 Similarity measures  Clustering identifies groups of genes with “similar” expression profiles  How is similarity measured?  Euclidean distance  Correlation coefficient  Others: Manhattan, Chebyshev, squared Euclidean

4 In an experiment with 10 conditions, the gene expression profiles for two genes X and Y have the form X = (x1, x2, x3, …, x10) and Y = (y1, y2, y3, …, y10)

5 d(Ga, Gb) = sqrt( (x 1 -y 1 ) 2 + (x 2 -y 2 ) 2 ) Similarity measure - Euclidian distance In general: if there are M experiments: X = (x 1, x 2, x 3, …, x m ) Y = (y 1, y 2, y 3, …, y m ) Gb: (x 1, x 2 ) Ga: (y 1, y 2 )

6 Similarity measure – Pearson correlation coefficient. For X = (x1, x2, x3, …, xm) and Y = (y1, y2, y3, …, ym), define the distance D = 1 - r, where r is the Pearson correlation coefficient: r = (1/m) Z(X)·Z(Y), the normalized dot product of the z-score vectors of X and Y. Equivalently, since Z(X)·Z(Y) = |Z(X)| |Z(Y)| cos(θ), r is the cosine of the angle between the z-score vectors. When two profiles are perfectly correlated, r = 1 and D = 0; when they are uncorrelated, r = 0 and D = 1. Dot product review: http://mathworld.wolfram.com/DotProduct.html
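The correlation distance D = 1 - r can be sketched as the mean of the products of z-scores (using the population standard deviation); the input profiles are hypothetical:

```python
import numpy as np

def correlation_distance(x, y):
    """D = 1 - r, with r the Pearson correlation: the mean of the
    products of the z-scores of the two profiles."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    zx = (x - x.mean()) / x.std()
    zy = (y - y.mean()) / y.std()
    r = float(np.mean(zx * zy))  # equals cos(theta) between z-score vectors
    return 1.0 - r

# Same trend, different magnitude: perfectly correlated, so D is ~0,
# even though the Euclidean distance between these profiles is large.
print(correlation_distance([1, 2, 3], [10, 20, 30]))
```

This makes concrete the point of the next slide: correlation distance sees only the trend, not the amplitude.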

7 Euclidean vs Pearson correlation  Euclidean distance takes into account the magnitude of expression  Correlation distance is insensitive to the amplitude of expression and takes into account the trend of the change  Common trends are considered biologically relevant; the magnitude is considered less important

8 Figure: what correlation distance sees vs what Euclidean distance sees

9 Principal Components Analysis (PCA)  A method for projecting microarray data onto a reduced (2- or 3-dimensional), easily visualized space. Definition: principal components are a set of variables that define a projection capturing the maximum amount of variation in a dataset, with each component orthogonal (and therefore uncorrelated) to the previous principal components of the same dataset.  Example dataset: thousands of genes probed in 10 conditions.  The expression profile of each gene is represented by the vector of its expression levels: X = (X1, X2, …, X10)  Imagine each gene X as a point in a 10-dimensional space  Each direction/axis corresponds to a specific condition  Genes with similar profiles are close to each other in this space  PCA: project this dataset to 2 dimensions, preserving as much information as possible
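A minimal sketch of this projection, computed via the SVD route mentioned a few slides below (the data here is random and purely illustrative):

```python
import numpy as np

def pca_project(X, n_components=2):
    """Project the rows of X (genes x conditions) onto the top principal
    components, computed from the SVD of the mean-centered matrix."""
    Xc = X - X.mean(axis=0)          # center each condition (column)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T  # gene coordinates in PC space

# Hypothetical toy dataset: 5 genes x 4 conditions
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))
Y = pca_project(X, 2)
print(Y.shape)  # (5, 2): each gene is now a point in the 2-D PC plane
```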


11 PCA transformation of a microarray dataset: visual estimation of the number of clusters in the data

12 A one-page tutorial on singular value decomposition (the computation underlying PCA): http://web.mit.edu/be.400/www/SVD/Singular_Value_Decomposition.htm

13 Cluster analysis  Function  Places genes with similar expression patterns in groups.  Sometimes genes of unknown function will be grouped with genes of known function.  The functions that are known allow the investigator to hypothesize regarding the functions of genes not yet characterized.  Examples:  Identify genes important in cell cycle regulation  Identify genes that participate in a biosynthetic pathway  Identify genes involved in a drug response  Identify genes involved in a disease response

14 Clustering the yeast cell cycle dataset vs gene tree ordering

15 How to choose the number of clusters needed to informatively partition the data? Trial and error: try clustering with different numbers of clusters and compare the results  Criteria for comparison: homogeneity vs separation  Use PCA (Principal Components Analysis) to visually determine how well the algorithm grouped genes  Calculate the mean distance between all genes within a cluster (it should be small) and compare it to the distance between clusters (which should be large)

16 Mathematical evaluation of a clustering solution. Merits of a ‘good’ clustering solution:  Homogeneity:  Genes inside a cluster are highly similar to each other.  Measured as the average similarity between a gene and the center (average profile) of its cluster.  Separation:  Genes from different clusters have low similarity to each other.  Measured as the weighted average similarity between cluster centers.  These are conflicting goals: increasing the number of clusters tends to improve within-cluster homogeneity at the expense of between-cluster separation
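The two scores can be sketched with Euclidean distances; note this is a simplified, unweighted variant of the slide's measures (the slide's separation is a weighted average, and similarity may be correlation-based rather than distance-based):

```python
import numpy as np

def homogeneity(X, labels):
    """Mean distance between each gene and the center (average profile)
    of its cluster; smaller is better."""
    d = []
    for k in np.unique(labels):
        members = X[labels == k]
        center = members.mean(axis=0)
        d.extend(np.linalg.norm(members - center, axis=1))
    return float(np.mean(d))

def separation(X, labels):
    """Mean pairwise distance between cluster centers; larger is better."""
    centers = np.array([X[labels == k].mean(axis=0) for k in np.unique(labels)])
    dists = [np.linalg.norm(a - b)
             for i, a in enumerate(centers) for b in centers[i + 1:]]
    return float(np.mean(dists))

# Two hypothetical tight, well-separated clusters:
X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
labels = np.array([0, 0, 1, 1])
print(homogeneity(X, labels))  # 0.5 (tight clusters)
print(separation(X, labels))   # sqrt(200) ~ 14.14 (far-apart centers)
```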

17 Performance on yeast cell cycle data: homogeneity and separation of CAST*, GeneCluster, K-means, and CLICK, compared against the “true” partition. 698 genes, 72 conditions (Spellman et al. 1998). Each algorithm was run by its authors in a “blind” test. *Ben-Dor, Shamir, Yakhini 1999

18 Clustering Algorithms  K-means  SOMs  Hierarchical clustering

19 K-MEANS 1. The user sets the number of clusters, k 2. Initialization: each gene is randomly assigned to one of the k clusters 3. The average expression vector is calculated for each cluster (the cluster’s profile) 4. Iterate over the genes: for each gene, compute its similarity to the cluster profiles, move the gene to the cluster it is most similar to, and recalculate the cluster profiles 5. Score the current partition: the sum of distances between genes and the profile of the cluster they are assigned to (the homogeneity of the solution) 6. Stopping criterion: further shuffling of genes results in only minor improvement in the clustering score
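The steps above can be sketched in numpy; the empty-cluster re-seeding rule and the data values are my own illustrative assumptions:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain k-means following the slide's recipe: random initial
    assignment, then alternate between computing cluster profiles
    (means) and moving each gene to its most similar profile, stopping
    when the assignment no longer changes."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=len(X))
    for _ in range(n_iter):
        # cluster profile = average expression vector of its members
        # (an empty cluster is re-seeded with a random gene: an assumption)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else X[rng.integers(len(X))] for j in range(k)])
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break  # further shuffling gives no improvement
        labels = new_labels
    score = float(np.sum(dists.min(axis=1)))  # homogeneity of the solution
    return labels, centers, score

# Hypothetical 2-condition profiles forming two obvious groups:
X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
labels, centers, score = kmeans(X, 2)
print(labels)  # the two nearby pairs end up in the same cluster
```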



22 K-means example: 4 clusters (too many?). Figure: mean profile of each cluster, with the standard deviation in each condition

23 Evaluating K-means. Figure: clusters 1-4, with misclassified genes marked

24 K-means example: 3 clusters (looks right)

25 K-means clustering: K=2 (too few)

26 SOMs (Self-Organizing Maps): less clustering, more data organizing  The user sets the number of clusters in the form of a rectangular grid (e.g., 3x2) of ‘map nodes’  Imagine genes as points in (M-dimensional) space  Initialization: map nodes are randomly placed in the data space

27 Figure legend: genes = data points; clusters = map nodes

28 SOM - Scheme. Randomly choose a data point (gene). Find its closest map node. Move this map node toward the data point. Move the neighboring map nodes toward the point as well, but to a lesser extent (thinner arrows show weaker shifts). Iterate over the data points.

29 Each successive gene profile (black dot) has less influence on the displacement of the nodes. Iterate through all profiles several times (10-100). When the positions of the map nodes have stabilized, assign each gene to its closest map node (cluster).
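The SOM procedure from the last two slides can be sketched as follows; the grid size, decay schedules, and data are all illustrative assumptions, not values from the slides:

```python
import numpy as np

def train_som(data, grid=(3, 2), n_epochs=50, lr0=0.5, sigma0=1.0, seed=0):
    """Minimal SOM sketch. Map nodes sit on a small rectangular grid;
    each gene profile pulls its closest node (and, more weakly, that
    node's grid neighbours) toward itself, while the learning rate and
    neighbourhood radius shrink over time."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    nodes = rng.normal(size=(rows * cols, data.shape[1]))  # random placement
    for epoch in range(n_epochs):
        frac = epoch / n_epochs
        lr = lr0 * (1.0 - frac)               # influence decays over passes
        sigma = sigma0 * (1.0 - frac) + 1e-3  # neighbourhood shrinks
        for x in data[rng.permutation(len(data))]:
            winner = np.argmin(np.linalg.norm(nodes - x, axis=1))
            grid_dist = np.linalg.norm(coords - coords[winner], axis=1)
            h = np.exp(-grid_dist ** 2 / (2 * sigma ** 2))  # weaker shift
            nodes += lr * h[:, None] * (x - nodes)          # for neighbours
    # once node positions stabilize, each gene joins its closest map node
    labels = np.argmin(np.linalg.norm(data[:, None] - nodes[None], axis=2),
                       axis=1)
    return nodes, labels

# Two hypothetical well-separated groups of profiles, mapped to 2 nodes:
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(loc=m, scale=0.1, size=(10, 4)) for m in (0.0, 5.0)])
nodes, labels = train_som(data, grid=(2, 1))
print(nodes.shape)  # (2, 4): one trained node profile per grid cell
```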



32 Hierarchical Clustering  Goal #1: organize the genes into a hierarchical tree structure  1) Initial step: each gene is regarded as a cluster with one item  2) Find the 2 most similar clusters and merge them into a common node (red dot)  3) Merge successive nodes until all genes are contained in a single cluster (e.g., for genes g1-g5: {1,2}, then {4,5}, then {1,2,3}, then {1,2,3,4,5})  Goal #2: collapse branches to group genes into distinct clusters
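The bottom-up merging can be sketched directly; average linkage is my assumption here, since the slide does not name a linkage rule, and the data is hypothetical:

```python
import numpy as np

def hierarchical_cluster(X):
    """Agglomerative clustering sketch (average linkage). Start with one
    cluster per gene, repeatedly merge the two closest clusters into a
    new node, and stop when a single cluster remains."""
    clusters = {i: [i] for i in range(len(X))}  # cluster id -> member genes
    merges = []                                 # records (id_a, id_b, distance)
    next_id = len(X)                            # fresh ids for merged nodes
    while len(clusters) > 1:
        ids = list(clusters)
        best = None
        for i in range(len(ids)):
            for j in range(i + 1, len(ids)):
                a, b = ids[i], ids[j]
                # average pairwise distance between the two clusters
                d = float(np.mean([np.linalg.norm(X[p] - X[q])
                                   for p in clusters[a] for q in clusters[b]]))
                if best is None or d < best[0]:
                    best = (d, a, b)
        d, a, b = best
        clusters[next_id] = clusters.pop(a) + clusters.pop(b)
        merges.append((a, b, d))
        next_id += 1
    return merges

# Hypothetical 1-D expression levels for genes g0..g4:
X = np.array([[0.0], [0.1], [5.0], [5.2], [0.05]])
merges = hierarchical_cluster(X)
print(merges[0])  # the two most similar genes are merged first
```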

33 Which genes to cluster?  Apply filtering prior to clustering to focus the analysis on the ‘responding genes’  Applying controlled statistical tests to identify ‘responding genes’ usually yields too few genes, which does not allow a global characterization of the response  Variance: filter out genes that do not vary greatly among the conditions of the experiment  Non-varying genes skew clustering results, especially when using a correlation coefficient  Fold change: choose genes that change by at least M-fold in at least L conditions
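Both filters can be sketched as boolean masks over the expression matrix; the thresholds, the fold-change-relative-to-mean convention, and the data are illustrative assumptions:

```python
import numpy as np

def filter_genes(X, min_var=None, fold=2.0, min_conds=1):
    """Sketch of the two pre-clustering filters from the slide:
    - variance filter: drop genes whose variance across conditions is low
    - fold-change filter: keep genes that change at least `fold`-fold
      relative to their own mean in at least `min_conds` conditions.
    X is assumed to be on a linear (not log) scale."""
    keep = np.ones(len(X), dtype=bool)
    if min_var is not None:
        keep &= X.var(axis=1) >= min_var
    ratio = X / X.mean(axis=1, keepdims=True)
    changed = (ratio >= fold) | (ratio <= 1.0 / fold)
    keep &= changed.sum(axis=1) >= min_conds
    return keep

# Hypothetical genes: one flat, one with clear 2-fold-plus changes
X = np.array([
    [1.0, 1.1, 0.9, 1.0],   # nearly constant: filtered out
    [1.0, 4.0, 1.0, 4.0],   # strong changes: kept
])
print(filter_genes(X))  # -> [False  True]
```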

34 Clustering – Tools  Cluster (Eisen) – hierarchical clustering  http://rana.lbl.gov/EisenSoftware.htm  GeneCluster (Tamayo) – SOM  http://bioinfo.cnio.es/wwwsomtree/  TIGR MeV – K-means, SOM, hierarchical, QTC, CAST  http://www.tm4.org/mev.html  Expander – CLICK, SOM, K-means, hierarchical  http://www.cs.tau.ac.il/~rshamir/expander/expander.html  Many others (e.g., GeneSpring)  http://www.agilent.com/chem/genespring

35 Analysis strategy: 1) Transform the dataset using PCA 2) Cluster, testing these parameters: distance metric, number of clusters, separation and homogeneity 3) Assign biological meaning to the clusters

36 Original presentation created by Rani Elkon and posted at: http://www.tau.ac.il/lifesci/bioinfo/teaching/2002-2003/DNA_microarray_winter_2003.html

