Presentation transcript: "Principal Component Analysis & Clustering" (Prof. Rui Alves, Dept Ciencies Mediques Basiques)

1 http://creativecommons.org/licenses/by-sa/2.0/

2 Principal Component Analysis & Clustering. Prof: Rui Alves, ralves@cmb.udl.es, 973702406. Dept Ciencies Mediques Basiques, 1st Floor, Room 1.08. Website of the Course: http://web.udl.es/usuaris/pg193845/Courses/Bioinformatics_2007/ Course: http://10.100.14.36/Student_Server/

3 Complex Datasets. When studying complex biological samples there are sometimes too many variables. For example, when studying Medaka development using phospho-metabolomics you may have measurements of many different amino acids and other metabolites. Question: Can we find markers of development using these metabolites? Question: How do we analyze the data?

4 Problems. How do you visually represent the data? –The sample has many dimensions, so plots are not a good solution. How do you make sense of, or extract information from, the data? –With so many variables, how do you know which ones are important for identifying signatures?

5 Two possible ways (out of many) to address the problems PCA Clustering

6 Solution 1: Try a data reduction method. If we can combine the different columns in specific ways, then maybe we can find a way to reduce the number of variables that we need to represent and analyze: –Principal Component Analysis

7 Variation in data is what identifies signatures

               Metabolite 1   Metabolite 2   Metabolite 3   …
Condition C1       0.01           3              0.1
Condition C2       0.02           0.01           5
Condition C3       0.015          0.8            1.3

8 Variation in data is what identifies signatures. Virtual metabolite: Metabolite 2 + 1/Metabolite 3. The signal is much stronger and separates conditions C1, C2, and C3. [Figure: the virtual metabolite plotted on a single axis from 0 to 20, with the conditions ordered C2 < C3 < C1.]
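A tiny sketch of this combined variable, using the values as reconstructed in the table above (so treat the exact numbers as assumptions on my part, not slide content):

```python
# Reconstructed metabolite values per condition (see table on slide 7).
metabolite2 = {"C1": 3.0, "C2": 0.01, "C3": 0.8}
metabolite3 = {"C1": 0.1, "C2": 5.0, "C3": 1.3}

# Virtual metabolite = Metabolite 2 + 1 / Metabolite 3
for condition in ("C1", "C2", "C3"):
    signal = metabolite2[condition] + 1.0 / metabolite3[condition]
    print(f"{condition}: {signal:.2f}")
# C1: 13.00, C2: 0.21, C3: 1.57 -- the three conditions now separate
# cleanly along a single combined variable, matching the 0-20 axis.
```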

9 Principal component analysis. From k "old" variables define k "new" variables that are linear combinations of the old variables:

y_1 = a_11 x_1 + a_12 x_2 + ... + a_1k x_k
y_2 = a_21 x_1 + a_22 x_2 + ... + a_2k x_k
...
y_k = a_k1 x_1 + a_k2 x_2 + ... + a_kk x_k

(The y_i are the new variables, the x_j the old ones.)

10 Defining the New Variables y. The y_k's are uncorrelated (orthogonal). y_1 explains as much as possible of the original variance in the data set; y_2 explains as much as possible of the remaining variance; etc.
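A minimal numpy sketch of these two properties, using a random 35 x 20 matrix as a stand-in for the metabolite data (the shapes and names are illustrative, not from the slides):

```python
import numpy as np

# Illustrative stand-in for the data: 35 samples x 20 variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(35, 20))
Xc = X - X.mean(axis=0)                 # center each variable

# Eigendecomposition of the covariance matrix gives the loadings a_ij.
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(eigvals)[::-1]       # sort PCs by variance explained
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

Y = Xc @ eigvecs                        # the new variables y_1 ... y_k

# cov(Y) is (numerically) diagonal: the y_k are uncorrelated, and the
# diagonal variances decrease, so y_1 explains the most variance.
print(np.round(np.cov(Y, rowvar=False), 3))
```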

11 Principal Components Analysis on: Covariance Matrix: –Variables must be in same units –Emphasizes variables with most variance –Mean eigenvalue ≠1.0 Correlation Matrix: –Variables are standardized (mean 0.0, SD 1.0) –Variables can be in different units –All variables have same impact on analysis –Mean eigenvalue = 1.0
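A hedged comparison of the two choices on synthetic data; the mean-eigenvalue check at the end is the standard sanity test mentioned on the slide:

```python
import numpy as np

rng = np.random.default_rng(1)
# Give the 20 variables wildly different scales on purpose.
X = rng.normal(size=(35, 20)) * rng.uniform(0.1, 10.0, size=20)

# Covariance-matrix PCA: the high-variance variables dominate.
cov_eigvals = np.linalg.eigvalsh(np.cov(X, rowvar=False))

# Correlation-matrix PCA: same as standardizing every variable to
# mean 0.0, SD 1.0, so all variables get equal weight.
corr_eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))

print(cov_eigvals.mean())    # generally not 1.0
print(corr_eigvals.mean())   # exactly 1.0: trace(correlation matrix) = k
```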

12 Covariance Matrix. Covariance is a measure of how much two random variables vary together.

        X1      X2      X3    …
X1      σ1²     0.03    0.05  …
X2      0.03    σ2²     3     …
X3      0.05    3       σ3²   …
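For reference (the standard definition, not spelled out on the slide), each entry of this matrix is

```latex
\operatorname{cov}(X_i, X_j) = E\bigl[(X_i - E[X_i])\,(X_j - E[X_j])\bigr],
\qquad \operatorname{cov}(X_i, X_i) = \sigma_i^2 .
```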

13 Covariance Matrix. Diagonalize the matrix:

        X1      X2      X3    …
X1      σ1²     0.03    0.05  …
X2      0.03    σ2²     3     …
X3      0.05    3       σ3²   …
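A small sketch of the diagonalization step. The σ_i² values below are arbitrary placeholders chosen so that C is a valid covariance matrix; only the off-diagonal covariances come from the slide:

```python
import numpy as np

# Placeholder variances on the diagonal; slide's covariances elsewhere.
C = np.array([[1.00, 0.03, 0.05],
              [0.03, 4.00, 3.00],
              [0.05, 3.00, 9.00]])

eigvals, eigvecs = np.linalg.eigh(C)    # works because C is symmetric
D = eigvecs.T @ C @ eigvecs             # change of basis diagonalizes C
print(np.round(D, 6))                   # eigenvalues on the diagonal
```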

14 Eigenvalues and the principal components. Each eigenvalue tells us how much its principal component contributes to the total variation in the data.

15 Principal Components are Eigenvectors; their eigenvalues λ1 and λ2 measure the variance along them. [Figure: data scatter with the 1st Principal Component, y_1, and the 2nd Principal Component, y_2, drawn as orthogonal axes through the cloud.]

16 Now we have reduced the problem to two variables. [Figure: Days 1 through 8 plotted in the plane of the first two principal components.]

17 What if things are still a mess? Days 3, 4, 5 and 6 do not separate very well. What could we do to try and improve this? Maybe add an extra PC axis to the plot!
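One common way to judge whether an extra PC axis is worth adding (my assumption; the slides do not prescribe a rule) is the cumulative fraction of variance explained:

```python
import numpy as np

# Synthetic stand-in for the data matrix (35 samples x 20 variables).
rng = np.random.default_rng(2)
X = rng.normal(size=(35, 20))
Xc = X - X.mean(axis=0)

eigvals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]  # descending
cumulative = np.cumsum(eigvals) / eigvals.sum()
print(np.round(cumulative[:3], 2))  # variance captured by 1, 2, 3 PCs
```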

18 Days separate well with three variables. [Figure: Days 1 through 8 plotted against the first three principal components; the days now separate clearly.]

19 Two possible ways to address the problems PCA Clustering

20 Complex Datasets

21 Solution 2: Try using all the data and representing it in a low-dimensional figure. If we can cluster the different days according to some distance function computed over all amino acids, we can represent the data in an intuitive way.

22 What is data clustering? Clustering is the classification of objects into different groups or, more precisely, the partitioning of a data set into subsets (clusters) so that the data in each subset (ideally) share some common trait. The number of clusters is usually defined in advance.

23 Types of data clustering Hierarchical –Find successive clusters using previously established clusters Agglomerative algorithms begin with each element as a separate cluster and merge them into successively larger clusters Divisive algorithms begin with the whole set and proceed to divide it into successively smaller clusters Partitional –Find all clusters at once

24 First things first: distance is important Selecting a distance measure will determine how data is agglomerated
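A short illustration of how the metric choice enters, using scipy's pairwise-distance helpers on synthetic data (the matrix shape is illustrative):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(3)
X = rng.normal(size=(35, 20))     # 35 data points, 20 variables

# Swapping the metric changes which points look "close", and hence
# how they get agglomerated. Euclidean is this lecture's choice.
d_euclidean = squareform(pdist(X, metric="euclidean"))
d_correlation = squareform(pdist(X, metric="correlation"))
print(d_euclidean.shape)          # (35, 35) pairwise distance matrix
```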

25 Reducing the data and finding amino acid signatures in development. Decide on the number of clusters: three clusters. Do a PCA of the dataset (20 variables, 35 datapoints). Use Euclidean Distance. Use a Hierarchical, Divisive Algorithm.

26 Hierarchical, Divisive Clustering: Step 1 – One Cluster. Consider all data points as members of a single cluster.

27 Hierarchical, Divisive Clustering: Step 1.1 – Building the Second Cluster. [Figure: the point furthest from the cluster centroid is picked as the seed of a new cluster.]

28 Hierarchical, Divisive Clustering: Step 1.1 – Building the Second Cluster. Recalculate the centroid. Add the next point that is further from the old centroid and closer to the new one. Rinse and repeat until…

29 Hierarchical, Divisive Clustering: Step 1.2 – Finishing a Cluster. Add the next point that is further from the old centroid and closer to the new one, then recalculate the centroids: if the two centroids move closer to each other, do not add the point and stop adding to the cluster.

30 Hierarchical, Divisive Clustering: Step 2 – Two Clusters. Use an optimization algorithm to divide the data points in such a way that the Euclidean distance between all points within each of the two clusters is minimal.

31 Hierarchical, Divisive Clustering: Step 3 – Three Clusters. Continue dividing the data points until all clusters have been defined.
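Neither scipy nor scikit-learn ships the exact seed-growing divisive procedure of slides 26–31, so here is a hedged stand-in: bisecting 2-means, which also splits clusters top-down until three remain (the helper name divisive_clusters is mine):

```python
import numpy as np
from sklearn.cluster import KMeans

def divisive_clusters(X, n_clusters=3, seed=0):
    """Toy divisive scheme: repeatedly bisect the largest cluster with
    2-means until n_clusters remain. A stand-in for the slides'
    centroid/seed procedure, not a faithful copy of it."""
    labels = np.zeros(len(X), dtype=int)
    while labels.max() + 1 < n_clusters:
        largest = np.bincount(labels).argmax()        # cluster to split
        idx = np.where(labels == largest)[0]
        halves = KMeans(n_clusters=2, n_init=10,
                        random_state=seed).fit_predict(X[idx])
        labels[idx[halves == 1]] = labels.max() + 1   # spin off new cluster
    return labels

X = np.random.default_rng(4).normal(size=(35, 20))
print(np.bincount(divisive_clusters(X)))              # sizes of 3 clusters
```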

32 Reducing the data and finding amino acid signatures in development. Decide on the number of clusters: three clusters. Use Euclidean Distance. Use a Hierarchical, Agglomerative Algorithm.

33 Hierarchical, Agglomerative Clustering: Step 1 – 35 Clusters. Consider each data point as its own cluster.

34 Hierarchical, Agglomerative Clustering: Step 2 – Decreasing the Number of Clusters. Search for the two data points that are closest to each other. Collapse them into a cluster. Repeat until only three clusters remain.
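This merge-the-closest-pair loop is what scipy's hierarchical clustering implements directly; a minimal sketch on synthetic data (35 points, 20 variables, tree cut at three clusters):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(5)
X = rng.normal(size=(35, 20))      # 35 data points -> 35 starting clusters

# Single linkage merges the two closest points/clusters at every step,
# matching the "search, collapse, repeat" loop described above.
Z = linkage(X, method="single", metric="euclidean")
labels = fcluster(Z, t=3, criterion="maxclust")  # stop at three clusters
print(np.bincount(labels)[1:])     # cluster sizes (labels start at 1)
```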

35 Reducing the data and finding amino acid signatures in development. Decide on the number of clusters: three clusters. Use Euclidean Distance. Use a Partitional Algorithm.

36 Partitional Clustering. Search for the three data points that are farthest from each other. Add the remaining points to each of these seeds according to shortest distance. Repeat until all points have been assigned to a cluster.
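This scheme resembles farthest-first seeding followed by nearest-seed assignment; a hedged sketch under that reading (the helper name farthest_first_partition and the choice of point 0 as the first seed are mine):

```python
import numpy as np
from scipy.spatial.distance import cdist

def farthest_first_partition(X, k=3):
    """Pick k mutually far-apart seeds, then assign every point to its
    nearest seed -- a sketch of the slide's partitional scheme."""
    seeds = [0]                                  # arbitrary starting seed
    for _ in range(k - 1):
        d = cdist(X, X[seeds]).min(axis=1)       # distance to nearest seed
        seeds.append(int(d.argmax()))            # farthest point is new seed
    return cdist(X, X[seeds]).argmin(axis=1)     # nearest-seed assignment

X = np.random.default_rng(6).normal(size=(35, 20))
print(np.bincount(farthest_first_partition(X)))  # sizes of the 3 partitions
```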

37 Clustering the days of development with amino acid signatures. Get your data matrix. Use Euclidean Distance. Use a Clustering Algorithm.

38 Final Notes on Clustering. If more than three PCs are needed to separate the data, we could have used the principal-components matrix and clustered from there. Clustering can be fuzzy. Using algorithms such as genetic algorithms, neural networks or Bayesian networks, one can extract clusters that are completely non-obvious.

39 Summary. PCA allows for data reduction and decreases the dimensionality of the datasets to be analyzed. Clustering allows for classification (independent of PCA) and allows for good visual representations.

