
09/05/2005 Seminar in Mathematical Biology: Dimension Reduction - PCA (Principal Component Analysis)


1

2 09/05/2005 Seminar in Mathematical Biology: Dimension Reduction - PCA (Principal Component Analysis)

3 The Goals. Reduce the number of dimensions of a data set; capture the maximum information present in the initial data set; minimize the error between the original data set and the reduced-dimensional data set; enable simpler visualization of complex data.

4 The Algorithm. Step 1: Calculate the Covariance Matrix of the observation matrix. Step 2: Calculate the eigenvalues and the corresponding eigenvectors. Step 3: Sort the eigenvectors by the magnitude of their eigenvalues. Step 4: Project the data points onto those vectors.
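As a minimal sketch, here are the four steps in NumPy. The data, variable names, and dimensions are illustrative placeholders, not the slide's example.

```python
import numpy as np

def pca(X, n_components=2):
    """Plain PCA: covariance -> eigendecomposition -> sort -> project."""
    Xc = X - X.mean(axis=0)                  # center each variable
    C = np.cov(Xc, rowvar=False)             # Step 1: covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)     # Step 2: eigenvalues/eigenvectors (C is symmetric)
    order = np.argsort(eigvals)[::-1]        # Step 3: sort by eigenvalue, largest first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    scores = Xc @ eigvecs[:, :n_components]  # Step 4: project onto the leading eigenvectors
    return scores, eigvals, eigvecs

# Illustrative data: 40 observations of 7 variables
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 7))
scores, eigvals, eigvecs = pca(X, n_components=2)
print(scores.shape)   # (40, 2)
```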

5 The Algorithm. Step 1: Calculate the Covariance Matrix of the observation matrix. Step 2: Calculate the eigenvalues and the corresponding eigenvectors. Step 3: Sort the eigenvectors by the magnitude of their eigenvalues. Step 4: Project the data points onto those vectors.

6 PCA – Step 1: Covariance Matrix. Given the data matrix X with n observations (rows) and p variables (columns), center each column and compute the covariance matrix C = (1/(n-1)) * Xc^T * Xc, where Xc is the column-centered data matrix.

7 Covariance Matrix - Example
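The numeric example on this slide was not transcribed, so here is a small illustrative covariance computation with made-up numbers, both by the formula above and with np.cov:

```python
import numpy as np

# Illustrative 5-observation, 2-variable data matrix (not the slide's example)
X = np.array([[2.5, 2.4],
              [0.5, 0.7],
              [2.2, 2.9],
              [1.9, 2.2],
              [3.1, 3.0]])

Xc = X - X.mean(axis=0)                   # center each column
C_manual = Xc.T @ Xc / (X.shape[0] - 1)   # C = (1/(n-1)) * Xc^T Xc
C_numpy = np.cov(X, rowvar=False)         # same result

print(C_manual)
print(np.allclose(C_manual, C_numpy))     # True
```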

8 The Algorithm. Step 1: Calculate the Covariance Matrix of the observation matrix. Step 2: Calculate the eigenvalues and the corresponding eigenvectors. Step 3: Sort the eigenvectors by the magnitude of their eigenvalues. Step 4: Project the data points onto those vectors.

9 Linear Algebra Review – Eigenvalues and Eigenvectors. For a square n x n matrix C, a nonzero vector v is an eigenvector with eigenvalue λ if C v = λ v.
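A quick check of the definition C v = λ v on an assumed 2 x 2 symmetric matrix (the matrix is not the slide's example):

```python
import numpy as np

C = np.array([[2.0, 1.0],
              [1.0, 2.0]])               # assumed symmetric matrix

eigvals, eigvecs = np.linalg.eigh(C)     # eigh is meant for symmetric matrices
for lam, v in zip(eigvals, eigvecs.T):   # columns of eigvecs are the eigenvectors
    # C v should equal lambda * v for every eigenpair
    print(lam, np.allclose(C @ v, lam * v))   # True for each eigenpair
```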

10 Singular Value Decomposition. Any n x p matrix X can be factored as X = U Σ V^T, where the columns of U and V are orthonormal, Σ is diagonal, and the singular values σ_i = sqrt(λ_i) are the square roots of the eigenvalues λ_i of X^T X; the columns of V are the corresponding eigenvectors.
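A minimal check of this factorization with NumPy's built-in SVD; the matrix here is an illustrative placeholder:

```python
import numpy as np

X = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [1.0, 1.0]])               # illustrative 3 x 2 matrix

U, s, Vt = np.linalg.svd(X, full_matrices=False)   # thin SVD
Sigma = np.diag(s)                                 # singular values on the diagonal
print(np.allclose(X, U @ Sigma @ Vt))              # True: X = U Sigma V^T
print(np.allclose(s**2, np.linalg.eigvalsh(X.T @ X)[::-1]))  # sigma_i^2 = eigenvalues of X^T X
```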

11 SVD Example. Let us find the SVD of the matrix X. 1) First, compute X^T X. 2) Second, find the eigenvalues of X^T X and the corresponding eigenvectors (by solving det(X^T X - λI) = 0).

12

13 SVD Example - Continued. 3) Now we obtain U and Σ: the singular values are σ_i = sqrt(λ_i), and the columns of U are u_i = X v_i / σ_i. 4) This gives the decomposition X = U Σ V^T.
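The worked matrix on these slides was not transcribed, so here is the same procedure applied to an assumed example matrix: eigendecompose X^T X to get V and the singular values, then recover U and verify the decomposition.

```python
import numpy as np

X = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [1.0, 1.0]])                       # assumed example matrix

# 1) + 2) eigenvalues/eigenvectors of X^T X, ordered largest first
lam, V = np.linalg.eigh(X.T @ X)
order = np.argsort(lam)[::-1]
lam, V = lam[order], V[:, order]

# 3) singular values and U: sigma_i = sqrt(lambda_i), u_i = X v_i / sigma_i
sigma = np.sqrt(lam)
U = X @ V / sigma                                # divides each column by its sigma
Sigma = np.diag(sigma)

# 4) the decomposition X = U Sigma V^T
print(np.allclose(X, U @ Sigma @ V.T))           # True
```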

14 The Algorithm. Step 1: Calculate the Covariance Matrix of the observation matrix. Step 2: Calculate the eigenvalues and the corresponding eigenvectors. Step 3: Sort the eigenvectors by the magnitude of their eigenvalues. Step 4: Project the data points onto those vectors.

15 PCA – Step 3. Sort the eigenvectors by the magnitude of their eigenvalues, largest first; the leading eigenvectors are the principal components.
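A minimal sketch of this sorting step, assuming eigvals and eigvecs come from the eigendecomposition of the covariance matrix (random placeholder data):

```python
import numpy as np

C = np.cov(np.random.default_rng(1).normal(size=(40, 5)), rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)       # eigh returns eigenvalues in ascending order

order = np.argsort(eigvals)[::-1]          # indices of eigenvalues, largest first
eigvals = eigvals[order]
eigvecs = eigvecs[:, order]                # reorder the corresponding eigenvector columns

print(eigvals)                             # now sorted in descending order
```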

16 The Algorithm. Step 1: Calculate the Covariance Matrix of the observation matrix. Step 2: Calculate the eigenvalues and the corresponding eigenvectors. Step 3: Sort the eigenvectors by the magnitude of their eigenvalues. Step 4: Project the data points onto those vectors.

17 PCA – Step 4. Project the input data onto the principal components. For each observation, the new data values (the scores) are a linear combination of the original variables: t_i,pc = Σ_k b_pc,k · x_i,k, where t is the score, i the observation, pc the principal component, b the loading (ranging from -1 to 1), and x_k the variable.
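A sketch of the projection, assuming W holds the sorted eigenvectors as columns so that its entries are the loadings b_pc,k; the data are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 5))               # illustrative data: 40 observations, 5 variables
Xc = X - X.mean(axis=0)

eigvals, W = np.linalg.eigh(np.cov(Xc, rowvar=False))
W = W[:, np.argsort(eigvals)[::-1]]        # columns of W hold the loadings b_pc,k, sorted

scores = Xc @ W[:, :2]                     # t[i, pc] = sum_k x[i, k] * b[pc, k]
print(scores.shape)                        # (40, 2): two scores per observation
```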

18 PCA - Fundamentals. (Figure: data points in the (x1, x2, x3) space with the 1st and 2nd PC axes and the projections onto them.) The first PC is the eigenvector with the greatest eigenvalue of the covariance matrix of the dataset. The eigenvalues are also the variances of the observations along each of the new coordinate axes: Var(PC1), Var(PC2), and so on.
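A quick numerical check of the claim that the eigenvalues equal the variances of the observations along the new axes, on synthetic correlated data of my own construction:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 3)) @ np.array([[3.0, 0.0, 0.0],
                                          [1.0, 1.0, 0.0],
                                          [0.0, 0.5, 0.2]])   # correlated variables
Xc = X - X.mean(axis=0)

eigvals, W = np.linalg.eigh(np.cov(Xc, rowvar=False))
eigvals, W = eigvals[::-1], W[:, ::-1]       # largest eigenvalue first

scores = Xc @ W
print(np.allclose(scores.var(axis=0, ddof=1), eigvals))   # True: Var(PC_i) = lambda_i
```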

19 PCA: Scores. (Figure: observation i in the (x1, x2, x3) space projected onto the 1st and 2nd PC lines.) The scores are the positions along the component lines at which the observations are projected.

20 PCA: Loadings. The loadings b_pc,k (component pc, variable k) indicate the importance of variable k to the given component. b_pc,k is the direction cosine, cos(α_k), between the given component line and the x_k coordinate axis. (Figure: the 1st PC line and the angles α_1, α_2, α_3 it makes with the x1, x2, x3 axes.)
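A small check, on placeholder data, that the loadings are direction cosines: because NumPy returns unit-length eigenvectors, the cosine of the angle between the first PC line and each coordinate axis is simply the corresponding eigenvector component.

```python
import numpy as np

rng = np.random.default_rng(4)
Xc = rng.normal(size=(100, 3))
Xc -= Xc.mean(axis=0)

eigvals, W = np.linalg.eigh(np.cov(Xc, rowvar=False))
pc1 = W[:, np.argmax(eigvals)]               # unit eigenvector of the 1st PC

for k, axis in enumerate(np.eye(3)):
    # cos(alpha_k) = <pc1, e_k> / (|pc1| * |e_k|), which equals pc1[k] for a unit vector
    cos_alpha = pc1 @ axis / (np.linalg.norm(pc1) * np.linalg.norm(axis))
    print(k, np.isclose(cos_alpha, pc1[k]))  # True for every axis
```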

21 PCA - Summary. A multivariate projection technique: it reduces the dimensionality of the data by transforming correlated variables into a smaller number of uncorrelated components. It provides a graphical overview by plotting the data in a K-dimensional space along the directions of maximum variation, the projection that best preserves the variance as measured in the high-dimensional input space, i.e., a projection of the data onto lower-dimensional planes.

22 09/05/2005 Seminar in Mathematical Biology: Biological Background

23 Reverse Transcriptase

24 Areas Being Studied with Microarrays: comparing the expression of a protein (gene) between two or more tissues; checking whether a protein appears in a specific tissue; finding the difference in gene expression between a normal and a cancerous tissue.

25 cDNA Microarray Experiments: different tissues, same organism (brain v. liver); same tissue, different organisms; same tissue, same organism (tumour v. non-tumour); time-course experiments.

26 Microarray Technology. A method for measuring the expression levels of thousands of genes simultaneously. There are two types of arrays: cDNA and long-oligonucleotide arrays, and short-oligonucleotide arrays, in which each probe is ~25 nucleotides long and 16-20 probes are used for each gene.

27 The Idea. Target: cDNA (the variables to be detected). Probe: oligos/cDNA (gene templates). Target and probe combine by hybridization.

28 Brief Outline of Steps for Producing a Microarray. Produce mRNA; hybridise (the complementary sequence will bind, and fluorescence shows the binding); scan the array (intensities are extracted with image-analysis software).

29 Hybridization. RNA is copied into cDNA by reverse transcriptase, and the cDNA is labeled. Fluorescent labeling is most common, but radioactive labeling is also used; labeling may be incorporated during hybridization or applied afterwards. The labeled samples are then hybridized to the microarrays.

30

31 Gene Expression Database – a Conceptual View. The gene expression matrix holds the expression levels, with the genes as rows and the samples as columns; the genes are linked to gene annotations and the samples to sample annotations.
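A minimal sketch of this conceptual view with pandas; the gene identifiers, sample names, and annotation fields are made up for illustration:

```python
import numpy as np
import pandas as pd

genes = ["gene_A", "gene_B", "gene_C"]                 # hypothetical gene identifiers
samples = ["brain_1", "liver_1", "kidney_1"]           # hypothetical tissue samples

# Gene expression matrix: rows = genes, columns = samples, values = expression levels
expr = pd.DataFrame(np.random.default_rng(5).lognormal(size=(3, 3)),
                    index=genes, columns=samples)

# Annotations linked to the matrix through its row and column labels
gene_annotations = pd.DataFrame({"description": ["kinase", "receptor", "unknown"]}, index=genes)
sample_annotations = pd.DataFrame({"tissue": ["brain", "liver", "kidney"]}, index=samples)

print(expr.round(2))
```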

32 09/05/2005 Seminar in Mathematical Biology: The Article

33 The Biological Problem. The very high-dimensional space of gene expression measurements obtained by DNA microarrays impedes the detection of underlying patterns in gene expression data and the identification of discriminatory genes.

34 Why Use PCA? To obtain a direct link between patterns in the genes (gene annotations) and patterns in the samples (sample annotations).

35 The Paper Shows: distinct patterns are obtained when the genes are projected onto a two-dimensional plane, and after the removal of irrelevant genes the scores in the new space show distinct tissue patterns.

36 The Data Used in the Experiment. Oligonucleotide microarray measurements of 7070 genes made in 40 normal human tissue samples. The tissues used were from brain, kidney, liver, lung, esophagus, skeletal muscle, breast, stomach, colon, blood, spleen, prostate, testes, vulva, proliferative endometrium, myometrium, placenta, cervix, and ovary.

37 Results: PCA Loadings Can Be Used to Filter Irrelevant Genes. The data from the 40 human tissues were first projected using PCA. The first and second PCs account for ~70% of the information present in the entire data set.
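A sketch of how such a "~70% of the information" figure is computed: the fraction of total variance captured by the first two PCs is the ratio of their eigenvalues to the eigenvalue sum. The data here are random placeholders of the same shape, so the printed ratio will not be 70%.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(40, 7070))            # placeholder for the 40-tissue x 7070-gene matrix
Xc = X - X.mean(axis=0)

# With far more variables than samples, the SVD of the centered data is cheaper than the covariance
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
eigvals = s**2 / (Xc.shape[0] - 1)         # eigenvalues of the covariance matrix

explained = eigvals / eigvals.sum()
print(explained[:2].sum())                 # fraction of variance captured by PC1 + PC2
```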

38 Gene Selection Based on the Loadings on the Principal Components. Graph A shows the score plot of the tissue samples before any filtering is applied (axes: scores on principal component 1 vs. scores on principal component 2).

39 Graph B shows the loading plot of the genes before any filtering is applied (axes: loadings on principal component 1 vs. loadings on principal component 2).

40 The Filter on the Loadings. Graph E quantifies the decisions that went into the choice of the filtering threshold: it displays the distortion in the observed patterns, measured as the squared difference, and the number of genes retained for analysis as the threshold is varied.

41 The Filter on the Loadings - Continued. The chosen filter threshold was 0.001; filtering reduced the number of genes from 7070 to 425.
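A hedged sketch of the loading-based filter described on slides 40-41: keep only genes whose loadings on the first two PCs exceed a threshold in magnitude, re-project, and measure the squared difference between the original and new sample scores. The exact criterion and normalization used in the article may differ; the threshold 0.001 and the gene/sample counts are from the slides, while the data here are random placeholders.

```python
import numpy as np

def pca_scores_loadings(X, k=2):
    """Scores for the samples and loadings for the genes on the first k PCs."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :k] * s[:k]          # sample scores
    loadings = Vt[:k].T                # one row per gene, one column per PC
    return scores, loadings

rng = np.random.default_rng(7)
X = rng.normal(size=(40, 7070))        # placeholder for the tissue x gene matrix

scores_all, loadings = pca_scores_loadings(X)

# Filter: drop genes whose loadings on both PCs fall below the threshold in magnitude
threshold = 0.001                      # threshold reported on slide 41
keep = (np.abs(loadings) > threshold).any(axis=1)
print(keep.sum(), "genes retained")    # with the real data the article retained 425 of 7070

# Re-project using only the retained genes and measure the distortion of the score plot
scores_kept, _ = pca_scores_loadings(X[:, keep])
signs = np.sign(np.sum(scores_all * scores_kept, axis=0))   # align PC signs before comparing
distortion = np.sum((scores_all - scores_kept * signs) ** 2)
print(distortion)
```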

42 Graph C shows the score plot of the tissue samples after the filtering (axes: scores on principal components 1 and 2).

43 Graph D shows the loading plot of the genes after the filtering (axes: loadings on principal components 1 and 2).

44 Compare: the score plots of the tissue samples before and after filtering, side by side. The dramatic reduction from the initial 7070 genes to the 425 finally retained resulted in a minimal loss of the information relevant to the description of the samples in the reduced space.

45 Compare: the loading plots of the genes before and after filtering, side by side. Three linear structures can be identified in the loading plot of the 425 genes selected by the above analysis, each structure comprising a set of genes.

46 PCA – Discussion. PCA has a strong yet flexible mathematical structure. It simplifies the “views” of the data and reduces the dimensionality of gene expression space. The correspondence between the score plot and the loading plot enables the elimination of redundant variables. PCA allowed the classification of new samples belonging to the tissue types used.

47 PCA – Discussion (Cont.). In the article, this method facilitated the identification of strong underlying structures in the data. The identification of such structures depends entirely on the data and is not guaranteed in general. There is no single “correct” way of classification; “biological understanding” is the ultimate guide.

48 My Critique. Positives: PCA can deal with large data sets, and no assumptions are made about the data, so the method is general and may be applied to any data set. Negatives: nonlinear structure is invisible to PCA, and the meaning of the original features is lost when linear combinations are formed.

49 True covariance matrices are usually not known and must be estimated from the data. The graph illustrates a caveat: the first component is chosen along the direction of largest variance, so the two clusters strongly overlap when projected onto it, whereas projection onto the axis orthogonal to the first PCA component gives much more discriminating power.
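A small synthetic illustration of this criticism (my own construction, not the slide's figure): two classes separated along a low-variance direction overlap on PC1 but separate cleanly on PC2.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 200
# Two clusters: large shared variance along x1, small class separation along x2
x1 = rng.normal(scale=10.0, size=2 * n)
x2 = np.concatenate([rng.normal(-1.0, 0.3, n), rng.normal(1.0, 0.3, n)])
X = np.column_stack([x1, x2])
labels = np.array([0] * n + [1] * n)

Xc = X - X.mean(axis=0)
eigvals, W = np.linalg.eigh(np.cov(Xc, rowvar=False))
W = W[:, np.argsort(eigvals)[::-1]]
scores = Xc @ W

# Overlap along PC1 (largest variance), clear separation along PC2
for pc in (0, 1):
    gap = abs(scores[labels == 0, pc].mean() - scores[labels == 1, pc].mean())
    spread = scores[:, pc].std()
    print(f"PC{pc + 1}: class-mean gap / spread = {gap / spread:.2f}")
```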

50 Thank you!

