
1 Principal Components Analysis (PCA)

2 a technique for finding patterns in data of high dimension

3-4 Outline:
1. Eigenvectors and eigenvalues
2. PCA:
   a) Getting the data
   b) Centering the data
   c) Obtaining the covariance matrix
   d) Performing an eigenvalue decomposition of the covariance matrix
   e) Choosing components and forming a feature vector
   f) Deriving the new data set

5-7 Eigenvectors & eigenvalues: consider the transformation matrix [2 3; 2 1]. Multiplying it with the vector (1, 3) gives

    [2 3; 2 1] * (1, 3) = (11, 5)

which is not a multiple of the original vector.

8-12 Multiplying the same matrix with the vector (3, 2) gives

    [2 3; 2 1] * (3, 2) = (12, 8) = 4 * (3, 2)

i.e. exactly 4 times the original vector.

13-15 Terminology: [2 3; 2 1] is the transformation matrix, (3, 2) is an eigenvector (this vector and all multiples of it), and 4 is the corresponding eigenvalue. [Plot: (3, 2) and (12, 8) lie on the same line through the origin; the transformation only stretches the eigenvector, it does not change its direction.]


23-26 Every multiple of the eigenvector (3, 2) behaves in the same way and has the same eigenvalue; for example, for (6, 4) = 2 * (3, 2):

    [2 3; 2 1] * (6, 4) = (24, 16) = 4 * (6, 4)

So the eigenvalue 4 belongs to the whole direction defined by (3, 2), not to one particular vector along it. (A quick NumPy check of these calculations follows below.)
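The following is a small NumPy sketch (my own illustration, not part of the slides) that reproduces the calculations above; `np.linalg.eig` is the standard NumPy routine assumed here.

```python
import numpy as np

# Transformation matrix from the slides.
A = np.array([[2.0, 3.0],
              [2.0, 1.0]])

# (1, 3) is not an eigenvector: the result is not a multiple of (1, 3).
print(A @ np.array([1.0, 3.0]))   # [11.  5.]

# (3, 2) is an eigenvector with eigenvalue 4.
print(A @ np.array([3.0, 2.0]))   # [12.  8.] = 4 * (3, 2)

# Any multiple of an eigenvector is again an eigenvector with the same eigenvalue.
print(A @ np.array([6.0, 4.0]))   # [24. 16.] = 4 * (6, 4)

# np.linalg.eig returns the eigenvalues and unit-length eigenvectors (as columns).
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                # 4 and -1 (order not guaranteed)
print(eigenvectors)               # unit vectors along (3, 2) and (1, -1)
```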

27-28 Outline:
1. Eigenvectors and eigenvalues
2. PCA:
   a) Getting the data
   b) Centering the data
   c) Obtaining the covariance matrix
   d) Performing an eigenvalue decomposition of the covariance matrix
   e) Choosing components and forming a feature vector
   f) Deriving the new data set

29 PCA
- a technique for identifying patterns in data and expressing the data in such a way as to highlight their similarities and differences
- once these patterns are found, the data can be compressed (i.e. the number of dimensions can be reduced) without much loss of information

30-32 Example: a dataset with 2 dimensions. Centering subtracts the mean of each dimension (here 1.0 for both x and y) from every observation:

    original data:          centered data:
      x      y                x      y
    -0.7    0.2             -1.7   -0.8
     2.1    2.7              1.1    1.7
     1.7    2.3              0.7    1.3
     1.4    0.8              0.4   -0.2
     1.9    2.0              0.9    1.0
     1.8    1.0              0.8    0.0
     0.4   -0.4             -0.6   -1.4
     1.1    0.3              0.1   -0.7
     0.9    0.4             -0.1   -0.6
    -0.6    0.7             -1.6   -0.3
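As an aside (not part of the original slides), the centering step can be reproduced with NumPy as follows; the variable names are my own.

```python
import numpy as np

# Original 2-dimensional data from the slides: one row per observation, columns x and y.
data = np.array([[-0.7, 0.2], [2.1, 2.7], [1.7, 2.3], [1.4, 0.8], [1.9, 2.0],
                 [1.8, 1.0], [0.4, -0.4], [1.1, 0.3], [0.9, 0.4], [-0.6, 0.7]])

# Centering: subtract the mean of each dimension so that every column has mean zero.
mean = data.mean(axis=0)     # [1.0, 1.0] for this dataset
centered = data - mean       # matches the "centered data" table above
print(centered)
```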

33-34 Covariance matrix of the centered data:

          x          y
    x   1.015556   0.696667
    y   0.696667   1.017778

Eigenvalues of the covariance matrix: 1.7133342 and 0.3199991

Unit eigenvectors of the covariance matrix (as columns, the first belonging to the larger eigenvalue):

     0.706543   -0.70767
     0.70767     0.706543
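Continuing the same sketch (again my own illustration, with `centered` as defined in the previous snippet), the covariance matrix and its eigendecomposition can be computed as follows; note that `np.cov` divides by n - 1 by default, which reproduces the values on the slide.

```python
# Covariance matrix of the centered data (rowvar=False: columns are the variables).
cov = np.cov(centered, rowvar=False)
print(cov)
# [[1.01555556 0.69666667]
#  [0.69666667 1.01777778]]

# np.linalg.eigh is appropriate for the symmetric covariance matrix; it returns
# eigenvalues in ascending order and the unit eigenvectors as columns.
eigenvalues, eigenvectors = np.linalg.eigh(cov)
print(eigenvalues)    # approximately [0.32, 1.7133334]
print(eigenvectors)   # unit eigenvectors (possibly with flipped signs)
```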

35 Compression and reduced dimensionality:
- the eigenvector associated with the highest eigenvalue is the principal component of the dataset; it captures the most significant relationship between the data dimensions
- ordering the eigenvalues from highest to lowest gives the components in order of significance
- if we want, we can ignore the components of lesser significance; this loses information, but if the discarded eigenvalues are small, the loss is not great
- thus, in a dataset with n dimensions/variables, one may obtain the n eigenvectors & eigenvalues and decide to retain only p of them → this results in a final dataset with only p dimensions
- feature vector: a matrix whose columns are only the eigenvectors we want to keep; for our example data there are 2 choices (see the sketch after this slide):

      0.706543  -0.70767         or      0.706543
      0.70767    0.706543                0.70767

- the final dataset is obtained by multiplying the transpose of the feature vector (on the left) with the transposed centered dataset; this gives us the original data solely in terms of the eigenvectors we chose
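A minimal sketch of these two steps, continuing the NumPy example above (the names `order`, `p`, and `feature_vector` are my own):

```python
# Order the components from most to least significant (highest eigenvalue first).
order = np.argsort(eigenvalues)[::-1]
eigenvalues = eigenvalues[order]
eigenvectors = eigenvectors[:, order]

# Feature vector: keep only the first p eigenvectors as columns.
# p = 2 keeps everything; p = 1 keeps only the principal component.
p = 1
feature_vector = eigenvectors[:, :p]
print(feature_vector)   # roughly [[0.706543], [0.70767]], possibly with flipped signs
```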

36-37 Our example:

a) retain both eigenvectors:

       0.706543   0.70767
      -0.70767    0.706543   *   data^t   =   new data
        (2 x 2)                  (2 x 10)      (2 x 10)

   → the original data, rotated so that the eigenvectors are the axes; no information is lost

b) retain only the first eigenvector / principal component:

       0.706543   0.70767    *   data^t   =   new data
        (1 x 2)                  (2 x 10)      (1 x 10)

   → only 1 dimension is left; we threw away the other axis

Thus: we have transformed the data so that it is expressed in terms of the patterns, where the patterns are the lines that most closely describe the relationships between the data dimensions. The values of the new data points tell us exactly where each point sits relative to (i.e. above or below) these trend lines.
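And the projection step itself, again as a sketch using the variables defined in the previous snippets:

```python
# Final data: transpose of the feature vector times the transposed centered data.
new_data = feature_vector.T @ centered.T    # shape (p, 10)
print(new_data)

# With p = 2 this is only a rotation; with p = 1 each observation is reduced to its
# coordinate along the principal component. Adding the mean back gives an
# approximate reconstruction in the original coordinates.
reconstructed = (feature_vector @ new_data).T + mean
print(reconstructed)
```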

38 Eigenvalue decomposition vs. Cholesky decomposition:
- the eigenvalue decomposition is similar to the Cholesky decomposition in that both can be used to fully decompose the original covariance matrix
- in addition, both produce uncorrelated factors

Eigenvalue decomposition: C = E V E^-1, where the columns of E are the unit eigenvectors (e11, e21) and (e12, e22), V is the diagonal matrix of eigenvalues (v11, v22), and each eigenvector satisfies C e = v e

Cholesky decomposition: C = Λ Ψ Λ^t, where Λ is a lower-triangular matrix of loadings (l11; l21, l22) and Ψ is the diagonal matrix of factor variances (va11, va22)

[Path diagrams: principal components pc1, pc2 load on var1 and var2 through the eigenvector weights; Cholesky factors ch1, ch2 load on var1 and var2 through the triangular loadings.]
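To illustrate the comparison (my own sketch, not from the slides): both decompositions reproduce the covariance matrix. NumPy's `np.linalg.cholesky` returns the plain lower-triangular factor L with C = L L^t; rescaling it gives the Λ Ψ Λ^t form written above.

```python
# Eigenvalue decomposition: C = E V E^-1 (E: eigenvectors as columns, V: diag of eigenvalues).
E, V = eigenvectors, np.diag(eigenvalues)
print(E @ V @ np.linalg.inv(E))   # reproduces the covariance matrix

# Cholesky decomposition as implemented in NumPy: C = L L^t with L lower-triangular.
L = np.linalg.cholesky(cov)
print(L @ L.T)                    # also reproduces the covariance matrix

# Rescaling L to a unit diagonal and collecting the squared diagonal in Psi
# gives the C = Lambda Psi Lambda^t form from the slide.
d = np.diag(L)
Lambda, Psi = L / d, np.diag(d**2)
print(Lambda @ Psi @ Lambda.T)    # same matrix again
```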

39 Thank you for your attention.

