Tables, Figures, and Equations


1 Tables, Figures, and Equations
From: McCune, B. & J. B. Grace. 2002. Analysis of Ecological Communities. MjM Software Design, Gleneden Beach, Oregon.

2 Figure 14.1. Comparison of the line of best fit (first principal component) with regression lines. Point C is the centroid (from Pearson 1901).

3 Figure 14.2. The basis for evaluating "best fit" (Pearson 1901): fit is evaluated from the perpendicular distances between the points and the principal component. In contrast, least-squares best fit is evaluated from vertical differences between points and the regression line.

4 Figure 14.3. Outliers can strongly influence correlation coefficients.

5 Step by step
1. From a data matrix A containing n sample units by p variables, calculate a cross-products matrix: s_jk = Σ (a_ij − ā_j)(a_ik − ā_k) / (n − 1), summed over the n sample units i. The dimensions of S are p rows × p columns.

6 Step by step
1. From a data matrix A containing n sample units by p variables, calculate a cross-products matrix: s_jk = Σ (a_ij − ā_j)(a_ik − ā_k) / (n − 1), summed over the n sample units i. The dimensions of S are p rows × p columns. The equation for a correlation matrix is the same as above except that each difference is divided by the corresponding standard deviation, s_j or s_k.
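
The following is a minimal sketch of this step in Python/NumPy (not PC-ORD's implementation; the toy data matrix A and the names S, Z, and R are illustrative):

```python
import numpy as np

# A: n sample units (rows) by p variables (columns); toy data
A = np.array([[2.0, 4.0],
              [3.0, 5.0],
              [5.0, 4.0],
              [6.0, 7.0]])

n, p = A.shape
deviations = A - A.mean(axis=0)            # difference of each value from its column mean

# cross-products (variance-covariance) matrix S: p rows x p columns
S = deviations.T @ deviations / (n - 1)

# correlation matrix: same, but each deviation is divided by its standard deviation
Z = deviations / A.std(axis=0, ddof=1)
R = Z.T @ Z / (n - 1)

print(S)   # matches np.cov(A, rowvar=False)
print(R)   # matches np.corrcoef(A, rowvar=False)
```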

7 2. Find the eigenvalues.
Each eigenvalue (= latent root) is a lambda (λ) that solves: |S − λI| = 0, where I is the identity matrix (App. 1). This is the "characteristic equation."

8 The coefficients of the polynomial are derived by expanding the determinant |S − λI|, which yields a polynomial of degree p in λ; its p roots are the eigenvalues.
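
As a numerical sketch of this expansion, NumPy's poly returns the coefficients of the characteristic polynomial of a square matrix and roots returns its solutions (the 2 × 2 correlation matrix here anticipates the worked example later in these slides and is only illustrative):

```python
import numpy as np

S = np.array([[1.0, 0.4],
              [0.4, 1.0]])     # small correlation matrix, same as the worked example below

coeffs = np.poly(S)            # coefficients of the characteristic polynomial of S
roots = np.roots(coeffs)       # its roots are the eigenvalues
print(coeffs)                  # approximately [1, -2, 0.84]
print(np.sort(roots)[::-1])    # approximately [1.4, 0.6]
```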

9 3. Then find the eigenvectors, Y.
For every eigenvalue λ_i there is a vector y of length p, known as the eigenvector. Each eigenvector contains the coefficients of the linear equation for a given component (or axis). Collectively, these vectors form a p × p matrix, Y. To find the eigenvectors, we solve p equations with p unknowns: [S − λI]y = 0.

10 4. Then find the scores for each case (or object) on each axis.
Scores are the original data matrix post-multiplied by the matrix of eigenvectors: X = B Y, where X (n × p) is the matrix of scores on each axis (component), B (n × p) is the original data matrix, and Y (p × p) is the matrix of eigenvectors.
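
Steps 2–4 can be sketched together in NumPy (eigh is used because S is symmetric; the toy data matrix B and all names are illustrative, and note that many PCA implementations center or standardize B before computing scores, which the slides do not show):

```python
import numpy as np

# B: original data matrix, n sample units x p variables (toy data)
B = np.array([[2.0, 4.0],
              [3.0, 5.0],
              [5.0, 4.0],
              [6.0, 7.0]])

S = np.cov(B, rowvar=False)            # cross-products (covariance) matrix, p x p

eigenvalues, Y = np.linalg.eigh(S)     # eigenvalues and eigenvectors of the symmetric S
order = np.argsort(eigenvalues)[::-1]  # sort axes from largest to smallest eigenvalue
eigenvalues, Y = eigenvalues[order], Y[:, order]

X = B @ Y                              # scores: data matrix post-multiplied by eigenvectors
print(X)                               # n x p matrix of scores on each component
```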

11 For eigenvector 1 and entity i ...
Score on axis 1: x_i1 = y_1 a_i1 + y_2 a_i2 + … + y_p a_ip. This yields a linear equation for each dimension.

12 5. Calculate the loading matrix.
The p × k matrix of correlations between each variable and each component is often called the principal components loading matrix. These correlations can be derived by rescaling the eigenvectors, or they can be calculated as correlation coefficients between each variable and the scores on the components.
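
A sketch of the second route mentioned above, computing loadings directly as correlation coefficients between each variable and the component scores (names and toy data are illustrative):

```python
import numpy as np

B = np.array([[2.0, 4.0],              # toy data matrix, n x p
              [3.0, 5.0],
              [5.0, 4.0],
              [6.0, 7.0]])

S = np.cov(B, rowvar=False)
eigenvalues, Y = np.linalg.eigh(S)
order = np.argsort(eigenvalues)[::-1]
eigenvalues, Y = eigenvalues[order], Y[:, order]
X = B @ Y                              # scores on each component

# loadings: correlation of each variable (column of B) with each component (column of X)
p, k = B.shape[1], X.shape[1]
loadings = np.array([[np.corrcoef(B[:, j], X[:, m])[0, 1] for m in range(k)]
                     for j in range(p)])
print(loadings)                        # p x k loading matrix
```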

13 Geometric analog
1. Start with a cloud of n points in a p-dimensional space.
2. Center the axes in the point cloud (origin at the centroid).
3. Rotate the axes to maximize the variance along the axes. As the angle of rotation (θ) changes, the variance (s²) changes.
Variance along axis v: s²_v = y′ S y, where y′ is 1 × p, S is p × p, and y is p × 1.

14 At the maximum variance, all partial derivatives will be zero (no slope in any dimension). This is another way of saying that we find the angle of rotation θ such that ∂(s²_v)/∂θ = 0 for each component (the lower-case delta, ∂, indicates a partial derivative).
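
To illustrate, the sketch below sweeps the rotation angle θ for a 2-variable case and evaluates y′ S y; the variance peaks where the slope with respect to θ is zero, which is the direction of the first eigenvector (the matrix and the angle grid are illustrative):

```python
import numpy as np

S = np.array([[1.0, 0.4],
              [0.4, 1.0]])                      # 2 x 2 correlation matrix

thetas = np.linspace(0.0, np.pi, 181)           # candidate rotation angles, 1-degree steps
variances = [np.array([np.cos(t), np.sin(t)]) @ S @ np.array([np.cos(t), np.sin(t)])
             for t in thetas]                   # variance along the axis at angle t: y' S y

best = thetas[int(np.argmax(variances))]
print(np.degrees(best), max(variances))         # 45 degrees, 1.4 (the largest eigenvalue)
```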

15 Figure 14.4. PCA rotates the point cloud to maximize the variance along the axes.

16 Figure 14.5. Variance along an axis is maximized when the axis skewers the longest dimension of the point cloud. The axes are formed by the variables (attributes of the objects).

17 Example calculations, PCA
Start with a 2 × 2 correlation matrix, S, that we calculated from a data matrix of n × p items, where p = 2 in this case:
S = | 1.0  0.4 |
    | 0.4  1.0 |

18 We need to solve for the eigenvalues, λ, by solving the characteristic equation: |S − λI| = 0.
Substituting our correlation matrix, S:
| 1.0 − λ   0.4     |
| 0.4       1.0 − λ |  = 0

19 Now expand the determinant: (1.0 − λ)(1.0 − λ) − (0.4)(0.4) = λ² − 2λ + 0.84 = 0.

20 We then solve this polynomial for the values of λ that will satisfy this equation, using the quadratic formula λ = (−b ± √(b² − 4ac)) / (2a).
Since a = 1, b = −2, and c = 0.84, λ = (2 ± √(4 − 3.36)) / 2 = (2 ± 0.8) / 2. Solving for the two roots gives us λ1 = 1.4 and λ2 = 0.6.
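
A quick numerical check of the roots (a sketch; np.roots gives the same answer directly from the polynomial coefficients):

```python
import numpy as np

a, b, c = 1.0, -2.0, 0.84
disc = np.sqrt(b**2 - 4*a*c)      # sqrt(4 - 3.36) = 0.8
print((-b + disc) / (2*a))        # lambda1 = 1.4
print((-b - disc) / (2*a))        # lambda2 = 0.6
print(np.roots([a, b, c]))        # same two roots from the polynomial directly
```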


22 Now find the eigenvectors, Y. For each λ there is a y that solves [S − λI]y = 0.
For the first root we substitute λ1 = 1.4, giving:
| 1.0 − 1.4   0.4       | | y1 |   | 0 |
| 0.4         1.0 − 1.4 | | y2 | = | 0 |

23 Multiplying this out gives two equations with two unknowns:
−0.4 y1 + 0.4 y2 = 0
0.4 y1 − 0.4 y2 = 0
Solve these simultaneous equations (y1 = 1 and y2 = 1). Setting up and solving the equations for the second eigenvector yields y1 = 1 and y2 = −1.

24 We now normalize the eigenvectors, rescaling them so that the sum of squares = 1 for each eigenvector. In other words, the eigenvectors are scaled to unit length. The scaling factor k for each eigenvector i is the reciprocal of its length, k_i = 1 / √(Σ y_j²). So for the first eigenvector, k = 1 / √(1² + 1²) = 1 / √2 = 0.7071.

25 Then multiply this scaling factor by all of the items in the eigenvector: 0.7071 × [1, 1] = [0.7071, 0.7071].
The same procedure is repeated for the second eigenvector, giving [0.7071, −0.7071]; then the eigenvectors are multiplied by the original data matrix to yield the scores (X) for each of the entities on each of the axes (X = A Y).
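
The whole worked example can be reproduced with an eigen-decomposition; eigh already returns unit-length eigenvectors, so the normalization step above is built in (a sketch; sign flips of the eigenvectors are arbitrary):

```python
import numpy as np

S = np.array([[1.0, 0.4],
              [0.4, 1.0]])

eigenvalues, Y = np.linalg.eigh(S)
order = np.argsort(eigenvalues)[::-1]
eigenvalues, Y = eigenvalues[order], Y[:, order]

print(eigenvalues)   # approximately [1.4, 0.6]
print(Y)             # columns ~ [0.7071, 0.7071] and [0.7071, -0.7071] (signs may differ)

# scores would then be the data matrix post-multiplied by Y, e.g. X = A @ Y
```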

26 The broken-stick eigenvalue for axis k is Σ_{j=k}^{p} (1/j),
where p is the number of columns and j indexes axes k through p.
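
A short sketch of the broken-stick eigenvalues for all axes, following the formula above (the function name is illustrative):

```python
import numpy as np

def broken_stick(p):
    """Broken-stick eigenvalue for each axis k = 1..p: sum of 1/j for j = k..p."""
    return np.array([sum(1.0 / j for j in range(k, p + 1)) for k in range(1, p + 1)])

print(broken_stick(5))   # axis 1: 1 + 1/2 + 1/3 + 1/4 + 1/5 = 2.2833..., and so on
```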

27 Addendum on randomization tests for PCA
(Not in McCune & Grace 2002, but in PC-ORD version 5; evaluation based on Peres-Neto et al. 2005.) The randomization: shuffle values within variables (columns), then recompute the correlation matrix and eigenvalues. Repeat many times. Compare the actual eigenvalues in several ways with the eigenvalues from the randomizations. Calculate the p-value as p = (1 + n) / (1 + N), where n = the number of randomizations where the test statistic ≥ the observed value and N = the total number of randomizations.
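
A minimal sketch of this randomization using the Rnd-Lambda comparison described on the next slide (column-wise shuffling; the p-value formula is the one given above; this is not PC-ORD code, and all names and the toy data are illustrative):

```python
import numpy as np

def pca_eigenvalues(data):
    """Eigenvalues of the correlation matrix, largest first."""
    return np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]

def rnd_lambda_pvalues(data, n_randomizations=999, seed=0):
    rng = np.random.default_rng(seed)
    observed = pca_eigenvalues(data)
    n_ge = np.zeros_like(observed)                 # randomized eigenvalue >= observed, per axis
    for _ in range(n_randomizations):
        shuffled = np.column_stack([rng.permutation(col) for col in data.T])
        n_ge += pca_eigenvalues(shuffled) >= observed
    return (1 + n_ge) / (1 + n_randomizations)     # p = (1 + n) / (1 + N)

data = np.random.default_rng(1).normal(size=(30, 5))   # toy data: 30 sample units, 5 variables
print(rnd_lambda_pvalues(data))                         # one p-value per axis
```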

28 Rnd-Lambda – Compare the eigenvalue for an axis from each randomization to the observed eigenvalue for that axis.
- fairly conservative and generally effective criterion
- more effective than Avg-Rnd when uncorrelated variables are included in the data
- performs better than the other measures with strongly non-normal data
Rnd-F – Compare the pseudo-F-ratio for an axis from each randomization to the observed pseudo-F for that axis. The pseudo-F-ratio is the eigenvalue for an axis divided by the sum of the remaining (smaller) eigenvalues (see the sketch after this list).
- particularly effective against uncorrelated variables
- performs poorly with grossly non-normal error structures
Avg-Rnd – Compare the observed eigenvalue for a given axis to the average eigenvalue obtained for that axis after randomization.
- good when the data do not contain uncorrelated variables
- less stringent; too liberal when the data contain uncorrelated variables
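
For the Rnd-F criterion, the pseudo-F-ratio defined above can be computed from a vector of eigenvalues as follows (a sketch; the function name is illustrative):

```python
import numpy as np

def pseudo_f_ratios(eigenvalues):
    """Pseudo-F for axis k: its eigenvalue divided by the sum of the remaining (smaller) ones."""
    evals = np.sort(np.asarray(eigenvalues, dtype=float))[::-1]
    return np.array([evals[k] / evals[k + 1:].sum() for k in range(len(evals) - 1)])

print(pseudo_f_ratios([1.4, 0.6]))   # [2.333...] for the two-axis worked example
```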

