Short recapitulation of matrix basics
A vector can be interpreted as a file of data. A matrix is a collection of vectors and can be interpreted as a database; the red matrix contains three column vectors. Handling biological data is most easily done with a matrix approach: an Excel worksheet is a matrix.
In an element a(m,n), the first subscript denotes the row, the second the column; m and n define the dimension of the matrix: A has m rows and n columns. A column vector is an m x 1 matrix, a row vector a 1 x n matrix. A symmetric matrix is one where a(m,n) = a(n,m). A diagonal matrix is square and symmetric, with nonzero entries only on the diagonal. The unit (identity) matrix I is a diagonal matrix with ones on the diagonal. A matrix with one row and one column is a scalar (an ordinary number).
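These definitions can be sketched in NumPy (the values are illustrative only):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])        # m = 2 rows, n = 3 columns
print(A.shape)                   # (2, 3)

row = np.array([[1, 2, 3]])      # 1 x 3 row vector
col = row.T                      # 3 x 1 column vector

S = np.array([[1, 7],
              [7, 4]])           # symmetric: S[m, n] == S[n, m]
print(np.array_equal(S, S.T))    # True

I = np.eye(3)                    # 3 x 3 unit (identity) matrix
```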
For a non-singular square matrix A the inverse A^-1 is defined by A A^-1 = I. Singular matrices are those where some rows or columns can be expressed as a linear combination of the others, e.g. r2 = 2 r1 or r3 = 2 r1 + r2. Such rows or columns do not contain additional information; they are redundant. Equivalently, a matrix is singular if its determinant det(A) is zero, or if a linear combination of its columns k1 a1 + k2 a2 + ... + kn an = 0 exists in which at least one of the coefficients k is not zero. For products, (AB)^-1 = B^-1 A^-1, which is not equal to A^-1 B^-1. The inverse of a 2x2 matrix with rows (a, b) and (c, d) is 1/(ad - bc) times the matrix with rows (d, -b) and (-c, a).
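A minimal NumPy sketch of these rules, using matrices chosen for illustration:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]    # ad - bc
inv = np.array([[ A[1, 1], -A[0, 1]],
                [-A[1, 0],  A[0, 0]]]) / det    # 2x2 inverse formula
print(np.allclose(A @ inv, np.eye(2)))          # True: A A^-1 = I

# A singular matrix: row 2 = 2 * row 1, so the determinant is zero
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(abs(np.linalg.det(B)) < 1e-12)            # True

# (AB)^-1 = B^-1 A^-1
C = np.array([[1.0, 1.0],
              [0.0, 2.0]])
lhs = np.linalg.inv(A @ C)
rhs = np.linalg.inv(C) @ np.linalg.inv(A)
print(np.allclose(lhs, rhs))                    # True
```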
Matrix operations: addition and subtraction (element-wise), the scalar product, and the inner or dot product. The basic rule of matrix multiplication: an (m x n) matrix times an (n x p) matrix gives an (m x p) matrix whose entry (i, j) is the dot product of row i of the first matrix and column j of the second.
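These operations in NumPy, with small illustrative values:

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
print(a + b)          # addition: [5 7 9]
print(3 * a)          # scalar product: [3 6 9]
print(a @ b)          # dot product: 1*4 + 2*5 + 3*6 = 32

# Matrix multiplication: (2 x 2) @ (2 x 2) -> (2 x 2);
# entry (i, j) is the dot product of row i of A and column j of B
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])
print(A @ B)          # [[19 22], [43 50]]
```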
The general solution of a linear system A X = Y is X = A^-1 Y, since A^-1 A = I (the identity matrix). This is only possible if A is not singular; if A is singular the system has no unique solution. Systems with a unique solution: the number of independent equations equals the number of unknowns, and A is not singular.
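Solving a small system with two independent equations and two unknowns (the coefficients are illustrative):

```python
import numpy as np

# 2x + y = 5
#  x + 3y = 10
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
y = np.array([5.0, 10.0])

x = np.linalg.solve(A, y)        # computes X = A^-1 Y
print(x)                          # [1. 3.]
print(np.allclose(A @ x, y))      # True: the solution satisfies the system
```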
[Numerical example; the data table is garbled in extraction: abundance data for two species, Aspilota sp2 and Aspilota sp5, are fitted through the transpose and the normal equations X^T X, (X^T X)^-1, and X^T Y to estimate the growth rate r and the ratio r/K for each species.] Both species have a low reproductive rate r; they are prone to fast extinction.
Orthogonal vectors: the dot product of two orthogonal vectors is zero. If orthogonal vectors have unit length they are called orthonormal, e.g. X = cos(theta), Y = sin(theta) with d = 1 on the unit circle. A system of n orthogonal vectors spans an n-dimensional hypervolume (a Cartesian system). In ecological modelling orthogonal vectors are of particular importance: they define linearly independent variables. Orthogonal matrix: multiplying an orthogonal matrix by its transpose gives the identity matrix, so the transpose of an orthogonal matrix is identical to its inverse.
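A rotation matrix is a convenient example of an orthogonal matrix; its columns are orthonormal (the angle is arbitrary):

```python
import numpy as np

theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(Q.T @ Q, np.eye(2)))        # True: Q^T Q = I
print(np.allclose(Q.T, np.linalg.inv(Q)))     # True: Q^T = Q^-1
print(abs(Q[:, 0] @ Q[:, 1]) < 1e-12)         # True: dot product is zero
```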
Eigenvalues and eigenvectors
How to transform vector A into vector B? Multiplying a vector by a square matrix defines a new vector that points in a different direction: the matrix defines a transformation in space. In an image transformation, the matrix X contains all the information necessary to transform the image. The vectors whose direction does not change during the transformation are the eigenvectors. In general we define X U = lambda U, where U is an eigenvector and lambda the associated eigenvalue of the square matrix X.
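The defining equation X U = lambda U can be checked directly in NumPy (the matrix is illustrative):

```python
import numpy as np

X = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigenvalues, U = np.linalg.eig(X)   # columns of U are the eigenvectors

# Verify X u = lambda u for every eigenvalue/eigenvector pair
for lam_i, u in zip(eigenvalues, U.T):
    print(np.allclose(X @ u, lam_i * u))   # True
```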
A square matrix with n columns has n eigenvalues and n eigenvectors.
Some properties of eigenvectors. If Lambda is the diagonal matrix of eigenvalues: the product of all eigenvalues equals the determinant of the matrix. The determinant is zero if at least one of the eigenvalues is zero; in this case the matrix is singular. The eigenvectors of symmetric matrices are orthogonal. Eigenvectors do not change when a matrix is multiplied by a scalar k; the eigenvalues are also multiplied by k. If A is triangular or diagonal, the eigenvalues of A are the diagonal entries of A.
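Each of these properties can be verified numerically; the symmetric matrix below is chosen only for illustration:

```python
import numpy as np

A = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 0.0],
              [2.0, 0.0, 5.0]])
lam, U = np.linalg.eigh(A)        # eigh: decomposition for symmetric matrices

# The product of all eigenvalues equals the determinant
print(np.allclose(np.prod(lam), np.linalg.det(A)))    # True

# Eigenvectors of a symmetric matrix are orthogonal: U^T U = I
print(np.allclose(U.T @ U, np.eye(3)))                # True

# Multiplying A by a scalar k multiplies the eigenvalues by k
k = 3.0
lam_k, U_k = np.linalg.eigh(k * A)
print(np.allclose(lam_k, k * lam))                    # True

# Triangular matrix: the eigenvalues are the diagonal entries
T = np.triu(A)
print(np.allclose(np.sort(np.linalg.eig(T)[0]), np.sort(np.diag(T))))  # True
```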
Example: the symmetric matrix M
     A  B  C  D  E
  A  1  2  3  4  5
  B  2  1  4  3  2
  C  3  4  1  3  4
  D  4  3  3  1  4
  E  5  2  4  4  1
Computing the eigenvalues of M and the matrix U of its eigenvectors shows that U^T U = I: the diagonal entries of U^T U are 1, and the off-diagonal entries are of order 10^-15, i.e. numerically zero. The largest eigenvalue is associated with the dominant eigenvector.
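The computation for this matrix, reproduced in NumPy:

```python
import numpy as np

M = np.array([[1, 2, 3, 4, 5],
              [2, 1, 4, 3, 2],
              [3, 4, 1, 3, 4],
              [4, 3, 3, 1, 4],
              [5, 2, 4, 4, 1]], dtype=float)

lam, U = np.linalg.eigh(M)               # M is symmetric, so use eigh

# The eigenvectors are orthonormal: U^T U = I (off-diagonal
# entries are only numerically nonzero, of order 1e-15)
print(np.allclose(U.T @ U, np.eye(5)))   # True

# The dominant eigenvector belongs to the largest eigenvalue
dominant = U[:, np.argmax(lam)]
print(np.allclose(M @ dominant, lam.max() * dominant))   # True
```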
A geometrical interpretation of eigenvalues
[Figure: scatter plot of X against Y with its correlation matrix, eigenvalues EV1 and EV2, and mean point (Xmean, Ymean).] The eigenvectors define the major axes of the data; the eigenvalues define the lengths of these axes.
The eigenvector ellipse
[Figure: scatter plot of X against Y centred on Xmean, with correlation matrix and eigenvalues.] The eigenvalues of a correlation (similarity) matrix are linearly linked to the coefficients of correlation.
Eigenvectors and information content. A matrix is a database that contains a certain amount of information. The left and right sides of an equation contain the same amount of information, so the eigenvectors take over the information content of the database (the matrix). The eigenvalues define how much information each eigenvector contains. The eigenvalue is a measure of correlation; the squared eigenvalue is therefore a measure of the variance explained by the associated eigenvector. The eigenvector of the largest eigenvalue is called the dominant eigenvector and contains the largest part of the information in the database.
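A sketch of this idea with synthetic data (the data and the PCA-style convention of reporting each eigenvalue's share of the trace as variance explained are illustrative, not taken from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 0.8 * x + 0.3 * rng.normal(size=200)   # two strongly correlated variables

R = np.corrcoef(x, y)                      # 2x2 correlation matrix
lam, U = np.linalg.eigh(R)

# The eigenvalues sum to the trace of R (here 2); each eigenvalue's
# share of that total is the information carried by its eigenvector.
share = lam / lam.sum()
print(np.isclose(lam.sum(), 2.0))          # True
print(share.max() > 0.5)                   # True: dominant eigenvector
```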