Determinant is a test for invertibility
- matrix is invertible if |A| ≠ 0
- matrix is singular if |A| = 0
Properties:
- exchanging two rows/columns of a matrix reverses the sign of the determinant
- the determinant equals 0 if two rows/columns are the same
- the determinant of a triangular matrix is the product of the diagonal terms
- The action of matrix A on an eigenvector x returns a vector parallel (same direction) to x: Ax = λx.
- x … eigenvectors, λ … eigenvalues
- spectrum (the set of λ), trace (sum of λ), determinant (product of λ)
- λ = 0 … singular matrix
- Find the eigendecomposition by solving the characteristic equation (leading to the characteristic polynomial): det(A − λI) = 0
- Find λ first, then find the eigenvectors by Gauss elimination, as the solution of (A − λI)x = 0.
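A quick MATLAB check of these definitions (the 2 x 2 matrix below is chosen arbitrarily for illustration):

  A = [2 1; 1 2];              % hypothetical example matrix
  [S, D] = eig(A);             % columns of S: eigenvectors, diag(D): eigenvalues
  lambda = diag(D);
  trace(A) - sum(lambda)       % trace equals the sum of the eigenvalues (~0)
  det(A) - prod(lambda)        % determinant equals their product (~0)
  norm(A*S(:,1) - lambda(1)*S(:,1))   % Ax = lambda*x for the first eigenpair (~0)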
- algebraic multiplicity of an eigenvalue: multiplicity of the corresponding root of the characteristic polynomial
- geometric multiplicity of an eigenvalue: number of linearly independent eigenvectors with that eigenvalue
- a matrix whose geometric multiplicity falls short of the algebraic one is called degenerate (defective)
Diagonalization
- A square matrix A is diagonalizable if there exists an invertible matrix P such that P⁻¹AP is a diagonal matrix.
- Diagonalization is the process of finding such a corresponding diagonal matrix.
- Diagonalization of a square matrix with independent eigenvectors means its eigendecomposition: S⁻¹AS = Λ
Which matrices are diagonalizable (i.e. eigendecomposable)?
- If all λ's are different, the matrix is diagonalizable (λ can be zero, the matrix is still diagonalizable, but it is singular).
- If some λ's are the same, I must be careful and check the eigenvectors.
- With repeated eigenvalues, I may or may not have n independent eigenvectors.
- If the eigenvectors are independent, then the matrix is diagonalizable (even with some repeated eigenvalues).
- Diagonalizability is concerned with eigenvectors (independence).
- Invertibility is concerned with eigenvalues (λ = 0).
Eigenvalues and eigenvectors of special matrices:
- diagonal matrix
  - eigenvalues are on the diagonal
  - eigenvectors are the standard basis columns (a single 1, zeros elsewhere)
- triangular matrix
  - eigenvalues are sitting along the diagonal
- symmetric matrix (n x n)
  - real eigenvalues, real eigenvectors
  - n independent eigenvectors
  - they can be chosen such that they are orthonormal (A = QΛQᵀ)
- orthogonal matrix
  - eigenvalues satisfy |λ| = 1 (they are +1 or −1 when the matrix is also symmetric)
Symmetric matrices
- Eigenvalues are real; eigenvectors corresponding to distinct eigenvalues are orthogonal (and may be chosen orthonormal).
- Symmetric matrices with all eigenvalues positive are called positive definite (with all eigenvalues nonnegative: positive semidefinite).
- A linear system of equations with a positive definite matrix can be efficiently solved using the so-called Cholesky decomposition: M = LLᵀ = RᵀR (L lower triangular, R upper triangular); a minimal sketch follows below.
- AᵀA is always symmetric positive definite, provided that all columns of A are independent.
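A minimal sketch of a Cholesky-based solve; the matrix and right-hand side are assumed for illustration (MATLAB's chol returns the upper factor R with A = RᵀR):

  A = [4 1; 1 3];      % symmetric positive definite (assumed example)
  b = [1; 2];          % assumed right-hand side
  R = chol(A);         % A = R'*R, with R upper triangular
  w = R' \ b;          % forward substitution: R'*w = b
  x = R \ w;           % backward substitution: R*x = w
  norm(A*x - b)        % residual, ~0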
Singular Value Decomposition
- based on excellent video lectures by Gilbert Strang, MIT (Lecture 29)
SVD demonstration
- We have perpendicular vectors x and y; find such x and y that Ax and Ay are also perpendicular.
- If found, let's label x as v₁ and y as v₂, and Ax as u₁ and Ay as u₂.
- If v₁ and v₂ are orthonormal, u₁ and u₂ are orthogonal.
- But they can be normalized to unit length; the factored-out lengths are the numbers σ₁ and σ₂.
- So matrix A acts on vector v₁ and the vector σ₁u₁ is obtained → Av₁ = σ₁u₁. Similarly Av₂ = σ₂u₂.
- Generally, this can be written in matrix form as AV = UΣ.
- But realize that the columns of V and U are orthonormal: U and V are orthogonal matrices!
- AV = UΣ → A = UΣV⁻¹ → A = UΣVᵀ
- The numbers σ are called singular values, and A = UΣVᵀ is called the singular value decomposition (SVD).
- Factors: orthogonal matrix, diagonal matrix, orthogonal matrix – all good matrices!
  - Σ – singular values (diagonal)
  - columns of U – left singular vectors (orthogonal)
  - columns of V – right singular vectors (orthogonal)
- Any matrix has an SVD!
  - that's why we actually need two orthogonal matrices
  - columns of V means rows of Vᵀ
Dimensions in SVD
- A: m x n
- U: m x m
- V: n x n
- Σ: m x n
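These dimensions can be verified directly in MATLAB on an arbitrary rectangular matrix:

  A = rand(5, 3);      % example sizes m = 5, n = 3
  [U, S, V] = svd(A);
  size(U)              % 5 x 5
  size(S)              % 5 x 3
  size(V)              % 3 x 3
  norm(A - U*S*V')     % ~0: A = U*Sigma*V'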
- Consider the case of a square full-rank matrix A (m x m). Look again at Av₁ = σ₁u₁.
- σ₁u₁ is a linear combination of columns of A. Where does σ₁u₁ lie? In C(A).
- And where does v₁ lie? In C(Aᵀ).
- In SVD I'm looking for an orthogonal basis in the row space of A that gets knocked over into an orthogonal basis in the column space of A.
- For a general A: m x n, how many components does v₁ have? n components.
- Can I construct an orthogonal basis of C(Aᵀ)? Of course I can, Gram-Schmidt does this.
- However, when I take such a basis and multiply it by A, there is no reason why the result should also be orthogonal.
- I am looking for the special setup where A turns orthogonal basis vectors from the row space into an orthogonal basis in the column space.
- In case A is symmetric positive definite, the SVD becomes A = QΛQᵀ.
- SVD of a positive definite matrix directly reveals its eigendecomposition.
[figure: basis vectors in the row space of A are mapped to basis vectors in the column space of A, scaled by the multiplying factors σ]
SVD of a 2 x 2 nonsingular matrix
- U, Σ, V are all 2 x 2.
- If A is real, U and V are also real.
- The singular values σ are always real and nonnegative, and they are conventionally arranged in descending order on the main diagonal of Σ.
SVD of a 2 x 2 singular matrix
- SVD is a rank-revealing factorization: rank(A) is the number of nonzero singular values.
- Due to experimental errors, matrices are often close to singular (rank deficient).
- SVD helps in this situation by revealing the intrinsic dimension; singular values close to zero can then be set equal to zero.
- rank deficient – also called rank defective
In this example a 4 x 4 matrix is generated, with the 4th column almost equal to c1 − c2. Then the 4th singular value is zeroed and the matrix is reconstructed.

MATLAB code (the column entries and the perturbation size are elided on the original slide; the values below are illustrative assumptions):

  c1 = [1 2 3 4]';          % assumed values; the slide's entries are not shown
  c2 = [2 0 1 3]';          % assumed values
  c3 = [4 1 0 2]';          % assumed values
  c4 = c1 - c2 + 1e-10*(rand(4,1));   % almost equal to c1 - c2 (assumed perturbation size)
  A = [c1 c2 c3 c4];
  format long
  [U,S,V] = svd(A)
  U1 = U; S1 = S; V1 = V;
  S1(4,4) = 0.0;            % zero the smallest singular value
  B = U1*S1*V1';            % reconstruct the matrix
  A - B
  norm(A - B)
  help norm

- Matrix norm: the extension of vector norms to matrices. It is a nonnegative number ||A|| with the properties: ||A|| >= 0, ||kA|| = |k|·||A|| for scalar k, ||A + B|| <= ||A|| + ||B||.
- The analysis of matrix-based algorithms often requires the use of matrix norms. These algorithms need a way to quantify the "size" of a matrix or the "distance" between two matrices.
m x n, m > n (overdetermined system) with all columns independent

  A = [c1 c2 c3]
  [U,S,V] = svd(A)
  U(:,1:3)*S(1:3,1:3)*V'    % reconstructs A from the first n columns of U

- Of course, if we also had some dependent columns, then the last singular values on the diagonal would be zero.
- The extra columns of U complete an orthonormal basis for the whole space Rm, i.e. these vectors come from the left null space, which is orthogonal to the column space; Σ is completed by adding zero rows.
- For systems with m ≥ n, there exists a version of SVD called the thin or economy SVD. The zero block of Σ is removed, as well as the corresponding columns of U (see the sketch below).
- Dimensions of thin SVD:
  - A: m x n
  - U: m x n
  - V: n x n
  - Σ: n x n
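A small MATLAB illustration of the economy SVD (example sizes assumed):

  A = rand(5, 3);                % m > n
  [U, S, V] = svd(A, 'econ');    % thin/economy SVD
  size(U)                        % 5 x 3: columns matching the zero block are gone
  size(S)                        % 3 x 3
  norm(A - U*S*V')               % still ~0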
m x n, n > m (underdetermined system) with all rows independent

  A = [c1(1:3) c2(1:3) c3(1:3) [ ]']   % fourth column elided on the original slide
  [U,S,V] = svd(A)
  U*S(1:3,1:3)*V(:,1:3)'               % reconstructs A from the first m columns of V

- The extra columns of V complete an orthonormal basis for the whole space Rn, i.e. these vectors come from the null space, which is orthogonal to the row space; Σ is completed by adding zero columns.
SVD chooses the right basis for the 4 subspaces: AV = UΣ
- v₁…vᵣ: orthonormal basis in Rn for C(Aᵀ)
- vᵣ₊₁…vₙ: orthonormal basis in Rn for N(A)
- u₁…uᵣ: orthonormal basis in Rm for C(A)
- uᵣ₊₁…uₘ: orthonormal basis in Rm for N(Aᵀ)
Uniqueness of SVD
- Non-equal singular values (non-degenerate):
  - U, V are unique except for the signs in the columns
  - once you decide the signs for U (V), the signs for V (U) are given so that A = UΣVᵀ is guaranteed
- Equal non-zero singular values (degenerate):
  - not unique
  - if u₁ and u₂ correspond to the degenerate σ, then any normalized linear combination of them is also a left singular vector corresponding to σ
  - the same is true for the right singular vectors
Uniqueness of SVD – zero singular values
- The corresponding columns of U and V are added.
- They form a basis of N(Aᵀ) or N(A).
- They are orthonormal to each other and to the rest of the left/right singular vectors.
- And they are not unique.
How do we find the SVD? A = UΣVᵀ
- I've got two orthogonal matrices and I don't want to find them both at once.
- I need some expression in which U disappears.
- Let's form AᵀA (a nice positive (semi)definite symmetric matrix). What is the result?
  - AᵀA = VΣᵀUᵀUΣVᵀ = VΣ²Vᵀ
- This is actually a diagonalization of the positive definite matrix AᵀA = QΛQᵀ.
- V is found by diagonalizing AᵀA.
How to find U?
- Look at the diagonalization of AAᵀ, so that V disappears.
- Or compute u₁ = (1/σ₁)Av₁.
- So the u's are eigenvectors of AAᵀ, and the v's are eigenvectors of AᵀA.
- The σ's are the square roots of the eigenvalues; the eigenvalues are the same for AᵀA and AAᵀ.
- They are unique.
- Their order is arbitrary; however, they are conventionally sorted in decreasing order (u, v must be reordered accordingly).
Example
- What is its rank? Two → invertible, nonsingular.
- I'm going to look for two vectors v₁ and v₂ in the row space, which of course is what? R2.
- And I'm going to look for u₁, u₂ in the column space, which is also R2, and for numbers σ₁, σ₂ so that it all comes out right.
SVD example
- 1st step – form AᵀA.
- 2nd step – find its eigenvectors (they'll be the v's) and eigenvalues (the squares of the σ's).
- The eigenpairs can be guessed just by looking. What are the eigenvectors and their corresponding eigenvalues?
  - [1 1]ᵀ, λ₁ = 32
  - [−1 1]ᵀ, λ₂ = 18
- Actually, I made one mistake, do you see which one? I didn't normalize the eigenvectors; they should be [1/√2, 1/√2]ᵀ …
- 3rd step – form AAᵀ and find the u's. What are the eigenvectors? [1 0]ᵀ, [0 1]ᵀ.
- AAᵀ is diagonal here, but just by accident.
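The slide does not reproduce the matrix itself, but the stated eigenpairs (λ = 32 and 18, with eigenvectors [1 1]ᵀ and [−1 1]ᵀ of AᵀA) match Strang's lecture example A = [4 4; −3 3]; assuming that matrix, the steps can be verified in MATLAB:

  A = [4 4; -3 3];     % assumed: consistent with the eigenvalues 32 and 18
  A'*A                 % = [25 7; 7 25]
  [V, D] = eig(A'*A)   % eigenvalues 18 and 32 (eig sorts them in ascending order)
  [U, S, W] = svd(A)   % singular values sqrt(32) and sqrt(18)
  sqrt(sort(diag(D),'descend')) - diag(S)   % ~0, up to ordering and signs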
SVD expansion
- Do you remember the column-times-row picture of matrix multiplication?
- If you look at the multiplication UΣVᵀ as a column-times-row problem, you get the SVD expansion: A = σ₁u₁v₁ᵀ + σ₂u₂v₂ᵀ + … + σᵣuᵣvᵣᵀ
- This is a sum of rank-one matrices. Each uᵢvᵢᵀ is called a mode.
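A short sketch of the expansion, accumulating the rank-one modes (example matrix assumed):

  A = rand(4, 3);
  [U, S, V] = svd(A);
  Ar = zeros(size(A));
  for i = 1:rank(A)
      Ar = Ar + S(i,i)*U(:,i)*V(:,i)';   % add the i-th mode sigma_i * u_i * v_i'
  end
  norm(A - Ar)                           % ~0 once all modes are summed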
- This expansion is used in the data compression application of SVD.
- We investigate the spectrum of singular values, and based on it we can decide when to stop the expansion.
- This actually means that we remove singular values from the end of Σ, and that we remove the corresponding columns from U and V.
SVD image compression

  load clown
  colormap('gray')
  image(X)
  [U,S,V] = svd(X);
  plot(diag(S));
  size(X,1)*size(X,2)           % storage of the full matrix: m*n words
  p = 1;
  image(U(:,1:p)*S(1:p,1:p)*V(:,1:p)');
  size(U,1)*p + size(V',1)*p    % storage of the rank-p approximation: (m+n)*p words

- To reconstruct Ap, we need to store only (m+n)·p words, compared to m·n needed for storage of the full matrix A.
- Please note that if you put p = m (for m < n), then we need m² + m·n words, which is more than m·n. So there is apparently some border at which storing the SVD matrices increases the storage needs!
- example from Demmel
Numerical Linear Algebra – Matrix Decompositions
- based on excellent video lectures by Gilbert Strang, MIT (Lecture 29)
Conditioning and stability
- Two fundamental issues in numerical analysis. Both tell us something about the behavior after the introduction of a small error.
- Conditioning – perturbation behavior of a mathematical problem.
- Stability – perturbation behavior of an algorithm.
Conditioning
- The system's behavior; it has nothing to do with numerical errors (round-off).
- Example: Ax = b
  - well-conditioned – a small change in A or b results in a small change in the solution x
  - ill-conditioned – a small change in A or b results in a large change in the solution x
- Conditioning is quantified by the condition number; by coupling it with the machine epsilon (precision) we can then quantify how many significant digits we can trust in the solution.
[figure: a well-conditioned system under a small change in the result b, and an ill-conditioned system under a small change in the coefficient matrix A]
Stability
- The algorithm's behavior; involves rounding errors etc.
- Example: our computer has only two digits of precision (i.e. only two significant digits can be stored).
- We have an array of 100 numbers [1.00, 0.01, 0.01, 0.01, …] and we want to sum them.
- The mathematically exact solution is 1.99.
- So we take the first number (1.00), then add another (1.01), then another … (1.10), then another (1.11; however, this can't be stored in two-digit precision, so the computer rounds down to 1.10), …, so the final result is 1.10 (compare to the correct result of 1.99) → unstable.
- Or we can sort the array first in ascending order [0.01, 0.01, …, 1.00] and then sum: 0.01, 0.02, 0.03, … 0.99, 1.99; and 1.99 gets rounded to 2.00 (compare to the correct result of 1.99) → stable.
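A runnable sketch of the same experiment, simulating two-significant-digit arithmetic with a rounding helper (the rounded intermediate values differ slightly from the slide's walk, but the conclusion is identical):

  fl2 = @(x) round(x, 2, 'significant');   % simulated two-digit arithmetic
  a = [1.00, repmat(0.01, 1, 99)];         % exact sum is 1.99
  s1 = 0; for x = a,       s1 = fl2(s1 + x); end   % unsorted: additions round away, unstable
  s2 = 0; for x = sort(a), s2 = fl2(s2 + x); end   % sorted ascending: stable
  [s1, s2]                                 % s1 stays far from 1.99; s2 rounds to 2.0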
Matrix factorizations/decompositions
- Factorization – decomposition of an object into a product of other objects (factors) which, when multiplied together, give the original.
- Matrix factorization – a product of some nicer/simpler matrices that retain particular properties of the original matrix A.
- Nicer matrices: orthogonal, diagonal, symmetric, triangular.
System of linear equations Ax = b
- It is solved by Gaussian elimination.
- Gaussian elimination leads to the LU factorization: U is upper triangular, L is lower triangular with units on the diagonal.
- Ax = b … LUx = b; we can say Ux = y, and solve Ly = b and then Ux = y. Why is this advantageous?
- Ly = b is solved by forward substitution; similarly, Ux = y is solved by backward substitution.
- LU factorization is Gauss elimination, so when should I do LU factorization?
- If I have more b's but A does not change: I do LU just once, and then repeatedly change just the right-hand side b (see the sketch below).
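A minimal sketch of reusing one LU factorization for several right-hand sides (matrix and b's assumed for illustration; MATLAB's lu returns P*A = L*U with partial pivoting):

  A = [4 3; 6 3];          % assumed example matrix
  [L, U, P] = lu(A);       % factor once: P*A = L*U
  B = [1 0; 2 1];          % several right-hand sides as columns
  for k = 1:size(B, 2)
      y = L \ (P*B(:,k));  % forward substitution
      x = U \ y;           % backward substitution
      norm(A*x - B(:,k))   % residual ~0 for each b
  end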
- LU factorization is used to calculate the matrix determinant: A = LU, det(A) = det(LU) = det(L)·det(U) = (product of diagonal elements of L, i.e. one) × (product of diagonal elements of U).
Use of QR factorization for Ax = b
- A = QR, QRx = b → Rx = Qᵀb
  - compute A = QR
  - compute y = Qᵀb
  - solve Rx = y by back-substitution
- However, the standard way to solve Ax = b is Gaussian elimination (LU); it requires half as many operations as QR.
- LU factorization can also be used for matrix inversion (although the standard way is Gauss-Jordan): L and U are easier to invert.
Least squares algorithms: AᵀAx = Aᵀb
Least squares via normal equations:
- Cholesky factorization AᵀA = RᵀR leads to RᵀRx = Aᵀb
  - form AᵀA and Aᵀb
  - compute the Cholesky factorization AᵀA = RᵀR
  - solve the lower triangular system Rᵀw = Aᵀb for w
  - solve the upper triangular system Rx = w for x
Least squares with QR factorization: AᵀAx = Aᵀb
- QR factorization of A leads to (QR)ᵀQRx = (QR)ᵀb → Rx = Qᵀb
  - compute A = QR
  - compute the vector Qᵀb
  - solve the upper triangular system Rx = Qᵀb for x
- This is the method of choice for least squares (see the sketch below).
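A sketch of both least-squares routes on assumed random data, to see that they agree:

  A = rand(6, 3); b = rand(6, 1);   % assumed overdetermined system
  R1 = chol(A'*A);                  % normal equations route: A'*A = R1'*R1
  x1 = R1 \ (R1' \ (A'*b));
  [Q, R] = qr(A, 0);                % QR route (economy-size QR)
  x2 = R \ (Q'*b);
  norm(x1 - x2)                     % the two solutions agree (~0)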
Eigenvalue algorithms
- Eigenvalue decomposition A = SΛS⁻¹, alternatively AS = SΛ.
- This leads to the characteristic equation, a polynomial: det(λI − A) = 0.
- However, finding roots of a polynomial is unstable; it can be done for 2 x 2 or 3 x 3 matrices, but not for 50 x 50.
- Thus, this decomposition is usually not adopted for eigenproblems.
Eigenproblem via the iterative QR algorithm
- Shown without proof (note: the QR algorithm is not the same thing as the QR decomposition, though the decomposition forms the basis of the algorithm):
  - A(0) = A
  - for k = 1, 2, …
    - Q(k)R(k) = A(k-1)   … QR factorization of A(k-1)
    - A(k) = R(k)Q(k)     … recombine the factors in reverse order
- Other iterative algorithms exist for special purposes (get the first 30 eigenvalues, get the eigenvalues from a given interval, non-symmetric matrices).
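A bare-bones sketch of the iteration (fixed iteration count, no shifts or deflation; a symmetric example matrix is assumed so that the iteration converges cleanly):

  A = [2 1 0; 1 3 1; 0 1 4];   % assumed symmetric example
  Ak = A;
  for k = 1:50                 % fixed number of sweeps for simplicity
      [Q, R] = qr(Ak);         % QR factorization of A(k-1)
      Ak = R*Q;                % recombine the factors in reverse order
  end
  diag(Ak)'                    % approaches the eigenvalues of A
  eig(A)'                      % compare with MATLAB's eig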