
1 BMI II SS06 – Class 3 “Linear Algebra” Slide 1 Biomedical Imaging II Class 3 – Mathematical Preliminaries: Elementary Linear Algebra 2/13/06

2 BMI II SS06 – Class 3 “Linear Algebra” Slide 2 System of linear equations:

3 BMI II SS06 – Class 3 “Linear Algebra” Slide 3 [figure: a vector with components 3 and 1 plotted in the x-y plane] Vector concepts I

4 BMI II SS06 – Class 3 “Linear Algebra” Slide 4 [figure: a vector with components 3, 2, and 1 plotted in x-y-z space] Vector concepts II

5 BMI II SS06 – Class 3 “Linear Algebra” Slide 5 Vector concepts III

6 BMI II SS06 – Class 3 “Linear Algebra” Slide 6 [figure: vectors v1 and v2 in the x-y plane, separated by the angle θ] Vector dot products I

7 BMI II SS06 – Class 3 “Linear Algebra” Slide 7 [figure: perpendicular vectors v1 and v2 in the x-y plane] cos 90° = 0, so v1^T v2 = 0: v1 and v2 are orthogonal, v1 ⊥ v2. If u1 and u2 are also normalized (each is a unit vector) and u1^T u2 = 0, then u1 and u2 are orthonormal. Vector dot products II
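
A quick numerical check of these ideas in Matlab (a minimal sketch; the vectors are invented for illustration):
v1 = [3; 1]; v2 = [-1; 3];                  % v1'*v2 = (3)(-1) + (1)(3) = 0
costheta = (v1'*v2) / (norm(v1)*norm(v2))   % 0, so theta = 90 degrees: v1 and v2 are orthogonal
u1 = v1/norm(v1); u2 = v2/norm(v2);         % normalize each to unit length
u1'*u2                                      % still 0: u1 and u2 are orthonormal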

8 BMI II SS06 – Class 3 “Linear Algebra” Slide 8 [figure: v1, its projection onto v2, and the angle θ between them; the length of a vector v is its norm, ||v||] Projections and components
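
The projection of v1 onto v2, and the scalar component of v1 along v2, follow directly from dot products (a Matlab sketch with made-up values):
v1 = [2; 3]; v2 = [4; 0];
p = ((v1'*v2) / (v2'*v2)) * v2   % projection of v1 onto v2: [2; 0]
c = (v1'*v2) / norm(v2)          % component of v1 along v2: 2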

9 BMI II SS06 – Class 3 “Linear Algebra” Slide 9 A + B = C, where c_ij = a_ij + b_ij; A − B = D, where d_ij = a_ij − b_ij. Matrix sums and differences

10 BMI II SS06 – Class 3 “Linear Algebra” Slide 10 [figure: three worked examples of matrix multiplication, with the matrices labeled A, B, C; D, E, F; and G, H, J] Matrix products I

11 BMI II SS06 – Class 3 “Linear Algebra” Slide 11 Each element of a matrix product is a row-by-column dot product, e.g., (2)(6) + (-1)(0) + (0)(8) + (7)(1) = 12 + 0 + 0 + 7 = 19. Matrix products II
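
In Matlab, this element is the product of a 1×4 row and a 4×1 column (the numbers are taken from the slide's example):
row = [2 -1 0 7];     % a row of the left-hand matrix
col = [6; 0; 8; 1];   % a column of the right-hand matrix
c = row * col         % (2)(6) + (-1)(0) + (0)(8) + (7)(1) = 19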

12 BMI II SS06 – Class 3 “Linear Algebra” Slide 12 Matrix multiplication is NOT commutative. Case 1: AB = C, BA does not exist. Matrix products III

13 BMI II SS06 – Class 3 “Linear Algebra” Slide 13 Matrix multiplication is NOT commutative. Case 2: AB = C, BA = D; C and D have different dimensions. Matrix products IV

14 BMI II SS06 – Class 3 “Linear Algebra” Slide 14 Matrix multiplication is NOT commutative. Case 3: AB = C, BA = D; A, B, C and D are all N×N, but C ≠ D. Matrix products V
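
A short Matlab demonstration of Case 3 (the matrices are arbitrary choices):
A = [1 2; 3 4]; B = [0 1; 1 0];
C = A*B               % [2 1; 4 3]
D = B*A               % [3 4; 1 2]
isequal(C, D)         % false: AB and BA are both 2×2, but unequal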

15 BMI II SS06 – Class 3 “Linear Algebra” Slide 15 However, matrix multiplication is associative: A(BC) = (AB)C. And matrix multiplication is distributive over addition: A(B + C) = AB + AC, (B + C)A = BA + CA, (s1 + s2)A = s1 A + s2 A. Matrix products VI

16 BMI II SS06 – Class 3 “Linear Algebra” Slide 16 Scalar multiplication of matrices When using Matlab, we perform matrix multiplication with statements such as >> C = A*B; Matlab also lets us perform term-by-term multiplication of the elements in two matrices: >> C = A.*B; that is, A .* B = C, where c_ij = a_ij b_ij. The latter is a very useful and convenient tool to have, but it is NOT a linear algebraic operation. (Look up the Hadamard product of two matrices.)

17 BMI II SS06 – Class 3 “Linear Algebra” Slide 17 Scalar division of matrices Matlab lets us perform term-by-term division of the elements in two matrices: >> C = A./B; that is, A ./ B = C, where c_ij = a_ij / b_ij. This is a very useful and convenient tool to have, but it is NOT a linear algebraic operation. Division is an undefined operation in linear algebra! However…

18 BMI II SS06 – Class 3 “Linear Algebra” Slide 18 Identity matrix I: ones along the main diagonal, zeros everywhere else. Given any square matrix M, IM = MI = M. Matrix inverse: B = A^-1 if and only if AB = I and BA = I. Only square matrices can have inverses, and many square matrices don't have them. Inverse of a matrix
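
A quick Matlab check with an arbitrary invertible matrix:
A = [2 1; 1 1];
B = inv(A);           % [1 -1; -1 2]
A*B                   % eye(2), the 2×2 identity, up to rounding error
B*A                   % also eye(2), so B really is the inverse of A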

19 BMI II SS06 – Class 3 “Linear Algebra” Slide 19 Multiplication of inverses: if C = AB and A^-1 and B^-1 both exist, then C^-1 = B^-1 A^-1. (Sometimes A^-1 and B^-1 don't exist, but C^-1 still does; this can happen only when A and B are not square.) CC^-1 = (AB)(B^-1 A^-1) = A(BB^-1)A^-1 = AIA^-1 = AA^-1 = I; C^-1 C = (B^-1 A^-1)(AB) = B^-1 (A^-1 A)B = B^-1 IB = B^-1 B = I. An analogous rule holds for matrix transposes: if C = AB, then C^T = B^T A^T. Products of matrix inverses or transposes

20 BMI II SS06 – Class 3 “Linear Algebra” Slide 20 A square matrix that is equal to its own transpose, A = A^T, is a symmetric matrix. Another way of saying the same thing: a square matrix for which a_ij = a_ji for all i, j. Each half of a symmetric matrix is the reflection of the other across the main diagonal (the elements below the diagonal mirror those above it), and the i-th row is equal to the transpose of the i-th column. Note that the product C = AB of two symmetric matrices A and B is itself symmetric if and only if A and B commute: if C = C^T, then AB = (AB)^T = B^T A^T = BA, and running the same chain of equalities in reverse gives the converse. Symmetric matrices

21 BMI II SS06 – Class 3 “Linear Algebra” Slide 21 Formally, if Ax = b and A^-1 exists, then A^-1 Ax = A^-1 b, so x = Ix = (A^-1 A)x = A^-1 b. In practice, we don't actually compute solutions to linear systems this way. Formal solution, system of linear equations
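
In Matlab, the idiomatic way to solve Ax = b is the backslash operator, which uses elimination rather than forming A^-1 explicitly (an illustrative 2×2 system):
A = [2 1; 1 3]; b = [5; 10];
x = A \ b             % preferred: solves Ax = b by Gaussian elimination; here x = [1; 3]
x2 = inv(A) * b       % same answer, but slower and less accurate in general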

22 BMI II SS06 – Class 3 “Linear Algebra” Slide 22 Given: a set of N vectors v1, v2, …, vN and N scalar constants s1, s2, …, sN. The vector u = s1 v1 + s2 v2 + … + sN vN is a linear combination of v1, v2, …, vN. Given: a set of N vectors v1, v2, …, vN. If none of the vectors in the set can be expressed as a linear combination of the others, then v1, v2, …, vN are linearly independent. If any of v1, v2, …, vN is equal to a linear combination of the others, then the vectors are linearly dependent. The set of all possible linear combinations of v1, v2, …, vN is called the span of these vectors. If they are linearly independent, the vectors v1, v2, …, vN are called a basis of their span. Linear algebraic definitions I

23 BMI II SS06 – Class 3 “Linear Algebra” Slide 23 If v1, v2, …, vN are M-dimensional vectors (i.e., M×1 matrices), then v1, v2, …, vN cannot be linearly independent if N > M. If N ≤ M, then v1, v2, …, vN may be linearly independent. [figures: examples in the x-y plane and in x-y-z space] Linear independence

24 BMI II SS06 – Class 3 “Linear Algebra” Slide 24 A set of N linearly independent M-dimensional vectors spans an N-dimensional vector space. [figure: the vectors [1 -1 0]^T, [1 1 -2]^T, and [1 1 1]^T drawn in x-y-z space] The vectors [1 -1 0]^T and [1 1 -2]^T span a two-dimensional subspace of ℝ^3. They are a basis for the subspace consisting of all linear combinations s1 [1 -1 0]^T + s2 [1 1 -2]^T. The one-dimensional subspace s1 [1 1 1]^T is the orthogonal complement of the preceding 2-D subspace. Linear algebraic definitions II
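
A quick Matlab check of the claims on this slide (the vectors are taken from it):
v1 = [1; -1; 0]; v2 = [1; 1; -2]; v3 = [1; 1; 1];
rank([v1 v2])          % 2: v1 and v2 span a 2-D subspace of R^3
[v3'*v1, v3'*v2]       % [0 0]: v3 is orthogonal to both, so it spans the orthogonal complement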

25 BMI II SS06 – Class 3 “Linear Algebra” Slide 25 Let A be an N×N square matrix. The number of linearly independent rows is the rank of A. If all N rows are linearly independent, A is of full or maximum rank. The number of linearly independent rows is equal to the number of linearly independent columns. If rank(A) < N, A is singular. If rank(A) = N, A is non-singular. Let B be an M×N rectangular matrix. The maximum possible rank of B is the number of rows or of columns, whichever is smaller; the rank itself is the number of linearly independent rows (equivalently, columns). Linear algebraic definitions III
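
Matlab's rank function makes these definitions concrete (the example matrices are mine):
rank([1 2; 3 4])       % 2: full rank, non-singular
rank([1 2; 2 4])       % 1: the second row is twice the first, so the matrix is singular
rank([1 0 1; 0 1 1])   % 2: a 2×3 matrix can have rank at most min(2, 3) = 2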

26 BMI II SS06 – Class 3 “Linear Algebra” Slide 26 System of linear equations: What is a Solution? “Row picture”: each equation corresponds to an (N − 1)-dimensional “plane” in N-dimensional space (ℝ^N). The Solution, if it exists, is the point at which all M planes intersect. “Column picture”: the Solution, if it exists, is that linear combination of the columns of A which is equal to b.

27 BMI II SS06 – Class 3 “Linear Algebra” Slide 27 When is there a Solution? A (M×N) x (N×1) = b (M×1) If M = N, and A is of full rank (i.e., A is non-singular, rows/columns of A span ℝ^N), then: 1) A^-1 exists. 2) Ax = b has a unique Solution. 3) Ax = b is a fully or completely determined system. If M < N, and A is of full rank, then: 1) Ax = b has infinitely many Solutions. 2) A has infinitely many N×M right inverses, i.e., matrices A_R such that A A_R = I (M×M). Every Solution corresponds to some one of these: x = A_R b. 3) Of these right inverses, there is one, known as the generalized inverse or pseudoinverse A^+, that in some particular sense gives the “best” Solution, x = A^+ b. 4) Ax = b is an underdetermined system.
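
A minimal Matlab illustration of the underdetermined case (the system is invented for the sketch):
A = [1 1 1; 1 2 3];    % M = 2 equations, N = 3 unknowns, full rank
b = [6; 10];
x = pinv(A) * b        % the minimum-norm Solution, [3; 2; 1]
A * x                  % reproduces b, so x really is a Solution
A * pinv(A)            % eye(2): here pinv(A) acts as a right inverse of A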

28 BMI II SS06 – Class 3 “Linear Algebra” Slide 28 When is there a Solution? A (M×N) x (N×1) = b (M×1) If M > N, and A is of full rank, then: 1) Ax = b sometimes has a Solution, but most often has no Solution. 2) A has infinitely many N×M left inverses, i.e., matrices A_L such that A_L A = I (N×N). Every vector x = A_L b is a “solution.” 3) Of these left inverses, there is one, known as the generalized inverse or pseudoinverse A^+, that in some particular sense gives the “best” “solution,” x = A^+ b. 4) Ax = b is an overdetermined system. If A is of less than full rank, then: 1) Ax = b generally has no Solution. 2) A has a pseudoinverse, A^+, that in some particular sense gives the “best” “solution,” x = A^+ b. 3) Neither product AA^+ nor A^+A is equal to an identity matrix. That is, A has neither a left nor a right inverse.

29 BMI II SS06 – Class 3 “Linear Algebra” Slide 29 Gaussian elimination I [worked example; the elimination multipliers are 2÷1 = 2 and 4÷1 = 4] Back-substitution: 3w = -12 ⇒ w = -4; 2v + 4w = 2v + 4(-4) = 2v - 16 = -11 ⇒ 2v = 5 ⇒ v = 5/2; u + v + w = u + 5/2 - 4 = u - 3/2 = 5 ⇒ u = 13/2.

30 BMI II SS06 – Class 3 “Linear Algebra” Slide 30 Always check Solution!

31 BMI II SS06 – Class 3 “Linear Algebra” Slide 31 Gaussian elimination can be used, if one is so inclined, to find the inverse of a non-singular square matrix. Gaussian elimination II
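
One way to do this in Matlab is Gauss-Jordan elimination on the augmented matrix [A I] via rref (the matrix is an arbitrary example):
A = [2 1; 1 1];
R = rref([A eye(2)]);  % row-reduce [A I] until the left block becomes I
Ainv = R(:, 3:4)       % the right block is then inv(A): [1 -1; -1 2]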

32 BMI II SS06 – Class 3 “Linear Algebra” Slide 32 What happens if we try to use Gaussian elimination to solve Ax = b, but A is singular? After first round of elimination: After second round of elimination: There is no Solution! These two equations are inconsistent. Gaussian elimination III

33 BMI II SS06 – Class 3 “Linear Algebra” Slide 33 What if Ax = b is of maximal rank, but underdetermined? This is as far as we can go! 2v + 4w = -11 ⇒ v = -2w - 11/2; u + v + w = u + (-2w - 11/2) + w = u - w - 11/2 = 5 ⇒ u = w + 21/2. Both remaining variables are defined in terms of w, the free variable. Gaussian elimination IV

34 BMI II SS06 – Class 3 “Linear Algebra” Slide 34 [figure: the line of Solutions plotted in u-v-w space] s = 0: [u v w]^T = [10.5 -5.5 0]^T; s = -1: [u v w]^T = [9.5 -3.5 -1]^T; s = -2: [u v w]^T = [8.5 -1.5 -2]^T. The point on this line closest to the origin is the minimum norm, or minimum power, or pseudoinverse, solution. Gaussian elimination V

35 BMI II SS06 – Class 3 “Linear Algebra” Slide 35 Formally (treating A as if it were invertible): x = A^-1 b; (A^T)^-1 x = (A^T)^-1 A^-1 b = (AA^T)^-1 b; A^T (A^T)^-1 x = x = A^T (AA^T)^-1 b. This motivates the definition of the pseudoinverse of a full-rank underdetermined matrix: A^+ ≡ A^T (AA^T)^-1. Gaussian elimination VI
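
We can verify numerically that this formula agrees with Matlab's pinv for a full-rank underdetermined A (the same toy system as in the earlier sketch):
A = [1 1 1; 1 2 3]; b = [6; 10];
x = A' * ((A*A') \ b)  % x = A^T (A A^T)^-1 b, computed without an explicit inverse
pinv(A) * b            % the same vector, [3; 2; 1]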

36 BMI II SS06 – Class 3 “Linear Algebra” Slide 36 [The pseudoinverse solution of the preceding example] corresponds to s = -43/12: minimizing ||x(s)||² = (s + 10.5)² + (-2s - 5.5)² + s² gives 12s + 43 = 0, i.e., s = -43/12. Gaussian elimination VII

37 BMI II SS06 – Class 3 “Linear Algebra” Slide 37 What if Ax = b is of full rank, but overdetermined? [figure: fitting a straight line y = ax + b to more than two data points] Gaussian elimination VIII

38 BMI II SS06 – Class 3 “Linear Algebra” Slide 38 What if Ax = b is of full rank, but overdetermined? Gaussian elimination VIII

39 BMI II SS06 – Class 3 “Linear Algebra” Slide 39 What if Ax = b is of full rank, but overdetermined? Ax = b; A (M×N), M > N. Let Y be any N×M matrix such that the product YA (N×N) is non-singular (i.e., invertible). Then: YAx = Yb; (YA)^-1 YAx = x = (YA)^-1 Yb. Thus (YA)^-1 Y is a left inverse of A. Notice that the x so obtained does not solve the original system. (How could it? Ax = b does not have a Solution!) Is there a particular choice of Y that gives us a “solution” that is in some sense the best? Gaussian elimination IX

40 BMI II SS06 – Class 3 “Linear Algebra” Slide 40 “solution” to an overdetermined system The optimal choice for Y turns out to be Y = A^T: A^+ = (A^T A)^-1 A^T, and x = A^+ b = (A^T A)^-1 A^T b. In what sense is this “solution” optimal, or best? [figure: b, the column space of A (all combinations s1 a1 + s2 a2 + …), and Ax^+, the projection of b onto that column space] Gaussian elimination X
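
In Matlab, the least-squares “solution” can be computed in several equivalent ways (the data are invented for the sketch):
A = [1 1; 1 2; 1 3; 1 4]; b = [1.1; 1.9; 3.2; 3.9];   % overdetermined: 4 equations, 2 unknowns
x = (A'*A) \ (A'*b)    % normal equations: x = (A^T A)^-1 A^T b
x2 = A \ b             % backslash solves the same least-squares problem
x3 = pinv(A) * b       % the pseudoinverse gives the same answer for full-rank A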

41 BMI II SS06 – Class 3 “Linear Algebra” Slide 41 Overdetermined system example I

42 BMI II SS06 – Class 3 “Linear Algebra” Slide 42 Instead of explicitly computing the pseudoinverse, it is more efficient to use Gaussian elimination to solve the 4×4 system of normal equations, A^T Ax = A^T b. Overdetermined system example II

43 BMI II SS06 – Class 3 “Linear Algebra” Slide 43 Back-substitution: (7/2)x4 = 7 ⇒ x4 = 2; 6x3 + x4 = 6x3 + 2 = 26 ⇒ x3 = 4; 5x2 + x3 + x4 = 5x2 + 4 + 2 = 5x2 + 6 = 21 ⇒ x2 = 3; 4x1 + x2 + x3 + x4 = 4x1 + 3 + 4 + 2 = 4x1 + 9 = 13 ⇒ x1 = 1. Overdetermined system example III

44 BMI II SS06 – Class 3 “Linear Algebra” Slide 44 Some systems have a Solution, but… The second and third systems are ill-conditioned. (The first is ill-posed, but we'll hold off discussion of that concept for another time.) A small change in A yields a large change in x. The conditioning of a system is intimately related to the angles between the rows/columns of A. The smaller the angle between any two rows or columns, the more ill-conditioned the system is. The closer all rows/columns are to being orthogonal, the more well-conditioned.
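
Matlab's cond function quantifies this; compare a matrix with orthogonal rows to one whose rows are nearly parallel (both matrices are illustrative):
cond([1 0; 0 1])       % 1: orthogonal rows, perfectly conditioned
cond([1 1; 1 1.0001])  % about 4e4: nearly parallel rows, badly conditioned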

45 BMI II SS06 – Class 3 “Linear Algebra” Slide 45 Significance of angle between x and b Given an arbitrary N×N matrix A and N×1 vector x: ordinarily, b = Ax differs from x in both magnitude and direction. However, for any A there will always be some particular directions such that b is parallel to x (i.e., b is a simple scalar multiple of x, or Ax = λx) if x lies in one of these directions. An x that satisfies Ax = λx is an eigenvector, and λ is the corresponding eigenvalue.
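
Matlab's eig returns the eigenvectors and eigenvalues together (a small symmetric example of my own):
A = [2 1; 1 2];
[V, D] = eig(A);       % columns of V are eigenvectors; diag(D) holds the eigenvalues, 1 and 3
A * V(:,1)             % equals D(1,1) * V(:,1): b = Ax is parallel to x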

46 BMI II SS06 – Class 3 “Linear Algebra” Slide 46 Significance of eigenvalues and eigenvectors An N×N matrix A always has N eigenvalues (counted with multiplicity). If A is symmetric, and λ1 and λ2 are two distinct eigenvalues, the corresponding eigenvectors x1 and x2 are necessarily orthogonal. If λ1 = λ2, we can always use the method described earlier to subtract off x1's projection onto x2 from x1, producing orthogonal eigenvectors. If A is not symmetric, then its eigenvectors generally are not mutually orthogonal. But recall that the matrices AA^T and A^T A are always symmetric. The square roots of the eigenvalues of AA^T or A^T A are the singular values of A. The eigenvectors of AA^T are the left singular vectors of A, and the eigenvectors of A^T A are its right singular vectors. Computation of the eigenvalues and eigenvectors of AA^T and A^T A underlies a very useful linear algebraic technique called singular value decomposition (SVD). SVD is the method that allows us to, among other things, tackle the one case we have not yet seen an explicit example of: finding the “solution” of a linear system when A is not of full rank.
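
A numerical check of the eigenvalue/singular-value relationship (the matrix is an arbitrary choice):
A = [3 0; 4 5];
svd(A)                 % the singular values of A
sqrt(eig(A'*A))        % the same values, possibly listed in a different order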

47 BMI II SS06 – Class 3 “Linear Algebra” Slide 47 What happens if we try to use Gaussian elimination to solve Ax = b, but A is singular? After the second round of elimination: There is no Solution! These two equations are inconsistent. Gaussian elimination III (cont.) But there is a pseudoinverse, A^+, which we can find by using SVD:
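
A sketch of how the pseudoinverse is assembled from the SVD, inverting only the nonzero singular values (the singular matrix below is my own rank-1 example, not the one on the slide):
A = [1 2; 2 4];                     % singular: rank 1
[U, S, V] = svd(A);
tol = max(size(A)) * eps(S(1,1));   % tolerance for treating a singular value as zero
Sinv = zeros(size(S'));
for k = 1:min(size(S))
    if S(k,k) > tol
        Sinv(k,k) = 1 / S(k,k);     % invert only the significant singular values
    end
end
Aplus = V * Sinv * U'               % agrees with pinv(A)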

48 BMI II SS06 – Class 3 “Linear Algebra” Slide 48 As indicated, for this case AA^+ ≠ I and A^+A ≠ I: Gaussian elimination III (cont.) What is the pseudoinverse “solution,” and what is its significance?

49 BMI II SS06 – Class 3 “Linear Algebra” Slide 49 We are not surprised that b^+ = Ax^+ ≠ b, because we already knew that the original system has no Solution; that is, that no linear combination of the columns of A is equal to b. However, the “solution” x^+ gives us the linear combination of columns of A that is closest to b, in the sense of minimizing the distance between Ax and b. Gaussian elimination III (cont.)

50 BMI II SS06 – Class 3 “Linear Algebra” Slide 50 Homogeneous Linear System: Ax = 0 Even without performing Gaussian elimination + backsubstitution, we know right away that the unique Solution is [u v w] T = [0 0 0] T. How do we know this? 1) For any matrix A with finite elements, it is the case that A·0 = 0. 2) In the specific system shown here, A is non-singular, which means that there is one and only one Solution.

51 BMI II SS06 – Class 3 “Linear Algebra” Slide 51 Homogeneous Linear System: Ax = 0 After first round of elimination: After second round of elimination:

52 BMI II SS06 – Class 3 “Linear Algebra” Slide 52 Homogeneous Linear System: Ax = 0 s[1 -2 1]^T is the nullspace of the matrix A. A non-singular matrix's nullspace consists of only the 0 vector. Only singular matrices have nullspaces with a non-zero number of dimensions. Suppose A represents the action of a linear system, such as an instrument used to detect some type of physical signal. The nullspace is a class of input signals that the device cannot detect or measure.
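
Matlab's null function computes this directly (the singular matrix below is my own example, constructed so that its nullspace is spanned by [1 -2 1]^T):
A = [1 1 1; 1 2 3; 2 3 4];   % rank 2: the third row is the sum of the first two
n = null(A)                  % an orthonormal basis for the nullspace: a unit vector proportional to [1; -2; 1]
A * n                        % the zero vector (up to rounding): n is in the nullspace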

53 BMI II SS06 – Class 3 “Linear Algebra” Slide 53 Homogeneous Linear System: Ax = 0 Recall the definition of eigenvectors and eigenvalues: Ax = λx, x ≠ 0. Then Ax − λx = Ax − λIx = (A − λI)x = 0. That is, the eigenvalues are those specific values of λ for which the matrix A − λI is singular, and the corresponding eigenvectors are the non-zero vectors in its nullspace.

