
Published by Alicia Monger. Modified over 2 years ago

Chapter 3 Determinants
3.1 The Determinant of a Matrix
3.2 Evaluation of a Determinant Using Elementary Row Operations
3.3 Properties of Determinants
3.4 Applications of Determinants

3.2 The determinant is NOT a matrix operation. It is a number extracted from a square matrix that reflects certain characteristics of that matrix. For example, this chapter will show that matrices with zero determinant have very different characteristics from those with nonzero determinant. The motive for extracting this information is to identify the characteristics of matrices and thus facilitate comparison between matrices, since it is impossible to compare matrices entry by entry. A similar idea is comparing groups of numbers through their averages and standard deviations. Besides the determinant, the eigenvalues and eigenvectors are also information that can be used to identify the characteristics of square matrices

3.3 The Determinant of a Matrix
The determinant of a 2 × 2 matrix A = [a11 a12; a21 a22] is defined as
det(A) = |A| = a11a22 − a21a12
Note:
1. For every square matrix, there is a real number associated with the matrix, called its determinant
2. In the notation |A|, it is common practice to omit the matrix brackets
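As a quick sketch of the definition above, the 2 × 2 determinant can be computed directly (the function name `det2` and the sample matrix are my own, not from the slides):

```python
# det(A) = a11*a22 - a21*a12 for a 2x2 matrix given as a list of two rows
def det2(A):
    return A[0][0] * A[1][1] - A[1][0] * A[0][1]

print(det2([[2, -3], [1, 2]]))  # prints 7
```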

3.4 Historically speaking, the use of determinants arose from the recognition of special patterns that occur in the solutions of linear systems. For the system
a11x1 + a12x2 = b1
a21x1 + a22x2 = b2
the solution is
x1 = (b1a22 − b2a12) / (a11a22 − a21a12), x2 = (a11b2 − a21b1) / (a11a22 − a21a12)
Note:
1. x1 and x2 have the same denominator, and this quantity is called the determinant of the coefficient matrix A
2. There is a unique solution if and only if a11a22 − a21a12 = |A| ≠ 0

3.5 Ex. 1: The determinant of a matrix of order 2 Note: The determinant of a matrix can be positive, zero, or negative

3.6 Minor of the entry aij: Mij is the determinant of the matrix obtained by deleting the i-th row and j-th column of A
Cofactor of aij: Cij = (−1)^(i+j) Mij
Both Mij and Cij are real numbers

3.7 Ex: Sign pattern for cofactors
Notes: Odd positions (where i + j is odd) have negative signs, and even positions (where i + j is even) have positive signs, so positive and negative signs alternate at neighboring positions:
+ − + − …
− + − + …
+ − + − …
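The minor and cofactor definitions can be sketched in code as follows (the helper names `minor`/`cofactor` and the 3 × 3 example matrix are mine; indices here are 0-based, so entry a11 of the slides is `A[0][0]`):

```python
# M_ij: determinant of the 2x2 matrix left after deleting row i and column j
def minor(A, i, j):
    sub = [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]
    # sub is 2x2 here, so apply the 2x2 determinant formula directly
    return sub[0][0] * sub[1][1] - sub[1][0] * sub[0][1]

# C_ij = (-1)**(i+j) * M_ij, matching the alternating sign pattern above
def cofactor(A, i, j):
    return (-1) ** (i + j) * minor(A, i, j)

A = [[0, 2, 1],
     [3, -1, 2],
     [4, 0, 1]]
print(minor(A, 0, 0))     # det([[-1, 2], [0, 1]]) = -1
print(cofactor(A, 0, 1))  # -det([[3, 2], [4, 1]]) = 5
```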

3.8 Theorem 3.1: Expansion by cofactors
Let A be a square matrix of order n. Then the determinant of A is given by
|A| = ai1Ci1 + ai2Ci2 + … + ainCin (cofactor expansion along the i-th row, i = 1, 2, …, n)
or
|A| = a1jC1j + a2jC2j + … + anjCnj (cofactor expansion along the j-th column, j = 1, 2, …, n)
The determinant can be derived by performing the cofactor expansion along any row or column of the examined matrix
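Theorem 3.1 translates directly into a recursive determinant function; this sketch expands along the first row (any row or column would give the same value). The test matrix is my own example:

```python
# Determinant by cofactor expansion along the first row (Theorem 3.1)
def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]  # base case: 1x1 determinant
    total = 0
    for j in range(n):
        # minor: delete row 0 and column j, then recurse
        sub = [row[:j] + row[j+1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(sub)
    return total

A = [[0, 2, 1],
     [3, -1, 2],
     [4, 0, 1]]
print(det(A))  # prints 14
```

Note the factorial growth: each order-n determinant triggers n order-(n−1) determinants, which is exactly why Section 3.2 introduces a cheaper method.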

3.9 Ex: The determinant of a square matrix of order 3

3.10 Ex 3: The determinant of a square matrix of order 3 Sol:

3.11 Alternative way to calculate the determinant of a square matrix of order 3: copy the first two columns to the right of the matrix, add the three products along the downward diagonals, and subtract the three products along the upward diagonals

3.12 Ex: Recalculate the determinant of the square matrix A in Ex 3. Note that this method is valid only for matrices of order 3
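The 3 × 3 shortcut above (the rule of Sarrus) is a one-liner in code. The example matrix is my own, not the matrix of Ex 3:

```python
# Rule of Sarrus: sum of the three downward-diagonal products minus the
# sum of the three upward-diagonal products (valid only for order 3)
def det3_sarrus(A):
    (a, b, c), (d, e, f), (g, h, i) = A
    return (a*e*i + b*f*g + c*d*h) - (c*e*g + a*f*h + b*d*i)

A = [[0, 2, 1],
     [3, -1, 2],
     [4, 0, 1]]
print(det3_sarrus(A))  # prints 14, matching cofactor expansion
```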

3.13 Ex 4: The determinant of a square matrix of order 4

3.14 Sol: By comparing Ex 4 with Ex 3, it is apparent that the computational effort required for the determinant of a 4 × 4 matrix is much higher than that for a 3 × 3 matrix. In the next section, we will learn a more efficient way to calculate the determinant

3.15 Upper triangular matrix: all entries below the main diagonal are zeros
Lower triangular matrix: all entries above the main diagonal are zeros
Diagonal matrix: all entries above and below the main diagonal are zeros
Ex: upper triangular, lower triangular, diagonal

3.16 Theorem 3.2: Determinant of a Triangular Matrix
If A is an n × n triangular matrix (upper triangular, lower triangular, or diagonal), then its determinant is the product of the entries on the main diagonal. That is,
det(A) = |A| = a11a22 … ann
On the next slide, only the case of upper triangular matrices is taken as an example to prove Theorem 3.2. It is straightforward to adapt the proof to the cases of lower triangular and diagonal matrices

3.17 Pf: by mathematical induction
Suppose the theorem is true for any upper triangular matrix U of order n − 1, i.e., |U| = u11u22 … u(n−1)(n−1). Then consider the determinant of an upper triangular matrix A of order n by the cofactor expansion along the n-th row. Since every entry of the n-th row except possibly ann is zero, |A| = annCnn = annMnn. Since Mnn is the determinant of the (n−1) × (n−1) upper triangular matrix obtained by deleting the n-th row and n-th column of A, we can apply the induction assumption to write
|A| = annMnn = ann(a11a22 … a(n−1)(n−1)) = a11a22 … ann

3.18 Ex 6: Find the determinants of the following triangular matrices (a) A (b) B
Sol:
(a) |A| = (2)(−2)(1)(3) = −12
(b) |B| = (−1)(3)(2)(4)(−2) = 48
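Theorem 3.2 makes triangular determinants a simple product. The sketch below uses the diagonal entries of Ex 6(a); the off-diagonal entries of the sample matrix are made up for illustration:

```python
# Theorem 3.2: for a triangular (or diagonal) matrix, the determinant
# is the product of the main-diagonal entries
from math import prod

def det_triangular(A):
    return prod(A[i][i] for i in range(len(A)))

# Upper triangular, with diagonal (2)(-2)(1)(3) as in Ex 6(a);
# the entries above the diagonal are arbitrary
A = [[2, 4, 0, 0],
     [0, -2, 3, 0],
     [0, 0, 1, 5],
     [0, 0, 0, 3]]
print(det_triangular(A))  # prints -12
```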

3.19 Keywords in Section 3.1: determinant, minor, cofactor, expansion by cofactors, upper triangular matrix, lower triangular matrix, diagonal matrix

3.20 Evaluation of a Determinant Using Elementary Row Operations
The computational effort of calculating the determinant of a square matrix by cofactor expansion becomes unacceptable when the order n is large. This section shows how to reduce the computational effort by using elementary operations.
Theorem 3.3: Elementary row operations and determinants
Let A and B be square matrices.
1. If B is obtained from A by interchanging two rows of A, then det(B) = −det(A)
2. If B is obtained from A by adding a multiple of one row of A to another row of A, then det(B) = det(A)
3. If B is obtained from A by multiplying a row of A by a nonzero constant c, then det(B) = c det(A)
Notes: The above three properties remain valid if elementary column operations are performed to derive column-equivalent matrices (this result will be used in Ex 5 on Slide 3.25)

3.21 Ex:

3.22 Row reduction method to evaluate the determinant
Notes:
1. A row-echelon form of a square matrix is either an upper triangular matrix or a matrix with zero rows
2. It is easy to calculate the determinant of an upper triangular matrix (by Theorem 3.2) or of a matrix with zero rows (det = 0)
Ex 2: Evaluating a determinant using elementary row operations
Sol:
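The row reduction method above can be sketched as follows: eliminate below each pivot (which leaves the determinant unchanged), swap rows when a pivot is zero (each swap flips the sign), then multiply the diagonal entries by Theorem 3.2. The test matrix is my own example:

```python
def det_by_row_reduction(A):
    A = [row[:] for row in A]  # work on a copy
    n, sign = len(A), 1
    for col in range(n):
        # find a nonzero pivot at or below the diagonal
        pivot = next((r for r in range(col, n) if A[r][col] != 0), None)
        if pivot is None:
            return 0  # no pivot in this column: determinant is 0
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            sign = -sign  # a row interchange flips the sign (Theorem 3.3)
        for r in range(col + 1, n):
            # adding a multiple of one row to another leaves det unchanged
            factor = A[r][col] / A[col][col]
            A[r] = [x - factor * y for x, y in zip(A[r], A[col])]
    result = sign
    for i in range(n):
        result *= A[i][i]  # product of diagonal entries (Theorem 3.2)
    return result

A = [[0, 2, 1],
     [3, -1, 2],
     [4, 0, 1]]
print(det_by_row_reduction(A))  # prints 14.0
```

This runs in O(n³) arithmetic operations, versus the O(n!) growth of cofactor expansion.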

3.23 Notes:

3.24 Comparison of the number of operations required by the two methods of calculating a determinant: cofactor expansion of an n × n matrix needs on the order of n! operations (for n = 10, that is 3,628,799 additions), whereas row reduction needs only on the order of n³ operations. When evaluating a determinant by hand, you can sometimes save steps by combining these two kinds of methods (see Examples 5 and 6 in the next three slides)

3.25 Ex 5: Evaluating a determinant using column reduction and cofactor expansion
Sol: (the column operation used here is the counterpart of the corresponding row operation)

3.26 Ex 6: Evaluating a determinant using both row and column reductions and cofactor expansion Sol:

3.27

3.28 Theorem 3.4: Conditions that yield a zero determinant
If A is a square matrix and any of the following conditions is true, then det(A) = 0
(a) An entire row (or an entire column) consists of zeros
(b) Two rows (or two columns) are equal
(c) One row (or column) is a multiple of another row (or column)
Notes: For conditions (b) and (c), you can use elementary row or column operations to create an entire row or column of zeros. Thus, we can conclude that a square matrix has a determinant of zero if and only if it is row- (or column-) equivalent to a matrix that has at least one row (or column) consisting entirely of zeros

3.29 Ex:

3.30 Properties of Determinants
Theorem 3.5: Determinant of a matrix product
det(AB) = det(A) det(B) (there is an example to verify this property on Slide 3.33)
Notes:
(1) det(EA) = det(E) det(A) for an elementary matrix E
(2) For the elementary matrices shown in Theorem 3.3: a row interchange gives det(E) = −1, adding a multiple of one row to another gives det(E) = 1, and multiplying a row by c gives det(E) = c (these values are valid for all rows or columns, not only the second row)

3.31 Ex 1: The determinant of a matrix product. Find |A|, |B|, and |AB|
Sol:

3.32 Check: |AB| = |A||B|
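Theorem 3.5 is easy to verify numerically. The two 2 × 2 matrices below are my own choices, not the matrices of Ex 1:

```python
def det2(M):
    return M[0][0] * M[1][1] - M[1][0] * M[0][1]

# plain matrix multiplication: (AB)_ij = sum_k A_ik B_kj
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]   # det(A) = -2
B = [[0, 1], [5, 6]]   # det(B) = -5
print(det2(matmul(A, B)), det2(A) * det2(B))  # prints 10 10
```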

3.33 Ex: Pf:

3.34 Theorem 3.6: Determinant of a scalar multiple of a matrix
If A is an n × n matrix and c is a scalar, then det(cA) = c^n det(A)
(this can be proven by repeatedly using the fact that multiplying one row of a matrix by c multiplies its determinant by c)
Ex 2: Sol:
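A quick numeric check of Theorem 3.6 for n = 2 and c = 3 (the matrix is my own example, not the one in Ex 2):

```python
def det2(M):
    return M[0][0] * M[1][1] - M[1][0] * M[0][1]

A = [[1, 2], [3, 4]]  # det(A) = -2
c = 3
cA = [[c * x for x in row] for row in A]  # scale every entry by c
print(det2(cA), c ** 2 * det2(A))  # prints -18 -18
```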

3.35 Theorem 3.7: Determinant of an invertible matrix
A square matrix A is invertible (nonsingular) if and only if det(A) ≠ 0
Pf:
(⇒) If A is invertible, then AA⁻¹ = I. By Theorem 3.5, |A||A⁻¹| = |I|. Since |I| = 1, neither |A| nor |A⁻¹| is zero
(⇐) Suppose |A| ≠ 0; we aim to prove A is invertible. By Gauss-Jordan elimination, we can always find a matrix B, in reduced row-echelon form, that is row-equivalent to A
1. If B has at least one row of entire zeros, then |B| = 0 and thus |A| = 0, since |Ek|…|E2||E1||A| = |B| and each |Ei| ≠ 0, contradicting |A| ≠ 0
2. Otherwise B = I, so A is row-equivalent to I, and by Theorem 2.15 (Slide 2.59) it can be concluded that A is invertible

3.36 Ex 3: Classifying square matrices as singular or nonsingular
Sol: Since |A| = 0, A has no inverse (it is singular). Since |B| ≠ 0, B has an inverse (it is nonsingular)

3.37 Theorem 3.8: Determinant of an inverse matrix
If A is invertible, then det(A⁻¹) = 1/det(A)
Theorem 3.9: Determinant of a transpose
If A is a square matrix, then det(Aᵀ) = det(A)
(this can be shown by mathematical induction, comparing the cofactor expansion along a row of A with the cofactor expansion along the corresponding column of Aᵀ)
Ex 4: (a) (b) Sol:
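Theorems 3.8 and 3.9 can both be checked on a small example; exact arithmetic with `Fraction` avoids floating-point noise. The 2 × 2 matrix is my own, not the one in Ex 4:

```python
from fractions import Fraction

def det2(M):
    return M[0][0] * M[1][1] - M[1][0] * M[0][1]

A = [[Fraction(1), Fraction(2)], [Fraction(3), Fraction(4)]]
d = det2(A)  # -2
# 2x2 inverse via the adjoint formula: (1/det) * [[d, -b], [-c, a]]
A_inv = [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]
A_T = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
print(det2(A_inv) == 1 / d)  # True: Theorem 3.8
print(det2(A_T) == d)        # True: Theorem 3.9
```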

3.38 The similarity between a noninvertible matrix and the real number 0:
Matrix A — invertible: A⁻¹ exists; noninvertible: A⁻¹ does not exist
Real number c — nonzero (c ≠ 0): 1/c exists; zero (c = 0): 1/c does not exist

3.39 Equivalent conditions for a nonsingular matrix:
If A is an n × n matrix, then the following statements are equivalent
(1) A is invertible
(2) Ax = b has a unique solution for every n × 1 matrix b
(3) Ax = 0 has only the trivial solution
(4) A is row-equivalent to In
(5) A can be written as the product of elementary matrices
(6) det(A) ≠ 0
The statements (1)-(5) are collected in Theorem 2.15 (see also Thm. 2.11 and Thm. 2.14), and statement (6) is from Theorem 3.7

3.40 Ex 5: Which of the following systems has a unique solution? (a) (b)

3.41 Sol:
(a) Since |A| = 0, this system does not have a unique solution
(b) Since |A| ≠ 0, this system has a unique solution

3.42 Applications of Determinants
Matrix of cofactors of A: the n × n matrix [Cij] whose (i, j) entry is the cofactor Cij of aij
Adjoint matrix of A: adj(A) = [Cij]ᵀ, the transpose of the matrix of cofactors
(The definitions of the cofactor Cij and the minor Mij of aij can be reviewed on Slide 3.6)

3.43 Theorem 3.10: The inverse of a matrix expressed by its adjoint matrix
If A is an n × n invertible matrix, then A⁻¹ = (1/det(A)) adj(A)
Pf: Consider the product A[adj(A)]. The entry at position (i, j) of A[adj(A)] is
ai1Cj1 + ai2Cj2 + … + ainCjn

3.44 Consider a matrix B identical to A except that its j-th row is replaced by the i-th row of A. Perform the cofactor expansion along the j-th row of B (note that Cj1, Cj2, …, and Cjn are still the cofactors of the corresponding entries of A):
|B| = ai1Cj1 + ai2Cj2 + … + ainCjn
If i ≠ j, there are two identical rows in B, so according to Theorem 3.4, det(B) = 0; if i = j, then B = A and |B| = |A|. Therefore A[adj(A)] = det(A)I, and A⁻¹ = (1/det(A)) adj(A)

3.45 Ex: For any 2 × 2 matrix A = [a b; c d] with ad − bc ≠ 0, its inverse can be calculated as
A⁻¹ = (1/(ad − bc)) [d −b; −c a]

3.46 Ex 2: (a) Find the adjoint matrix of A (b) Use the adjoint matrix of A to find A –1 Sol:

3.47 Cofactor matrix of A → adjoint matrix of A → inverse matrix of A
Check: AA⁻¹ = I
The computational effort of this method to derive the inverse of a matrix is higher than that of Gauss-Jordan elimination (especially computing the cofactor matrix of a higher-order square matrix). However, for computers this method is easier to implement than Gauss-Jordan elimination, since it is not necessary to judge which row operation should be used; the only thing needed is to calculate determinants of matrices
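The cofactor matrix → adjoint matrix → inverse pipeline can be sketched for a general n × n matrix as follows (function names and the sample matrix are mine; `Fraction` keeps the entries exact):

```python
from fractions import Fraction

# determinant by cofactor expansion along the first row (Theorem 3.1)
def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j+1:] for row in A[1:]])
               for j in range(len(A)))

def inverse_via_adjoint(A):
    n = len(A)
    d = det(A)
    if d == 0:
        raise ValueError("matrix is singular")  # Theorem 3.7
    # adj(A): transpose of the cofactor matrix (note the swapped indices)
    adj = [[(-1) ** (i + j)
            * det([row[:j] + row[j+1:] for k, row in enumerate(A) if k != i])
            for i in range(n)] for j in range(n)]
    # Theorem 3.10: A^{-1} = (1/det(A)) adj(A)
    return [[Fraction(adj[i][j], d) for j in range(n)] for i in range(n)]

A = [[2, 0, 1],
     [0, 1, 0],
     [1, 0, 1]]
print(inverse_via_adjoint(A))
```

As the slide notes, this costs far more arithmetic than Gauss-Jordan elimination for large n, but it needs no pivoting decisions.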

3.48 Theorem 3.11: Cramer's Rule
Write A = [A(1) A(2) … A(n)], where A(i) represents the i-th column vector in A, and let Ai denote the matrix obtained by replacing the i-th column of A with the column vector b. If det(A) ≠ 0, the system Ax = b has the unique solution
xi = det(Ai)/det(A), i = 1, 2, …, n

3.49

3.50 Pf: Since det(A) ≠ 0, Ax = b has the unique solution
x = A⁻¹b = (1/det(A)) adj(A) b (according to Thm. 3.10)

3.51

3.52 Ex 6: Use Cramer's rule to solve the system of linear equations
Sol:
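Cramer's rule can be sketched directly from Theorem 3.11: form each Ai by replacing the i-th column of A with b and divide determinants. The 3 × 3 system below is my own example, not the system of Ex 6:

```python
from fractions import Fraction

# determinant by cofactor expansion along the first row
def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j+1:] for row in A[1:]])
               for j in range(len(A)))

def cramer(A, b):
    d = det(A)
    if d == 0:
        raise ValueError("no unique solution: det(A) = 0")
    xs = []
    for i in range(len(A)):
        # A_i: replace the i-th column of A with the vector b
        Ai = [row[:i] + [bi] + row[i+1:] for row, bi in zip(A, b)]
        xs.append(Fraction(det(Ai), d))  # x_i = det(A_i)/det(A)
    return xs

A = [[1, 2, 0],
     [0, 1, 1],
     [2, 0, 1]]
b = [3, 2, 3]
print(cramer(A, b))  # solution x1 = x2 = x3 = 1
```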

3.53 Keywords in Section 3.4: matrix of cofactors, adjoint matrix, Cramer's rule
