# Chapter 3 Determinants 3.1 The Determinant of a Matrix


Chapter 3 Determinants
3.1 The Determinant of a Matrix
3.2 Evaluation of a Determinant Using Elementary Row Operations
3.3 Properties of Determinants
3.4 Applications of Determinants

※ The determinant is NOT a matrix operation
※ The determinant is a piece of information extracted from a square matrix that reflects some characteristics of that matrix
※ For example, this chapter will show that matrices with a zero determinant have very different characteristics from those with a nonzero determinant
※ The motivation for extracting this information is to identify the characteristics of matrices and thus facilitate comparisons between matrices, since it is impractical to compare matrices entry by entry
※ A similar idea is comparing groups of numbers through their averages and standard deviations
※ Besides the determinant, the eigenvalues and eigenvectors are also information that can be used to identify the characteristics of square matrices

3.1 The Determinant of a Matrix
Note:
1. With every square matrix we associate a real number, called its determinant
2. In determinant notation, it is common practice to omit the matrix brackets and write vertical bars instead

Historically speaking, the use of determinants arose from the recognition of special patterns that occur in the solutions of linear systems. For the system
a11x1 + a12x2 = b1
a21x1 + a22x2 = b2,
elimination gives
x1 = (b1a22 – b2a12) / (a11a22 – a21a12) and x2 = (a11b2 – a21b1) / (a11a22 – a21a12)
Note:
1. x1 and x2 have the same denominator, and this quantity is called the determinant of the coefficient matrix A
2. There is a unique solution if a11a22 – a21a12 = |A| ≠ 0
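The closed-form solution above can be sketched in code. A minimal illustration (the function name and the behavior in the singular case are my own choices, not from the slides):

```python
def solve_2x2(a11, a12, a21, a22, b1, b2):
    """Solve a 2x2 linear system via the closed-form determinant formulas.

    Returns (x1, x2), or None when the determinant a11*a22 - a21*a12
    is zero, i.e., when the system has no unique solution.
    """
    det = a11 * a22 - a21 * a12          # determinant of the coefficient matrix
    if det == 0:
        return None                      # no unique solution
    x1 = (b1 * a22 - b2 * a12) / det
    x2 = (a11 * b2 - a21 * b1) / det
    return x1, x2
```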

Ex. 1: The determinant of a matrix of order 2
Note: The determinant of a matrix can be positive, zero, or negative

Minor (子行列式) of the entry aij: the determinant Mij of the matrix obtained by deleting the i-th row and j-th column of A
※ Mij is a real number
Cofactor (餘因子) of aij: Cij = (–1)^(i+j) Mij
※ Cij is also a real number

Ex: Notes: Sign pattern for cofactors: the factor (–1)^(i+j) gives odd positions (where i+j is odd) negative signs and even positions (where i+j is even) positive signs. Positive and negative signs alternate at neighboring positions:
+ – + …
– + – …
+ – + …

Theorem 3.1: Expansion by cofactors (餘因子展開)
Let A be a square matrix of order n. Then the determinant of A is given by
|A| = ai1Ci1 + ai2Ci2 + … + ainCin (cofactor expansion along the i-th row, i = 1, 2, …, n)
or
|A| = a1jC1j + a2jC2j + … + anjCnj (cofactor expansion along the j-th column, j = 1, 2, …, n)
※ The determinant can be derived by performing the cofactor expansion along any row or column of the examined matrix
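Theorem 3.1 translates directly into a recursive algorithm. A sketch in Python, expanding along the first row (an illustration of the theorem, not the slides' own code; it takes roughly n! operations, so it is only practical for small matrices):

```python
def det_cofactor(A):
    """Determinant by cofactor expansion along the first row (Theorem 3.1).

    A is a square matrix given as a list of rows. The minor M_1j is the
    determinant of the submatrix with row 1 and column j deleted, and the
    cofactor is C_1j = (-1)^(1+j) * M_1j.
    """
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]  # delete row 1, column j+1
        total += (-1) ** j * A[0][j] * det_cofactor(minor)
    return total
```

By the theorem, expanding along any other row or column would give the same value.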

Ex: The determinant of a square matrix of order 3

Ex 3: The determinant of a square matrix of order 3
Sol:

Alternative way to calculate the determinant of a square matrix of order 3 (often called Sarrus' rule): copy the first two columns to the right of the matrix, add the three products along the downward diagonals, and subtract the three products along the upward diagonals

Ex: Recalculate the determinant of the square matrix A in Ex 3
(The nonzero diagonal products in this example are –4, 6, and 16)
※ This method is only valid for matrices of order 3
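The order-3 diagonal-products shortcut can be written out directly (an illustrative helper of my own, not the slides' code):

```python
def det_sarrus(A):
    """3x3 determinant via the diagonal-products shortcut (Sarrus' rule):
    add the three downward-diagonal products, then subtract the three
    upward-diagonal products. Valid only for matrices of order 3."""
    (a, b, c), (d, e, f), (g, h, i) = A
    down = a * e * i + b * f * g + c * d * h   # downward diagonals
    up = g * e * c + h * f * a + i * d * b     # upward diagonals
    return down - up
```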

Ex 4: The determinant of a square matrix of order 4

Sol: ※ Comparing Ex 4 with Ex 3, it is apparent that the computational effort for the determinant of a 4×4 matrix is much higher than that for a 3×3 matrix. In the next section, we will learn a more efficient way to calculate the determinant

Upper triangular matrix (上三角矩陣): all entries below the main diagonal are zeros
Lower triangular matrix (下三角矩陣): all entries above the main diagonal are zeros
Diagonal matrix (對角矩陣): all entries above and below the main diagonal are zeros
Ex: upper triangular, lower triangular, diagonal

Theorem 3.2: (Determinant of a Triangular Matrix)
If A is an n × n triangular matrix (upper triangular, lower triangular, or diagonal), then its determinant is the product of the entries on the main diagonal. That is, |A| = a11a22···ann
※ On the next slide, only the case of upper triangular matrices is proved; it is straightforward to adapt the proof to the cases of lower triangular and diagonal matrices

Pf: by Mathematical Induction (數學歸納法)
For n = 1, the theorem holds trivially since |A| = a11. Suppose that the theorem is true for any upper triangular matrix U of order n–1, i.e., |U| = u11u22···u(n–1)(n–1). Then consider the determinant of an upper triangular matrix A of order n by the cofactor expansion along the n-th row. Since ann is the only entry of that row that can be nonzero, |A| = annCnn = ann(–1)^(n+n)Mnn = annMnn. Since Mnn is the determinant of the (n–1)×(n–1) upper triangular matrix obtained by deleting the n-th row and n-th column of A, we can apply the induction assumption to write |A| = ann(a11a22···a(n–1)(n–1)) = a11a22···ann

Ex 6: Find the determinants of the following triangular matrices
(b) Sol: (a) |A| = (2)(–2)(1)(3) = –12 (b) |B| = (–1)(3)(2)(4)(–2) = 48
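Theorem 3.2 reduces such computations to a single product over the diagonal. A minimal sketch:

```python
def det_triangular(A):
    """Determinant of a triangular (or diagonal) matrix: by Theorem 3.2
    it is simply the product of the main-diagonal entries."""
    result = 1
    for i, row in enumerate(A):
        result *= row[i]
    return result
```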

Keywords in Section 3.1:
- determinant: 行列式
- minor: 子行列式
- cofactor: 餘因子
- expansion by cofactors: 餘因子展開
- upper triangular matrix: 上三角矩陣
- lower triangular matrix: 下三角矩陣
- diagonal matrix: 對角矩陣

3.2 Evaluation of a Determinant Using Elementary Row Operations
The computational effort to calculate the determinant of a square matrix of large order n is unacceptable. This section shows how to reduce that effort by using elementary row operations
Theorem 3.3: Elementary row operations and determinants
Let A and B be square matrices
(1) If B is obtained from A by interchanging two rows, then det(B) = –det(A)
(2) If B is obtained from A by adding a multiple of one row to another row, then det(B) = det(A)
(3) If B is obtained from A by multiplying a row by a nonzero constant c, then det(B) = c det(A)
Notes: The above three properties remain valid if elementary column operations are performed to derive column-equivalent matrices (this result will be used in Ex 5 on Slide 3.25)
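The effects of the three row operations on the determinant can be spot-checked numerically. A sketch on one sample matrix (a check rather than a proof; the helper `det` is my own cofactor-expansion implementation, not the slides'):

```python
def det(A):
    """Recursive determinant via cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([r[:j] + r[j + 1:] for r in A[1:]])
               for j in range(len(A)))

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]

swapped = [A[1], A[0], A[2]]                     # interchange two rows
assert det(swapped) == -det(A)                   # determinant changes sign

added = [A[0], [x + 2 * y for x, y in zip(A[1], A[0])], A[2]]  # add 2*(row 1) to row 2
assert det(added) == det(A)                      # determinant is unchanged

scaled = [[3 * x for x in A[0]], A[1], A[2]]     # multiply one row by c = 3
assert det(scaled) == 3 * det(A)                 # determinant is multiplied by c
```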

Ex:

Row reduction method to evaluate the determinant
1. A row-echelon form of a square matrix is either an upper triangular matrix or a matrix with at least one zero row
2. It is easy to calculate the determinant of an upper triangular matrix (by Theorem 3.2) or of a matrix with a zero row (det = 0)
Ex 2: Evaluating a determinant using elementary row operations
Sol:
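The row reduction method can be sketched as follows: eliminate below each pivot (which leaves the determinant unchanged), track sign changes from row interchanges, and multiply the diagonal of the resulting upper triangular matrix. This implementation is my own illustration of the method, not the slides' code:

```python
from fractions import Fraction

def det_row_reduction(A):
    """Determinant via Gaussian elimination: O(n^3) operations versus
    roughly n! for cofactor expansion. Fractions keep the arithmetic exact."""
    M = [[Fraction(x) for x in row] for row in A]
    n, sign = len(M), 1
    for k in range(n):
        # Find a row at or below position k with a nonzero entry in column k
        pivot = next((r for r in range(k, n) if M[r][k] != 0), None)
        if pivot is None:
            return Fraction(0)                   # a zero column => determinant is 0
        if pivot != k:
            M[k], M[pivot] = M[pivot], M[k]      # row interchange flips the sign
            sign = -sign
        for r in range(k + 1, n):
            # Adding a multiple of one row to another leaves the determinant unchanged
            factor = M[r][k] / M[k][k]
            M[r] = [x - factor * y for x, y in zip(M[r], M[k])]
    result = Fraction(sign)
    for i in range(n):                           # Theorem 3.2: product of the diagonal
        result *= M[i][i]
    return result
```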

Notes:

Comparison of the number of operations required by the two methods of calculating the determinant:

| Order n | Cofactor expansion: additions | Cofactor expansion: multiplications | Row reduction: additions | Row reduction: multiplications |
|---|---|---|---|---|
| 3 | 5 | 9 | 5 | 10 |
| 5 | 119 | 205 | 30 | 45 |
| 10 | 3,628,799 | 6,235,300 | 285 | 339 |

※ When evaluating a determinant by hand, you can sometimes save steps by combining these two methods (see Examples 5 and 6 in the next three slides)

Ex 5: Evaluating a determinant using column reduction and cofactor expansion
Sol: ※ The column operation used here is the counterpart of the corresponding elementary row operation

Ex 6: Evaluating a determinant using both row and column reductions and cofactor expansion
Sol:

Theorem 3.4: Conditions that yield a zero determinant
If A is a square matrix and any of the following conditions is true, then det(A) = 0
(a) An entire row (or an entire column) consists of zeros
(b) Two rows (or two columns) are equal
(c) One row (or column) is a multiple of another row (or column)
Notes: For conditions (b) or (c), you can use elementary row or column operations to create an entire row or column of zeros
※ Thus, we can conclude that a square matrix has a determinant of zero if and only if it is row- (or column-) equivalent to a matrix that has at least one row (or column) consisting entirely of zeros

Ex:

3.3 Properties of Determinants
Theorem 3.5: Determinant of a matrix product
det(AB) = det(A) det(B)
Notes: det(EA) = det(E) det(A)
※ For the elementary matrices corresponding to the row operations in Theorem 3.3: det(E) = –1 if E interchanges two rows, det(E) = 1 if E adds a multiple of one row to another, and det(E) = c if E multiplies a row by a nonzero constant c (there is an example to verify this property on Slide 3.33)
(Note that this property is also valid for rows or columns other than the second row)
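Theorem 3.5 can be spot-checked on a pair of sample 3×3 matrices (the matrices and helper functions below are my own illustration; the transcript does not show the example's matrices):

```python
def det(A):
    """Recursive determinant via cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([r[:j] + r[j + 1:] for r in A[1:]])
               for j in range(len(A)))

def matmul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, -2, 2], [0, 3, 2], [1, 0, 1]]
B = [[2, 0, 1], [0, -1, -2], [3, 1, -2]]
# det(AB) = det(A) det(B) on this sample: a spot check, not a proof
assert det(matmul(A, B)) == det(A) * det(B)
```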

Ex 1: The determinant of a matrix product
Find |A|, |B|, and |AB| Sol:

Check: |AB| = |A| |B|

Ex: Pf:

Theorem 3.6: Determinant of a scalar multiple of a matrix
If A is an n × n matrix and c is a scalar, then det(cA) = c^n det(A)
(This can be proven by repeatedly using the fact that factoring a common scalar out of a single row multiplies the determinant by that scalar)
Ex 2:
Sol:
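A quick numerical check of Theorem 3.6 for n = 3 and c = 2, on a sample matrix of my own choosing:

```python
def det3(A):
    """3x3 determinant via the diagonal-products shortcut."""
    (a, b, c), (d, e, f), (g, h, i) = A
    return a*e*i + b*f*g + c*d*h - g*e*c - h*f*a - i*d*b

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
c, n = 2, 3
cA = [[c * x for x in row] for row in A]        # scale every entry by c
assert det3(cA) == c ** n * det3(A)             # det(cA) = c^n det(A)
```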

Theorem 3.7: (Determinant of an invertible matrix)
A square matrix A is invertible (nonsingular) if and only if det(A) ≠ 0
Pf:
(⇒) If A is invertible, then AA–1 = I. By Theorem 3.5, |A||A–1| = |I|. Since |I| = 1, neither |A| nor |A–1| is zero
(⇐) Suppose |A| is nonzero; we aim to prove that A is invertible. By Gauss-Jordan elimination, we can always find a matrix B, in reduced row-echelon form, that is row-equivalent to A
1. If B has at least one row of all zeros, then |B| = 0 and thus |A| = 0 since |Ek|···|E2||E1||A| = |B| and each |Ei| ≠ 0, which contradicts the assumption that |A| ≠ 0 (→←)
2. Otherwise B = I, so A is row-equivalent to I, and by the theorem on Slide 2.59 it can be concluded that A is invertible

Ex 3: Classifying square matrices as singular or nonsingular
Sol: |A| = 0, so A has no inverse (it is singular); |B| ≠ 0, so B has an inverse (it is nonsingular)

Theorem 3.8: Determinant of an inverse matrix
If A is invertible, then det(A–1) = 1/det(A)
Theorem 3.9: Determinant of a transpose
If A is a square matrix, then det(A^T) = det(A)
(This can be shown by mathematical induction (數學歸納法): compare the cofactor expansion along a row of A with the cofactor expansion along the corresponding column of A^T)
Ex 4: (a) (b)
Sol:
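Both theorems can be verified on a small example. A sketch using the 2×2 inverse (the helper names and the sample matrix are my own):

```python
from fractions import Fraction

def det2(A):
    """Determinant of a 2x2 matrix."""
    return A[0][0] * A[1][1] - A[1][0] * A[0][1]

def inv2(A):
    """Inverse of a 2x2 matrix (requires a nonzero determinant)."""
    d = Fraction(det2(A))
    return [[ A[1][1] / d, -A[0][1] / d],
            [-A[1][0] / d,  A[0][0] / d]]

A = [[3, 1], [2, 4]]                             # det(A) = 10
# Theorem 3.8: det(A^-1) = 1 / det(A)
assert det2(inv2(A)) == Fraction(1, 10)
# Theorem 3.9: det(A^T) = det(A)
At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]    # transpose of A
assert det2(At) == det2(A)
```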

The similarity between the noninvertible matrix and the real number 0
A real number c has a multiplicative inverse 1/c if and only if c ≠ 0; likewise, a square matrix A has an inverse A–1 if and only if det(A) ≠ 0

Equivalent conditions for a nonsingular matrix:
If A is an n × n matrix, then the following statements are equivalent
(1) A is invertible
(2) Ax = b has a unique solution for every n × 1 matrix b (Thm. 2.11)
(3) Ax = 0 has only the trivial solution (Thm. 2.11)
(4) A is row-equivalent to In (Thm. 2.14)
(5) A can be written as the product of elementary matrices (Thm. 2.14)
(6) det(A) ≠ 0 (Thm. 3.7)
※ Statements (1)–(5) are collected in Theorem 2.15, and statement (6) is from Theorem 3.7

Ex 5: Which of the following systems has a unique solution?
(b)

Sol: (a) This system does not have a unique solution (b) This system has a unique solution

3.4 Applications of Determinants
Matrix of cofactors (餘因子矩陣) of A: the matrix [Cij] whose (i, j)-th entry is the cofactor Cij of aij (the definitions of the cofactor Cij and the minor Mij of aij can be reviewed on Slide 3.6)
Adjoint matrix (伴隨矩陣) of A: adj(A) = [Cij]^T, the transpose of the matrix of cofactors

Theorem 3.10: The inverse of a matrix expressed by its adjoint matrix
If A is an n × n invertible matrix, then A–1 = (1/det(A)) adj(A)
Pf: Consider the product A[adj(A)]. The entry at position (i, j) of A[adj(A)] is ai1Cj1 + ai2Cj2 + ··· + ainCjn

For i ≠ j, consider a matrix B identical to matrix A except that its j-th row is replaced by the i-th row of matrix A ※ Since there are two identical rows in B, according to Theorem 3.4, det(B) must be zero. Performing the cofactor expansion along the j-th row of matrix B (note that Cj1, Cj2, …, and Cjn are still the cofactors of the entries of A) yields ai1Cj1 + ai2Cj2 + ··· + ainCjn = det(B) = 0. Therefore, A[adj(A)] = det(A)I, which implies A–1 = (1/det(A)) adj(A)
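Theorem 3.10 yields a direct (if computationally expensive) inversion algorithm. A sketch for matrices of order n ≥ 2, with exact rational arithmetic (helper names are my own):

```python
from fractions import Fraction

def det(A):
    """Recursive determinant via cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([r[:j] + r[j + 1:] for r in A[1:]])
               for j in range(len(A)))

def adjoint(A):
    """adj(A): the transpose of the matrix of cofactors (for n >= 2)."""
    n = len(A)
    cofactors = [[(-1) ** (i + j) *
                  det([r[:j] + r[j + 1:] for r in (A[:i] + A[i + 1:])])
                  for j in range(n)] for i in range(n)]
    return [[cofactors[j][i] for j in range(n)] for i in range(n)]  # transpose

def inverse(A):
    """Theorem 3.10: A^-1 = adj(A) / det(A); A must be invertible."""
    d = Fraction(det(A))
    return [[entry / d for entry in row] for row in adjoint(A)]
```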

Ex: For any 2×2 matrix with first row (a, b) and second row (c, d), adj(A) has first row (d, –b) and second row (–c, a), so if ad – bc ≠ 0, its inverse is A–1 = (1/(ad – bc)) adj(A)

Ex 2: (a) Find the adjoint matrix of A (b) Use the adjoint matrix of A to find A–1 Sol:

cofactor matrix of A adjoint matrix of A inverse matrix of A Check:
※ The computational effort of this method to derive the inverse of a matrix is higher than that of Gauss-Jordan elimination (especially the computation of the cofactor matrix for a higher-order square matrix)
※ However, this method is easier than Gauss-Jordan elimination to implement on a computer, since there is no need to decide which row operation to use; the only work required is calculating determinants of matrices

Theorem 3.11: Cramer’s Rule
If a system of n linear equations in n variables has a coefficient matrix A with a nonzero determinant |A|, then the solution of Ax = b is given by xi = det(Ai)/det(A) for i = 1, 2, …, n, where Ai is the matrix formed by replacing the i-th column of A with the column vector b
(A(i) represents the i-th column vector in A)

Pf: Ax = b (according to Thm. 3.10)

Ex 6: Use Cramer's rule to solve the system of linear equations
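Since the example's own system is not reproduced in the transcript, here is a generic sketch of Cramer's rule for an arbitrary invertible system (the helpers are my own illustration):

```python
from fractions import Fraction

def det(A):
    """Recursive determinant via cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([r[:j] + r[j + 1:] for r in A[1:]])
               for j in range(len(A)))

def cramer(A, b):
    """Cramer's rule (Theorem 3.11): x_i = det(A_i) / det(A), where A_i is
    A with its i-th column replaced by the right-hand-side column b."""
    d = Fraction(det(A))
    assert d != 0, "Cramer's rule requires det(A) != 0"
    solution = []
    for i in range(len(A)):
        Ai = [row[:i] + [b[r]] + row[i + 1:] for r, row in enumerate(A)]
        solution.append(det(Ai) / d)
    return solution
```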

Keywords in Section 3.4:
- matrix of cofactors: 餘因子矩陣
- adjoint matrix: 伴隨矩陣
- Cramer's rule: Cramer 法則