Chapter 3 Linear Algebra


February 26: Matrices

3.2 Matrices; Row reduction

Standard form of a set of linear equations:
$$a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = k_1,\ \ \ldots,\ \ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = k_m.$$
Matrix of coefficients: $M = (a_{ij})$.
Augmented matrix: $A = (M\,|\,\mathbf{k})$, the coefficient matrix with the column of constants appended.
Elementary row operations:
1. Row switching.
2. Row multiplication by a nonzero number.
3. Adding a multiple of a row to another row.
These operations are reversible.

Solving a set of linear equations by row reduction: apply the elementary row operations to the augmented matrix until it reaches row-reduced form, in which each nonzero row begins with a 1 that has only zeros above and below it; the solution can then be read off directly.
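The three elementary row operations translate directly into code. A minimal Python/NumPy sketch (the function name rref and the sample system are illustrative, not from the text):

```python
import numpy as np

def rref(aug, tol=1e-12):
    """Row-reduce an augmented matrix [M | k] by Gauss-Jordan elimination."""
    A = aug.astype(float)
    rows, cols = A.shape
    r = 0
    for c in range(cols - 1):               # last column holds the constants
        pivot = r + np.argmax(np.abs(A[r:, c]))
        if abs(A[pivot, c]) < tol:
            continue                         # no pivot in this column
        A[[r, pivot]] = A[[pivot, r]]        # row switching
        A[r] /= A[r, c]                      # row multiplication by a nonzero number
        for i in range(rows):                # add a multiple of a row to another row
            if i != r:
                A[i] -= A[i, c] * A[r]
        r += 1
        if r == rows:
            break
    return A

# Example: x + 2y = 5, 3x + 4y = 6  ->  x = -4, y = 4.5
aug = np.array([[1, 2, 5],
                [3, 4, 6]])
print(rref(aug))    # [[ 1.   0.  -4. ]
                    #  [ 0.   1.   4.5]]
```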

Rank of a matrix: the number of nonzero rows in its row-reduced form. It equals the maximal number of linearly independent row (or column) vectors of the matrix. Rank of A = Rank of A^T, where $(A^T)_{ij} = (A)_{ji}$ is the transpose matrix.
Possible cases for the solution of a set of linear equations (m equations in n unknowns, coefficient matrix M, augmented matrix A):
1. If Rank M < Rank A, the equations are inconsistent and there is no solution.
2. If Rank M = Rank A = n, there is exactly one solution.
3. If Rank M = Rank A = R < n, then R unknowns can be expressed in terms of the remaining n − R unknowns, which are arbitrary (infinitely many solutions).
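This classification can be checked numerically. A short NumPy sketch (the helper name classify_system is hypothetical, introduced only for illustration):

```python
import numpy as np

def classify_system(M, k):
    """Classify the solutions of M x = k by comparing ranks."""
    A = np.column_stack([M, k])          # augmented matrix
    rank_M = np.linalg.matrix_rank(M)
    rank_A = np.linalg.matrix_rank(A)
    n = M.shape[1]                       # number of unknowns
    if rank_M < rank_A:
        return "inconsistent: no solution"
    if rank_M == n:
        return "unique solution"
    return f"{n - rank_M} free unknown(s): infinitely many solutions"

M = np.array([[1, 2], [2, 4]])
print(classify_system(M, np.array([3, 6])))  # Rank M = Rank A = 1 < 2 -> infinitely many
print(classify_system(M, np.array([3, 7])))  # Rank M = 1 < Rank A = 2 -> no solution
```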

Read: Chapter 3: 1-2 Homework: 3.2.4,8,10,12,15. Due: March 9

March 5: Determinants

3.3 Determinants; Cramer's rule

Determinant of an n×n matrix A, written det A or |A|:
Minor: the minor $M_{ij}$ of the element $a_{ij}$ is the determinant of the (n−1)×(n−1) matrix left after deleting row i and column j.
Cofactor: $C_{ij} = (-1)^{i+j} M_{ij}$.
Determinant of a 1×1 matrix: its single element.
Definition of the determinant of an n×n matrix (Laplace expansion along row i):
$$|A| = \sum_{j=1}^{n} a_{ij}\,C_{ij}.$$
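The Laplace expansion translates into a recursive routine. A minimal Python sketch (educational only: cofactor expansion costs O(n!), so in practice use np.linalg.det):

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor (Laplace) expansion along the first row."""
    n = A.shape[0]
    if n == 1:                       # determinant of a 1x1 matrix is its element
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        cofactor = (-1) ** j * det_cofactor(minor)   # (-1)**(i+j) with i = 0
        total += A[0, j] * cofactor
    return total

A = np.array([[1., 2.], [3., 4.]])
print(det_cofactor(A), np.linalg.det(A))   # both -2.0
```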

Equivalent methods: a determinant can be expanded by any row or any column, and $|A| = |A^T|$. Example p90.1.
Triple product of 3 vectors:
$$\mathbf{A}\cdot(\mathbf{B}\times\mathbf{C}) = \begin{vmatrix} A_x & A_y & A_z \\ B_x & B_y & B_z \\ C_x & C_y & C_z \end{vmatrix}.$$
Useful properties of determinants:
1. A common factor in a row (column) may be factored out.
2. Interchanging two rows (columns) changes the sign of the determinant.
3. A multiple of one row (column) can be added to another row (column) without changing the determinant.
4. The determinant is zero if two rows (columns) are identical or proportional.
Example p91.2.

Cramer's rule for solving a set of n linear equations in n unknowns, Mx = k with det M ≠ 0:
$$x_j = \frac{\det M_j}{\det M},$$
where $M_j$ is M with its j-th column replaced by the column of constants k.
Theorem: For homogeneous linear equations (k = 0), the determinant of the coefficient matrix must be zero for a nontrivial solution to exist.
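A minimal Python sketch of Cramer's rule (the helper name cramer is illustrative), checked against NumPy's solver:

```python
import numpy as np

def cramer(M, k):
    """Solve M x = k by Cramer's rule; requires det M != 0."""
    d = np.linalg.det(M)
    if abs(d) < 1e-12:
        raise ValueError("det M = 0: Cramer's rule does not apply")
    x = np.empty(len(k))
    for j in range(len(k)):
        Mj = M.astype(float)
        Mj[:, j] = k                  # replace column j by the constants
        x[j] = np.linalg.det(Mj) / d
    return x

M = np.array([[2., 1.], [1., 3.]])
k = np.array([3., 5.])
print(cramer(M, k), np.linalg.solve(M, k))   # both [0.8 1.4]
```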

Read: Chapter 3: 3 Homework: 3.3.1,10,15,17. (No computer work is needed.) Due: March 23

March 9: Vectors

3.4 Vectors

Vector: a quantity that has both a magnitude and a direction.
Geometrical representation of a vector: an arrow with a length and a direction.
Addition and subtraction: vector addition is commutative and associative.
Algebraic representation of a vector: $\mathbf{A} = A_x\mathbf{i} + A_y\mathbf{j} + A_z\mathbf{k} = (A_x, A_y, A_z)$.
Magnitude of a vector: $A = |\mathbf{A}| = \sqrt{A_x^2 + A_y^2 + A_z^2}$.
Note: the boldface A is a vector and the italic A is its length. They should be distinguished.

Scalar or dot product (θ is the angle between A and B):
$$\mathbf{A}\cdot\mathbf{B} = AB\cos\theta = A_xB_x + A_yB_y + A_zB_z.$$
Vector or cross product: $|\mathbf{A}\times\mathbf{B}| = AB\sin\theta$, directed perpendicular to both A and B by the right-hand rule.
Cross product in determinant form:
$$\mathbf{A}\times\mathbf{B} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ A_x & A_y & A_z \\ B_x & B_y & B_z \end{vmatrix}.$$
Parallel and perpendicular vectors: A ∥ B if A×B = 0; A ⊥ B if A·B = 0.
Relations between the basis vectors: i·i = j·j = k·k = 1, i·j = j·k = k·i = 0; i×j = k, j×k = i, k×i = j.
Examples p102.3, p105.4. Problems 4.5, 26.
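A short NumPy illustration of both products and their key properties (the sample vectors are arbitrary):

```python
import numpy as np

A = np.array([1., 2., 2.])
B = np.array([3., 0., 4.])

dot = np.dot(A, B)            # A.B = 1*3 + 2*0 + 2*4 = 11
cross = np.cross(A, B)        # determinant form gives (8, 2, -6)
cos_theta = dot / (np.linalg.norm(A) * np.linalg.norm(B))

print(dot, cross, np.degrees(np.arccos(cos_theta)))
# the cross product is perpendicular to both factors:
print(np.dot(cross, A), np.dot(cross, B))   # 0.0 0.0
```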

Read: Chapter 3: 4 Homework: 3.4.5,7,12,18,26. Due: March 23

March 19: Lines and planes

3.5 Lines and planes

Equations for a straight line through the point $\mathbf{r}_0 = (x_0, y_0, z_0)$ with direction $\mathbf{A} = (a, b, c)$: the displacement $\mathbf{r}-\mathbf{r}_0$ is parallel to A, so
$$\mathbf{r} = \mathbf{r}_0 + \mathbf{A}t, \quad\text{or}\quad \frac{x-x_0}{a} = \frac{y-y_0}{b} = \frac{z-z_0}{c}.$$
Equations for a plane through $\mathbf{r}_0$ with normal vector N: the displacement $\mathbf{r}-\mathbf{r}_0$ is perpendicular to N, so
$$\mathbf{N}\cdot(\mathbf{r}-\mathbf{r}_0) = 0.$$
Examples p109.1, 2.

Distance from a point P to the plane through Q with normal N:
$$d = \frac{|\mathbf{N}\cdot\overrightarrow{QP}|}{|\mathbf{N}|}.$$
Example p110.3.
Distance from a point P to the line through Q with direction A:
$$d = \frac{|\mathbf{A}\times\overrightarrow{QP}|}{|\mathbf{A}|}.$$
Example p110.4.
Distance between two skew lines (through P with direction A, and through Q with direction B): let FG be the shortest segment between the lines; it must be perpendicular to both lines (proof in text). With the unit vector $\mathbf{n} = \mathbf{A}\times\mathbf{B}/|\mathbf{A}\times\mathbf{B}|$,
$$d = |\mathbf{n}\cdot\overrightarrow{PQ}|.$$
Examples p110.5, 6. Problems 5.12, 18, 42.
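A NumPy sketch of the three distance formulas (the helper names are illustrative; the test points and lines are chosen so the answers are obvious):

```python
import numpy as np

def point_plane(P, Q, N):
    """Distance from point P to the plane through Q with normal N."""
    return abs(np.dot(N, P - Q)) / np.linalg.norm(N)

def point_line(P, Q, A):
    """Distance from point P to the line through Q with direction A."""
    return np.linalg.norm(np.cross(A, P - Q)) / np.linalg.norm(A)

def skew_lines(P, A, Q, B):
    """Distance between skew lines through P (direction A) and Q (direction B)."""
    n = np.cross(A, B)
    return abs(np.dot(n, Q - P)) / np.linalg.norm(n)

P = np.array([0., 0., 1.])
print(point_plane(P, np.zeros(3), np.array([0., 0., 1.])))   # 1.0 (plane z = 0)
print(point_line(P, np.zeros(3), np.array([1., 0., 0.])))    # 1.0 (the x axis)
print(skew_lines(np.zeros(3), np.array([1., 0., 0.]),
                 np.array([0., 0., 2.]), np.array([0., 1., 0.])))  # 2.0
```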

Read: Chapter 3: 5 Homework: 3.5.7,12,18,20,26,32,37,42. Due: March 30

March 21, 23: Matrix operations

3.6 Matrix operations

Matrix equation: two matrices are equal if and only if they have the same shape and all corresponding elements are equal.
Multiplication by a number: $(kA)_{ij} = k\,A_{ij}$ (every element is multiplied by k).
Matrix addition: $(A+B)_{ij} = A_{ij} + B_{ij}$ (matrices of the same shape add element by element).

Matrix multiplication:
$$(AB)_{ij} = \sum_k A_{ik} B_{kj}.$$
Note: the element on the ith row and jth column of AB is equal to the dot product between the ith row of A and the jth column of B. The number of columns in A must equal the number of rows in B.
More about matrix multiplication:
The product is associative: A(BC) = (AB)C.
The product is distributive: A(B+C) = AB + AC.
In general the product is not commutative: AB ≠ BA. [A, B] = AB − BA is called the commutator.
Unit matrix I: ones on the diagonal, zeros elsewhere; IA = AI = A.
Zero matrix: all elements zero.
Product theorem: det(AB) = det A · det B.
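A short NumPy check of non-commutativity, the commutator, and the product theorem (the sample matrices are arbitrary):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[0., 1.], [1., 0.]])

print(A @ B)                  # each element: row of A dotted with column of B
print(B @ A)                  # differs: the product is not commutative
print(A @ B - B @ A)          # the commutator [A, B]
# product theorem: det(AB) = det A * det B
print(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))   # 2.0 2.0
```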

Solving a set of linear equations by matrix operations: write the system as Mr = k; then r = M^{-1}k.
Matrix inversion: M^{-1} is the inverse of M if MM^{-1} = M^{-1}M = I. Only a square matrix can have an inverse.
det(MM^{-1}) = det(M) det(M^{-1}) = det(I) = 1, so det M ≠ 0 is necessary for M to be invertible.
Calculating the inverse matrix from cofactors:
$$(M^{-1})_{ij} = \frac{C_{ji}}{\det M},$$
i.e., the transposed cofactor matrix divided by the determinant. Example p120.3.

Three equivalent ways of solving a set of linear equations Mr = k:
1. Row reduction.
2. Cramer's rule.
3. Inverse matrix: r = M^{-1}k.
Equivalence between r = M^{-1}k and Cramer's rule: writing out $M^{-1} = C^T/\det M$ component by component reproduces Cramer's rule, so Cramer's rule is an explicit realization of r = M^{-1}k.

Gauss-Jordan method of matrix inversion: let
$$(M_{Lp} M_{L(p-1)} \cdots M_{L2} M_{L1})\,M = M_L M = I$$
be the result of a series of elementary row operations on M (each $M_{Li}$ performs one elementary row operation). Then
$$(M_{Lp} M_{L(p-1)} \cdots M_{L2} M_{L1})\,I = M_L I = M^{-1}.$$
That is, the same sequence of row operations that reduces M to I turns I into $M^{-1}$; in practice one row-reduces the augmented matrix $(M\,|\,I)$ to $(I\,|\,M^{-1})$.
Equivalence between row reduction and r = M^{-1}k: row reduction decomposes the $M^{-1}$ in r = M^{-1}k into many elementary steps.
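A minimal Python sketch of Gauss-Jordan inversion, row-reducing (M | I) to (I | M^{-1}) with the same three elementary operations:

```python
import numpy as np

def gauss_jordan_inverse(M):
    """Invert M by row-reducing the augmented matrix (M | I) to (I | M^-1)."""
    n = M.shape[0]
    aug = np.hstack([M.astype(float), np.eye(n)])
    for c in range(n):
        pivot = c + np.argmax(np.abs(aug[c:, c]))
        if abs(aug[pivot, c]) < 1e-12:
            raise ValueError("matrix is singular")
        aug[[c, pivot]] = aug[[pivot, c]]    # row switching
        aug[c] /= aug[c, c]                  # scale the pivot row to get a leading 1
        for i in range(n):                   # clear the rest of the column
            if i != c:
                aug[i] -= aug[i, c] * aug[c]
    return aug[:, n:]

M = np.array([[2., 1.], [1., 3.]])
print(gauss_jordan_inverse(M))
print(np.linalg.inv(M))                      # same result
```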

Rotation matrices: rotating a vector through angle θ in the x-y plane is a linear operation represented by
$$R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}.$$
Functions of matrices: a power series expansion is implied. Examples:
$$e^{A} = \sum_{n=0}^{\infty} \frac{A^n}{n!}, \qquad \sin A = \sum_{n=0}^{\infty} \frac{(-1)^n A^{2n+1}}{(2n+1)!}.$$
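As a numerical illustration of a function of a matrix: exponentiating the standard antisymmetric generator of 2D rotations reproduces R(θ). A sketch using scipy.linalg.expm, which sums the power series internally:

```python
import numpy as np
from scipy.linalg import expm

theta = 0.3
G = np.array([[0., -theta],
              [theta, 0.]])           # antisymmetric generator of 2D rotations

R_series = expm(G)                    # e^G evaluated from its power series
R_direct = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

print(np.allclose(R_series, R_direct))   # True: e^G equals R(theta)
```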

Read: Chapter 3: 6 Homework: 3.6.9,13,15,18,21. Due: March 30

March 26, 28: Linear operators

3.7 Linear combinations, linear functions, linear operators

Linear combination: aA + bB.
Linear function f(r): $f(\mathbf{r}_1+\mathbf{r}_2) = f(\mathbf{r}_1)+f(\mathbf{r}_2)$ and $f(a\mathbf{r}) = a\,f(\mathbf{r})$.
Linear operator O: $O(a\mathbf{A}+b\mathbf{B}) = a\,O(\mathbf{A}) + b\,O(\mathbf{B})$.
Example p125.1; Problem 7.15.
Linear transformation: r' = Mr. The matrix M is a linear operator representing a linear transformation.

Orthogonal transformation: an orthogonal transformation preserves the length of a vector.
Orthogonal matrix: the matrix for an orthogonal transformation is an orthogonal matrix.
Theorem: M is an orthogonal matrix if and only if M^T = M^{-1}.
Theorem: det M = ±1 if M is orthogonal.

2×2 orthogonal matrix: requiring $M^TM = I$ forces the columns of M to be orthonormal, leaving only
$$M = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \quad\text{or}\quad M = \begin{pmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{pmatrix}.$$
Conclusion: a 2×2 orthogonal matrix corresponds to either a rotation (with det M = 1) or a reflection (with det M = −1).

Two-dimensional rotation: r' = Mr with det M = 1 rotates the vector (x, y) into (x', y') through angle θ.
Two-dimensional reflection: the det M = −1 case reflects the vector about the line through the origin at angle θ/2.
Example p128.3.
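A short NumPy check that both forms are orthogonal, preserve length, and are distinguished by their determinant:

```python
import numpy as np

theta = 0.7
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
reflection = np.array([[np.cos(theta),  np.sin(theta)],
                       [np.sin(theta), -np.cos(theta)]])

for M in (rotation, reflection):
    print(np.allclose(M.T @ M, np.eye(2)),        # orthogonal: M^T = M^-1
          round(np.linalg.det(M)))                # det = +1 (rotation) or -1 (reflection)
    v = np.array([3., 4.])
    print(np.linalg.norm(M @ v), np.linalg.norm(v))   # length preserved: 5.0 5.0
```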

Read: Chapter 3: 7 Homework: 3.7.9,15,22,26. Due: April 6

March 30: Linear dependence and independence

3.8 Linear dependence and independence

Linear dependence of vectors: a set of vectors is linearly dependent if some linear combination of them is zero, with not all the coefficients equal to zero.
1. If a set of vectors is linearly dependent, then at least one of the vectors can be written as a linear combination of the others.
2. If a set of vectors is linearly dependent, then at least one row in the row-reduced matrix formed from these vectors is zero. The rank of the matrix is then less than the number of rows.
Example: any three vectors in the x-y plane are linearly dependent, e.g. (1,2), (3,4), (5,6).
Linear independence of vectors: a set of vectors is linearly independent if no linear combination of them is zero, except the trivial one with all coefficients equal to zero.
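The rank test applied to the example above, in a short NumPy sketch:

```python
import numpy as np

# Rows are the vectors (1,2), (3,4), (5,6) from the example above.
V = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])

rank = np.linalg.matrix_rank(V)
print(rank, rank < V.shape[0])   # 2 True: rank < number of rows -> linearly dependent
# Indeed, (5,6) = -1*(1,2) + 2*(3,4).
```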

Linear dependence of functions: a set of functions is linearly dependent if some linear combination of them is identically zero, with not all the coefficients equal to zero. Example: $\sin^2 x$, $\cos^2 x$, and 1 are linearly dependent, since $\sin^2 x + \cos^2 x - 1 = 0$.
Theorem: if the Wronskian of a set of functions,
$$W(x) = \begin{vmatrix} f_1 & f_2 & \cdots & f_n \\ f_1' & f_2' & \cdots & f_n' \\ \vdots & & & \vdots \\ f_1^{(n-1)} & f_2^{(n-1)} & \cdots & f_n^{(n-1)} \end{vmatrix},$$
is not identically zero, then the functions are linearly independent.

Examples p133.1, 2.
Note: W = 0 does not always imply that the functions are linearly dependent. E.g., x² and x|x| have W = 0 on an interval about x = 0 yet are linearly independent there. However, when the functions are analytic (representable by convergent power series), which is the common case, W = 0 does imply linear dependence.
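The Wronskian test is convenient to run symbolically; a sketch using SymPy's wronskian function:

```python
import sympy as sp

x = sp.symbols('x')

# Independent pair: W = x*(2x) - x**2*1 = x**2, not identically zero.
print(sp.wronskian([x, x**2], x))    # x**2
# Dependent pair (2x is a multiple of x): W vanishes identically.
print(sp.wronskian([x, 2*x], x))     # 0
```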

Homogeneous equations Mr = 0:
1. Homogeneous equations always have the trivial solution (all unknowns = 0).
2. If Rank M = number of unknowns, the trivial solution is the only solution.
3. If Rank M < number of unknowns, there are infinitely many solutions.
Theorem: a set of n homogeneous equations with n unknowns has nontrivial solutions if and only if the determinant of the coefficients is zero.
Proof: 1. If det M ≠ 0, then r = M^{-1}0 = 0. 2. If only the trivial solution exists, the columns of M are linearly independent, so det M ≠ 0.
Examples p135.4.

Read: Chapter 3: 8 Homework: 3.8.7,10,13,17,24. Due: April 6

April 2: Special matrices

3.9 Special matrices and formulas

Transpose matrix $A^T$ of A: $(A^T)_{ij} = A_{ji}$.
Complex conjugate matrix $A^*$ of A: $(A^*)_{ij} = (A_{ij})^*$.
Adjoint (transpose conjugate) matrix $A^\dagger$ of A: $A^\dagger = (A^T)^*$.
Inverse matrix $A^{-1}$ of A: $A^{-1}A = AA^{-1} = I$.
Symmetric matrix: $A = A^T$ (A real).
Orthogonal matrix: $A^{-1} = A^T$ (A real).
Hermitian matrix: $A = A^\dagger$.
Unitary matrix: $A^{-1} = A^\dagger$.
Normal matrix: $AA^\dagger = A^\dagger A$, i.e. $[A, A^\dagger] = 0$.
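A NumPy illustration of these definitions (the sample matrix is an arbitrary 2×2 Hermitian matrix):

```python
import numpy as np

A = np.array([[2., 1j], [-1j, 3.]])            # a sample Hermitian matrix

dagger = A.conj().T                            # adjoint: transpose conjugate
print(np.allclose(A, dagger))                  # True: A is Hermitian

w, U = np.linalg.eigh(A)                       # eigenvectors of a Hermitian matrix...
print(np.allclose(U.conj().T @ U, np.eye(2)))  # ...form a unitary matrix: U^-1 = U^dagger
print(np.allclose(A @ dagger, dagger @ A))     # Hermitian implies normal: [A, A^dagger] = 0
```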

Index notation for matrix multiplication: $(AB)_{ij} = \sum_k A_{ik} B_{kj}$.
Kronecker δ symbol: $\delta_{ij} = 1$ if i = j and 0 if i ≠ j; it gives the elements of the unit matrix, $I_{ij} = \delta_{ij}$.
Exercises on index notations:
Associative law for matrix multiplication: A(BC) = (AB)C.
Transpose of a product: $(AB)^T = B^TA^T$. Corollary: $(ABC)^T = C^TB^TA^T$.

Inverse of a product: $(AB)^{-1} = B^{-1}A^{-1}$. Corollary: $(ABC)^{-1} = C^{-1}B^{-1}A^{-1}$.
Trace of a matrix: $\operatorname{Tr} A = \sum_i A_{ii}$, the sum of the diagonal elements.
Trace of a product: Tr(AB) = Tr(BA). Corollary: Tr(ABC) = Tr(BCA) = Tr(CAB) (cyclic permutations).
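These identities are easy to spot-check numerically; a short NumPy sketch with random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

print(np.allclose((A @ B).T, B.T @ A.T))                     # (AB)^T = B^T A^T
print(np.allclose(np.linalg.inv(A @ B),
                  np.linalg.inv(B) @ np.linalg.inv(A)))      # (AB)^-1 = B^-1 A^-1
print(np.isclose(np.trace(A @ B @ C), np.trace(C @ A @ B)))  # cyclic trace property
```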

Read: Chapter 3: 9 Homework: 3.9.2,4,5,23,24. Due: April 13

April 4: Linear vector spaces

3.10 Linear vector spaces

n-dimensional vectors: ordered n-tuples $(A_1, A_2, \ldots, A_n)$.
Linear vector space: a set of vectors together with all their linear combinations; it is closed under addition and under multiplication by scalars.
Subspace: a subset that is itself a vector space; e.g., a plane through the origin is a subspace of 3-dimensional space.
Span: a set of vectors spans the vector space if any vector in the space can be written as a linear combination of the spanning set.
Basis: a set of linearly independent vectors that spans a vector space.
Dimension of a vector space: the number of basis vectors that span the vector space.
Examples p143.1; p144.2.

Inner product of two n-dimensional vectors: $\mathbf{A}\cdot\mathbf{B} = \sum_{i=1}^{n} A_iB_i$.
Length of an n-dimensional vector: $|\mathbf{A}| = \sqrt{\mathbf{A}\cdot\mathbf{A}} = \left(\sum_i A_i^2\right)^{1/2}$.
Two n-dimensional vectors are orthogonal if $\mathbf{A}\cdot\mathbf{B} = 0$.
Schwarz inequality: $|\mathbf{A}\cdot\mathbf{B}| \le |\mathbf{A}|\,|\mathbf{B}|$.

Orthonormal basis: a set of vectors forms an orthonormal basis if 1) they are mutually orthogonal and 2) each vector is normalized.
Gram-Schmidt orthonormalization: starting from n linearly independent vectors $\mathbf{A}_1, \ldots, \mathbf{A}_n$ we can construct an orthonormal basis $\mathbf{e}_1, \ldots, \mathbf{e}_n$: normalize $\mathbf{A}_1$ to get $\mathbf{e}_1$; then at each step subtract from $\mathbf{A}_k$ its components along the previous $\mathbf{e}_i$ and normalize the remainder:
$$\mathbf{e}_k = \frac{\mathbf{A}_k - \sum_{i<k}(\mathbf{e}_i\cdot\mathbf{A}_k)\,\mathbf{e}_i}{\bigl|\mathbf{A}_k - \sum_{i<k}(\mathbf{e}_i\cdot\mathbf{A}_k)\,\mathbf{e}_i\bigr|}.$$
Example p146.4.
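A minimal Python implementation of the procedure (a sketch assuming real, linearly independent input vectors):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors."""
    basis = []
    for v in vectors:
        w = v.astype(float)
        for e in basis:
            w -= np.dot(e, w) * e       # remove the component along each previous e
        basis.append(w / np.linalg.norm(w))
    return np.array(basis)

E = gram_schmidt([np.array([1., 1., 0.]),
                  np.array([1., 0., 1.]),
                  np.array([0., 1., 1.])])
print(np.allclose(E @ E.T, np.eye(3)))   # True: the rows are orthonormal
```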

Bra-ket notation of vectors: the column vector |A⟩ (ket) and the row vector ⟨A| = (|A⟩)^† (bra).
Complex Euclidean space:
Inner product: $\langle A|B\rangle = \sum_i A_i^* B_i$.
Length: $|A| = \sqrt{\langle A|A\rangle}$.
Orthogonal vectors: $\langle A|B\rangle = 0$.
Schwarz inequality: $|\langle A|B\rangle|^2 \le \langle A|A\rangle\,\langle B|B\rangle$.
Example p146.5.

Read: Chapter 3: 10 Homework: 3.10.1,10. Due: April 13

April 6, 9: Eigenvalues and eigenvectors

3.11 Eigenvalues and eigenvectors; Diagonalizing matrices

Eigenvalues and eigenvectors: for a matrix M, if there is a nonzero vector r and a scalar λ such that
$$M\mathbf{r} = \lambda\mathbf{r},$$
then r is called an eigenvector of M, and λ is called the corresponding eigenvalue. M only changes the "length" of its eigenvector r by a factor of the eigenvalue λ, without affecting its "direction".
Rewriting the equation as $(M-\lambda I)\mathbf{r} = 0$: for nontrivial solutions of this homogeneous equation, we need
$$\det(M - \lambda I) = 0.$$
This is called the secular equation, or characteristic equation.

Example: Calculate the eigenvalues and eigenvectors of a given matrix.
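Numerically this is one library call; a sketch with an illustrative matrix (the 2×2 matrix below is a stand-in, not the matrix from the lecture example):

```python
import numpy as np

M = np.array([[3., 1.],
              [1., 3.]])               # illustrative stand-in matrix

eigenvalues, eigenvectors = np.linalg.eig(M)
print(eigenvalues)                     # roots of det(M - lambda*I) = 0: [4. 2.]

# verify M r = lambda r for each eigenvector (the columns of `eigenvectors`)
for lam, r in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(M @ r, lam * r))   # True True
```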

Similarity transformation: let an operator M actively change (rotate, stretch, etc.) a vector: R = Mr. The matrix representation of the operator depends on the choice of basis vectors. Let the matrix C change the basis (a coordinate transformation), so that a vector with components r in the old basis has components r' = Cr in the new one. Then the same operation is represented in the new basis by
$$M' = CMC^{-1}.$$
M' = CMC^{-1} is called a similarity transformation of M. M' and M are called similar matrices: they are the same operator represented in different bases related by the transformation matrix C. That is: if r' = Cr, then M' = CMC^{-1}.
Theorem: a similarity transformation does not change the determinant or trace of a matrix: det M' = det M and Tr M' = Tr M.

Diagonalization of a matrix:
Theorem: a matrix M may be diagonalized by a similarity transformation $C^{-1}MC = D$, where C consists of the column eigenvectors of M, and the diagonal matrix D consists of the corresponding eigenvalues:
$$C = (\mathbf{r}_1\ \mathbf{r}_2\ \cdots), \qquad D = \operatorname{diag}(\lambda_1, \lambda_2, \ldots).$$
That is, the diagonalization equation $C^{-1}MC = D$ just summarizes the eigenvalues and eigenvectors of M: rewritten as MC = CD, it is the collection of equations $M\mathbf{r}_i = \lambda_i\mathbf{r}_i$.
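A NumPy check of the theorem, using the same illustrative stand-in matrix as in the previous sketch:

```python
import numpy as np

M = np.array([[3., 1.],
              [1., 3.]])

eigenvalues, C = np.linalg.eig(M)      # columns of C are the eigenvectors of M
D = np.linalg.inv(C) @ M @ C           # similarity transformation C^-1 M C

print(np.round(D, 10))                 # diagonal, with the eigenvalues on the diagonal
print(np.allclose(D, np.diag(eigenvalues)))   # True
```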

More about the diagonalization of a matrix $C^{-1}MC = D$ (2×2 matrix as an example): D describes in the (x', y') system the same operation as M describes in the (x, y) system. The new x', y' axes are along the eigenvectors of M. The operation is clearer in the new system: along each new axis it is simply a stretch by the corresponding eigenvalue.

Diagonalization of Hermitian matrices:
1. The eigenvalues of a Hermitian matrix are always real.
2. The eigenvectors corresponding to different eigenvalues of a Hermitian matrix are orthogonal.
Theorem: a matrix has real eigenvalues and can be diagonalized by a unitary similarity transformation if and only if it is Hermitian.
Example p155.2.

Corollary: a real matrix has real eigenvalues and can be diagonalized by an orthogonal similarity transformation if and only if it is symmetric.
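A NumPy illustration of the corollary, using np.linalg.eigh, which returns orthonormal eigenvectors for a symmetric matrix:

```python
import numpy as np

S = np.array([[2., 1.],
              [1., 2.]])                     # real symmetric matrix

eigenvalues, O = np.linalg.eigh(S)           # orthonormal eigenvectors as columns of O
print(eigenvalues)                           # real: [1. 3.]
print(np.allclose(O.T @ O, np.eye(2)))       # True: O is orthogonal, O^T = O^-1
print(np.allclose(O.T @ S @ O, np.diag(eigenvalues)))   # orthogonal similarity diagonalizes S
```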

Read: Chapter 3: 11 Homework: 3.11.3,13,14,19,32,33,42. Due: April 20