Lecture 13: Inner Product Spaces & Linear Transformations

Last Time
- Orthonormal Bases: Gram-Schmidt Process
- Mathematical Models and Least Squares Analysis
- Inner Product Space Applications

Elementary Linear Algebra, R. Larsen et al. (5th Edition), TKUEE 翁慶昌 - NTUEE SCC_12_2007
Lecture 13: Inner Product Spaces & L.T.

Today
- Mathematical Models and Least Squares Analysis
- Inner Product Space Applications
- Introduction to Linear Transformations
Reading Assignment: Secs 5.4, 5.5, 6.1, 6.2

Next Time
- The Kernel and Range of a Linear Transformation
- Matrices for Linear Transformations
- Transition Matrices and Similarity
Reading Assignment: Secs
What Have You Actually Learned about Projection So Far?
Mathematical Models and Least Squares Analysis

Let W be a subspace of an inner product space V.
(a) A vector u in V is said to be orthogonal to W if u is orthogonal to every vector in W.
(b) The set of all vectors in V that are orthogonal to W is called the orthogonal complement of W, written W⊥ (read "W perp").

Orthogonal complement of W: W⊥ = { v ∈ V : ⟨v, w⟩ = 0 for every w in W }

Notes:
(1) {0}⊥ = V and V⊥ = {0}
(2) W⊥ is a subspace of V
Direct sum:
Let W₁ and W₂ be two subspaces of V. If each vector u in V can be uniquely written as a sum of a vector w₁ from W₁ and a vector w₂ from W₂, u = w₁ + w₂, then V is the direct sum of W₁ and W₂, and you can write V = W₁ ⊕ W₂.

Thm 5.13: (Properties of orthogonal subspaces)
Let W be a subspace of Rⁿ. Then the following properties are true.
(1) dim(W) + dim(W⊥) = n
(2) Rⁿ = W ⊕ W⊥
(3) (W⊥)⊥ = W
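The direct-sum decomposition of Thm 5.13 can be checked numerically. A minimal NumPy sketch (the subspace W and the vector v below are hypothetical choices, not from the text): split v into its projection onto W plus a component orthogonal to W, and verify v = w₁ + w₂ with w₂ ⊥ W.

```python
import numpy as np

# Hypothetical example: W = span{(1,0,0), (0,1,1)} as a subspace of R^3.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 1.0]])          # columns span W
v = np.array([3.0, 1.0, 5.0])

# Orthogonal projection onto W: proj = A (A^T A)^{-1} A^T v
proj = A @ np.linalg.solve(A.T @ A, A.T @ v)
perp = v - proj                      # the component lying in W-perp

assert np.allclose(proj + perp, v)   # v = w1 + w2 (R^3 = W ⊕ W-perp)
assert np.allclose(A.T @ perp, 0.0)  # w2 is orthogonal to every column of A
```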
Find by the other method:
Thm 5.16: (Fundamental subspaces of a matrix)
If A is an m×n matrix, then
(1) (R(A))⊥ = N(Aᵀ)
(2) (N(Aᵀ))⊥ = R(A)
(3) (R(Aᵀ))⊥ = N(A)
(4) (N(A))⊥ = R(Aᵀ)
Ex 6: (Fundamental subspaces)
Find the four fundamental subspaces of the matrix A (given in reduced row-echelon form).
Sol:
Check:
Ex 3: Let W be a subspace of R⁴.
(a) Find a basis for W.
(b) Find a basis for the orthogonal complement of W.
Sol: (reduced row-echelon form)
is a basis for W Notes:
Least Squares Problem

Least squares problem: Ax = b (a system of linear equations)
(1) When the system is consistent, we can use Gaussian elimination with back-substitution to solve for x.
(2) When the system is inconsistent, how do we find the "best possible" solution of the system? That is, the value of x for which the difference between Ax and b is as small as possible.
Least squares solution:
Given a system Ax = b of m linear equations in n unknowns, the least squares problem is to find a vector x in Rⁿ that minimizes ‖Ax − b‖ with respect to the Euclidean inner product on Rⁿ. Such a vector is called a least squares solution of Ax = b.
AᵀAx = Aᵀb (the normal system associated with Ax = b)
Note: The problem of finding the least squares solution of Ax = b is equivalent to the problem of finding an exact solution of the associated normal system.

Thm: For any linear system Ax = b, the associated normal system AᵀAx = Aᵀb is consistent, and all solutions of the normal system are least squares solutions of Ax = b. Moreover, if W is the column space of A, and x is any least squares solution of Ax = b, then the orthogonal projection of b on W is projW b = Ax.
Thm: If A is an m×n matrix with linearly independent column vectors, then for every m×1 matrix b, the linear system Ax = b has a unique least squares solution, given by x = (AᵀA)⁻¹Aᵀb. Moreover, if W is the column space of A, then the orthogonal projection of b on W is projW b = Ax = A(AᵀA)⁻¹Aᵀb.
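The theorem above can be illustrated with a small NumPy sketch (the matrix A and vector b below are hypothetical, not the ones from Ex 7): solve the normal system AᵀAx = Aᵀb and check that the residual b − Ax is orthogonal to the column space.

```python
import numpy as np

# Hypothetical overdetermined system: 3 equations, 2 unknowns.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([0.0, 1.0, 3.0])

x_hat = np.linalg.solve(A.T @ A, A.T @ b)   # solve the normal system
proj = A @ x_hat                            # projection of b on col(A)

# the residual b - Ax is orthogonal to the column space of A
assert np.allclose(A.T @ (b - proj), 0.0)
# agrees with NumPy's built-in least squares solver
assert np.allclose(x_hat, np.linalg.lstsq(A, b, rcond=None)[0])
```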
Ex 7: (Solving the normal equations) Find the least squares solution of the following system and find the orthogonal projection of b on the column space of A.
Sol: the associated normal system
the least squares solution of Ax = b the orthogonal projection of b on the column space of A
Keywords in Section 5.4: orthogonal to W: 正交於 W orthogonal complement: 正交補集 direct sum: 直和 projection onto a subspace: 在子空間的投影 fundamental subspaces: 基本子空間 least squares problem: 最小平方問題 normal equations: 一般方程式
Application: Cross Product

Cross product (vector product) of two vectors A and B: A × B (a vector)
Direction: use the right-hand rule.
The cross product is not commutative: A × B = −(B × A)
The cross product is distributive: A × (B + C) = A × B + A × C
Application: Cross Product

Parallelogram representation of the vector product: A and B span a parallelogram with Area = |A × B| = |A||B| sin θ.
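A quick NumPy check of these cross-product properties (the vectors are hypothetical examples):

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
d = np.array([2.0, 3.0, 4.0])

c = np.cross(a, b)
assert np.allclose(c, [0.0, 0.0, 1.0])   # e1 x e2 = e3 (right-hand rule)
assert np.allclose(np.cross(b, a), -c)   # not commutative: B x A = -(A x B)
# distributive over addition
assert np.allclose(np.cross(a, b + d), np.cross(a, b) + np.cross(a, d))
# parallelogram area = |a x b| = |a||b| sin(theta) = 1 here (theta = 90 deg)
assert np.isclose(np.linalg.norm(c), 1.0)
```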
Triple scalar product of vectors: A · (B × C), a scalar.
The dot and the cross may be interchanged: A · (B × C) = (A × B) · C
Triple scalar product: parallelepiped representation. The volume of the parallelepiped defined by A, B, and C is |A · (B × C)|.
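The triple scalar product and its volume interpretation, sketched in NumPy (the three vectors are hypothetical; they span a box of volume 1 × 2 × 3):

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 2.0, 0.0])
c = np.array([1.0, 1.0, 3.0])

# triple scalar product a . (b x c) equals det of the matrix with rows a, b, c
triple = np.dot(a, np.cross(b, c))
assert np.isclose(triple, np.linalg.det(np.vstack([a, b, c])))
# dot and cross may be interchanged: (a x b) . c = a . (b x c)
assert np.isclose(np.dot(np.cross(a, b), c), triple)
# volume of the parallelepiped spanned by a, b, c
assert np.isclose(abs(triple), 6.0)
```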
Fourier Approximation
Fourier Approximation

The Fourier series transforms a given periodic function into a superposition of sine and cosine waves. The following equations are used (on the interval [0, 2π]):

g(x) = a₀/2 + a₁ cos x + … + aₙ cos nx + b₁ sin x + … + bₙ sin nx

aⱼ = (1/π) ∫₀^2π f(x) cos jx dx,  bⱼ = (1/π) ∫₀^2π f(x) sin jx dx
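The coefficient formulas can be sketched numerically. A minimal NumPy example (the test signal and the Riemann-sum integration are assumptions for illustration, assuming the interval [0, 2π] as above): the computed coefficients recover the amplitudes of the sine and cosine components.

```python
import numpy as np

N = 1000
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
f = np.sin(x) + 0.5 * np.cos(2.0 * x)     # hypothetical test signal

def coef(j, basis):
    # Riemann-sum approximation of (1/pi) * integral_0^{2 pi} f(x) basis(j x) dx
    return (2.0 / N) * np.sum(f * basis(j * x))

assert np.isclose(coef(1, np.sin), 1.0)          # b_1 recovers sin amplitude
assert np.isclose(coef(2, np.cos), 0.5)          # a_2 recovers cos amplitude
assert np.isclose(coef(1, np.cos), 0.0, atol=1e-8)  # absent component is ~0
```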
Today
- Mathematical Models and Least Squares Analysis (Cont.)
- Inner Product Space Applications
- Introduction to Linear Transformations
- The Kernel and Range of a Linear Transformation
6.1 Introduction to Linear Transformations

A function T that maps a vector space V into a vector space W: T: V → W
V: the domain of T
W: the codomain of T
Image of v under T:
If v is in V and w is in W such that T(v) = w, then w is called the image of v under T.
The range of T: the set of all images of vectors in V.
The preimage of w: the set of all v in V such that T(v) = w.
Ex 1: (A function from R² into R²)
T: R² → R², T(v₁, v₂) = (v₁ − v₂, v₁ + 2v₂)
(a) Find the image of v = (−1, 2). (b) Find the preimage of w = (−1, 11).
Sol:
(a) T(−1, 2) = (−1 − 2, −1 + 2·2) = (−3, 3)
(b) T(v₁, v₂) = (−1, 11) gives v₁ − v₂ = −1 and v₁ + 2v₂ = 11, so v₁ = 3, v₂ = 4.
Thus {(3, 4)} is the preimage of w = (−1, 11).
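Ex 1 can be checked directly in code. A small sketch (the formula for T is reconstructed from the preimage answer {(3, 4)} given in the text):

```python
# Map from Ex 1: T(v1, v2) = (v1 - v2, v1 + 2 v2), reconstructed from
# the stated preimage {(3, 4)} of w = (-1, 11).
def T(v1, v2):
    return (v1 - v2, v1 + 2 * v2)

# (a) image of v = (-1, 2)
assert T(-1, 2) == (-3, 3)

# (b) preimage of w = (-1, 11): solving v1 - v2 = -1, v1 + 2 v2 = 11
# gives v1 = 3, v2 = 4
assert T(3, 4) == (-1, 11)
```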
Linear Transformation (L.T.):
A function T: V → W is a linear transformation if, for every u and v in V and every scalar c,
(1) T(u + v) = T(u) + T(v)
(2) T(cu) = cT(u)
Notes:
(1) A linear transformation is said to be operation preserving: addition in V is carried to addition in W, and scalar multiplication in V is carried to scalar multiplication in W.
(2) A linear transformation from a vector space into itself is called a linear operator.
Ex 2: (Verifying a linear transformation T from R² into R²)
T(v₁, v₂) = (v₁ − v₂, v₁ + 2v₂)
Pf: Let u = (u₁, u₂) and v = (v₁, v₂).
Vector addition:
T(u + v) = T(u₁ + v₁, u₂ + v₂) = ((u₁ + v₁) − (u₂ + v₂), (u₁ + v₁) + 2(u₂ + v₂)) = (u₁ − u₂, u₁ + 2u₂) + (v₁ − v₂, v₁ + 2v₂) = T(u) + T(v)
Scalar multiplication:
T(cu) = T(cu₁, cu₂) = (cu₁ − cu₂, cu₁ + 2cu₂) = c(u₁ − u₂, u₁ + 2u₂) = cT(u)
Therefore, T is a linear transformation
Ex 3: (Functions that are not linear transformations)
(a) f(x) = sin x: sin(x₁ + x₂) ≠ sin x₁ + sin x₂ in general
(b) f(x) = x²: (x₁ + x₂)² ≠ x₁² + x₂² in general
(c) f(x) = x + 1: f(x₁ + x₂) = x₁ + x₂ + 1, but f(x₁) + f(x₂) = x₁ + x₂ + 2
Notes: Two uses of the term "linear".
(1) f(x) = x + 1 is called a linear function because its graph is a line.
(2) f(x) = x + 1 is not a linear transformation from the vector space R into R because it preserves neither vector addition nor scalar multiplication.
Zero transformation: T: V → W, T(v) = 0 for every v in V
Identity transformation: T: V → V, T(v) = v for every v in V

Thm 6.1: (Properties of linear transformations)
Let T: V → W be a linear transformation. Then
(1) T(0) = 0
(2) T(−v) = −T(v)
(3) T(u − v) = T(u) − T(v)
(4) If v = c₁v₁ + c₂v₂ + … + cₙvₙ, then T(v) = c₁T(v₁) + c₂T(v₂) + … + cₙT(vₙ)
Ex 4: (Linear transformations and bases)
Let T: R³ → R³ be a linear transformation for which the images of the standard basis vectors are given. Find T(2, 3, −2).
Sol: (T is a L.T.)
Ex 5: (A linear transformation defined by a matrix)
The function T is defined as T(v) = Av.
Sol: (vector addition) (scalar multiplication)
Thm 6.2: (The linear transformation given by a matrix)
Let A be an m×n matrix. The function T defined by T(v) = Av is a linear transformation from Rⁿ into Rᵐ.
Note: vectors in Rⁿ are written as n×1 column matrices, so Av is an m×1 column matrix in Rᵐ.
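Thm 6.2 can be sketched numerically (the 2×3 matrix A and the vectors below are hypothetical): T(v) = Av preserves both operations, so it is linear from R³ into R².

```python
import numpy as np

# Hypothetical 2x3 matrix, so T: R^3 -> R^2.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, -1.0]])

def T(v):
    return A @ v

u = np.array([1.0, 0.0, 2.0])
v = np.array([3.0, -1.0, 1.0])
c = 4.0

assert np.allclose(T(u + v), T(u) + T(v))   # preserves vector addition
assert np.allclose(T(c * u), c * T(u))      # preserves scalar multiplication
```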
Ex 7: (Rotation in the plane)
Show that the L.T. T: R² → R² given by the matrix
A = [cos θ  −sin θ; sin θ  cos θ]
has the property that it rotates every vector in R² counterclockwise about the origin through the angle θ.
Sol: (polar coordinates) write v = (r cos α, r sin α), where
r: the length of v
α: the angle from the positive x-axis counterclockwise to the vector v
Then T(v) = Av = (r cos(α + θ), r sin(α + θ)), where
r: the length of T(v)
α + θ: the angle from the positive x-axis counterclockwise to the vector T(v)
Thus, T(v) is the vector that results from rotating the vector v counterclockwise through the angle θ.
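A quick check of the rotation matrix in NumPy (the choice θ = π/2 and the test vector are hypothetical): rotation moves e₁ to e₂ and preserves length.

```python
import numpy as np

def rotation(theta):
    # matrix of the rotation transformation from Ex 7
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

theta = np.pi / 2                 # rotate 90 degrees counterclockwise
v = np.array([1.0, 0.0])
w = rotation(theta) @ v

assert np.allclose(w, [0.0, 1.0])                        # e1 maps to e2
assert np.isclose(np.linalg.norm(w), np.linalg.norm(v))  # length preserved
```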
Ex 8: (A projection in R³)
The linear transformation T: R³ → R³ given by T(x, y, z) = (x, y, 0) is called a projection in R³.
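The projection onto the xy-plane, written as a matrix transformation (a minimal sketch; the test vector is hypothetical). A defining feature of a projection is that applying it twice changes nothing:

```python
import numpy as np

# Matrix of T(x, y, z) = (x, y, 0), the projection onto the xy-plane.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])

v = np.array([2.0, -3.0, 5.0])
assert np.allclose(A @ v, [2.0, -3.0, 0.0])  # z-component is dropped
assert np.allclose(A @ A, A)                 # projecting twice = projecting once
```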
Ex 9: (A linear transformation from Mₘ×ₙ into Mₙ×ₘ)
Let T(A) = Aᵀ. Show that T is a linear transformation.
Sol: For any A and B in Mₘ×ₙ and any scalar c,
T(A + B) = (A + B)ᵀ = Aᵀ + Bᵀ = T(A) + T(B)
T(cA) = (cA)ᵀ = cAᵀ = cT(A)
Therefore, T is a linear transformation from Mₘ×ₙ into Mₙ×ₘ.
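The transpose map's linearity can be verified numerically (the 2×3 matrices below are hypothetical examples):

```python
import numpy as np

# T(A) = A^T maps M_{2x3} into M_{3x2}; check the two linearity conditions.
def T(M):
    return M.T

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
B = np.array([[0.0, 1.0, 0.0],
              [2.0, 0.0, 1.0]])
c = 3.0

assert np.allclose(T(A + B), T(A) + T(B))   # (A+B)^T = A^T + B^T
assert np.allclose(T(c * A), c * T(A))      # (cA)^T = c A^T
assert T(A).shape == (3, 2)                 # image lives in M_{3x2}
```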
Keywords in Section 6.1: function: 函數 domain: 論域 codomain: 對應論域 image of v under T: 在 T 映射下 v 的像 range of T: T 的值域 preimage of w: w 的反像 linear transformation: 線性轉換 linear operator: 線性運算子 zero transformation: 零轉換 identity transformation: 相等轉換
6.2 The Kernel and Range of a Linear Transformation

Kernel of a linear transformation T:
Let T: V → W be a linear transformation. Then the set of all vectors v in V that satisfy T(v) = 0 is called the kernel of T and is denoted by ker(T).

Ex 1: (Finding the kernel of a linear transformation)
Sol:
Ex 2: (The kernel of the zero and identity transformations)
(a) T(v) = 0 (the zero transformation T: V → W): ker(T) = V
(b) T(v) = v (the identity transformation T: V → V): ker(T) = {0}

Ex 3: (Finding the kernel of a linear transformation)
Sol:
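For a matrix transformation T(v) = Av, the kernel is the null space of A, which can be computed numerically. A sketch using the SVD (the matrix A is a hypothetical example, and the SVD-based null-space extraction is one standard numerical approach, not the text's method):

```python
import numpy as np

# Hypothetical 2x3 matrix of rank 2, so dim ker(T) = 3 - 2 = 1.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 2.0]])

# Right singular vectors with (near-)zero singular value span the null space.
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:]            # rows spanning ker(T)

assert null_basis.shape[0] == 1   # nullity = n - rank = 1
for v in null_basis:
    assert np.allclose(A @ v, 0.0)  # every basis vector satisfies T(v) = 0
```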