
4.2 Linear Transformations from R^n to R^m
Functions from R^n to R
A function is a rule f that associates with each element in a set A one and only one element in a set B. If f associates the element b with the element a, then we write b = f(a) and say that b is the image of a under f or that f(a) is the value of f at a. The set A is called the domain of f and the set B is called the codomain of f. The subset of B consisting of all possible values of f as a varies over A is called the range of f.

Functions from R^n to R^m
If the domain of a function f is R^n and the codomain is R^m, then f is called a map or transformation from R^n to R^m, and we say that f maps R^n into R^m, denoted by f : R^n → R^m. If m = n, the transformation f : R^n → R^n is called an operator on R^n.
Suppose f1, f2, …, fm are real-valued functions of n real variables, say
w1 = f1(x1, x2, …, xn)
w2 = f2(x1, x2, …, xn)
⋮
wm = fm(x1, x2, …, xn)
These m equations assign a unique point (w1, w2, …, wm) in R^m to each point (x1, x2, …, xn) in R^n and thus define a transformation from R^n to R^m. If we denote this transformation by T : R^n → R^m, then
T(x1, x2, …, xn) = (w1, w2, …, wm)

Example: The equations
w1 = x1 + x2
w2 = 3x1x2
w3 = x1² − x2²
define a transformation T : R^2 → R^3. With this transformation, the image of the point (x1, x2) is
T(x1, x2) = (x1 + x2, 3x1x2, x1² − x2²)
Thus, for example, T(1, −2) = (−1, −6, −3).
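A quick numerical check of this example (a minimal sketch assuming NumPy is available; the component functions are the ones written above):

```python
import numpy as np

def T(x1, x2):
    # Component functions of the transformation T : R^2 -> R^3
    return np.array([x1 + x2, 3 * x1 * x2, x1**2 - x2**2])

print(T(1, -2))  # expected image: [-1 -6 -3]
```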

Linear Transformations from R^n to R^m
A linear transformation (or a linear operator if m = n) T : R^n → R^m is defined by equations of the form
w1 = a11x1 + a12x2 + … + a1nxn
w2 = a21x1 + a22x2 + … + a2nxn
⋮
wm = am1x1 + am2x2 + … + amnxn
or, in matrix notation, w = Ax. The matrix A = [aij] is called the standard matrix for the linear transformation T, and T is called multiplication by A.
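To make "multiplication by A" concrete, here is a minimal sketch (NumPy assumed; the matrix entries are arbitrary illustrative values, not taken from the slides):

```python
import numpy as np

# Standard matrix of a linear transformation T : R^3 -> R^2
# (entries chosen only for illustration)
A = np.array([[1.0, 2.0, 0.0],
              [0.0, -1.0, 3.0]])

def T(x):
    # T is "multiplication by A": w = Ax
    return A @ x

x = np.array([1.0, 1.0, 1.0])
print(T(x))  # w = Ax = [3. 2.]
```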

Example: Suppose a linear transformation T is defined by an explicit system of linear equations of the form above; find the standard matrix for T and calculate the image of a given vector.
Solution: T can be expressed in matrix form as w = Ax, and the coefficient matrix A of the system is the standard matrix for T.

Furthermore, once the standard matrix [T] is known, the image of any specific vector x is obtained from the single matrix product [T]x.
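A sketch of this workflow on a made-up system (the equations in the comments are illustrative assumptions, not the ones from the slide; NumPy assumed):

```python
import numpy as np

# Illustrative system (assumed for this sketch):
#   w1 = 2*x1 + 1*x2 - 1*x3
#   w2 = 1*x1 - 3*x2 + 4*x3
# Reading off the coefficients gives the standard matrix [T]:
T_std = np.array([[2.0, 1.0, -1.0],
                  [1.0, -3.0, 4.0]])

x = np.array([1.0, 0.0, 2.0])
print(T_std @ x)  # image of x under T: [0. 9.]
```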

Remarks
Notation: If it is important to emphasize that A is the standard matrix for T, we denote the linear transformation T : R^n → R^m by T_A : R^n → R^m. Thus,
T_A(x) = Ax
We can also denote the standard matrix for T by the symbol [T], so that
T(x) = [T]x
Remark: We have established a correspondence between m×n matrices and linear transformations from R^n to R^m: to each matrix A there corresponds a linear transformation T_A (multiplication by A), and to each linear transformation T : R^n → R^m there corresponds an m×n matrix [T] (the standard matrix for T).

Examples
Zero transformation from R^n to R^m: If 0 is the m×n zero matrix and 0 is the zero vector in R^m, then for every vector x in R^n,
T_0(x) = 0x = 0
So multiplication by the zero matrix maps every vector in R^n into the zero vector in R^m. We call T_0 the zero transformation from R^n to R^m.
Identity operator on R^n: If I is the n×n identity matrix, then for every vector x in R^n,
T_I(x) = Ix = x
So multiplication by I maps every vector in R^n into itself. We call T_I the identity operator on R^n.

Reflection Operators
In general, operators on R^2 and R^3 that map each vector into its symmetric image about some line or plane are called reflection operators. Such operators are linear.
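For concreteness, the standard matrices of the basic reflection operators on R^2 can be applied like any other linear transformation; a minimal NumPy sketch:

```python
import numpy as np

# Standard matrices of basic reflection operators on R^2
reflect_x_axis = np.array([[1, 0], [0, -1]])   # (x, y) -> (x, -y)
reflect_y_axis = np.array([[-1, 0], [0, 1]])   # (x, y) -> (-x, y)
reflect_y_eq_x = np.array([[0, 1], [1, 0]])    # (x, y) -> (y, x)

v = np.array([2, 3])
print(reflect_x_axis @ v)  # [ 2 -3]
print(reflect_y_axis @ v)  # [-2  3]
print(reflect_y_eq_x @ v)  # [3 2]
```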

Projection Operators
In general, a projection operator (or, more precisely, an orthogonal projection operator) on R^2 or R^3 is any operator that maps each vector into its orthogonal projection on a line or plane through the origin. Projection operators are linear.

Rotation Operators
An operator that rotates each vector in R^2 through a fixed angle θ is called a rotation operator on R^2.
Rotation through an angle θ:
w1 = x cos θ − y sin θ
w2 = x sin θ + y cos θ
Standard matrix:
[cos θ  −sin θ]
[sin θ   cos θ]

Example: Use matrix multiplication to find the image of the vector (1, 1) when it is rotated through an angle of 30° (θ = π/6).
Solution: The image of the vector is
w1 = (cos 30°)(1) − (sin 30°)(1) = (√3 − 1)/2 ≈ 0.37
w2 = (sin 30°)(1) + (cos 30°)(1) = (√3 + 1)/2 ≈ 1.37
So the rotated vector is approximately (0.37, 1.37).
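A quick numerical check of this rotation (NumPy assumed):

```python
import numpy as np

theta = np.pi / 6  # 30 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

x = np.array([1.0, 1.0])
print(R @ x)  # approximately [0.366 1.366]
```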

Dilation and Contraction Operators
If k is a nonnegative scalar, the operator T(x) = kx on R^2 or R^3 is called a contraction with factor k if 0 ≤ k ≤ 1 and a dilation with factor k if k ≥ 1.
Contraction with factor k on R^2 (0 ≤ k ≤ 1) and dilation with factor k on R^2 (k ≥ 1):
w1 = kx
w2 = ky
Standard matrix:
[k  0]
[0  k]

Compositions of Linear Transformations
If T_A : R^n → R^k and T_B : R^k → R^m are linear transformations, then for each x in R^n one can first compute T_A(x), which is a vector in R^k, and then one can compute T_B(T_A(x)), which is a vector in R^m. Thus, the application of T_A followed by T_B produces a transformation from R^n to R^m. This transformation is called the composition of T_B with T_A and is denoted by T_B ∘ T_A. Thus
(T_B ∘ T_A)(x) = T_B(T_A(x))
The composition T_B ∘ T_A is linear since
(T_B ∘ T_A)(x) = T_B(T_A(x)) = B(Ax) = (BA)x
The standard matrix for T_B ∘ T_A is BA. That is,
[T_B ∘ T_A] = [T_B][T_A]
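A small numerical illustration that applying T_A and then T_B agrees with multiplication by BA (NumPy assumed; the matrices are arbitrary illustrative values):

```python
import numpy as np

A = np.array([[1, 2],
              [0, 1],
              [3, -1]])       # standard matrix of T_A : R^2 -> R^3
B = np.array([[1, 0, 2],
              [-1, 1, 0]])    # standard matrix of T_B : R^3 -> R^2

x = np.array([1, 4])

step_by_step = B @ (A @ x)    # T_B(T_A(x))
composed = (B @ A) @ x        # multiplication by the standard matrix BA
print(step_by_step, composed) # both give [ 7 -5]
```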

Remark: The formula [T_B ∘ T_A] = [T_B][T_A] captures an important idea: multiplying matrices is equivalent to composing the corresponding linear transformations in the right-to-left order of the factors. Alternatively, if T1 : R^n → R^k and T2 : R^k → R^m are linear transformations, then, because the standard matrix for the composition T2 ∘ T1 is the product of the standard matrices of T2 and T1, we have
[T2 ∘ T1] = [T2][T1]

Example: Find the standard matrix for the linear operator T : R^2 → R^2 that first reflects a vector about the y-axis, then reflects the resulting vector about the x-axis.
Solution: The linear transformation T can be expressed as the composition T = T2 ∘ T1, where T1 is the reflection about the y-axis and T2 is the reflection about the x-axis. Since
[T1] = [−1  0]        [T2] = [1   0]
       [ 0  1]               [0  −1]
the standard matrix for T is
[T] = [T2][T1] = [−1   0]
                 [ 0  −1]
which is called the reflection about the origin.

Note: Composition of linear transformations is NOT commutative in general.
Example: Let T1 : R^2 → R^2 be the reflection operator about the line y = x, and let T2 : R^2 → R^2 be the orthogonal projection on the y-axis. Then
[T1] = [0  1]        [T2] = [0  0]
       [1  0]               [0  1]
[T1][T2] = [0  1]        [T2][T1] = [0  0]
           [0  0]                   [1  0]
so [T1][T2] ≠ [T2][T1]. Thus T1 ∘ T2 and T2 ∘ T1 have different effects on a vector x.
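The same non-commutativity check in code (NumPy assumed):

```python
import numpy as np

T1 = np.array([[0, 1], [1, 0]])   # reflection about the line y = x
T2 = np.array([[0, 0], [0, 1]])   # orthogonal projection on the y-axis

print(T1 @ T2)                            # standard matrix of T1 ∘ T2
print(T2 @ T1)                            # standard matrix of T2 ∘ T1
print(np.array_equal(T1 @ T2, T2 @ T1))   # False: composition is not commutative
```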

4.3 Properties of Linear Transformations from R^n to R^m
One-to-One Linear Transformations
Definition: A linear transformation T : R^n → R^m is said to be one-to-one if T maps distinct vectors (points) in R^n into distinct vectors (points) in R^m.
Remark: That is, for each vector w in the range of a one-to-one linear transformation T, there is exactly one vector x such that T(x) = w.
Example: The linear operator T : R^2 → R^2 that rotates each vector through an angle θ is a one-to-one linear transformation. In contrast, if T : R^2 → R^2 is the orthogonal projection on the x-axis, then it is not one-to-one, since distinct points on the same vertical line have the same image.

Theorem (Equivalent Statements)
If A is an n×n matrix and T_A : R^n → R^n is multiplication by A, then the following statements are equivalent:
A is invertible
The range of T_A is R^n
T_A is one-to-one

Example: The rotation operator T : R^2 → R^2 that rotates each vector through an angle θ is one-to-one. The standard matrix for T is
[T] = [cos θ  −sin θ]
      [sin θ   cos θ]
which is invertible since
det[T] = cos²θ + sin²θ = 1 ≠ 0
Example: If T : R^2 → R^2 is the orthogonal projection on the x-axis, then it is not one-to-one. The standard matrix for T is
[T] = [1  0]
      [0  0]
which is not invertible since det[T] = 0.
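A numerical check of these two determinants (NumPy assumed):

```python
import numpy as np

theta = 0.7  # any fixed angle
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
projection_x = np.array([[1.0, 0.0],
                         [0.0, 0.0]])

print(np.linalg.det(rotation))      # ~1.0 -> invertible, so the rotation is one-to-one
print(np.linalg.det(projection_x))  # 0.0 -> not invertible, so the projection is not
```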

Inverse of a One-to-One Linear Operator
Suppose T_A : R^n → R^n is a one-to-one linear operator.
⇒ The matrix A is invertible.
⇒ Multiplication by A^-1, written T_(A^-1) : R^n → R^n, is itself a linear operator; it is called the inverse of T_A.
⇒ T_A ∘ T_(A^-1) = T_(AA^-1) = T_I and T_(A^-1) ∘ T_A = T_(A^-1 A) = T_I, the identity operator.
⇒ If w is the image of x under T_A, then T_(A^-1) maps w back into x, since
T_(A^-1)(w) = T_(A^-1)(T_A(x)) = x
When a one-to-one linear operator on R^n is written as T : R^n → R^n, the inverse of the operator T is denoted by T^-1. Thus, in terms of standard matrices, we have
[T^-1] = [T]^-1

Example: Show that the linear operator T : R^2 → R^2 defined by the equations
w1 = 2x1 + x2
w2 = 3x1 + 4x2
is one-to-one, and find T^-1(w1, w2).
Solution: The matrix form of these equations is
[w1]   [2  1] [x1]
[w2] = [3  4] [x2]
so the standard matrix for T is
[T] = [2  1]
      [3  4]
This matrix is invertible (det[T] = 5 ≠ 0), so T is one-to-one, and the standard matrix for T^-1 is
[T]^-1 = [ 4/5  −1/5]
         [−3/5   2/5]

Thus
[T]^-1 [w1]   [ (4/5)w1 − (1/5)w2 ]
       [w2] = [−(3/5)w1 + (2/5)w2 ]
from which we conclude that
T^-1(w1, w2) = ((4/5)w1 − (1/5)w2, −(3/5)w1 + (2/5)w2)
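A numerical check of this inverse (NumPy assumed):

```python
import numpy as np

T_std = np.array([[2.0, 1.0],
                  [3.0, 4.0]])    # standard matrix for T

T_inv = np.linalg.inv(T_std)      # standard matrix for T^-1
print(T_inv)                      # [[ 0.8 -0.2]
                                  #  [-0.6  0.4]]

w = T_std @ np.array([1.0, 2.0])  # image of x = (1, 2) under T
print(T_inv @ w)                  # [1. 2.] -> T^-1 maps w back to x
```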

Linearity Properties
Theorem (Properties of Linear Transformations): A transformation T : R^n → R^m is linear if and only if the following relationships hold for all vectors u and v in R^n and every scalar c:
T(u + v) = T(u) + T(v)
T(cu) = cT(u)
Example: Determine whether T : R^2 → R^2 is a linear operator if T(x, y) = (x, 3y).
Solution: For u = (u1, u2) and v = (v1, v2),
T(u + v) = (u1 + v1, 3(u2 + v2)) = (u1, 3u2) + (v1, 3v2) = T(u) + T(v)
T(cu) = (cu1, 3cu2) = c(u1, 3u2) = cT(u)
So T(x, y) = (x, 3y) is a linear operator.
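A spot check of the two linearity conditions for T(x, y) = (x, 3y) on a few random vectors (a numerical sanity check, not a proof; NumPy assumed):

```python
import numpy as np

def T(v):
    # T(x, y) = (x, 3y)
    return np.array([v[0], 3 * v[1]])

rng = np.random.default_rng(0)
for _ in range(3):
    u, v = rng.standard_normal(2), rng.standard_normal(2)
    c = rng.standard_normal()
    print(np.allclose(T(u + v), T(u) + T(v)),  # additivity
          np.allclose(T(c * u), c * T(u)))     # homogeneity
```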

Theorem: If T : R^n → R^m is a linear transformation, and e1, e2, …, en are the standard basis vectors for R^n, then the standard matrix for T is
A = [T] = [T(e1) | T(e2) | … | T(en)]
We call e1, e2, …, en the standard basis vectors for R^n if
e1 = (1, 0, 0, …, 0), e2 = (0, 1, 0, …, 0), …, en = (0, 0, …, 0, 1)
In particular, in R^2 and R^3 the standard basis vectors are the vectors of length 1 along the coordinate axes.

Example: Find the standard matrix for T : R^2 → R^2 from the images of the standard basis vectors if T dilates a vector by a factor of 2, then reflects that vector about the line y = x, and then projects that vector orthogonally onto the x-axis.
Solution: Here
T(e1): (1, 0) → (2, 0) → (0, 2) → (0, 0)
T(e2): (0, 1) → (0, 2) → (2, 0) → (2, 0)
Thus
[T] = [T(e1) | T(e2)] = [0  2]
                        [0  0]
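The same matrix can be checked by composing the three standard matrices and by applying the result to e1 and e2 (NumPy assumed):

```python
import numpy as np

dilate  = np.array([[2, 0], [0, 2]])   # dilation by a factor of 2
reflect = np.array([[0, 1], [1, 0]])   # reflection about the line y = x
project = np.array([[1, 0], [0, 0]])   # orthogonal projection onto the x-axis

# The composition applies right-to-left: dilate first, project last
T_std = project @ reflect @ dilate
print(T_std)                           # [[0 2]
                                       #  [0 0]]

# The columns of [T] are the images of the standard basis vectors
e1, e2 = np.array([1, 0]), np.array([0, 1])
print(T_std @ e1, T_std @ e2)          # [0 0] [2 0]
```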

Eigenvalue and Eigenvector
Definition: If T : R^n → R^n is a linear operator, then a scalar λ is called an eigenvalue of T if there is a nonzero x in R^n such that T(x) = λx. Those nonzero vectors x that satisfy this equation are called the eigenvectors of T corresponding to λ.
Remarks: If A is the standard matrix for T, then the equation becomes
Ax = λx
The eigenvalues of T are precisely the eigenvalues of its standard matrix A, and x is an eigenvector of T corresponding to λ if and only if x is an eigenvector of A corresponding to λ.

In R^2 and R^3, this means that multiplication by A maps each eigenvector x into a vector that lies on the same line as x: if λ is an eigenvalue of A and x is a corresponding eigenvector, then Ax = λx, so multiplication by A maps x into a scalar multiple of itself.

Example: Let T : R^2 → R^2 be the reflection about the y-axis. Find the eigenvalues and corresponding eigenvectors of T. Check your calculations by calculating the eigenvalues and corresponding eigenvectors from the standard matrix for T.
Solution: This transformation maps vectors on the x-axis to their negatives, vectors on the y-axis into themselves, and maps no other vectors into scalar multiples of themselves. Thus the eigenvalues are λ = ±1, and the eigenvectors are the vectors (x, y) with either x = 0 or y = 0, but not both.
To verify this, we observe that since e1 → −e1 and e2 → e2, the standard matrix for the transformation is
[T] = [−1  0]
      [ 0  1]
Hence the characteristic equation is
det(λI − [T]) = (λ + 1)(λ − 1) = 0

or λ² − 1 = 0, with solutions λ = ±1. If (x, y) is an eigenvector corresponding to λ = 1, then
(I − [T]) [x]   [2  0] [x]   [0]
          [y] = [0  0] [y] = [0]
or 2x = 0, so x = 0 and the vector must lie on the y-axis. If (x, y) is an eigenvector corresponding to λ = −1, then
(−I − [T]) [x]   [0   0] [x]   [0]
           [y] = [0  −2] [y] = [0]
or −2y = 0, so y = 0 and the vector must lie on the x-axis.
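The same result obtained numerically from the standard matrix (NumPy assumed):

```python
import numpy as np

T_std = np.array([[-1.0, 0.0],
                  [ 0.0, 1.0]])   # reflection about the y-axis

eigenvalues, eigenvectors = np.linalg.eig(T_std)
print(eigenvalues)                # [-1.  1.]
print(eigenvectors)               # columns are eigenvectors along the x-axis and y-axis
```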

Theorem (Equivalent Statements)
If A is an n×n matrix, and if T_A : R^n → R^n is multiplication by A, then the following are equivalent:
A is invertible
Ax = 0 has only the trivial solution
The reduced row-echelon form of A is I_n
A is expressible as a product of elementary matrices
Ax = b is consistent for every n×1 matrix b
Ax = b has exactly one solution for every n×1 matrix b
det(A) ≠ 0
The range of T_A is R^n
T_A is one-to-one