Section 2.3 Properties of Solution Sets


In this section we use the RREF to develop additional properties of the solution set of a linear system. The case of no solutions, an inconsistent system, requires nothing further. The case of a unique solution will provide information about the rows and columns of the original coefficient matrix. Our primary focus, however, will be on the nature of the solution set of linear systems that have at least one free variable; for such systems we develop a geometric view of the solution set. For these results keep in mind that row operations produce equivalent linear systems: row operations do not change the solution set. Hence the information obtained from the RREF (or REF) applies to the original linear system and, of course, to its coefficient matrix.

Solution Set Structure. The linear system Ax = b with the augmented matrix given on the slide has a solution set consisting of all vectors in R5 of a particular form. Geometrically, the solution set of this system is a translation of a set of vectors. When a specific vector v is added to or subtracted from each vector in a set S, we say the set S is translated by the vector v. We will develop a corresponding algebraic way to view such a solution set.

The vectors associated with the free variables in the solution set have a particular property that is important with regard to the set of all solutions of the associated homogeneous linear system Ax = 0. We first investigate this property and then introduce concepts which provide a characterization, or a way of considering, the set of all solutions of Ax = 0. We will show that the set of all solutions of Ax = 0 has a structure (a pattern of behavior) that is easy to understand and yields insight into the nature of the solution set of homogeneous linear systems. Ultimately this will provide a foundation for describing algebraically the solution set of the nonhomogeneous linear system Ax = b.

Example: The linear system Ax = b has the augmented matrix shown on the slide, whose RREF is then computed. Determine the general solution and write it as a translation of a set of vectors, using back substitution.
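The slide's matrices do not survive in this transcript, but the RREF computation and the resulting general solution can be sketched with SymPy. The augmented matrix below is a hypothetical stand-in, not the one from the slide:

```python
from sympy import Matrix, simplify

# Hypothetical augmented matrix [A | b]; not the matrix from the slide.
aug = Matrix([[1, 2, 0, 3, 4],
              [2, 4, 1, 11, 14]])
R, pivots = aug.rref()            # reduced row echelon form and pivot columns

A, b = aug[:, :4], aug[:, 4]
# gauss_jordan_solve returns the general solution in terms of free parameters:
# a particular solution translated by a span (the free variables of back substitution).
sol, params = A.gauss_jordan_solve(b)

assert pivots == (0, 2)                               # basic variables x1 and x3
assert len(params) == 2                               # two free variables
assert simplify(A * sol - b) == Matrix.zeros(2, 1)    # every member solves Ax = b
```

Here `pivots` identifies the basic variables, and the entries of `params` play the role of the arbitrary constants assigned to the free variables.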

It is important that you master this material since it forms a core of ideas for further developments in linear algebra. Definition A set of vectors is called linearly independent provided the only way that a linear combination of the vectors produces the zero vector is when all the coefficients are zero. If there is some set of coefficients, not all zero, for which a linear combination of the vectors is the zero vector, then the set of vectors is called linearly dependent. That is, {v1, v2, ..., vk} is linearly dependent if we can find constants c1, c2, ..., ck, not all zero, such that c1v1 + c2v2 + ... + ckvk = 0 (the zero vector). Example: Determine which of the following sets are linearly independent. STRATEGY: Form a linear combination of the vectors with arbitrary coefficients and set it equal to the zero vector. If the only way this can be true is when all the coefficients are zero, then the set is linearly independent.

Example: Show that the set given on the slide is linearly independent. Let's go back to the earlier linear system. We showed that the set of all solutions is given by a translation, so the general solution is the translation of the span of a linearly independent set.

When we use the RREF or REF of an augmented matrix to obtain the general solution as a translation of a linear combination of vectors with arbitrary coefficients, those vectors are linearly independent. This result is awkward to prove in the general case, so we do not pursue a formal proof here. We summarize it as follows: the solution set of a linear system with infinitely many solutions is the translation of the span of a set S of linearly independent vectors. The number of linearly independent vectors in S is the same as the number of free variables in the solution set of the linear system. If the linear system is homogeneous and has infinitely many solutions, then the solution set is just the span of a set S of linearly independent vectors, since the augmented column is all zeros and remains unchanged by the row operations that are used.

Example: Determine the solution set of the homogeneous linear system Ax = 0 when the RREF of [A | 0] is as shown on the slide. Note that the augmented matrix is in RREF, so we can immediately use back substitution to find the solution set. Extending the Example: Suppose we had the same coefficient matrix A, but a nonhomogeneous linear system whose RREF is as shown. Find the solution set as the translation of a span of vectors.

We have the following general principle. If the linear system Ax = b has infinitely many solutions, then the solution set is the translation of the span of a set S of linearly independent vectors. We express this as x = xp + span{S}, where xp is a particular solution of the nonhomogeneous system Ax = b and the vectors in span{S} form the solution set of the associated homogeneous linear system Ax = 0. If we let xh represent an arbitrary vector in span{S} then the set of solutions is given by x = xp + xh . We use the term particular solution for xp since its entries are completely determined; that is, the entries do not depend on any of the arbitrary constants. It is important to keep in mind that xh represents one vector in the span of a set of vectors. Note that A(xp + xh) = Axp + Axh = b + 0 = b.
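The identity A(xp + xh) = Axp + Axh = b + 0 = b can be checked on a small example. The matrix below is hypothetical, chosen only so that the system is consistent and has a free variable:

```python
from sympy import Matrix

# Hypothetical consistent system Ax = b with one free variable.
A = Matrix([[1, 2, 0],
            [0, 0, 1]])
b = Matrix([4, 6])

sol, params = A.gauss_jordan_solve(b)
xp = sol.subs({p: 0 for p in params})   # particular solution: free parameters set to 0
xh = 5 * A.nullspace()[0]               # one vector from span{S} = solutions of Ax = 0

assert A * xp == b                      # A xp = b
assert A * xh == Matrix([0, 0])         # A xh = 0
assert A * (xp + xh) == b               # A(xp + xh) = b + 0 = b
```

Setting the free parameters to zero is one convenient way to extract a particular solution whose entries are completely determined.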

The solution set of a homogeneous linear system Ax = 0 arises so frequently that we refer to it as the solution space or as the null space of matrix A. In addition we use the notation ns(A) to denote this special set of vectors. The term null space refers to the set of vectors x such that multiplication by A produces the null or zero vector; that is, Ax = 0. Since every homogeneous linear system is consistent, ns(A) consists either of a single vector or infinitely many vectors. If Ax = 0 has a unique solution, then ns(A) = {0}; that is, the only solution is the zero vector. As we have shown, if Ax = 0 has infinitely many solutions, then ns(A) = span{S}, where S is the set of vectors obtained from the information contained in an equivalent linear system in upper triangular form, REF, or RREF. In addition, it was argued that these vectors are linearly independent by virtue of choosing the arbitrary constants for the free variables.
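The two possibilities for ns(A), a single vector or infinitely many, can be illustrated with SymPy's `nullspace` routine. Both matrices below are hypothetical examples:

```python
from sympy import Matrix

# Case 1: Ax = 0 has only the trivial solution, so ns(A) = {0}.
A1 = Matrix([[1, 0],
             [0, 1]])
assert A1.nullspace() == []            # empty basis: the null space is just {0}

# Case 2: one free variable, so ns(A) = span{S} with S linearly independent.
A2 = Matrix([[1, 2, 3],
             [0, 1, 1]])
S = A2.nullspace()
assert len(S) == 1                     # one vector per free variable
assert A2 * S[0] == Matrix([0, 0])     # the basis vector solves Ax = 0
```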

Example: Find the null space of a given matrix A. The computation on the slide produces a set of vectors whose span is ns(A).

Additional Structural Properties. Next we develop ideas that will assist us in determining the nature of the set ns(A) and other sets of vectors. For a set of vectors T, span{T} is the set consisting of all possible linear combinations of the members of T; we say span{T} is generated by the members of T. A set is closed provided every linear combination of its members also belongs to the set. It follows that span{T} is a closed set, since a linear combination of members of span{T} is just another linear combination of the members of T. Thus ns(A) is closed. (Verify.)

Definition A subset of Rn (Cn) that is closed is called a subspace of Rn (Cn). Observe that if A is an m × n real matrix, then any solution of Ax = 0 is a vector in Rn. It then follows that ns(A) is a subspace of Rn. For the linear system Ax = 0, this means that any linear combination of solutions is another solution. But the nature of ns(A) is even simpler because of the way we obtained the set S which generates ns(A); recall that ns(A) = span{S}. To develop the necessary ideas, we next investigate the nature of linearly independent and linearly dependent sets of spanning vectors. CASE: We begin with a set T of two distinct vectors in Rn: T = {v1, v2}. We can show that T = {v1, v2} is a linearly dependent set if and only if one vector is a scalar multiple of the other. This is equivalent to the statement that T = {v1, v2} is a linearly independent set if and only if neither vector is a scalar multiple of the other.
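The subspace property of ns(A), that any linear combination of solutions of Ax = 0 is again a solution, can be verified directly on a hypothetical matrix:

```python
from sympy import Matrix

# Hypothetical 1x4 matrix; its null space is a 3-dimensional subspace of R^4.
A = Matrix([[1, 2, 3, 4]])
v1, v2 = A.nullspace()[:2]    # two solutions of Ax = 0

# Closure: an arbitrary linear combination of solutions is again a solution.
w = 3*v1 - 7*v2
assert A * w == Matrix([0])
```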

CASE: One of the vectors in T = {v1, v2} is the zero vector. We can show that any set containing the zero vector is linearly dependent. CASE: Let T = {v1, v2}. What can we say about the generators of span{T} if the set T is linearly dependent? We can show that span{T} can be generated by exactly one of the vectors. General implication: if T is any linearly dependent set of vectors, then span{T} can be generated by fewer vectors from T. It also follows that if T is a linearly independent set of vectors, then span{T} cannot be generated by fewer vectors from T.

Situation for sets with more than 2 vectors: For a set T with more than two vectors, T = {v1, v2, v3, ..., vr}, essentially the same properties hold. We need only revise statements as follows. T is linearly dependent if and only if (at least) one of the vectors in T is a linear combination of the other vectors. If T is a linearly dependent set, then the subspace span{T} can be generated by using a subset of T. That is, the information given by the vectors of T is redundant since we can use fewer vectors and generate the same subspace. If T is a linearly independent set, then any subset of fewer vectors cannot generate the whole subspace span{T}. That is, a linearly independent spanning set contains the minimal amount of information to generate a subspace. To indicate this we introduce the following terminology.

Definition A linearly independent spanning set of a subspace is called a basis. Example: Let T = {v1, v2} be a linearly independent set in R2. Then T is a basis for R2. That is, any pair of linearly independent vectors in R2 is a basis for R2. Example: For a homogeneous linear system Ax = 0 with infinitely many solutions, our construction of the solution set S from either the RREF or REF produces a basis for the subspace ns(A). Other subspaces associated with a matrix. Definition For a matrix A the span of the rows of A is called the row space of A, denoted row(A), and the span of the columns of A is called the column space of A, denoted col(A). If A is m × n, then the members of row(A) are vectors in Rn and the members of col(A) are vectors in Rm. Since the span of any set is closed, row(A) is a subspace of Rn and col(A) is a subspace of Rm. Our goal now is to determine a basis for each of these subspaces associated with matrix A.

How to find a basis for the row space of matrix A. Row operations applied to A manipulate its rows and (except for interchanges) produce linear combinations of its rows. If matrix B is row equivalent to A, then row(B) is the same as row(A), since a row of B is a linear combination of rows of A. It follows that row(A) = row(rref(A)). Suppose that rref(A) has k nonzero rows. These k rows have leading 1's in different columns. When we form a linear combination of these rows with coefficients c1, c2, ..., ck, each of the c's appears by itself in an entry of the resulting vector. Equating the resulting vector to the zero (row) vector must then give c1 = c2 = ... = ck = 0. Hence the nonzero rows of rref(A) are linearly independent. Since row(rref(A)) = row(A), it follows that the nonzero rows of rref(A) form a basis for row(A). How to find a basis for the column space of matrix A. To determine a basis for col(A), we first recall that the transpose converts columns to rows. Hence col(A) = row(AT), and since row(AT) = row(rref(AT)), it follows that the nonzero columns of (rref(AT))T form a basis for col(A).
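Both constructions can be sketched in SymPy. The matrix below is a hypothetical rank-2 example (its third row is the sum of the first two), not one from the slides:

```python
from sympy import Matrix

# Hypothetical 3x3 matrix with row3 = row1 + row2, so rank(A) = 2.
A = Matrix([[1, 2, 1],
            [2, 4, 3],
            [3, 6, 4]])

# Basis for row(A): the nonzero rows of rref(A).
R, _ = A.rref()
row_basis = [R.row(i) for i in range(R.rows) if any(R.row(i))]

# Basis for col(A): col(A) = row(A^T), so take the nonzero rows of
# rref(A^T) and transpose them back into columns.
RT, _ = A.T.rref()
col_basis = [RT.row(i).T for i in range(RT.rows) if any(RT.row(i))]

assert len(row_basis) == len(col_basis) == A.rank()
```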

Example: Determine a basis for row(A), col(A), and ns(A) where Solve this homogeneous system.

Determining if a set of vectors is linearly independent or linearly dependent. We can use the techniques just developed for computing bases of row(A) and col(A) to determine whether a set T = {v1, v2, ..., vk} in Rn is linearly independent or linearly dependent. Form a linear combination of the vectors in T with coefficients c1, c2, ..., ck and set it equal to the zero vector. We want to find the values of the coefficients cj, j = 1, 2, ..., k. This is equivalent to finding a solution of the homogeneous linear system Ac = 0, where we write the vectors vj as the columns of A. We then compute the RREF of this linear system to determine whether there is a nontrivial solution or only the trivial solution.
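This test translates directly into a computation. The vectors below form a hypothetical set T in R3, not the sets from the slides:

```python
from sympy import Matrix

# Hypothetical set T = {v1, v2, v3}, written as the columns of A.
v1 = Matrix([1, 0, 1])
v2 = Matrix([0, 1, 1])
v3 = Matrix([1, 1, 0])
A = Matrix.hstack(v1, v2, v3)

# Ac = 0 has only the trivial solution exactly when every column is a pivot column.
_, pivots = A.rref()
assert len(pivots) == A.cols          # T is linearly independent

# Replacing v3 by v1 + v2 makes the set linearly dependent.
B = Matrix.hstack(v1, v2, v1 + v2)
assert len(B.rref()[1]) < B.cols      # a free variable exists: nontrivial solutions
```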

Example: Determine whether the set T is linearly independent or linearly dependent. Construct the coefficient matrix, then construct the augmented matrix and find its RREF. Since the homogeneous system has only the trivial solution, the set T is linearly independent.

Example: Determine whether the set T is linearly independent or linearly dependent. Find the RREF of the augmented matrix of the homogeneous linear system Ac = 0. We see there are nontrivial solutions, so the set T is linearly dependent.

We next introduce two concepts which provide a count of the number of linearly independent rows of a matrix and a count of the largest number of linearly independent vectors that can be present in a subset of a subspace. Definition The rank of a matrix A, denoted rank(A), is the number of leading 1's in rref(A). Since the nonzero rows of rref(A) form a basis for the row space of A, rank(A) counts the number of linearly independent rows in matrix A. Definition The dimension of a subspace V of Rn, denoted dim(V), is the number of vectors in a basis for V. So rank(A) is the dimension of the row space of A. FACT: dim(row(A)) = dim(col(A)); that is, the dimension of the row space of A is the same as the dimension of the column space of A. Hence rank(A) gives the dimension of both row(A) and col(A). We state this fact without proof.
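The rank and these dimensions can be computed together. The matrix below is a hypothetical example, chosen so that its third row is the sum of the first two:

```python
from sympy import Matrix

# Hypothetical 3x4 matrix with row3 = row1 + row2, so rank(A) = 2.
A = Matrix([[1, 0, -1, 2],
            [0, 1, 2, 1],
            [1, 1, 1, 3]])

r = A.rank()                                            # leading 1's in rref(A)
assert r == len(A.rowspace()) == len(A.columnspace())   # dim row(A) = dim col(A) = rank(A)
assert len(A.nullspace()) == A.cols - r                 # dim ns(A) = n - rank(A)
```

The last line is the rank-nullity relation: the number of free variables of Ax = 0 is the number of columns minus the rank.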

Example: Compute the rank, dim(row(A)), dim(col(A)), and dim(ns(A)) for each of the following matrices, given their MATLAB rref output:

>> rref(A)
   1   0  -1
   0   1   2
   0   0   0

>> rref(A)
   1.0000        0  -0.3333        0
        0   1.0000   0.6667        0
        0        0        0        0

>> rref(A)
   1.0000        0        0        0        0
        0   1.0000        0   0.3077   0.6154
        0        0   1.0000   0.4615  -0.0769