
1 5.1 Orthogonality

2 Definitions
A set of vectors is called an orthogonal set if all pairs of distinct vectors in the set are orthogonal.
An orthonormal set is an orthogonal set of unit vectors.
An orthogonal (orthonormal) basis for a subspace W of R^n is a basis for W that is an orthogonal (orthonormal) set.
An orthogonal matrix is a square matrix whose columns form an orthonormal set.
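As a quick numerical check of these definitions, here is a minimal NumPy sketch (the vectors are chosen only for illustration) that tests whether a set is orthogonal and whether it is orthonormal:

    import numpy as np

    def is_orthogonal_set(vectors, tol=1e-10):
        # Every pair of distinct vectors must have zero dot product.
        return all(abs(np.dot(u, v)) < tol
                   for i, u in enumerate(vectors)
                   for v in vectors[i + 1:])

    def is_orthonormal_set(vectors, tol=1e-10):
        # An orthogonal set whose vectors all have unit length.
        return (is_orthogonal_set(vectors, tol)
                and all(abs(np.linalg.norm(v) - 1) < tol for v in vectors))

    vs = [np.array([1.0, 1.0, 0.0]),
          np.array([1.0, -1.0, 0.0]),
          np.array([0.0, 0.0, 1.0])]
    print(is_orthogonal_set(vs))   # True
    print(is_orthonormal_set(vs))  # False: the first two vectors have length sqrt(2)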

3 Examples
1) Is the following set of vectors orthogonal? Orthonormal?
2) Find an orthogonal basis and an orthonormal basis for the given subspace W of R^n.

4 Theorems
Every orthogonal set of nonzero vectors is linearly independent.
Let {v1, v2, …, vk} be an orthogonal basis for a subspace W of R^n and let w be any vector in W. Then the unique scalars c1, c2, …, ck such that w = c1 v1 + c2 v2 + … + ck vk are given by ci = (w · vi) / (vi · vi).
Proof: To find ci, take the dot product of both sides with vi: w · vi = (c1 v1 + c2 v2 + … + ck vk) · vi = ci (vi · vi), since vj · vi = 0 for j ≠ i.
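A minimal NumPy check of the coefficient formula, with an orthogonal basis of R^2 chosen only for illustration:

    import numpy as np

    v1 = np.array([1.0, 1.0])    # orthogonal (not orthonormal) basis of R^2
    v2 = np.array([1.0, -1.0])
    w = np.array([3.0, 5.0])

    # c_i = (w · v_i) / (v_i · v_i)
    c1 = np.dot(w, v1) / np.dot(v1, v1)
    c2 = np.dot(w, v2) / np.dot(v2, v2)
    print(c1, c2)                             # 4.0 -1.0
    print(np.allclose(c1 * v1 + c2 * v2, w))  # True: w is recovered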

5 Examples
3) Using the orthogonal basis for the subspace W found in the previous example, pick a vector in W and express it in terms of the basis vectors.
4) Is the following matrix orthogonal? If it is orthogonal, find its inverse and its transpose.

6 Theorems on Orthogonal Matrices
The following statements are equivalent for a square matrix A:
- A is orthogonal
- A^-1 = A^T
- ||Av|| = ||v|| for every v in R^n
- Av1 · Av2 = v1 · v2 for every v1, v2 in R^n
Let A be an orthogonal matrix. Then:
- Its rows form an orthonormal set.
- A^-1 is also orthogonal.
- |det(A)| = 1
- |λ| = 1 for every eigenvalue λ of A
If A and B are orthogonal matrices, then so is AB.
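These properties can be verified numerically. The sketch below uses a 2 x 2 rotation matrix, a standard example of an orthogonal matrix (the angle is chosen arbitrarily):

    import numpy as np

    t = 0.7  # arbitrary angle
    A = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])   # rotation matrices are orthogonal

    print(np.allclose(np.linalg.inv(A), A.T))                    # A^-1 = A^T
    v = np.array([3.0, -2.0])
    print(np.isclose(np.linalg.norm(A @ v), np.linalg.norm(v)))  # ||Av|| = ||v||
    print(np.isclose(abs(np.linalg.det(A)), 1.0))                # |det(A)| = 1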

7 5.2 Orthogonal Complements and Orthogonal Projections

8 Orthogonal Complements
Recall: a normal vector n to a plane is orthogonal to every vector in that plane. If the plane passes through the origin, then it is a subspace W of R^3, and span(n) is also a subspace of R^3. Every vector in span(n) is orthogonal to every vector in the subspace W, so span(n) is called the orthogonal complement of W.
Definition: A vector v is said to be orthogonal to a subspace W of R^n if it is orthogonal to every vector in W. The set of all vectors that are orthogonal to W is called the orthogonal complement of W, denoted W^⊥ (read "W perp").

9 Example
1) Find the orthogonal complement W^⊥ for the given subspace W of R^3.

10 Theorems
Let W be a subspace of R^n. Then:
- W^⊥ is a subspace of R^n.
- (W^⊥)^⊥ = W
- W ∩ W^⊥ = {0}
- If W = span(w1, w2, …, wk), then v is in W^⊥ if and only if v · wi = 0 for all i = 1, …, k.
Let A be an m x n matrix. Then (row(A))^⊥ = null(A) and (col(A))^⊥ = null(A^T). Proof?
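The last statement can be checked numerically. The sketch below (SciPy's null_space, on a matrix chosen only for illustration) verifies that null(A) is orthogonal to row(A):

    import numpy as np
    from scipy.linalg import null_space

    A = np.array([[1.0, 2.0, 3.0],
                  [0.0, 1.0, 1.0]])

    N = null_space(A)             # columns form an orthonormal basis for null(A)
    print(np.allclose(A @ N, 0))  # True: every row of A is orthogonal to null(A)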

11 Example
2) Use the previous theorem to find the orthogonal complement W^⊥ for the given subspace W of R^3.

12 Orthogonal Projections
[Figure: the vector u decomposed into a component w1 along v and a component w2 orthogonal to v]
Let u and v be nonzero vectors, and write u = w1 + w2 with w1 parallel to v and w2 orthogonal to v.
w1 is called the vector component of u along v (or the projection of u onto v) and is denoted proj_v u = ((u · v) / (v · v)) v.
w2 is called the vector component of u orthogonal to v: w2 = u − proj_v u.
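A minimal NumPy sketch of this decomposition (vectors chosen only for illustration):

    import numpy as np

    def proj(u, v):
        # Projection of u onto the line spanned by v: ((u · v) / (v · v)) v
        return (np.dot(u, v) / np.dot(v, v)) * v

    u = np.array([1.0, 2.0])
    v = np.array([3.0, 0.0])
    w1 = proj(u, v)   # component of u along v
    w2 = u - w1       # component of u orthogonal to v
    print(w1, w2)                          # [1. 0.] [0. 2.]
    print(np.isclose(np.dot(w2, v), 0.0))  # True: w2 is orthogonal to v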

13 Orthogonal Projections
Let W be a subspace of R^n with an orthogonal basis {u1, u2, …, uk}. The orthogonal projection of v onto W is defined as:
proj_W v = proj_u1 v + proj_u2 v + … + proj_uk v
The component of v orthogonal to W is the vector perp_W v = v − proj_W v.
Let W be a subspace of R^n and let v be any vector in R^n. Then there are unique vectors w1 in W and w2 in W^⊥ such that v = w1 + w2.
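The definition translates directly into code. The sketch below projects the vector v = [1, -1, 2] from Example 3 onto a plane W whose orthogonal basis is chosen only for illustration:

    import numpy as np

    def proj_onto_subspace(v, basis):
        # basis must be an orthogonal set spanning W; sum the projections onto each u_i.
        return sum((np.dot(v, u) / np.dot(u, u)) * u for u in basis)

    u1 = np.array([1.0, 0.0, 1.0])   # orthogonal basis for a plane W in R^3
    u2 = np.array([1.0, 0.0, -1.0])
    v = np.array([1.0, -1.0, 2.0])

    w1 = proj_onto_subspace(v, [u1, u2])  # proj_W v, lies in W
    w2 = v - w1                           # perp_W v, lies in W^⊥
    print(w1, w2)                         # [1. 0. 2.] [ 0. -1.  0.]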

14 Examples
3) Find the orthogonal projection of v = [1, -1, 2] onto W and the component of v orthogonal to W.

15 5.3 The Gram-Schmidt Process and the QR Factorization

16 The Gram-Schmidt Process
Goal: to construct an orthogonal (orthonormal) basis for any subspace of R^n. We start with any basis {x1, x2, …, xk} and "orthogonalize" one vector at a time, replacing each xi by its component orthogonal to span(x1, x2, …, xi-1).
Let {x1, x2, …, xk} be a basis for a subspace W. Choose the following vectors:
v1 = x1
v2 = x2 − proj_v1 x2
v3 = x3 − proj_v1 x3 − proj_v2 x3
… and so on.
Then {v1, v2, …, vk} is an orthogonal basis for W. We can normalize each vector in the basis to form an orthonormal basis.
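A direct implementation of the process in NumPy (a minimal sketch; it assumes the input vectors are linearly independent):

    import numpy as np

    def gram_schmidt(basis, normalize=False):
        vs = []
        for x in basis:
            v = np.array(x, dtype=float)
            for u in vs:
                v = v - (np.dot(v, u) / np.dot(u, u)) * u  # subtract proj_u x
            vs.append(v)
        if normalize:
            vs = [v / np.linalg.norm(v) for v in vs]       # orthonormal version
        return vs

    xs = [np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0])]
    v1, v2 = gram_schmidt(xs)
    print(np.isclose(np.dot(v1, v2), 0.0))  # True: the output is orthogonal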

17 Examples
1) Use the given basis to find an orthonormal basis for R^2.
2) Find an orthogonal basis for R^3 that contains the given vector.

18 The QR Factorization
If A is an m x n matrix with linearly independent columns, then A can be factored as A = QR, where Q is an m x n matrix with orthonormal columns and R is an invertible upper triangular matrix. In fact, the columns of Q form an orthonormal basis for col(A), and they can be constructed from the columns of A by the Gram-Schmidt process.
Note: Since the columns of Q are orthonormal, Q^T Q = I, so R = Q^T A. (When A is square, Q is an orthogonal matrix and Q^-1 = Q^T.)
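NumPy computes this factorization directly; the matrix below is chosen only for illustration:

    import numpy as np

    A = np.array([[1.0, 1.0],
                  [1.0, 0.0],
                  [0.0, 1.0]])              # 3 x 2, linearly independent columns

    Q, R = np.linalg.qr(A)                  # reduced QR: Q is 3 x 2, R is 2 x 2
    print(np.allclose(Q.T @ Q, np.eye(2)))  # columns of Q are orthonormal
    print(np.allclose(Q @ R, A))            # A = QR
    print(np.allclose(R, Q.T @ A))          # R = Q^T A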

19 Examples 3) Find a QR factorization for the following matrices.

20 5.4 Orthogonal Diagonalization of Symmetric Matrices

21 Example
1) Diagonalize the given matrix.
Recall: A square matrix A is symmetric if A^T = A. A square matrix A is diagonalizable if there exists an invertible matrix P and a diagonal matrix D such that P^-1 A P = D.

22 Orthogonal Diagonalization
Definition: A square matrix A is orthogonally diagonalizable if there exists an orthogonal matrix Q and a diagonal matrix D such that Q^-1 A Q = D. Note that Q^-1 = Q^T, so the condition can also be written Q^T A Q = D.

23 Theorems
- If A is orthogonally diagonalizable, then A is symmetric.
- If A is a real symmetric matrix, then the eigenvalues of A are real.
- If A is a symmetric matrix, then any two eigenvectors corresponding to distinct eigenvalues of A are orthogonal.
- A square matrix A is orthogonally diagonalizable if and only if it is symmetric.
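NumPy's eigh routine performs this orthogonal diagonalization for symmetric matrices; the matrix below is chosen only for illustration:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])       # symmetric, so orthogonally diagonalizable

    lam, Q = np.linalg.eigh(A)       # eigenvalues and orthonormal eigenvectors
    D = np.diag(lam)
    print(np.allclose(Q.T @ Q, np.eye(2)))  # Q is orthogonal
    print(np.allclose(Q.T @ A @ Q, D))      # Q^T A Q = D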

24 Example
2) Orthogonally diagonalize the given matrix and write A in terms of the matrices Q and D.

25 Theorem
If A is orthogonally diagonalizable and Q^T A Q = D, then A can be written as
A = λ1 q1 q1^T + λ2 q2 q2^T + … + λn qn qn^T
where qi is the i-th (orthonormal) column of Q and λi is the corresponding eigenvalue. This fact helps us construct the matrix A given its eigenvalues and orthogonal eigenvectors.
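A numerical check of this decomposition, reusing the illustrative matrix from the previous sketch:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    lam, Q = np.linalg.eigh(A)

    # A = λ1 q1 q1^T + ... + λn qn qn^T
    A_rebuilt = sum(lam[i] * np.outer(Q[:, i], Q[:, i]) for i in range(len(lam)))
    print(np.allclose(A_rebuilt, A))  # True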

26 Example
3) Find a 2 x 2 matrix that has eigenvalues 2 and 7, with the given corresponding eigenvectors.

27 5.5 Applications

28 Quadratic Forms
A quadratic form in x and y: f(x, y) = ax^2 + by^2 + cxy.
A quadratic form in x, y, and z: f(x, y, z) = ax^2 + by^2 + cz^2 + dxy + exz + fyz.
Each can be written as f(x) = x^T A x, where A is a symmetric matrix of coefficients and x is the variable (column) matrix.

29 Quadratic Forms
A quadratic form in n variables is a function f : R^n → R of the form f(x) = x^T A x, where A is a symmetric n x n matrix and x is in R^n. A is called the matrix associated with f.
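A minimal sketch evaluating a quadratic form from its associated matrix (the coefficients are chosen only for illustration):

    import numpy as np

    # f(x, y) = 2x^2 + 3y^2 + 4xy; each off-diagonal entry of A is half the cross coefficient.
    A = np.array([[2.0, 2.0],
                  [2.0, 3.0]])

    x = np.array([1.0, -2.0])
    print(x @ A @ x)   # x^T A x = 2(1) + 3(4) + 4(1)(-2) = 6.0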

30 The Principal Axes Theorem
Every quadratic form can be diagonalized. In fact, if A is a symmetric n x n matrix and Q is an orthogonal matrix such that Q^T A Q = D, then the change of variable x = Qy transforms the quadratic form x^T A x into y^T D y = λ1 y1^2 + λ2 y2^2 + … + λn yn^2, which has no cross-product terms.
Example: Find a change of variable that transforms the given quadratic form into one with no cross-product terms.
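The theorem is easy to check numerically; the sketch below diagonalizes the illustrative form from the previous slide and verifies that the change of variable removes the cross term:

    import numpy as np

    A = np.array([[2.0, 2.0],
                  [2.0, 3.0]])       # matrix of f(x, y) = 2x^2 + 3y^2 + 4xy
    lam, Q = np.linalg.eigh(A)       # Q^T A Q = D

    y = np.array([1.0, -2.0])        # arbitrary point in the new variables
    x = Q @ y                        # change of variable x = Qy
    print(np.isclose(x @ A @ x, lam[0] * y[0]**2 + lam[1] * y[1]**2))  # True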

