Inner Product, Linear Functional, Adjoint

8.1. Inner Product Spaces: inner product, linear functional, adjoint

Assume F is a subfield of R or C and let V be a vector space over F. An inner product on V is a function V×V → F, written (a, b) ↦ (a|b), such that
(a) (a+b|r) = (a|r) + (b|r)
(b) (ca|r) = c(a|r)
(c) (b|a) = (a|b)*
(d) (a|a) > 0 if a ≠ 0.
Conditions (a) and (b) say the product is linear in the first variable; (c) is conjugate symmetry; (d) is positivity (in particular the form is nondegenerate). An inner product gives a way to measure angles and lengths.

Examples: Fn has a standard inner product:
((x1,…,xn)|(y1,…,yn)) = ∑_j x_j y_j*.
If F is a subfield of R, this is x1y1 + … + xnyn.
For A, B in Fnxn, (A|B) = tr(AB*) = tr(B*A) is an inner product. The linearity properties are easy to see, and tr(AB*) = ∑_{j,k} A_jk B_jk*, i.e., the standard inner product applied entrywise.
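A quick numeric sanity check of these two examples (a sketch of my own, not part of the slides; it assumes NumPy, and names like inner_std are made up):

```python
import numpy as np

def inner_std(x, y):
    """Standard inner product on C^n: sum_j x_j * conj(y_j)."""
    return np.sum(x * np.conj(y))

def inner_mat(A, B):
    """(A|B) = tr(A B*) on C^{n x n}."""
    return np.trace(A @ B.conj().T)

rng = np.random.default_rng(0)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

assert inner_std(x, x).real > 0                               # positivity
assert np.isclose(inner_mat(A, B), np.trace(B.conj().T @ A))  # tr(AB*) = tr(B*A)
# tr(AB*) expands to the entrywise sum  sum_{j,k} A_jk * conj(B_jk):
assert np.isclose(inner_mat(A, B), np.sum(A * np.conj(B)))
```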

(X|Y) = Y*Q*QX, where X, Y ∈ Fnx1 and Q is an n×n invertible matrix, is an inner product: linearity follows easily, and (X|X) = X*Q*QX = (QX|QX)std > 0 for X ≠ 0 since Q is invertible. In fact every inner product on Fnx1 is of this form. More generally, if T: V → W is linear and injective and W has an inner product, then V has an "induced" inner product pT(a|b) := (Ta|Tb); injectivity is what makes pT positive.
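A small sketch (mine, assuming NumPy) verifying that Y*Q*QX is just the standard inner product of QX and QY:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
Q = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))  # invertible w.p. 1
X = rng.standard_normal(n) + 1j * rng.standard_normal(n)
Y = rng.standard_normal(n) + 1j * rng.standard_normal(n)

inner_Q = Y.conj() @ Q.conj().T @ Q @ X           # Y* Q* Q X
inner_induced = np.sum((Q @ X) * np.conj(Q @ Y))  # (QX | QY)_std
assert np.isclose(inner_Q, inner_induced)
assert (X.conj() @ Q.conj().T @ Q @ X).real > 0   # positivity for X != 0
```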

(a) For any basis B = {a1,…,an} there is an inner product such that (ai|aj) = δij: define T: V → Fn by ai ↦ ei; then pT(ai|aj) = (ei|ej) = δij.
(b) V = {f: [0,1] → C | f is continuous}. (f|g) = ∫_0^1 f g* dt for f, g in V is an inner product. T: V → V defined by f(t) ↦ t f(t) is linear and injective, so pT(f|g) = ∫_0^1 (tf)(tg)* dt = ∫_0^1 t² f g* dt is also an inner product.
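A rough numeric illustration of example (b), a sketch of my own where a plain Riemann sum stands in for the integral:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 20001)
dt = t[1] - t[0]

def inner_L2(f, g):
    """Approximate (f|g) = integral_0^1 f(t) * conj(g(t)) dt by a Riemann sum."""
    return np.sum(f(t) * np.conj(g(t))) * dt

f = lambda s: np.exp(2j * np.pi * s)
g = lambda s: s + 0j

tf = lambda s: s * f(s)   # T: f(t) -> t f(t)
tg = lambda s: s * g(s)
# induced inner product p_T(f|g) = integral of t^2 * f * conj(g):
assert np.isclose(inner_L2(tf, tg),
                  np.sum(t**2 * f(t) * np.conj(g(t))) * dt)
```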

Polarization identity. Let F be a subfield of C containing i (the complex case). Then
(a|b) = Re(a|b) + i Re(a|ib)   (*)
since (a|b) = Re(a|b) + i Im(a|b), and by the identity Im(z) = Re(−iz) we get Im(a|b) = Re(−i(a|b)) = Re(a|ib).
Define ‖a‖ := (a|a)^{1/2}, the norm. Then
‖a ± b‖² = ‖a‖² ± 2Re(a|b) + ‖b‖²   (**)
and hence
(a|b) = ‖a+b‖²/4 − ‖a−b‖²/4 + i‖a+ib‖²/4 − i‖a−ib‖²/4
(proof by (*) and (**)). If F is a real field,
(a|b) = ‖a+b‖²/4 − ‖a−b‖²/4.
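A numeric check of the complex polarization identity (my own sketch, using the standard inner product on C^n):

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.standard_normal(5) + 1j * rng.standard_normal(5)
b = rng.standard_normal(5) + 1j * rng.standard_normal(5)

inner = lambda u, v: np.sum(u * np.conj(v))
norm2 = lambda u: inner(u, u).real          # squared norm

# (a|b) = (||a+b||^2 - ||a-b||^2 + i||a+ib||^2 - i||a-ib||^2) / 4
polarized = (norm2(a + b) - norm2(a - b)
             + 1j * norm2(a + 1j * b) - 1j * norm2(a - 1j * b)) / 4
assert np.isclose(inner(a, b), polarized)
```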

When V is finite-dimensional, inner products can be classified. Given a basis B = {a1,…,an} and any inner product ( | ):
(a|b) = Y*GX for X = [a]_B, Y = [b]_B,
where G is an n×n matrix with G = G* and X*GX > 0 for any X ≠ 0.
Proof (→): Let Gjk = (ak|aj).

G = G*: (aj|ak) = (ak|aj)*, so Gkj = Gjk*. X*GX = (a|a) > 0 if X ≠ 0. (Hence G is invertible: GX ≠ 0 for X ≠ 0 by the above.)
(←) Conversely, if G = G* and X*GX > 0 for X ≠ 0, then (X|Y) := Y*GX is an inner product on Fnx1, and (a|b) := [b]_B* G [a]_B is the inner product on V induced by the linear transformation T sending ai to ei.
Recall the Cholesky decomposition: a Hermitian positive definite matrix factors as A = LL*, with L lower triangular with real positive diagonal entries. (All of these are useful in applied mathematics.)
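A sketch (assuming NumPy) of realizing such an inner product from its Gram matrix via Cholesky, G = LL*, so that Y*GX is the standard inner product of L*X and L*Y:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
G = M.conj().T @ M + n * np.eye(n)      # Hermitian positive definite

L = np.linalg.cholesky(G)               # lower triangular, G = L L*
assert np.allclose(G, L @ L.conj().T)

X = rng.standard_normal(n) + 1j * rng.standard_normal(n)
Y = rng.standard_normal(n) + 1j * rng.standard_normal(n)
lhs = Y.conj() @ G @ X                                     # Y* G X
rhs = np.sum((L.conj().T @ X) * np.conj(L.conj().T @ Y))   # (L*X | L*Y)_std
assert np.isclose(lhs, rhs)
```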

8.2. Inner product spaces. Definition: an inner product space is a pair (V, ( | )). If F ⊂ R it is called a Euclidean space; if F = C, a unitary space.
Theorem 1. Let (V, ( | )) be an inner product space. Then
‖ca‖ = |c| ‖a‖;
‖a‖ > 0 for a ≠ 0;
|(a|b)| ≤ ‖a‖ ‖b‖ (Cauchy-Schwarz);
‖a+b‖ ≤ ‖a‖ + ‖b‖ (triangle inequality).

Proof of Cauchy-Schwarz: if b = 0 both sides are 0. Otherwise set c = (a|b)/‖b‖² and expand: 0 ≤ ‖a − cb‖² = ‖a‖² − |(a|b)|²/‖b‖².
Proof of the triangle inequality: ‖a+b‖² = ‖a‖² + 2Re(a|b) + ‖b‖² ≤ ‖a‖² + 2‖a‖‖b‖ + ‖b‖² = (‖a‖ + ‖b‖)².
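A quick numeric illustration of the two inequalities for the standard inner product on C^n (a sketch, not part of the original slides):

```python
import numpy as np

rng = np.random.default_rng(4)
a = rng.standard_normal(6) + 1j * rng.standard_normal(6)
b = rng.standard_normal(6) + 1j * rng.standard_normal(6)

inner = lambda u, v: np.sum(u * np.conj(v))
norm = lambda u: np.sqrt(inner(u, u).real)

assert abs(inner(a, b)) <= norm(a) * norm(b) + 1e-12   # Cauchy-Schwarz
assert norm(a + b) <= norm(a) + norm(b) + 1e-12        # triangle inequality
```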

In fact, many inequalities follow from the Cauchy-Schwarz inequality; the triangle inequality also follows, as above. See Example 7. Example 7(d) is useful in defining Hilbert spaces. Similar inequalities are used extensively in analysis, PDE, and so on. Note that in Example 7 no computations are involved in proving these.

On inner product spaces one can use the inner product to simplify many things occurring in vector spaces:
basis → orthogonal basis;
projections → orthogonal projections;
complement → orthogonal complement;
linear functions have adjoints;
linear functionals become vectors;
operators → orthogonal operators and self-adjoint operators (we restrict to the finite-dimensional case).

Orthogonal basis. Definition: for a, b in V, a ⊥ b if (a|b) = 0. The zero vector is orthogonal to every vector. An orthogonal set S is a set such that all pairs of distinct vectors in S are orthogonal. An orthonormal set S is an orthogonal set of unit vectors.

Theorem 2. An orthogonal set of nonzero vectors is linearly independent.
Proof: Let a1,…,am be the set and let 0 = b = c1a1 + … + cmam. Then 0 = (b|ak) = (c1a1 + … + cmam|ak) = ck(ak|ak), so ck = 0.
Corollary. If b is a linear combination of an orthogonal set a1,…,am of nonzero vectors, then b = ∑_{k=1}^m ((b|ak)/‖ak‖²) ak.
Proof: apply the equations above to b ≠ 0.
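A sketch of the corollary in R³ (my own example, assuming NumPy): the coefficients of b in an orthogonal set are recovered as (b|ak)/‖ak‖².

```python
import numpy as np

a1 = np.array([1.0, 1.0, 0.0])
a2 = np.array([1.0, -1.0, 0.0])
a3 = np.array([0.0, 0.0, 2.0])           # pairwise orthogonal, nonzero
b = 2 * a1 - 3 * a2 + 0.5 * a3

coeffs = [np.dot(b, ak) / np.dot(ak, ak) for ak in (a1, a2, a3)]
assert np.allclose(coeffs, [2.0, -3.0, 0.5])   # coefficients recovered exactly
```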

Gram-Schmidt orthogonalization. Theorem 3. Let b1,…,bn in V be independent. Then one may construct an orthogonal basis a1,…,an such that {a1,…,ak} is a basis for <b1,…,bk> for each k = 1,…,n.
Proof: a1 := b1, a2 := b2 − ((b2|a1)/‖a1‖²)a1, …
Induction: suppose {a1,…,am} has been constructed and is a basis for <b1,…,bm>. Define
a_{m+1} := b_{m+1} − ∑_{k=1}^m ((b_{m+1}|ak)/‖ak‖²) ak.

Then a_{m+1} ≠ 0 (otherwise b_{m+1} would lie in <a1,…,am> = <b1,…,bm>) and (a_{m+1}|ak) = 0 for k ≤ m. Use Theorem 2 to show that the result {a1,…,a_{m+1}} is independent and hence is a basis of <b1,…,b_{m+1}>. See p.281, equation (8-10), and Examples 12 and 13 for worked examples.
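A direct implementation of the recursion in Theorem 3 (a minimal sketch; gram_schmidt is my own name, and no normalization is performed):

```python
import numpy as np

def gram_schmidt(bs):
    """bs: list of independent vectors. Returns an orthogonal list whose
    first k vectors span the same subspace <b_1,...,b_k> for each k."""
    out = []
    for b in bs:
        a = b.astype(complex)
        for prev in out:
            # subtract the projection onto prev: (b|prev)/||prev||^2 * prev
            a -= (np.vdot(prev, b) / np.vdot(prev, prev)) * prev
        out.append(a)
    return out

bs = [np.array([1.0, 1.0, 1.0]),
      np.array([0.0, 1.0, 1.0]),
      np.array([0.0, 0.0, 1.0])]
a1, a2, a3 = gram_schmidt(bs)
for u, v in [(a1, a2), (a1, a3), (a2, a3)]:
    assert abs(np.vdot(u, v)) < 1e-12    # pairwise orthogonal
```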

Best approximation, orthogonal complement, orthogonal projections. These are often used in applied mathematics, where approximations are needed in many situations.
Definition: let W be a subspace of V and b in V. A best approximation of b by a vector in W is a in W such that ‖b − a‖ ≤ ‖b − c‖ for all c in W. Existence and uniqueness hold in the finite-dimensional case.

Theorem 4: W a subspace of V, b in V.
(i) a is a best approximation to b ⟺ b − a ⊥ c for all c in W.
(ii) A best approximation is unique (if it exists).
(iii) If W is finite-dimensional and {a1,…,ak} is any orthonormal basis of W, then a = ∑_j (b|aj) aj is the best approximation to b by vectors in W.

Proof of (i). Fact: for c in W, b − c = (b − a) + (a − c), so
‖b − c‖² = ‖b − a‖² + 2Re(b − a|a − c) + ‖a − c‖²   (*)
(⟸) Suppose b − a ⊥ W. If c ≠ a, then ‖b − c‖² = ‖b − a‖² + ‖a − c‖² > ‖b − a‖². Hence a is the best approximation.
(⟹) Suppose ‖b − c‖ ≥ ‖b − a‖ for every c in W. By (*), 2Re(b − a|a − c) + ‖a − c‖² ≥ 0 for all c in W ⟺ 2Re(b − a|t) + ‖t‖² ≥ 0 for every t in W. If (b − a|c) ≠ 0 for some c in W, take t = −((b − a|c)/‖c‖²)c; then 2Re(b − a|t) + ‖t‖² = −|(b − a|c)|²/‖c‖² < 0, a contradiction.

So (b − a|c) = 0 for every c in W, i.e., b − a ⊥ every vector in W.
(ii) Let a, a′ be best approximations to b in W. Then b − a ⊥ every v in W and b − a′ ⊥ every v in W. If a ≠ a′, then by (*)
‖b − a′‖² = ‖b − a‖² + 2Re(b − a|a − a′) + ‖a − a′‖² = ‖b − a‖² + ‖a − a′‖²
(the middle term vanishes since a − a′ is in W). Hence ‖b − a′‖ > ‖b − a‖, and by symmetry ‖b − a‖ > ‖b − a′‖. This is a contradiction, so a = a′.

(iii) Set a = ∑_j (b|aj) aj and take the inner product of b − a with each ak:
(b − ∑_j (b|aj) aj | ak) = (b|ak) − (b|ak) = 0.
Thus b − a ⊥ every vector in W, and by (i) a is the best approximation.
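Illustrating Theorem 4(iii) with a concrete choice (my own sketch in R³, W = the xy-plane):

```python
import numpy as np

a1 = np.array([1.0, 0.0, 0.0])
a2 = np.array([0.0, 1.0, 0.0])            # orthonormal basis of W = xy-plane
b = np.array([3.0, -2.0, 7.0])

best = np.dot(b, a1) * a1 + np.dot(b, a2) * a2   # sum_k (b|a_k) a_k
assert np.allclose(best, [3.0, -2.0, 0.0])
# b - best is orthogonal to W, as Theorem 4(i) requires:
assert np.dot(b - best, a1) == 0 and np.dot(b - best, a2) == 0
```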

Orthogonal complement. For a set S in V, S⊥ := {v in V | v ⊥ w for all w in S}. S⊥ is a subspace, and V⊥ = {0}. If S is a subspace (and V is finite-dimensional), then V = S ⊕ S⊥ and (S⊥)⊥ = S.
Proof: apply Gram-Schmidt orthogonalization to a basis {a1,…,ar,ar+1,…,an} of V where {a1,…,ar} is a basis of S; then {ar+1,…,an} is a basis of S⊥.
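A sketch of computing S⊥ numerically; here I use an SVD rather than Gram-Schmidt (an assumption of mine, not the slides' method): the right singular vectors beyond rank(S) span the orthogonal complement of the row space.

```python
import numpy as np

S = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0, 0.0]])      # rows span a 2-dim subspace of R^4
_, sing, Vt = np.linalg.svd(S)
r = int(np.sum(sing > 1e-12))             # rank of S
perp = Vt[r:]                             # rows: orthonormal basis of S-perp
assert np.allclose(S @ perp.T, 0)         # every basis vector is perpendicular to S
assert perp.shape[0] + r == S.shape[1]    # dim S + dim S-perp = dim V
```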

Orthogonal projection E_W: V → W sends a in V to b, the best approximation to a in W. By Theorem 4, this is well-defined for any (finite-dimensional) subspace W. E_W is linear by Theorem 5, and E_W is a projection since E_W(E_W(v)) = E_W(v).

Theorem 5: Let W be a subspace of V and E the orthogonal projection V → W. Then E is a projection, W⊥ = null E, and V = W ⊕ W⊥.
Proof. Linearity: let a, b be in V and c in F. Then a − Ea and b − Eb are ⊥ all v in W, so c(a − Ea) + (b − Eb) = (ca + b) − (cEa + Eb) is ⊥ all v in W. By uniqueness of the best approximation, E(ca + b) = cEa + Eb.
null E ⊂ W⊥: if b is in null E, then b = b − Eb is in W⊥.
W⊥ ⊂ null E: if b is in W⊥, then b − 0 is in W⊥ and 0 is the best approximation to b by Theorem 4(i), so Eb = 0.
Since V = Im E ⊕ null E, we are done.
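A concrete sketch of E_W as a matrix. The formula A(AᵀA)⁻¹Aᵀ for the projection onto the column space of A is standard linear algebra, not stated in the slides; I use it here as an assumption to make the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((5, 2))            # columns span W in R^5
E = A @ np.linalg.inv(A.T @ A) @ A.T       # matrix of the orthogonal projection

assert np.allclose(E @ E, E)               # E is a projection (idempotent)
b = rng.standard_normal(5)
assert np.allclose(A.T @ (b - E @ b), 0)   # b - Eb is perpendicular to W
F = np.eye(5) - E                          # I - E projects onto W-perp
assert np.allclose(F @ F, F)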

Corollary: b ↦ b − E_W b is the orthogonal projection onto W⊥, and I − E_W is an idempotent linear transformation, i.e., a projection.
Proof: b − E_W b is in W⊥ by Theorem 4(i). Let c be in W⊥. Then b − c = Eb + (b − Eb − c), with Eb in W and b − Eb − c in W⊥, so
‖b − c‖² = ‖Eb‖² + ‖b − Eb − c‖² ≥ ‖b − (b − Eb)‖²,
with strict inequality if c ≠ b − Eb. Thus b − Eb is the best approximation to b in W⊥.

Bessel's inequality. Let {a1,…,an} be an orthogonal set of nonzero vectors. Then
∑_k |(b|ak)|²/‖ak‖² ≤ ‖b‖²,
with equality ⟺ b = ∑_k ((b|ak)/‖ak‖²) ak, i.e., b lies in the span of a1,…,an.
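A numeric check of Bessel's inequality and its equality case, with a small orthogonal set in R³ of my own choosing:

```python
import numpy as np

a1 = np.array([1.0, 1.0, 0.0])
a2 = np.array([1.0, -1.0, 0.0])            # orthogonal, nonzero
b = np.array([2.0, 0.0, 5.0])              # not in span{a1, a2}

bessel = lambda v: sum(np.dot(v, ak) ** 2 / np.dot(ak, ak) for ak in (a1, a2))
assert bessel(b) <= np.dot(b, b)           # strict here, since b is outside the span

b2 = 3 * a1 - a2                            # inside the span: equality holds
assert np.isclose(bessel(b2), np.dot(b2, b2))
```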