# Inner Product Spaces (राघव वर्मा)


## Physical Properties of Vectors

For arrows in ordinary space, the physical properties of a vector are its length and the angles it makes with other vectors. Both can be extracted from the dot product:

- Length: $|A| = \sqrt{A \cdot A}$
- Cosine of the angle between two vectors: $\cos\theta = \dfrac{A \cdot B}{|A|\,|B|}$

But how can we use the dot product to define length and angle, when the usual formula $A \cdot B = |A||B|\cos\theta$ itself uses length and angle? The way out is to take the main features of the dot product as axioms:

- Symmetry: $A \cdot B = B \cdot A$
- Linearity: $A \cdot (bB + cC) = b\,(A \cdot B) + c\,(A \cdot C)$
- Positive definiteness: $A \cdot A \ge 0$, vanishing only if $A = 0$
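The same order of ideas can be sketched numerically: treat the dot product as the primitive operation and recover length and angle from it alone. A minimal numpy illustration, with vectors chosen arbitrarily for the example:

```python
import numpy as np

# Length and angle recovered purely from dot products,
# illustrating that the dot product is the primitive notion.
A = np.array([3.0, 4.0])
B = np.array([4.0, 3.0])

length_A = np.sqrt(A @ A)          # |A| = sqrt(A . A)
length_B = np.sqrt(B @ B)          # |B| = sqrt(B . B)
cos_theta = (A @ B) / (length_A * length_B)

print(length_A)    # 5.0
print(cos_theta)   # 24/25 = 0.96
```

No length or angle was assumed anywhere; both came out of `@`, the dot product.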

## Linearity Explained

Geometrically, linearity requires that projections add: if $P_j$ and $P_k$ are the projections of two vectors onto a given axis, the projection of their sum must be $P_j + P_k = P_{jk}$. Why do we require linearity, and how does this condition ensure it? Because $A \cdot B$ is $|A|$ times the projection of $B$ along $A$, the additivity of projections is exactly the statement $A \cdot (B_j + B_k) = A \cdot B_j + A \cdot B_k$.

## Inner Product Space for Quantum Mechanics

The inner product in our special space will be denoted by $\langle V|W\rangle$. A vector space with an inner product is called an inner product space. There is no explicit rule for evaluating the scalar product; the axioms only require:

- $\langle V|W\rangle = \langle W|V\rangle^{*}$. The first axiom is sensitive to the order of the two factors. This is what makes $\langle V|V\rangle$ real.
- Positive semidefiniteness: $\langle V|V\rangle \ge 0$, vanishing only if the vector does. It had better be, because from our generalization we are going to use it to define length.
- Linearity of the inner product when a linear superposition $a|W\rangle + b|Z\rangle \equiv |aW + bZ\rangle$ appears as the second vector in the scalar product: $\langle V|aW + bZ\rangle = a\langle V|W\rangle + b\langle V|Z\rangle$.
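These three axioms can be checked numerically for the component-form inner product $\langle V|W\rangle = \sum_i v_i^{*} w_i$. A sketch using numpy's `vdot`, which conjugates its first argument, on randomly chosen complex vectors:

```python
import numpy as np

# Check the inner-product axioms for <V|W> = sum_i v_i* w_i
# on arbitrary complex vectors.
rng = np.random.default_rng(0)
V = rng.standard_normal(3) + 1j * rng.standard_normal(3)
W = rng.standard_normal(3) + 1j * rng.standard_normal(3)
Z = rng.standard_normal(3) + 1j * rng.standard_normal(3)
a, b = 2.0 - 1.0j, 0.5 + 3.0j

ip = np.vdot  # np.vdot conjugates its first argument

# <V|W> = <W|V>*  (order sensitivity)
assert np.isclose(ip(V, W), np.conj(ip(W, V)))
# <V|V> is real and non-negative
assert np.isclose(ip(V, V).imag, 0.0) and ip(V, V).real >= 0
# linearity in the second factor
assert np.isclose(ip(V, a * W + b * Z), a * ip(V, W) + b * ip(V, Z))
```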

## Asymmetry of Our Space

What if the first factor in the product is a linear superposition? Then

$$\langle aW + bZ|V\rangle = a^{*}\langle W|V\rangle + b^{*}\langle Z|V\rangle,$$

which expresses the antilinearity of the inner product with respect to the first factor. The inner product of a linear superposition with another vector is the corresponding superposition of inner products if the superposition occurs in the second factor, but the superposition with all its coefficients conjugated if it occurs in the first factor. This asymmetry is going to stay with us.

## Some More Properties of the Inner Product

- Two vectors are orthogonal if their inner product vanishes: $\langle V|W\rangle = 0$.
- The norm of a vector in our vector space is $|V| \equiv \sqrt{\langle V|V\rangle}$.
- A set of basis vectors, pairwise orthogonal and each of unit norm, is called an orthonormal basis: $\langle i|j\rangle = \delta_{ij}$.

If we use an orthonormal basis, only the diagonal terms survive. We have defined all the properties of the inner product but have not yet specified how to compute it.

## Computing the Inner Product

Expanding both vectors in an orthonormal basis, $|V\rangle = \sum_i v_i|i\rangle$ and $|W\rangle = \sum_j w_j|j\rangle$, the double sum

$$\langle V|W\rangle = \sum_i \sum_j v_i^{*} w_j \langle i|j\rangle = \sum_i \sum_j v_i^{*} w_j\,\delta_{ij}$$

then collapses to

$$\langle V|W\rangle = \sum_i v_i^{*} w_i.$$

So now you can appreciate why we defined $\langle V|W\rangle = \langle W|V\rangle^{*}$: if it were not defined this way, then forget about the norm being positive definite, it would not even be real. We have already said that a vector is uniquely defined by its components. Let me denote it by an ordered set of numbers and use a column to represent its components, i.e.

$$|V\rangle \leftrightarrow \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix}.$$

## Inner Product (contd.)

The inner product in our space has been defined using the dot product as reference, and the inner product also defines the norm. We need a number, and to get a number from a column matrix we must multiply it by a row matrix. Hence if $|W\rangle$ is represented by a column matrix and $\langle V|$ by the row matrix $(v_1^{*}\;\; v_2^{*}\;\; \cdots\;\; v_n^{*})$, then the inner product $\langle V|W\rangle$ is represented by

$$\langle V|W\rangle = (v_1^{*}\;\; v_2^{*}\;\; \cdots\;\; v_n^{*})\begin{pmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{pmatrix} = \sum_i v_i^{*} w_i.$$
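This row-times-column prescription is one line of numpy. A sketch with arbitrary example components, checked against numpy's built-in `vdot`:

```python
import numpy as np

# <V|W> as (row of conjugated components) times (column of components):
# the row is the conjugate of the ket's column.
V = np.array([1.0 + 2.0j, 3.0 - 1.0j])   # components of |V>
W = np.array([0.5 + 0.5j, 2.0 + 0.0j])   # components of |W>

row_V = V.conj()            # (v1* v2*), the row representing <V|
ip = row_V @ W              # sum_i v_i* w_i

print(ip)                   # (7.5+1.5j)
assert np.isclose(ip, np.vdot(V, W))   # matches numpy's built-in
```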

## Dual Space and the Dirac Notation

The column matrix associated with a vector in our space is unique. We get a scalar from the inner product by multiplying a column vector by a row vector (its transpose conjugate). Since the column vector is unique, the row vector is, by definition, also unique. This row vector is called a bra in Dirac's notation. Thus there are two vector spaces: the space of kets and a dual space of bras. There is a ket for every bra and vice versa. The inner product is defined only between bras and kets, hence between elements of two distinct but related spaces.

## Dual Space (contd.)

In our $n$-dimensional vector space there exist basis vectors $|i\rangle$; similarly, in the dual of this vector space there exist basis vectors $\langle i|$. In an orthonormal basis, $|i\rangle$ has all zeros and a 1 in the $i$-th row. A vector can now be expanded in our orthonormal basis as

$$|V\rangle = \sum_i v_i|i\rangle, \qquad v_i = \langle i|V\rangle,$$

so a vector in our space can then also be expressed as $|V\rangle = \sum_i |i\rangle\langle i|V\rangle$.
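The expansion formula can be tested directly: extract each component as $v_i = \langle i|V\rangle$ and sum $v_i|i\rangle$ back up. A sketch with an arbitrary complex vector in three dimensions:

```python
import numpy as np

# Expanding |V> in an orthonormal basis: v_i = <i|V>,
# and summing v_i |i> reconstructs the vector exactly.
n = 3
basis = [np.eye(n)[:, i].astype(complex) for i in range(n)]  # |1>, |2>, |3>
V = np.array([1.0 + 1.0j, 2.0 + 0.0j, -1.0j])

coeffs = [np.vdot(e, V) for e in basis]            # v_i = <i|V>
V_rebuilt = sum(c * e for c, e in zip(coeffs, basis))

assert np.allclose(V_rebuilt, V)
```

Here the standard columns of the identity matrix play the role of the orthonormal kets $|i\rangle$.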

## Dual Space (contd.): Adjoint Operation

- If $\langle V|$ is the bra corresponding to the ket $|V\rangle$, what bra corresponds to $a|V\rangle$, where $a$ is some scalar? The answer is $a^{*}\langle V|$.
- This gives the relation between bras and kets in a linear equation.
- To take the adjoint of a linear equation relating kets (bras), replace every ket (bra) by its bra (ket) and complex-conjugate all the coefficients.
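The adjoint rule can be verified on components, where taking the bra of a ket is just complex conjugation of its column. A sketch with arbitrary coefficients and kets:

```python
import numpy as np

# Adjoint of a linear equation: if |W> = a|U> + b|V>,
# then <W| = a* <U| + b* <V|  (bra = conjugate of the ket's column).
a, b = 1.0 - 2.0j, 3.0 + 1.0j
U = np.array([1.0j, 2.0 + 0.0j])   # components of |U>
V = np.array([0.5 + 0.0j, -1.0j])  # components of |V>

W = a * U + b * V                  # the ket equation
bra_W = W.conj()                   # adjoint of the left-hand side

# adjoint rule applied term by term to the right-hand side:
rhs = np.conj(a) * U.conj() + np.conj(b) * V.conj()
assert np.allclose(bra_W, rhs)
```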

## Orthonormal Basis

Why do we need an orthonormal basis? Our requirement for a basis in an $n$-dimensional space is just $n$ linearly independent vectors. Consider three-dimensional space: if the unit vectors $\hat{u}$, $\hat{v}$, $\hat{w}$ are orthonormal, the dot product reduces to three terms,

$$A \cdot B = a_u b_u + a_v b_v + a_w b_w.$$

If the basis is only orthogonal (not normalised), the squared lengths of the basis vectors survive in each term, and if it is merely linearly independent, all nine cross terms $a_i b_j\,(\hat{e}_i \cdot \hat{e}_j)$ appear. The moral of the story is that we need an orthonormal basis.

## Gram-Schmidt Orthonormalisation

Given a set of $n$ linearly independent but not orthonormal vectors $|I\rangle, |II\rangle, |III\rangle, \ldots$, we want to construct a set of orthonormal vectors $|1\rangle, |2\rangle, |3\rangle, \ldots$:

1. Rescale the first vector by its length; this forms the first unit vector $|1\rangle$.
2. Remove the component of $|II\rangle$ along $|1\rangle$ to form $|2'\rangle = |II\rangle - |1\rangle\langle 1|II\rangle$, then normalise it to get $|2\rangle$.
3. Remove the components of $|III\rangle$ along $|1\rangle$ and $|2\rangle$ to form $|3'\rangle$, then normalise it to get $|3\rangle$, and so on.
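The steps above translate directly into code: subtract the projections onto the unit vectors found so far, then rescale. A minimal sketch, with example input vectors chosen for illustration:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalise a list of linearly independent vectors."""
    basis = []
    for v in vectors:
        # remove the components along every unit vector found so far
        w = v - sum(np.vdot(e, v) * e for e in basis)
        basis.append(w / np.linalg.norm(w))  # rescale to unit length
    return basis

raw = [np.array([3.0, 0.0, 0.0]),
       np.array([1.0, 1.0, 0.0]),
       np.array([1.0, 1.0, 1.0])]
onb = gram_schmidt(raw)

# pairwise orthonormal: <i|j> = delta_ij
G = np.array([[np.vdot(p, q) for q in onb] for p in onb])
assert np.allclose(G, np.eye(3))
```

If the inputs are not linearly independent, some $w$ vanishes and the normalisation fails, which is exactly the Gram-Schmidt signal of a redundant vector.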

## Example of Gram-Schmidt Orthonormalisation

Construct an orthonormal set of vectors from the given vectors.

## Subspaces

Given a vector space $\mathbb{V}$, a subset of its elements that forms a vector space among themselves is called a subspace. We will denote a particular subspace $i$ of dimensionality $n_i$ by $\mathbb{V}_i^{n_i}$. Given two subspaces $\mathbb{V}_i^{n_i}$ and $\mathbb{V}_j^{m_j}$, we define their sum $\mathbb{V}_i^{n_i} \oplus \mathbb{V}_j^{m_j}$ as the set containing:

- all elements of $\mathbb{V}_i^{n_i}$,
- all elements of $\mathbb{V}_j^{m_j}$,
- all possible linear combinations of the above.

## Linear Operators

An operator $\Omega$ is an instruction for transforming any given vector $|V\rangle$ into another vector $|V'\rangle$. We have seen that one can multiply a vector by a scalar; that just stretches the vector. The action of an operator on a vector, in general, transforms the vector. Our Stern-Gerlach experiment transformed the original vector (the spin state of the silver atom, oriented in all possible directions) into one along the direction of the inhomogeneous magnetic field. We will restrict ourselves to operators that do not take the ket on which they act outside the vector space under consideration: if $|V\rangle$ is a vector in our vector space, then so is $\Omega|V\rangle = |V'\rangle$.

## Linear Operators (contd.)

Operators can act on bras as well: $\langle V|\Omega = \langle V'|$. An operator $\Omega$ is linear if

- $\Omega\,\alpha|V_i\rangle = \alpha\,\Omega|V_i\rangle$
- $\Omega\{\alpha|V_i\rangle + \beta|V_j\rangle\} = \alpha\,\Omega|V_i\rangle + \beta\,\Omega|V_j\rangle$
- $\langle V_i|\alpha\,\Omega = \alpha\,\langle V_i|\Omega$
- $(\alpha\langle V_i| + \beta\langle V_j|)\Omega = \alpha\,\langle V_i|\Omega + \beta\,\langle V_j|\Omega$

The identity operator $I$ leaves the vector alone: $I|V\rangle = |V\rangle$.

An example of an operator on $\mathbb{V}^3(R)$: let $R(\tfrac{\pi}{2}\mathbf{i})$ rotate all vectors by an angle $\pi/2$ about the $x$ axis. Its action on the basis kets is

$$R|1\rangle = |1\rangle, \qquad R|2\rangle = |3\rangle, \qquad R|3\rangle = -|2\rangle,$$

and by linearity $R(|2\rangle + |3\rangle) = R|2\rangle + R|3\rangle = |3\rangle - |2\rangle$.

## Operator R

From the action of the rotation $R(\tfrac{\pi}{2}\mathbf{i})$ on the basis kets, $R|1\rangle = |1\rangle$, $R|2\rangle = |3\rangle$, $R|3\rangle = -|2\rangle$, its matrix has the images of the basis vectors as its columns:

$$R\left(\tfrac{\pi}{2}\mathbf{i}\right) \leftrightarrow \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix}.$$
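The matrix and its action on the basis can be checked numerically, including the linearity property from the previous slide:

```python
import numpy as np

# Matrix for a rotation by pi/2 about the x axis, with columns
# given by the images of the basis kets: R|1>=|1>, R|2>=|3>, R|3>=-|2>.
R = np.array([[1.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])

e1, e2, e3 = np.eye(3)

assert np.allclose(R @ e1, e1)            # |1> lies on the rotation axis
assert np.allclose(R @ e2, e3)            # |2> -> |3>
assert np.allclose(R @ e3, -e2)           # |3> -> -|2>
# linearity: R(|2> + |3>) = R|2> + R|3>
assert np.allclose(R @ (e2 + e3), e3 - e2)
# four quarter-turns return every vector to itself
assert np.allclose(R @ R @ R @ R, np.eye(3))
```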

