
Quantum One: Lecture 8. Continuously Indexed Basis Sets.


1 Quantum One: Lecture 8


3 Continuously Indexed Basis Sets

4 In the last lecture, we began to describe a more general formulation of quantum mechanics, applicable to arbitrary quantum systems, which develops the basic postulates in a form that is designed to be representation independent. We began by stating the first postulate, which associates the dynamical state of a quantum system with a state vector |ψ〉 that is an element of a complex linear vector space S. We then gave a definition of the term linear vector space, and saw that it describes a set of objects that we can multiply by scalars and add together to obtain other elements of the set; that is, they obey a superposition principle. We then introduced a series of additional definitions, including the ideas of spanning sets, linearly independent sets, and basis sets, and we defined what we mean by the dimension of a linear vector space.

10 In this lecture we continue our exploration of the mathematical properties of the state spaces of quantum mechanical systems. To this end, we note that our previous definitions were expressed in a notation that is strictly applicable only to countable sets of states labeled by a discrete index. But we often encounter sets of vectors {|φ_α〉} labeled by a continuous index α. Examples from functional linear vector spaces include the plane waves and the delta functions. We therefore need to extend the definitions presented for discrete sets, so that we can apply the same concepts to sets of vectors labeled by a continuous index. This gives rise to the following set of definitions.
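As a concrete reminder of what such a continuously indexed set looks like, the following standard Fourier-transform relations (not part of the original slide text, and written in the symmetric normalization convention) show the plane waves labeled by the continuous index k:

```latex
% Plane waves on R^3, one for each value of the continuous index k:
\[
  \phi_{\vec k}(\vec r) \;=\; \frac{e^{\,i\vec k\cdot\vec r}}{(2\pi)^{3/2}},
  \qquad
  \psi(\vec r) \;=\; \int d^3k\;\tilde\psi(\vec k)\,\phi_{\vec k}(\vec r),
\]
% so an arbitrary (Fourier transformable) function is a continuous linear
% combination of plane waves, with coefficient function \tilde\psi(\vec k).
```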

12 Span - A continuously indexed set of vectors {|φ_α〉} is said to span a vector space S if every vector |ψ〉 in S can be written as a continuous linear combination of the elements of the set, |ψ〉 = ∫dα ψ(α)|φ_α〉. In this expression the function ψ(α) gives the complex value of the expansion coefficient multiplying the state |φ_α〉 of the spanning set.

Linear Independence - A continuously indexed set of vectors {|φ_α〉} is linearly independent if the only solution to the equation ∫dα λ(α)|φ_α〉 = 0 is λ(α) = 0 for all α.
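As a worked instance of such a continuous expansion (a standard wave-mechanics identity, supplied here for illustration rather than taken from the slides), the delta functions δ(r − r′), labeled by the continuous index r′, span the space of functions on R³:

```latex
\[
  \psi(\vec r) \;=\; \int d^3r'\;\psi(\vec r\,')\,\delta(\vec r - \vec r\,'),
\]
% i.e., the expansion coefficient multiplying the basis element labeled by r'
% is simply the value psi(r') of the function itself.
```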

14 Basis - A linearly independent set of continuously indexed vectors that spans S forms a basis for the space. We note in passing that any space that contains a continuously indexed basis is necessarily infinite dimensional, since it must contain an infinite number of linearly independent vectors in any domain in which the index α takes on a continuous range of values.

15 Inner Products

16 Towards a notion of length and direction

22 Another important property of the linear vector spaces of quantum mechanics is that they are inner product spaces.

Definition: A linear vector space S is an inner product space if there exists an assignment, to each pair of vectors |φ〉 and |ψ〉 in S, of a scalar (an element of the field) denoted by the symbol 〈φ|ψ〉 and referred to as the inner product of |φ〉 and |ψ〉, obeying the following properties:

1. 〈φ|φ〉 is real and non-negative, i.e., 〈φ|φ〉 ≥ 0. Moreover, 〈φ|φ〉 = 0 if and only if |φ〉 is the null vector.
2. 〈φ|[|ψ₁〉 + |ψ₂〉] = 〈φ|ψ₁〉 + 〈φ|ψ₂〉. Thus, the inner product distributes over vector addition.
3. 〈φ|[λ|ψ〉] = λ〈φ|ψ〉.
4. 〈φ|ψ〉 = (〈ψ|φ〉)∗. Thus the order of the inner product is important in complex vector spaces.
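As a minimal numerical sketch (not from the original slides), the standard inner product on Cⁿ, computed with NumPy's vdot (which complex-conjugates its first argument), satisfies all four properties:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random vectors in C^3 playing the roles of |phi>, |psi1>, |psi2>.
phi  = rng.normal(size=3) + 1j * rng.normal(size=3)
psi1 = rng.normal(size=3) + 1j * rng.normal(size=3)
psi2 = rng.normal(size=3) + 1j * rng.normal(size=3)
lam  = 2.0 - 0.5j

inner = np.vdot  # <phi|psi> = sum_i phi_i^* psi_i (conjugates the first argument)

# 1. <phi|phi> is real and non-negative.
assert np.isclose(inner(phi, phi).imag, 0.0) and inner(phi, phi).real >= 0.0
# 2. Distributes over vector addition.
assert np.isclose(inner(phi, psi1 + psi2), inner(phi, psi1) + inner(phi, psi2))
# 3. Linear in the second (ket) argument.
assert np.isclose(inner(phi, lam * psi1), lam * inner(phi, psi1))
# 4. Reversing the order complex-conjugates the result.
assert np.isclose(inner(phi, psi1), np.conj(inner(psi1, phi)))
print("all four inner-product properties verified numerically")
```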

27 In complex vector spaces, the inner product 〈φ|ψ〉 is linear in |ψ〉, but antilinear in |φ〉. The first half of this comment follows from the observation that 〈φ|[λ₁|ψ₁〉 + λ₂|ψ₂〉] = λ₁〈φ|ψ₁〉 + λ₂〈φ|ψ₂〉, which follows from (2) and (3), while the second stems from the fact that if |φ〉 = λ₁|φ₁〉 + λ₂|φ₂〉, then 〈φ|ψ〉 = λ₁∗〈φ₁|ψ〉 + λ₂∗〈φ₂|ψ〉, which defines the condition of antilinearity with respect to |φ〉.
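The antilinearity statement follows directly from properties (2)–(4); the short chain of equalities below spells the step out (a standard derivation, written out here because the original equation images are not reproduced in the transcript):

```latex
% For |phi> = lambda_1 |phi_1> + lambda_2 |phi_2>, using (4), then (2)-(3), then (4) again:
\[
  \langle \phi | \psi \rangle
  = \bigl( \langle \psi | \phi \rangle \bigr)^{*}
  = \bigl( \lambda_1 \langle \psi | \phi_1 \rangle
         + \lambda_2 \langle \psi | \phi_2 \rangle \bigr)^{*}
  = \lambda_1^{*} \langle \phi_1 | \psi \rangle
  + \lambda_2^{*} \langle \phi_2 | \psi \rangle .
\]
```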

37 3. In functional spaces, the inner product involves the continuous analog of a summation over components, namely an integral. Thus, e.g., in the space of Fourier transformable functions on R³ we "associate" with each function ψ(r) a vector |ψ〉. The inner product of two functions then takes the form 〈φ|ψ〉 = ∫d³r φ∗(r) ψ(r), where the integral is over all space.
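A minimal numerical sketch of this functional inner product (reduced to one dimension purely to keep the example short; not part of the original slides): the integral is approximated by a Riemann sum on a grid.

```python
import numpy as np

# Discretized 1-D version of <phi|psi> = integral of phi*(x) psi(x) dx,
# approximated as a Riemann sum on a uniform grid.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

# Two square-integrable example functions: a normalized Gaussian, and the
# same Gaussian modulated by a plane-wave factor.
phi = np.exp(-x**2 / 2.0) / np.pi**0.25
psi = np.exp(-x**2 / 2.0 + 1.5j * x) / np.pi**0.25

overlap = np.sum(np.conj(phi) * psi) * dx   # <phi|psi>
norm_sq = np.sum(np.abs(phi)**2) * dx       # <phi|phi>, approximately 1

print(overlap, norm_sq)
```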

42 The concept of an inner product allows us to make several new definitions:

Norm - The positive real quantity ‖ψ‖ = √〈ψ|ψ〉 is referred to as the norm, or the length, of the vector |ψ〉. A vector is said to be square-normalized, to have unit norm, or to be a unit vector if 〈ψ|ψ〉 = 1. Any vector having a finite norm can be square normalized. That is, if 〈ψ|ψ〉 is not infinite, then the vector |ψ〉/√〈ψ|ψ〉 is a unit vector along the same direction in the space as |ψ〉.
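A quick NumPy sketch of square-normalizing a finite-norm vector (illustrative only, not taken from the slides):

```python
import numpy as np

# Square-normalize a finite-norm vector in C^3.
psi = np.array([1.0 + 2.0j, 0.5, -1.0j])

norm = np.sqrt(np.vdot(psi, psi).real)   # ||psi|| = sqrt(<psi|psi>)
psi_unit = psi / norm                    # unit vector along the same "direction"

print(norm, np.vdot(psi_unit, psi_unit).real)   # second number is ~1.0
```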

45 Orthogonality - Two vectors |ψ〉 and |φ〉 are orthogonal if 〈ψ|φ〉 = 〈φ|ψ〉 = 0, i.e., if their inner product vanishes. We loosely say that the vectors have no overlap, or that |ψ〉 has no component along |φ〉, and vice versa.

Orthonormal Sets of Vectors
1. A discrete set of vectors {|φ_i〉} forms an orthonormal set if 〈φ_i|φ_j〉 = δ_ij, that is, if they are a set of unit-normalized vectors which are mutually orthogonal.

46 2. A continuously indexed set of vectors {|φ_α〉} forms an orthonormal set if 〈φ_α|φ_α′〉 = δ(α − α′). The members of such a set have infinite norm, and are not square-normalizable.
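The plane waves of the earlier example are the standard illustration of such a Dirac-delta-normalized set (a textbook identity, restated here because the equation images are not reproduced in the transcript):

```latex
\[
  \langle \phi_{\vec k} | \phi_{\vec k'} \rangle
  \;=\; \int d^3r\;
        \frac{e^{-i\vec k\cdot\vec r}}{(2\pi)^{3/2}}\,
        \frac{e^{\,i\vec k'\cdot\vec r}}{(2\pi)^{3/2}}
  \;=\; \delta^{3}(\vec k - \vec k')\,,
\]
% which is infinite for k = k', so each plane wave has infinite norm and
% cannot be square-normalized.
```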

56 It is straightforward to show that any set of mutually orthogonal vectors not containing the null vector is linearly independent. Thus, any orthonormal set of vectors which spans the space S forms an orthonormal basis for the space. But what if we have a basis for the space that is not an orthonormal basis? It may be a perfectly good basis, but does an orthonormal basis always exist? Yes, it does. We show in the next lecture that from any set of N linearly independent vectors of finite length, it is always possible to construct a set of N orthonormal vectors. The explicit algorithm for doing so is referred to as the Gram-Schmidt orthogonalization procedure, and it leads to the conclusion that from any basis, it is always possible to construct an orthonormal basis.
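As a preview of that construction, here is a minimal NumPy sketch of the classical Gram-Schmidt procedure on Cⁿ (an illustrative, unstabilized implementation, not taken from the lecture itself):

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: turn N linearly independent vectors in C^n
    into N orthonormal vectors spanning the same subspace."""
    ortho = []
    for v in vectors:
        w = v.astype(complex)
        # Subtract the component of v along each previously built unit vector.
        for e in ortho:
            w -= np.vdot(e, w) * e
        norm = np.linalg.norm(w)
        if norm < 1e-12:
            raise ValueError("input vectors are not linearly independent")
        ortho.append(w / norm)
    return ortho

# Example: three linearly independent vectors in C^3.
vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0j]),
      np.array([0.0, 1.0, 1.0])]
es = gram_schmidt(vs)

# Check orthonormality: <e_i|e_j> should be the Kronecker delta.
print(np.round([[np.vdot(a, b) for b in es] for a in es], 10))
```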

