Quantum One.


Gram-Schmidt Orthogonalization

In the last lecture, we extended our definitions of spanning sets, linearly independent sets, and basis sets to allow an application of these concepts to continuously indexed sets of vectors. We then introduced the idea of an inner product, which extends to complex vector spaces the familiar dot product encountered in real vector spaces. This allowed us to define the norm or length of a vector, to define unit vectors, and to introduce a limited notion of direction through the concept of orthogonality. These notions of length and orthogonality allowed us to define orthonormal sets of vectors, with either discrete or continuous indices, and to end up with the idea of an orthonormal basis, i.e., an orthonormal set of vectors that span the space.

In this lecture we begin by showing that it is always possible to construct an orthonormal basis set from any set of basis vectors of finite length. The explicit algorithm for doing so, referred to as the Gram-Schmidt orthogonalization procedure, is presented below.

Let {|χ₁〉, |χ₂〉, |χ₃〉, …} be a set of linearly independent vectors of finite length.

1. Set

|φ₁〉 = |χ₁〉 / ‖χ₁‖, where ‖χ₁‖ = √〈χ₁|χ₁〉.

This produces a unit vector |φ₁〉 pointing along the same direction as |χ₁〉. Now construct a second vector orthogonal to the first, by subtracting off that part of |χ₂〉 which lies along the direction of the first vector:

2. Set

|ψ₂〉 = |χ₂〉 − |φ₁〉〈φ₁|χ₂〉, |φ₂〉 = |ψ₂〉 / ‖ψ₂‖.

Note that, by construction,

〈φ₁|ψ₂〉 = 〈φ₁|χ₂〉 − 〈φ₁|φ₁〉〈φ₁|χ₂〉 = 0,

so |ψ₂〉, and thus |φ₂〉, are orthogonal to |φ₁〉.

We now proceed in this fashion, constructing each new vector orthogonal to each of those previously constructed. Thus,

3. Set

|ψ₃〉 = |χ₃〉 − |φ₁〉〈φ₁|χ₃〉 − |φ₂〉〈φ₂|χ₃〉, |φ₃〉 = |ψ₃〉 / ‖ψ₃‖,

and, at the nth step, set

|ψₙ〉 = |χₙ〉 − Σₘ |φₘ〉〈φₘ|χₙ〉, |φₙ〉 = |ψₙ〉 / ‖ψₙ‖,

the sum running over m = 1, …, n − 1, so that 〈φₘ|φₙ〉 = δₘₙ.

The only way this process could stop is if one of the resulting vectors turned out to be the null vector. A close inspection of the process reveals that this can't happen if the original set is linearly independent, as we have assumed. Thus, in this way we can construct an orthonormal set of vectors equal in number to those of the original set. It follows that, given any basis for the space, we can construct an orthonormal basis with an equal number of vectors. We now explore how orthonormal bases make our lives easier.
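For finite-dimensional spaces, the procedure above translates directly into code. The sketch below is a minimal NumPy illustration, not part of the lecture; the function name and the tolerance are our own choices. It uses the numerically better-behaved "modified" variant, which subtracts each projection from the running remainder rather than from the original vector; in exact arithmetic the resulting basis is the same.

```python
import numpy as np

def gram_schmidt(chis):
    """Orthonormalize a list of linearly independent complex vectors.

    Follows the lecture's procedure: each new vector has its components
    along the previously constructed |phi_m> subtracted off, and the
    remainder is then normalized.
    """
    phis = []
    for chi in chis:
        psi = np.asarray(chi, dtype=complex).copy()
        for phi in phis:
            psi -= phi * np.vdot(phi, psi)  # subtract |phi_m><phi_m|psi>
        norm = np.linalg.norm(psi)
        if norm < 1e-12:  # tolerance is an arbitrary choice
            raise ValueError("vectors are not linearly independent")
        phis.append(psi / norm)
    return phis

# quick check: three independent vectors in C^3 -> an orthonormal triple
phis = gram_schmidt([[1, 1, 0], [1, 0, 1], [0, 1, 1]])
gram = np.array([[np.vdot(a, b) for b in phis] for a in phis])
```

The Gram matrix of the output, gram, should be the identity, reflecting 〈φₘ|φₙ〉 = δₘₙ.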

Expansion of a Vector on an Orthonormal Basis

Discrete Bases - Let the set {|φᵢ〉} form an orthonormal basis (or ONB) for the space S, so that 〈φᵢ|φⱼ〉 = δᵢⱼ, and let |χ〉 be an arbitrary element of the space. By assumption there exists an expansion of the form

|χ〉 = Σᵢ χᵢ |φᵢ〉

for a unique set of expansion coefficients {χᵢ}. Question: How do we determine what these expansion coefficients are?

Consider the inner product of the vector |χ〉 with an arbitrary element |φⱼ〉 of this basis:

〈φⱼ|χ〉 = Σᵢ χᵢ 〈φⱼ|φᵢ〉 = Σᵢ χᵢ δᵢⱼ = χⱼ.

This shows that the expansion coefficient χⱼ is just the inner product of the vector we are expanding with the unit vector along that direction in Hilbert space. Thus,

expansion coefficient = inner product with basis vector.

We can then write the expansion as

|χ〉 = Σᵢ |φᵢ〉〈φᵢ|χ〉.
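The rule "expansion coefficient = inner product with basis vector" is easy to verify numerically. The sketch below is our own illustration, not from the lecture; the discrete Fourier basis is an arbitrary but convenient choice of ONB for C³.

```python
import numpy as np

# An orthonormal basis for C^3: the columns of the (unitary) discrete
# Fourier matrix, normalized by 1/sqrt(N).
N = 3
F = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
basis = [F[:, i] for i in range(N)]

chi = np.array([1.0, 2.0, 3.0], dtype=complex)

# expansion coefficient = inner product with basis vector: chi_i = <phi_i|chi>
coeffs = [np.vdot(phi, chi) for phi in basis]

# the expansion |chi> = sum_i chi_i |phi_i> then rebuilds the vector
chi_rebuilt = sum(c * phi for c, phi in zip(coeffs, basis))
```

Any other ONB would serve equally well; only the coefficients change, not the reconstructed vector.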

Extension to Continuous Bases - Let the set {|φ_α〉} form an orthonormal basis (or ONB) for the space S, so that 〈φ_α|φ_α′〉 = δ(α − α′), and let |χ〉 be an arbitrary element of the space. By assumption there exists an expansion of the form

|χ〉 = ∫ dα χ(α) |φ_α〉

for some unique expansion function χ(α). Question: How do we determine what this expansion function is?

Consider the inner product of the vector |χ〉 with an arbitrary element |φ_α′〉 of this basis:

〈φ_α′|χ〉 = ∫ dα χ(α) 〈φ_α′|φ_α〉 = ∫ dα χ(α) δ(α − α′) = χ(α′).

This shows that the value of the expansion function at α′ is just the inner product of the vector we are expanding with the unit vector along that direction in Hilbert space. Thus, again,

expansion function = inner product with basis vector.

So

|χ〉 = ∫ dα χ(α) |φ_α〉, where χ(α) = 〈φ_α|χ〉.

We will refer to the function χ(α) as the "wave function" representing |χ〉 in the α representation. Note that this expansion can also be written in the suggestive form

|χ〉 = ∫ dα |φ_α〉〈φ_α|χ〉.

A Notational Simplification: It is clear that when we talk about ONB's, such as {|φᵢ〉} or {|φ_α〉}, the important information which distinguishes the different basis vectors from one another is the label or index: i or j in the discrete case, α or α′ in the continuous case. The symbols φ just sort of come along for the ride, a historical vestige of when we were expanding in sets of functions. From this point on we will acknowledge this by using an abbreviated notation:

|φᵢ〉 → |i〉 and |φ_α〉 → |α〉.

In this way the expansions of an arbitrary ket can be written

|ψ〉 = Σᵢ |i〉〈i|ψ〉 = ∫ dα |α〉〈α|ψ〉.

Calculation of Inner Products Using an Orthonormal Basis

The Emergence of Numerical Representations

Discrete Bases - Let the set {|i〉} form an orthonormal basis for S, so that 〈i|j〉 = δᵢⱼ, and let |χ〉 and |ψ〉 be arbitrary states of S. These states can be expanded in this basis set as

|χ〉 = Σᵢ χᵢ |i〉, |ψ〉 = Σᵢ ψᵢ |i〉.

Suppose we know these expansion coefficients, and we want to know the inner product of these two vectors.

Well, we can express the desired inner product in the form

〈χ|ψ〉 = (Σᵢ χᵢ* 〈i|)(Σⱼ ψⱼ |j〉) = Σᵢ Σⱼ χᵢ* ψⱼ 〈i|j〉.

But 〈i|j〉 = δᵢⱼ, so we can write this inner product in the form

〈χ|ψ〉 = Σᵢ χᵢ* ψᵢ.

But this is exactly of the form of the inner product in Cᴺ.

Thus we have an important result: any discrete orthonormal basis {|i〉} generates a column-vector/row-vector representation, i.e., it gives us a natural way of associating each ket |ψ〉 with a complex-valued column vector with components ψᵢ, and each bra 〈ψ| in S* with a complex-valued row vector with components ψᵢ*, in terms of which the inner product of two states can be written

〈χ|ψ〉 = Σᵢ χᵢ* ψᵢ.

It is important to note that in our formulation, the quantum state vector |ψ〉 is not a column vector, but it may in this way be represented by, or associated with, one. In fact, this may be done in an infinite number of ways.
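The "infinite number of ways" remark can be made concrete: every unitary change of ONB gives a different column-vector representation, but the number Σᵢ χᵢ* ψᵢ it produces is always the same. A small sketch (our own illustration; QR factorization of a random complex matrix is just a convenient way to manufacture a unitary matrix):

```python
import numpy as np

rng = np.random.default_rng(7)
N = 4
chi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi = rng.normal(size=N) + 1j * rng.normal(size=N)

# The columns of any unitary matrix form an ONB of C^N.
Q, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))

# Components of each state in this basis: chi_i = <i|chi>, psi_i = <i|psi>
chi_c = np.array([np.vdot(Q[:, i], chi) for i in range(N)])
psi_c = np.array([np.vdot(Q[:, i], psi) for i in range(N)])

# sum_i chi_i^* psi_i reproduces the basis-independent value <chi|psi>
direct = np.vdot(chi, psi)
via_components = np.vdot(chi_c, psi_c)
```

Rerunning with a different random unitary changes every component but leaves via_components equal to direct.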

Continuous Bases - Let the set {|α〉} form a continuous orthonormal basis for S, so that 〈α|α′〉 = δ(α − α′), and let |χ〉 and |ψ〉 be arbitrary states of S. These states can be expanded in this basis as

|χ〉 = ∫ dα χ(α) |α〉, |ψ〉 = ∫ dα ψ(α) |α〉.

Suppose we know these expansion functions, and we want to know the inner product of these two vectors.

Well, we can express the desired inner product in the form

〈χ|ψ〉 = ∫ dα ∫ dα′ χ*(α) ψ(α′) 〈α|α′〉.

But 〈α|α′〉 = δ(α − α′), so we can write this inner product in the form

〈χ|ψ〉 = ∫ dα χ*(α) ψ(α).

But this is exactly of the form of the inner product in functional linear vector spaces.

Thus we have an important result: any continuous orthonormal basis {|α〉} induces a wave function representation for the space, i.e., it gives us a natural way of associating each ket |ψ〉 in S with a complex-valued wave function ψ(α) = 〈α|ψ〉, and each bra 〈ψ| in S* with a complex-valued wave function ψ*(α). We refer to ψ(α) as the wave function for the state |ψ〉 in the α-representation. In this representation the inner product of two states can be computed as

〈χ|ψ〉 = ∫ dα χ*(α) ψ(α).

It is important to note that in our formulation, the quantum state vector |ψ〉 is not a wave function, but it may in this way be represented by, or associated with, one. In fact, this may be done in an infinite number of ways.
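In numerical work, the continuous formula is handled by sampling the wave functions on a grid, so that the integral becomes a finite sum. A brief sketch (our own illustration; the Gaussians and the grid are arbitrary choices):

```python
import numpy as np

# Sample two wave functions chi(alpha), psi(alpha) on a grid and
# approximate <chi|psi> = integral of chi*(alpha) psi(alpha) d(alpha)
# by a Riemann sum.
alpha = np.linspace(-10.0, 10.0, 4001)
dalpha = alpha[1] - alpha[0]

chi = np.exp(-alpha**2 / 2)
psi = np.exp(-(alpha - 1.0)**2 / 2)

inner = np.sum(np.conj(chi) * psi) * dalpha

# analytically, this Gaussian overlap integral equals sqrt(pi) * e^(-1/4)
exact = np.sqrt(np.pi) * np.exp(-0.25)
```

The sum converges to the exact overlap as the grid is refined, mirroring how the discrete sum Σᵢ χᵢ* ψᵢ goes over to the integral ∫ dα χ*(α) ψ(α).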

In the next lecture we attempt to make some of these admittedly abstract ideas more concrete, by applying our general formulation of quantum mechanics to the quantum state space of a single quantum mechanical particle. In so doing, we will see how, in a natural and physically meaningful way, the “Schrödinger” representation, which associates the quantum state with a wave function, emerges from the formal mathematical structure developed thus far.