Signal Spaces.


A vector is a quantity with magnitude and direction. Much can be done with signals by drawing an analogy between signals and vectors. (Figure: a vector drawn with its magnitude and direction, at angle θ.)

There are two fundamental operations associated with vectors: scalar multiplication and vector addition. Scalar multiplication is scaling the magnitude of a vector by a value. (Figure: a vector a and its scaled version 2a.)

Vector addition is accomplished by placing the tail of one vector at the tip of another. (Figure: a and b placed tip-to-tail, with their sum a + b.)

A vector can be described by a magnitude and an angle, or it can be described in terms of coordinates. Rather than use x-y coordinates, we can describe the coordinates using unit vectors. The unit vector for the "x" coordinate is i. The unit vector for the "y" coordinate is j.

Thus we can describe vector a as (4, 3). We can also describe a as 4i + 3j.

Suppose we had a second vector b = 4i + j. The sum of the vectors a and b could be described easily in terms of unit vectors: a + b = 8i + 4j.

In general, if a = ax i + ay j and b = bx i + by j, we have a + b = (ax+bx)i + (ay+by)j. In other words, the x-component of the sum is the sum of the x-components of the terms, and the y-component of the sum is the sum of the y-components of the terms.

At this point we draw an analogy from vectors to signals. Let a(t) and b(t) be sampled functions. (Figure: sample plots of a(t) and b(t).)

When we add two functions together, we add their respective samples together as we would add the x-components, y-components and other components together.

(Figure: plots of a(t), b(t), and their sum a(t) + b(t).)

We can think of the different sample times as different dimensions. In MATLAB, we could create two vectors (one-dimensional matrices), and add them together:

>> a = [3 4 1 2];
>> b = [2 3 4 2];
>> a + b

ans =

     5     7     5     4

You can think of the four values in each vector as, say, w-components, x-components, y-components and z-components. We can add additional components as well.

We will now examine another vector operation and show an analogous operation on signals. This operation is the dot product. Given two vectors a and b, the dot product of the two vectors is defined to be the product of their magnitudes times the cosine of the angle between them: a•b ≡ |a| |b| cos θab.

If the two vectors are in the same direction, the dot product is merely the ordinary product of the magnitudes: a•b = |a| |b|. If the two vectors are perpendicular, then the dot product is zero: a•b = 0.

The dot product of the unit vector i with itself is one. So is the dot product of the unit vector j with itself: i•i = 1, j•j = 1. The dot product of the unit vector i with the unit vector j is zero: i•j = 0.

Suppose a = ax i + ay j and b = bx i + by j. Their dot product is a • b = (ax i + ay j) • (bx i + by j). Using the dot products of unit vectors from the previous slide, we have a • b = ax bx + ay by.
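As a quick check (in Python rather than the slides' MATLAB; the vectors a = 4i + 3j and b = 4i + j are the ones used earlier), the component formula and the magnitude-angle definition give the same dot product:

```python
import math

# Components of a = 4i + 3j and b = 4i + 1j from the earlier slides
ax, ay = 4, 3
bx, by = 4, 1

dot = ax * bx + ay * by                  # component formula: ax*bx + ay*by
mag_a = math.hypot(ax, ay)               # |a| = 5
mag_b = math.hypot(bx, by)               # |b| = sqrt(17)
theta_ab = math.atan2(ay, ax) - math.atan2(by, bx)   # angle between a and b

print(dot)                                            # 19
print(round(mag_a * mag_b * math.cos(theta_ab), 10))  # 19.0
```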

As with vector addition, we can draw an analogy for the dot product to signals. Let a(t) and b(t) be sampled functions (as before). We define the inner product of the two signals to be the sum of the products of the samples from a(t) and b(t). The notation for the inner product between two signals a(t) and b(t) is <a(t), b(t)>, or simply <a, b>.

The inner product is a generalization of the dot product. If we had, say, four sample times t1, t2, t3, t4, the inner product would be <a, b> = a(t1)b(t1) + a(t2)b(t2) + a(t3)b(t3) + a(t4)b(t4). Let us take the inner product of our previous sampled signals a(t) and b(t):

(Figure: sample plots of a(t) = [3 4 1 2] and b(t) = [2 3 4 2].)

In MATLAB, we would take the inner product as follows:

>> sum(a .* b)

ans =

    26
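The same computation in Python (not from the slides; numpy's dot product performs the sum of products):

```python
import numpy as np

# The same sample vectors as in the MATLAB example
a = np.array([3, 4, 1, 2])
b = np.array([2, 3, 4, 2])

# Inner product = sum of the products of corresponding samples
print(int(np.dot(a, b)))  # 26
```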

In general, the inner product of a pair of sampled signals would be <a, b> = Σk a(tk) b(tk). Now, what happens as the time between samples decreases and the number of samples increases? Eventually, we approach the inner product of a pair of continuous signals, <a, b> = ∫ a(t) b(t) dt.
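A numerical sketch of this limit (Python, illustrative; the helper name is my own): the sum of sample products, scaled by the sample spacing, approaches the integral.

```python
import numpy as np

def inner_product(a, b, T=1.0, n=1_000_000):
    """Approximate <a, b> = integral of a(t)b(t) dt over [0, T]
    by the sum of sample products times the sample spacing."""
    t = np.linspace(0.0, T, n, endpoint=False)
    dt = T / n
    return np.sum(a(t) * b(t)) * dt

# Example: <t, t> over [0, 1] should approach 1/3
approx = inner_product(lambda t: t, lambda t: t)
print(round(approx, 4))  # 0.3333
```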

Again, the inner product can be thought of as the sum of products of two signals.

Example: Find the inner product of the following two functions:

Solution:

Example: Find the inner product of the following two functions:

When the inner product of two signals is equal to zero, we say that the two signals are orthogonal.

When two vectors are perpendicular, their dot product is zero. When two signals are orthogonal, their inner product is zero. Just as the inner product is a generalization of the dot product, we generalize the idea of two vectors being perpendicular (zero dot product) to the idea of two signals being orthogonal (zero inner product).

Example: Find the inner product of the following two functions: Let T be an integral multiple of periods of sin ωct or cos ωct.

The functions sine and cosine are orthogonal to each other.
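A numerical spot check (Python, illustrative): the inner product of sin and cos over one full period is zero to within round-off.

```python
import numpy as np

T = 2 * np.pi                  # one period of sin(t) and cos(t)
t = np.linspace(0.0, T, 100_000, endpoint=False)
dt = T / t.size

ip = np.sum(np.sin(t) * np.cos(t)) * dt   # <sin, cos> over [0, T]
print(abs(ip) < 1e-9)  # True
```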

Example: Find the inner product of with itself. Again, let T be an integral multiple of periods of sin ωct or cos ωct.

The inner product of a signal with itself is equal to its energy. Similarly, the dot product of a vector with itself is equal to its magnitude squared (exercise).

Exercise: Find the inner product of a(t) with itself, b(t) with itself, and a(t) with b(t), where As before, let T be an integral multiple of periods of sin ωct or cos ωct.

Now, back to ordinary vectors. One of the most famous theorems involving vectors is the Cauchy-Schwarz inequality. It shows how the dot product of two vectors compares with their magnitudes. It also applies to inner products. Let us introduce a scalar γ. Using this scalar along with our two vectors a and b, let us take the inner product of a + γb with itself.

<a + γb, a + γb> = <a, a> + 2γ<a, b> + γ²<b, b>. (We have exploited some properties of the inner product which should not be too hard to verify, namely distributivity and scalar multiplication.) The expression on the right-hand side of this equation is a quadratic in γ. If we were to graph this expression versus γ, we would get a parabola. The graph would take one of the following three forms:

(Figure: three parabolas versus γ: one with two roots, one with one root, one with no real roots.)

We know, however, that since this expression is equal to the inner product of something with itself, <a + γb, a + γb>, the expression must be greater than or equal to zero. Thus only the last two graphs pertain to this expression. If this is true, then the quadratic expression must have at most one root.

If there is at most one root, then the discriminant of the quadratic must be negative or zero: (2<a, b>)² − 4<a, a><b, b> ≤ 0. Simplifying, we have <a, b>² ≤ <a, a><b, b>, or |<a, b>| ≤ |a| |b|. This is the statement of the Cauchy-Schwarz inequality.
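A quick empirical check of the inequality (Python, illustrative) on randomly drawn sampled signals:

```python
import numpy as np

# <a,b>^2 <= <a,a><b,b> for any pair of sampled signals
rng = np.random.default_rng(0)
for _ in range(1000):
    a = rng.normal(size=8)
    b = rng.normal(size=8)
    assert np.dot(a, b) ** 2 <= np.dot(a, a) * np.dot(b, b) + 1e-12

print("Cauchy-Schwarz held in every trial")
```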

This expression is a non-strict inequality. In some cases, we have equality. Suppose a and b are orthogonal (θab = 90°). In this case, the Cauchy-Schwarz inequality is met easily (zero is less than or equal to anything positive).

Suppose a and b are in the same direction (θab = 0°). In this case, the Cauchy-Schwarz inequality is an equality: the upper bound on <a, b> is met. Thus, the maximum value of <a, b> is achieved when a and b are collinear (in the same direction).

The Cauchy-Schwarz inequality as an upper bound on <a, b> is the basis for digital demodulation. If we wished to detect a signal a by taking its inner product with some signal b, the optimal value of b is some scalar multiple of a. (Figure: a detector that takes a as input and outputs <a, b>.)

If we use the inner product of signals, the inner product detector becomes a multiplier followed by an integrator. (Figure: a(t) is multiplied by b(t) and integrated to produce <a, b>.)

So the optimal digital detector is simply an application of the Cauchy-Schwarz inequality. The optimal "local oscillator" signal b(t) is simply whatever signal we wish to detect. Using our previous notation, a(t) is equal to s(t) if there is no noise, or r(t) = s(t) + n(t) if there is noise. The "local oscillator" signal b(t) is simply s(t) [we do not wish to detect the noise component].

(Figure: r(t) is multiplied by s(t) and integrated.) The resultant filter is called a matched filter. We "match" the signal that we wish to detect, s(t), with a "local oscillator" signal s(t).
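A minimal matched-filter sketch in Python (illustrative; the signal shape, noise level, and random seed are my own choices, not from the slides): correlating r(t) with the known s(t) yields a much larger inner product when s(t) is actually present.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 1.0, 1000
t = np.linspace(0.0, T, n, endpoint=False)
dt = T / n

s = np.cos(2 * np.pi * 5 * t)     # signal we wish to detect
noise = 0.5 * rng.normal(size=n)

r_present = s + noise             # s(t) was transmitted
r_absent = noise                  # noise only

def detect(r, s):
    """Matched-filter statistic: the inner product <r, s>."""
    return np.sum(r * s) * dt

print(detect(r_present, s) > detect(r_absent, s))  # True for this realization
```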

Another way to think of the inner product operation or matched filter operation is as a vector projection. Suppose we have two vectors a and b.

The projection of a onto b is the “shadow” a casts on b from “sunlight” directly above.

The magnitude of the projection is the magnitude of a times the cosine of the angle between a and b: |a| cos θab. The projection can be defined in terms of the inner product: |a| cos θab = <a, b> / |b|.

The actual projection itself is a vector in the direction of b. To form this vector, we multiply the magnitude by a unit vector in the direction of b: projb a = (<a, b> / (|b| |b|)) b.

The denominator |b| |b| can be expressed as the magnitude squared of b, or as the inner product of b with itself. If the magnitude of b is unity, the projection becomes simply <a, b> b.

The signal b has unity magnitude in the following matched filter: (Figure: the input signal is multiplied by √(2/T) cos ωct and integrated.) This was the detector with the "normalized" local oscillator.

Let us do an example of projections with signals. Example: "Project" t² onto t. Restrict the interval of the signals to [0, 1]. The projection is (<t², t> / <t, t>) t = (∫₀¹ t³ dt / ∫₀¹ t² dt) t.

Evaluating the integrals, we have (1/4)/(1/3) = 3/4, so the projection is (3/4)t. Let us plot the original function t² and its projection onto t.

The projection (3/4)t can be thought of as the best linear approximation to t² over the interval [0, 1]. (It is the best linear approximation in the minimum mean-squared error sense.) When we project a vector onto another vector, we take the best approximation of the first vector in the direction of the second vector.
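Checking the projection coefficient numerically (Python, illustrative):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1_000_000, endpoint=False)
dt = 1.0 / t.size

num = np.sum(t**2 * t) * dt   # <t^2, t> = integral of t^3 over [0,1] = 1/4
den = np.sum(t * t) * dt      # <t, t>   = integral of t^2 over [0,1] = 1/3

print(round(num / den, 3))    # 0.75
```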

We can also project onto multiple vectors: (Figure: a projected onto both b and c.)

If we were to add the two projections (onto b and c), we would no longer have an approximation to a, but rather we would have exactly a.

Example: Let us project cos(ωct + θ) onto cos(ωct) and sin(ωct).

Evaluating the integrals, we have the projections cos θ cos(ωct) and −sin θ sin(ωct).

Adding the two projections, we have cos θ cos(ωct) − sin θ sin(ωct) = cos(ωct + θ). This should not be surprising, because it is just the angle-addition identity for cosine.
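The same projections, computed numerically (Python, illustrative; the values of ωc and θ are arbitrary choices):

```python
import numpy as np

w, theta, T = 2 * np.pi, 0.7, 1.0       # one period of cos(w t)
t = np.linspace(0.0, T, 100_000, endpoint=False)
dt = T / t.size

x = np.cos(w * t + theta)
cos_b, sin_b = np.cos(w * t), np.sin(w * t)

# Projection coefficients <x, b> / <b, b> for each basis signal
c_cos = np.sum(x * cos_b) * dt / (np.sum(cos_b**2) * dt)
c_sin = np.sum(x * sin_b) * dt / (np.sum(sin_b**2) * dt)

print(round(c_cos, 4), round(c_sin, 4))  # cos(0.7) and -sin(0.7)
```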

Example: Let us project an arbitrary function x(t) onto cos(ωct), cos(2ωct), cos(3ωct), … Restrict the interval of the signals to [0, T], where T is an integral multiple of cycles of cos(ωct). We will be projecting x(t) onto cos(nωct), for n = 1, 2, 3, … These projections will then be summed.

So the summation of the projections is Σn (<x, cos(nωct)> / <cos(nωct), cos(nωct)>) cos(nωct), or Σn an cos(nωct), where an = (2/T) ∫₀ᵀ x(t) cos(nωct) dt.

We have just derived the trigonometric Fourier series (for cosine).
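A small sketch of the construction (Python, illustrative; the test signal is my own choice): projecting a known cosine sum onto cos(nωct) recovers its coefficients.

```python
import numpy as np

w, T = 2 * np.pi, 1.0
t = np.linspace(0.0, T, 100_000, endpoint=False)
dt = T / t.size

# Test signal with known cosine coefficients: a1 = 2, a3 = 0.5
x = 2 * np.cos(w * t) + 0.5 * np.cos(3 * w * t)

# a_n = (2/T) * integral of x(t) cos(n w t) dt over [0, T]
a = [(2 / T) * np.sum(x * np.cos(n * w * t)) * dt for n in range(1, 5)]
print([round(abs(v), 4) for v in a])  # [2.0, 0.0, 0.5, 0.0]
```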

Exercise: Project an arbitrary function x(t) onto sin(nωct), n = 1, 2, 3, … (Complete the trigonometric Fourier series.)

Exercise: Project an arbitrary function x(t) onto e^(jnωct), for n = 0, ±1, ±2, … (Derive the complex exponential Fourier series.)

The previous examples and exercises worked because we were projecting onto orthogonal signals. If the signals were not orthogonal, we could not simply sum the projections and expect to reconstruct the original signal.