1 CHAPTER 3 SIGNAL SPACE ANALYSIS

2 INTRODUCTION – THE MODEL
We consider the following model of a generic digital transmission system:
A message source emits one symbol every T seconds.
Symbols belong to an alphabet of M symbols (m1, m2, …, mM).
Binary source: the symbols are 0 and 1.
Quaternary PCM source: the symbols are 00, 01, 10, 11.

3 TRANSMITTER SIDE
Symbol generation (the message) is probabilistic, with a priori probabilities p1, p2, …, pM.
If the symbols are equally likely, the probability that symbol mi will be emitted is
pi = P(mi emitted) = 1/M,  i = 1, 2, …, M.

4 SIGNAL SPACE: OVERVIEW
What is a signal space? A vector representation of signals in an N-dimensional orthogonal space.
Why do we need a signal space? It is a means to convert signals to vectors and vice versa, and to calculate signal energies and Euclidean distances between signals.
Why are we interested in Euclidean distances between signals? For detection: the received signal is transformed into a received vector, and the signal whose vector has the minimum distance to the received vector is taken as the estimate of the transmitted signal.

6 The transmitter takes the symbol mi (the digital message source output) and encodes it into a distinct signal si(t). The signal si(t) occupies the whole slot T allotted to symbol mi. si(t) is a real-valued energy signal (a signal with finite energy).

7 CHANNEL ASSUMPTIONS
The channel is linear, with a bandwidth wide enough to accommodate the signal si(t) with no or negligible distortion.
The channel noise w(t) is a zero-mean white Gaussian noise process (AWGN). The noise is additive, so the received signal may be expressed as:
x(t) = si(t) + w(t),  0 ≤ t ≤ T,  i = 1, 2, …, M.

8 RECEIVER SIDE
The receiver observes the received signal x(t) for a duration of T seconds and makes an estimate of the transmitted signal si(t) (equivalently, of the symbol mi).
The process is statistical: the presence of noise causes errors, so the receiver has to be designed to minimize the average probability of symbol error:
Pe = Σi=1..M pi · P(m̂ ≠ mi | symbol mi was sent),
where pi is the probability that symbol mi is sent and the second factor is the conditional error probability given that the ith symbol was sent.
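As a quick numeric sketch of this average (Python/numpy, with purely hypothetical priors and conditional error probabilities):

```python
import numpy as np

# Hypothetical a priori probabilities pi and conditional error
# probabilities P(error | mi sent) for an M = 4 symbol alphabet.
p = np.array([0.25, 0.25, 0.25, 0.25])              # equally likely symbols
p_err_given_mi = np.array([0.01, 0.02, 0.02, 0.01])

# Average probability of symbol error: Pe = sum_i pi * P(error | mi sent)
Pe = np.sum(p * p_err_given_mi)
print(Pe)  # 0.015
```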

9 GEOMETRIC REPRESENTATION OF SIGNALS
Objective: to represent any set of M energy signals {si(t)} as linear combinations of N orthonormal basis functions, where N ≤ M.
Given real-valued energy signals s1(t), s2(t), …, sM(t), each of duration T seconds, we write
si(t) = Σj=1..N sij φj(t),  0 ≤ t ≤ T,  i = 1, 2, …, M,
where si(t) is the energy signal, φj(t) is an orthonormal basis function, and sij is the corresponding coefficient.

10 The coefficients are obtained by projecting each signal onto the basis functions:
sij = ∫₀ᵀ si(t) φj(t) dt,  j = 1, 2, …, N.
The real-valued basis functions are orthonormal:
∫₀ᵀ φi(t) φj(t) dt = 1 if i = j, and 0 if i ≠ j.
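A minimal numpy sketch of recovering the coefficients by numerical integration; the two basis functions and the test signal are assumptions chosen for illustration:

```python
import numpy as np

T, Ns = 1.0, 1000                  # symbol duration, samples per symbol
t = np.linspace(0.0, T, Ns, endpoint=False)
dt = T / Ns

# Two orthonormal basis functions on [0, T] (chosen for illustration):
phi1 = np.sqrt(2.0 / T) * np.cos(2 * np.pi * t / T)
phi2 = np.sqrt(2.0 / T) * np.sin(2 * np.pi * t / T)

# A hypothetical energy signal with coefficients (3, -2):
s = 3.0 * phi1 - 2.0 * phi2

# Coefficients recovered by projection: sij = integral of si(t) phij(t) dt
s_1 = np.sum(s * phi1) * dt        # ~ 3.0
s_2 = np.sum(s * phi2) * dt        # ~ -2.0
print(s_1, s_2)
```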

11 The set of coefficients {sij, j = 1, 2, …, N} can be viewed as an N-dimensional vector, denoted by si = [si1, si2, …, siN]ᵀ.
This vector bears a one-to-one relationship with the transmitted signal si(t).

12 Figure: a) Synthesizer for generating the signal si(t). b) Analyzer for generating the set of signal vectors si.

13 So each signal si(t) in the set is completely determined by the vector of its coefficients:
si = [si1, si2, …, siN]ᵀ,  i = 1, 2, …, M.

14 Finally, the signal vector concept can be extended to an N-dimensional Euclidean space (2D, 3D, and so on). It provides the mathematical basis for the geometric representation of energy signals that is used in noise analysis, and allows the definition of (see the sketch below):
the length of a vector (its norm, or absolute value) ||si||;
angles between vectors;
the squared length (the inner product of si with itself) ||si||² = siᵀsi, where ᵀ denotes matrix transposition.
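A short numpy sketch of these vector operations, using hypothetical signal vectors:

```python
import numpy as np

# Hypothetical signal vectors in an N = 3 dimensional signal space.
si = np.array([1.0, 2.0, 2.0])
sk = np.array([2.0, 0.0, 1.0])

length = np.linalg.norm(si)      # length of si: 3.0
inner = si @ sk                  # inner product si^T sk: 4.0
squared = si @ si                # inner product of si with itself: 9.0
cos_angle = inner / (np.linalg.norm(si) * np.linalg.norm(sk))
print(length, inner, squared, cos_angle)
```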

15 Illustrating the geometric representation of signals for the case when N = 2 and M = 3 (a two-dimensional space, three signals).

16 What is the relation between the vector representation of a signal and its energy? Start with the definition of the energy of a signal (5.10):
Ei = ∫₀ᵀ si²(t) dt,
where si(t) is as in (5.5):
si(t) = Σj=1..N sij φj(t).

17 After substitution:
Ei = ∫₀ᵀ [Σj=1..N sij φj(t)] [Σk=1..N sik φk(t)] dt.
After regrouping:
Ei = Σj Σk sij sik ∫₀ᵀ φj(t) φk(t) dt.
Since the φj(t) are orthonormal, we finally have:
Ei = Σj=1..N sij² = ||si||².
The energy of a signal is equal to the squared length of its vector.
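This relation is easy to check numerically; the basis functions and coefficients below are illustrative assumptions:

```python
import numpy as np

T, Ns = 1.0, 1000
t = np.linspace(0.0, T, Ns, endpoint=False)
dt = T / Ns

# Orthonormal basis (assumed) and a signal with coefficients (3, -2):
phi1 = np.sqrt(2.0 / T) * np.cos(2 * np.pi * t / T)
phi2 = np.sqrt(2.0 / T) * np.sin(2 * np.pi * t / T)
s = 3.0 * phi1 - 2.0 * phi2

E_integral = np.sum(s**2) * dt    # E = integral of s^2(t) dt
E_vector = 3.0**2 + (-2.0)**2     # ||si||^2 = sum of squared coefficients
print(E_integral, E_vector)       # both ~ 13.0
```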

18 FORMULAS FOR TWO SIGNALS
Assume we have a pair of signals si(t) and sj(t), each represented by its vector. Then their inner product over [0, T] is
∫₀ᵀ si(t) sj(t) dt = siᵀsj.
The inner product of the signals is equal to the inner product of their vector representations, and it is invariant to the selection of the basis functions.

19 EUCLIDEAN DISTANCE
The Euclidean distance between two points represented by the signal vectors si and sk is ||si − sk||, and its squared value is given by
||si − sk||² = Σj=1..N (sij − skj)² = ∫₀ᵀ (si(t) − sk(t))² dt.

20 ANGLE BETWEEN TWO SIGNALS
The cosine of the angle θik between two signal vectors si and sk is equal to their inner product divided by the product of their norms:
cos θik = siᵀsk / (||si|| ||sk||).
The two signal vectors are therefore orthogonal if their inner product siᵀsk is zero (cos θik = 0).

21 SCHWARZ INEQUALITY
For any pair of energy signals s1(t) and s2(t):
(∫ s1(t) s2(t) dt)² ≤ (∫ s1²(t) dt) (∫ s2²(t) dt),
with equality if and only if s2(t) = c·s1(t) for some constant c. We accept this without proof.
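A quick numerical spot-check of the inequality on two arbitrary sampled signals (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
T, Ns = 1.0, 1000
dt = T / Ns

# Two arbitrary (random) sampled energy signals on [0, T]:
s1 = rng.standard_normal(Ns)
s2 = rng.standard_normal(Ns)

lhs = (np.sum(s1 * s2) * dt) ** 2                    # (integral s1 s2 dt)^2
rhs = (np.sum(s1**2) * dt) * (np.sum(s2**2) * dt)    # E1 * E2
print(lhs <= rhs)                                    # True
```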

22 GRAM-SCHMIDT ORTHOGONALIZATION PROCEDURE
Assume a set of M energy signals denoted by s1(t), s2(t), …, sM(t).
Define the first basis function from s1(t) as
φ1(t) = s1(t) / √E1,
where E1 is the energy of the signal s1(t) (based on 5.12). Then express s1(t) using the basis function and the energy-related coefficient s11 = √E1 as
s1(t) = s11 φ1(t).
Next, using s2(t), define the coefficient s21 as
s21 = ∫₀ᵀ s2(t) φ1(t) dt.

23 If we introduce the intermediate function g2(t) as
g2(t) = s2(t) − s21 φ1(t),
which is orthogonal to φ1(t) (look at 5.23), we can define the second basis function φ2(t) as
φ2(t) = g2(t) / √(∫₀ᵀ g2²(t) dt).
After substituting for g2(t) in terms of s1(t) and s2(t), this becomes
φ2(t) = (s2(t) − s21 φ1(t)) / √(E2 − s21²),
where E2 is the energy of s2(t). Note that φ1(t) and φ2(t) are orthonormal: each has unit energy and
∫₀ᵀ φ1(t) φ2(t) dt = 0.

24 And so on for the N-dimensional space. In general, a basis function can be defined using the following formula:
φi(t) = gi(t) / √(∫₀ᵀ gi²(t) dt),  i = 1, 2, …, N,
where the intermediate functions are
gi(t) = si(t) − Σj=1..i−1 sij φj(t),
with coefficients defined using
sij = ∫₀ᵀ si(t) φj(t) dt,  j = 1, 2, …, i − 1.

25 Special case: for i = 1, gi(t) reduces to si(t).
General case: given the functions gi(t), we can define the set of basis functions, which form an orthonormal set, as
φi(t) = gi(t) / √(∫₀ᵀ gi²(t) dt),  i = 1, 2, …, N.
A sketch of the whole procedure on sampled signals follows.
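A minimal sketch of the Gram-Schmidt procedure on sampled signals, assuming integrals are approximated by Riemann sums; the three test signals are hypothetical:

```python
import numpy as np

def gram_schmidt(signals, dt):
    """Orthonormal basis for a set of sampled energy signals.

    signals: array of shape (M, Ns), one sampled signal per row.
    dt: sample spacing, so that integrals become Riemann sums.
    Returns an (N, Ns) array of orthonormal basis functions, N <= M
    (signals that depend linearly on earlier ones are dropped).
    """
    basis = []
    for s in signals:
        # gi(t) = si(t) - sum_j sij phij(t), with sij = <si, phij>
        g = s.copy()
        for phi in basis:
            g -= (np.sum(s * phi) * dt) * phi
        energy = np.sum(g**2) * dt
        if energy > 1e-12:                      # skip dependent signals
            basis.append(g / np.sqrt(energy))   # phii = gi / sqrt(E_gi)
    return np.array(basis)

# Example: three hypothetical signals on [0, T], only two independent.
T, Ns = 1.0, 1000
t = np.linspace(0.0, T, Ns, endpoint=False)
dt = T / Ns
s1 = np.ones(Ns)
s2 = np.where(t < T / 2, 1.0, -1.0)
s3 = s1 + 0.5 * s2                              # dependent on s1 and s2
phis = gram_schmidt(np.array([s1, s2, s3]), dt)
print(phis.shape)                               # (2, 1000): N = 2 < M = 3
```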

26 CONVERSION OF THE CONTINUOUS AWGN CHANNEL INTO A VECTOR CHANNEL
Suppose now that si(t) is not an arbitrary signal, but specifically the signal transmitted over an AWGN channel, so that the received signal is
x(t) = si(t) + w(t),  0 ≤ t ≤ T.
The output of each correlator (Fig. 5.3b) can then be defined as:
xj = ∫₀ᵀ x(t) φj(t) dt = sij + wj,  j = 1, 2, …, N.

27 In the correlator output xj = sij + wj:
sij is a deterministic quantity, contributed by the transmitted signal si(t);
wj is the sample value of the random variable Wj, due to the noise.

28 Now consider a random process X′(t), with sample function x′(t), which is related to the received signal x(t) as follows:
x′(t) = x(t) − Σj=1..N xj φj(t).
Using xj = sij + wj (5.28) and the expansion (5.5), we get
x′(t) = w(t) − Σj=1..N wj φj(t) = w′(t),
which means that the sample function x′(t) depends only on the channel noise!

29 The received signal can therefore be expressed as:
x(t) = Σj=1..N xj φj(t) + x′(t).
NOTE: this expansion is similar to the one in 5.5, but it is random, due to the additive noise.

30 STATISTICAL CHARACTERIZATION
The received signal (the output of the correlators of Fig. 5.3b) is random. To describe it we need statistical methods: mean and variance. The assumptions are:
X(t) denotes a random process, a sample function of which is represented by the received signal x(t).
Xj denotes the random variable whose sample value is represented by the correlator output xj, j = 1, 2, …, N.
We have assumed AWGN, so the noise is Gaussian and X(t) is a Gaussian process; being a Gaussian random variable, Xj is described fully by its mean value and variance.

31 MEAN VALUE
Let Wj denote the random variable, with sample value wj, produced by the jth correlator in response to the Gaussian noise component w(t). It has zero mean (by definition of the AWGN model), so the mean of Xj depends only on sij:
μXj = E[Xj] = E[sij + Wj] = sij + E[Wj] = sij.

32 VARIANCE
Starting from the definition, and substituting Xj = sij + Wj (5.29) and Wj = ∫₀ᵀ w(t) φj(t) dt (5.31):
σXj² = E[(Xj − sij)²] = E[Wj²] = E[∫₀ᵀ∫₀ᵀ w(t) w(u) φj(t) φj(u) dt du] = ∫₀ᵀ∫₀ᵀ Rw(t, u) φj(t) φj(u) dt du,
where Rw(t, u) is the autocorrelation function of the noise process.

33 Because the noise is stationary with a constant power spectral density, the autocorrelation can be expressed as
Rw(t, u) = (N0/2) δ(t − u).
After substitution, the variance becomes
σXj² = (N0/2) ∫₀ᵀ φj²(t) dt,
and since φj(t) has unit energy, we finally have
σXj² = N0/2.
The correlator outputs Xj all have variance equal to the power spectral density N0/2 of the noise process W(t).
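A Monte Carlo sketch of this result; discrete-time white noise stands in for w(t), and the basis function and N0 value are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
T, Ns, N0 = 1.0, 1000, 2.0
t = np.linspace(0.0, T, Ns, endpoint=False)
dt = T / Ns

# A unit-energy basis function (assumed):
phi = np.sqrt(2.0 / T) * np.sin(2 * np.pi * t / T)

# Discrete-time stand-in for white noise of PSD N0/2: i.i.d. Gaussian
# samples with variance N0/(2*dt), so that Riemann-sum integrals
# reproduce the continuous-time correlator statistics.
trials = 5000
w = rng.normal(0.0, np.sqrt(N0 / (2 * dt)), size=(trials, Ns))

# Correlator noise outputs Wj = integral of w(t) phij(t) dt, per trial
Wj = np.sum(w * phi, axis=1) * dt
print(Wj.var())   # ~ N0/2 = 1.0
```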

34 PROPERTIES
The Xj are mutually uncorrelated.
The Xj are statistically independent (this follows from the above because the Xj are Gaussian), and for a memoryless channel the conditional density of the observation factors:
fX(x | mi) = Πj=1..N fXj(xj | mi),  i = 1, 2, …, M.

35 Define (construct) a vector X of N random variables X1, X2, …, XN, whose elements are independent Gaussian random variables with mean values sij (the deterministic part of the correlator output, determined by the transmitted signal) and variance N0/2 (the random part of the correlator output, the noise added by the channel).
Then the elements X1, X2, …, XN of X are statistically independent, so we can express the conditional probability density of X, given si(t) (correspondingly, symbol mi), as the product of the conditional density functions fXj of its individual elements.
NOTE: this amounts to finding an expression for the probability of a received vector given that a specific symbol was sent, assuming a memoryless channel.

36 …that is:
fX(x | mi) = Πj=1..N fXj(xj | mi),  i = 1, 2, …, M,
where the vector x and the scalars xj are sample values of the random vector X and the random variables Xj.

37 The vector x is called the observation vector.
The scalar xj is called an observable element.
The vector x and the scalar xj are sample values of the random vector X and the random variable Xj.

38 Since each Xj is Gaussian with mean sij and variance N0/2, we can substitute in 5.44 to get 5.46:
fXj(xj | mi) = (πN0)^(−1/2) exp(−(xj − sij)² / N0),
so that
fX(x | mi) = (πN0)^(−N/2) exp(−Σj=1..N (xj − sij)² / N0).
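A minimal sketch of evaluating this conditional density, with hypothetical signal vectors and observation:

```python
import numpy as np

def likelihood(x, s_i, N0):
    """Conditional density fX(x | mi) for the AWGN vector channel:
    each component is Gaussian with mean sij and variance N0/2."""
    return np.prod(np.exp(-(x - s_i) ** 2 / N0) / np.sqrt(np.pi * N0))

# Hypothetical 2-D signal vectors and a noisy observation vector:
s1 = np.array([1.0, 0.0])
s2 = np.array([0.0, 1.0])
x = np.array([0.9, 0.2])
N0 = 0.5
print(likelihood(x, s1, N0), likelihood(x, s2, N0))  # s1 is more likely
```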

39 If we go back to the formulation of the received signal through an AWGN channel (5.34):
x(t) = Σj=1..N xj φj(t) + w′(t).
Only the projections of the noise onto the basis functions of the signal set {si(t)}, i = 1, …, M, affect the statistics of the detection problem; the remainder of the noise is irrelevant. The observation vector that we have constructed fully defines this relevant part.

40 Finally: the AWGN channel is equivalent to an N-dimensional vector channel, described by the observation vector
x = si + w,  i = 1, 2, …, M.

41 BIG PICTURE: DETECTION UNDER AWGN

42 ADDITIVE WHITE GAUSSIAN NOISE (AWGN)
Thermal noise is described by a zero-mean Gaussian random process n(t) that ADDS onto the signal, hence "additive".
Probability density function (Gaussian):
f(n) = (1/√(2πσ²)) exp(−n²/(2σ²)).
Power spectral density (flat, hence "white"):
Sn(f) = N0/2 [W/Hz].
Autocorrelation function (a spike at lag 0, uncorrelated at any non-zero lag):
Rn(τ) = (N0/2) δ(τ).
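A discrete-time sketch of the "uncorrelated at non-zero lag" property; the continuous-time spike δ(τ) becomes a spike at lag 0:

```python
import numpy as np

rng = np.random.default_rng(2)
n = rng.normal(0.0, 1.0, size=100_000)  # discrete-time white Gaussian noise

# Sample autocorrelation: a spike at lag 0, ~0 at every non-zero lag.
for lag in range(4):
    r = np.mean(n[: len(n) - lag] * n[lag:])
    print(lag, round(r, 3))  # ~1.0 at lag 0, ~0.0 otherwise
```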

43 EFFECT OF NOISE IN SIGNAL SPACE
The noise cloud around each signal point falls off exponentially with squared distance (Gaussian).
The vector viewpoint can be used in signal space, with a random noise vector w added to the signal vector.

44 MAXIMUM LIKELIHOOD (ML) DETECTION: SCALAR CASE
The conditional densities f(x | uA) and f(x | uB) are the "likelihoods". Assuming both symbols are equally likely, uA is chosen if
f(x | uA) > f(x | uB).
For Gaussian noise, taking the log-likelihood turns this into a simple distance criterion: choose uA if (x − uA)² < (x − uB)², i.e., pick the symbol level closest to the received sample.
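A minimal sketch of the resulting minimum-distance decision rule; the levels and samples are hypothetical:

```python
import numpy as np

def ml_decide(x, levels):
    """Minimum-distance decision: ML under AWGN with equal priors."""
    levels = np.asarray(levels, dtype=float)
    return levels[np.argmin((x - levels) ** 2)]

# Hypothetical binary levels uA = +1 and uB = -1, and noisy samples:
print(ml_decide(0.3, [+1.0, -1.0]))    # +1.0 (closer to uA)
print(ml_decide(-0.7, [+1.0, -1.0]))   # -1.0 (closer to uB)
```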

47 CORRELATOR RECEIVER
The matched filter output at the sampling time can be realized as the correlator output: matched filtering, i.e. convolution with si*(T − τ) sampled at t = T, simplifies to integration with si*(τ), i.e. correlation or an inner product!
Recall: the correlation operation is the projection of the received signal onto the signal space.
Key idea: reject the noise outside this space as irrelevant, thereby maximizing S/N.
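A numerical sketch of this equivalence for a real-valued reference signal; the signal shape and noise level are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
Ns = 200
s = np.sin(2 * np.pi * np.arange(Ns) / Ns)   # real reference signal si
x = s + 0.1 * rng.standard_normal(Ns)        # received signal (assumed)

# Correlator output: inner product of x with si over the symbol interval.
corr_out = np.sum(x * s)

# Matched filter h[n] = s[Ns-1-n] (real s, so conjugation drops out);
# its convolution with x, sampled at n = Ns-1, equals the correlation.
h = s[::-1]
mf_out = np.convolve(x, h)[Ns - 1]
print(np.isclose(corr_out, mf_out))          # True
```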

48 IRRELEVANCE THEOREM: NOISE OUTSIDE SIGNAL SPACE
The noise PSD is flat ("white"), so the total noise power across the whole spectrum is infinite. We care only about the noise projected onto the finite signal dimensions (e.g., the bandwidth of interest); the rest is irrelevant to detection.

49 ASIDE: CORRELATION EFFECT
Correlation is a measure of the similarity between two signals as a function of the time shift ("lag", τ) between them.
Correlation is a maximum when the two signals are similar in shape and in phase (or "unshifted" with respect to each other): their product is then all positive, like constructive interference.
The breadth of the correlation function (the range of lags over which it has significant value) shows for how long the signals remain similar.

50 A CORRELATION RECEIVER
Figure: a correlation receiver. The received signal feeds two integrators; their outputs are sampled every Tb seconds, combined (+/−), and passed to a threshold device (A/D) for the decision.

51 INTEGRATE AND DUMP CORRELATION RECEIVER
Figure: integrate-and-dump correlation receiver. The signal z(t) plus white Gaussian noise n(t) passes through a filter (to limit the noise power) and a high-gain amplifier into an RC integrator whose switch is closed (dumped) every Tb seconds; a threshold device (A/D) then makes the decision.
The bandwidth of the filter preceding the integrator is assumed to be wide enough to pass z(t) without distortion.

