# Modulated Digital Transmission


Digital modulation is the process of using digital information to alter or modulate the amplitude, phase or frequency of a sinewave. As an example, suppose we wished to use digital information to modulate the amplitude of a sinewave. The result is called On-Off Keying (OOK) and is shown on the following page.

[Figure: a digital bit sequence and the resulting OOK waveform]

To generate an OOK signal, we simply multiply the digital (baseband) signal $b(t) \in \{0, 1\}$ by the unmodulated carrier: $s(t) = b(t)\cos\omega_c t$.
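This multiplication can be sketched in a few lines of pure Python. The function name and the parameters (an illustrative carrier of `fc` cycles per bit period, 64 samples per bit) are hypothetical choices, not part of the original material:

```python
import math

def ook_modulate(bits, fc=4, samples_per_bit=64):
    """Multiply each bit (0 or 1) by samples of the carrier cos(2*pi*fc*t).

    fc is the number of carrier cycles per bit period (an illustrative choice).
    """
    signal = []
    for bit in bits:
        for k in range(samples_per_bit):
            t = k / samples_per_bit          # time within the bit period
            signal.append(bit * math.cos(2 * math.pi * fc * t))
    return signal

waveform = ook_modulate([1, 0, 1, 1])
# During a 0 bit the carrier is switched off, so those samples are all zero.
```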

In a variation of OOK, we multiply the sinewave by a bipolar or antipodal version of the digital waveform: instead of the levels 1 and 0 volts, the waveform takes the values $+1$ and $-1$ volts.

The resultant modulated waveform is actually a form of phase modulation called BPSK (binary phase-shift keying).

BPSK has two phases: 0° and 180° corresponding to logic one and logic zero respectively.

Just as we can have two phases with BPSK, we can have four phases with QPSK.
[Figure: the four QPSK carrier phases 0°, 90°, 180°, and 270°]

Since we have four phases, we cannot simply assign these phases to logic one and logic zero. Instead, we assign each of the four phases to a pair of bits: 01, 11, 00, and 10.

Thus, we modulate the following digital waveform using QPSK:
[Figure: a digital waveform and the corresponding QPSK-modulated carrier]

The digital modulation processes are fairly simple: simply multiply the bits by sinewaves (or cosinewaves). In the case of QPSK, we multiply the odd bits by sinewaves and the even bits by cosinewaves and add them together. The demodulation processes are very similar to those of DSB-SC AM. In DSB-SC AM, we multiplied the modulated carrier by a sinewave (at the carrier frequency) and then low-pass filtered the product.

In DSB-SC demodulation, the received signal $x_c(t)$ is multiplied by $\cos\omega_c t$ and passed through a low-pass filter (LPF). The digital demodulation process is very similar, except that, instead of a low-pass filter, we use an integrator.

The digitally-modulated signal is multiplied by the carrier and then integrated. We must also interpret the output of the integrator appropriately: the value of the output will determine whether the input signal corresponds to a one or a zero.

Let $s(t)$ be the modulated signal. For OOK, we have

$$s(t) = \begin{cases} \cos\omega_c t & \text{(logic one)} \\ 0 & \text{(logic zero).} \end{cases}$$

The digital demodulator would look like the following.

The demodulator multiplies $s(t)$ by $\cos\omega_c t$ and integrates the product. The product $s(t)\cos\omega_c t$ is denoted by $d_s(t)$, and we denote the output of the integrator by $\bar{s}$.

Now, if $s(t) = \cos\omega_c t$ (logic one), what would $d_s(t)$ and $\bar{s}$ be? We have $d_s(t) = \cos^2\omega_c t = \tfrac{1}{2}(1 + \cos 2\omega_c t)$.

Now the integral is taken over a period $T$ corresponding to the time to transmit a single bit. The bit period $T$ is also an integral multiple of periods of the carrier: $\omega_c T = 2\pi k$ for some integer $k$.

If this is true, we have

$$\int_0^T \cos^2\omega_c t\,dt = \frac{T}{2}, \qquad \int_0^T \cos 2\omega_c t\,dt = 0.$$

So,

$$\bar{s} = \begin{cases} T/2 & \text{(logic one)} \\ 0 & \text{(logic zero).} \end{cases}$$

Now, we need to interpret this integrator output. To do this interpretation, all we need to do is look at what the demodulator does: it converts a modulated sinewave to one of two values: {T/2, 0}. The resultant output is just like a baseband digital signal. To interpret this digital signal, we simply set a threshold (typically at the halfway point): anything above this threshold is considered to be a logic one, and anything below this threshold is considered to be a logic zero.
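The integrate-and-dump detection described above can be checked numerically. In this sketch, a Riemann sum stands in for the integrator; the values of `T`, `fc`, and `N` are illustrative choices, not from the original slides:

```python
import math

T = 1.0            # bit period (arbitrary units)
fc = 8             # integer number of carrier cycles per bit period
N = 1000           # samples used to approximate the integral
dt = T / N

def demodulate(s):
    """Approximate s_bar = integral over [0, T] of s(t) * cos(wc * t) dt."""
    wc = 2 * math.pi * fc / T
    return sum(s(k * dt) * math.cos(wc * k * dt) * dt for k in range(N))

one  = demodulate(lambda t: math.cos(2 * math.pi * fc * t / T))  # logic one
zero = demodulate(lambda t: 0.0)                                  # logic zero

threshold = T / 4
print(one, zero)                 # roughly T/2 = 0.5 and 0
print(one > threshold, zero > threshold)
```

Because the bit period spans an integer number of carrier cycles, the sum of $\cos^2$ comes out to almost exactly $T/2$.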

So, for the OOK output $\bar{s}$, we perform the following comparison: if $\bar{s} > T/4$, decide logic one; if $\bar{s} < T/4$, decide logic zero.

We can use the same demodulator for BPSK, where

$$s(t) = \begin{cases} +\cos\omega_c t & \text{(logic one)} \\ -\cos\omega_c t & \text{(logic zero).} \end{cases}$$

The product and the output of the integrator become $d_s(t) = \pm\cos^2\omega_c t$ and $\bar{s} = \pm T/2$.

Or, $\bar{s} = +T/2$ for logic one and $\bar{s} = -T/2$ for logic zero.

The interpretation or comparison of $\bar{s}$ for BPSK becomes: if $\bar{s} > 0$, decide logic one; if $\bar{s} < 0$, decide logic zero.

We now have detection thresholds for the outputs of digital demodulators.
The next step is to determine the bit error-rates (BERs) for these demodulators/detectors. We can determine the bit error-rates in much the same way that we determined them for baseband digital transmission, reception and detection.

For baseband digital transmission, we took the bit error-rate to be $\mathrm{BER} = Q(d/\sigma_n)$, where $d$ is the distance between either logic level and the detection threshold (e.g., 2.5 volts), and where $\sigma_n$ is the standard deviation of the noise (the square root of the variance of the noise).
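The Gaussian tail probability $Q(x)$ can be evaluated with the standard library's complementary error function, using the identity $Q(x) = \tfrac{1}{2}\,\mathrm{erfc}(x/\sqrt{2})$. The numbers below ($d = 2.5$ volts, $\sigma_n = 1$ volt) are illustrative, not from the slides:

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# Example: d = 2.5 volts, sigma_n = 1 volt (illustrative numbers)
ber = Q(2.5 / 1.0)
print(ber)   # about 6.2e-3
```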

For modulated digital transmission and reception the formula is much the same: $\mathrm{BER} = Q(d/\sigma_{n_d})$, where $d$ is the distance between the detected logic level (e.g., $T/2$) and the threshold (e.g., $T/4$), and where $\sigma_{n_d}$ is the standard deviation of the detected noise.

All we need to do is find the detected noise variance $\sigma_{n_d}^2$.

We now normalize the amplitudes: the modulated signal for BPSK becomes $s(t) = \pm\sqrt{2/T}\cos\omega_c t$, and for OOK, $s(t) = \sqrt{2/T}\cos\omega_c t$ or $0$. We will also change the signal for the local oscillator in the demodulator to $\sqrt{2/T}\cos\omega_c t$ as well.

Given the new values for $s(t)$ and the new demodulator, the new values for $d_s(t)$ and $\bar{s}$ become, for OOK,

$$\bar{s} = \int_0^T \frac{2}{T}\cos^2\omega_c t\,dt = 1 \quad \text{(logic one)}, \qquad \bar{s} = 0 \quad \text{(logic zero)}.$$

Since we now have $\bar{s} \in \{1, 0\}$, the detection threshold is now at ½.

For BPSK and the same demodulator, we have $\bar{s} = \pm 1$. The detection threshold remains at zero.

Now, what happens when we pass noise through such a demodulator? The noise $n(t)$ is multiplied by the local oscillator and integrated; the product is denoted by $d_n(t)$ and the integrator output by $\bar{n}$.

The output of the demodulator is

$$\bar{n} = \int_0^T n(t)\,\sqrt{\tfrac{2}{T}}\cos\omega_c t\,dt.$$

The variance of $\bar{n}$ is the average of the square of $\bar{n}$:

$$\sigma_{\bar{n}}^2 = E[\bar{n}^2] = \frac{2}{T}\int_0^T\!\!\int_0^T E[n(t)n(s)]\cos\omega_c t\,\cos\omega_c s\,dt\,ds.$$

Now let us examine $E[n(t)n(s)]$.

This quantity, the average of the product of a function with itself at a different time, is called the autocorrelation of $n(t)$. We denote the autocorrelation of $n(t)$ by $R_n$. If the noise is something called wide-sense stationary, the autocorrelation is dependent only upon the time difference between $t$ and $s$: $E[n(t)n(s)] = R_n(t - s)$.

As it turns out, the Fourier transform of the autocorrelation is the power spectral density (this is the Wiener–Khinchine theorem). When we take the Fourier transform of $R_n(\tau)$, we get the power spectral density $S_n(\omega)$.

The nature of $R_n(\tau)$ depends upon the nature of the power spectral density of $n(t)$. Suppose $n(t)$ is Gaussian white noise with power spectral density $S_n(\omega) = N_0/2$.

The inverse Fourier transform of a constant is an impulse function: $R_n(\tau) = \frac{N_0}{2}\delta(\tau)$. So, $E[n(t)n(s)] = \frac{N_0}{2}\delta(t - s)$.

So the variance of the detected noise becomes

$$\sigma_{\bar{n}}^2 = \frac{2}{T}\int_0^T\!\!\int_0^T \frac{N_0}{2}\delta(t - s)\cos\omega_c t\,\cos\omega_c s\,dt\,ds.$$

Using the sifting property of delta functions, we have

$$\sigma_{\bar{n}}^2 = \frac{2}{T}\cdot\frac{N_0}{2}\int_0^T \cos^2\omega_c t\,dt = \frac{2}{T}\cdot\frac{N_0}{2}\cdot\frac{T}{2} = \frac{N_0}{2}.$$

We can now calculate the bit error rate for OOK and BPSK. For OOK, the distance from either level (1 or 0) to the threshold is $d = 1/2$, so $\mathrm{BER} = Q\big(\sqrt{1/(2N_0)}\big)$; for BPSK, $d = 1$, so $\mathrm{BER} = Q\big(\sqrt{2/N_0}\big)$.
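The BPSK result can be sanity-checked by Monte Carlo simulation, assuming (as above) normalized integrator outputs of $\pm 1$ and detected-noise standard deviation $\sqrt{N_0/2}$. The value $N_0 = 0.5$ and the random seed are arbitrary illustrative choices:

```python
import math
import random

def Q(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

random.seed(1)
N0 = 0.5                      # noise spectral height (illustrative)
sigma = math.sqrt(N0 / 2)     # detected-noise standard deviation
n_bits = 200_000

# BPSK: integrator output is +1 or -1, detection threshold at 0.
errors = 0
for _ in range(n_bits):
    s = random.choice((-1.0, 1.0))
    r = s + random.gauss(0.0, sigma)
    if (r > 0) != (s > 0):
        errors += 1

simulated = errors / n_bits
theory = Q(1 / sigma)         # Q(sqrt(2/N0))
print(simulated, theory)      # the two values should be close
```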

Now, let us do another variation of OOK and BPSK:

We have introduced a factor $A$ into the amplitude of the signals, so that, for example, the BPSK signal is $s(t) = \pm A\sqrt{2/T}\cos\omega_c t$. We shall update the demodulator appropriately: the local oscillator becomes $A\sqrt{2/T}\cos\omega_c t$.

The outputs of the demodulator become $\bar{s} \in \{A^2, 0\}$ for OOK and $\bar{s} = \pm A^2$ for BPSK.

The bit error rates are now $Q\big(\sqrt{A^2/(2N_0)}\big)$ for OOK and $Q\big(\sqrt{2A^2/N_0}\big)$ for BPSK.

Finally, let us let $A = \sqrt{E_s}$, so that the transmitted sinewave has amplitude $\sqrt{2E_s/T}$. The demodulator outputs become $\bar{s} \in \{E_s, 0\}$ for OOK and $\bar{s} = \pm E_s$ for BPSK.

The bit error rates are now $Q\big(\sqrt{E_s/(2N_0)}\big)$ for OOK and $Q\big(\sqrt{2E_s/N_0}\big)$ for BPSK.

The only significance of letting $A = \sqrt{E_s}$ is in the energy. For OOK, we have

$$\int_0^T \left(\sqrt{\tfrac{2E_s}{T}}\cos\omega_c t\right)^2 dt = \frac{2E_s}{T}\cdot\frac{T}{2} = E_s$$

for a logic one, and zero energy for a logic zero.

For BPSK, we have energy $E_s$ for both logic one and logic zero. Thus, $E_s$ is the (maximum) signal energy.

Exercise: Suppose we transmitted $5\cos\omega_c t$ for logic one and $15\cos\omega_c t$ for logic zero. Find the maximum signal energy. Design a demodulator and show the detection criterion for the output of the demodulator. Finally, find the bit error-rate.

QPSK

OOK and BPSK each deal with two different modulated signals: 0 volts or a sinewave for OOK, and two phases of a sinewave for BPSK. In each case, the two modulated signals could be thought of as different in amplitude. QPSK is fundamentally different from OOK and BPSK in that there are four signals in two dimensions to consider.

[Figure: QPSK constellation with bit pairs 01, 11, 00, 10]

We could switch the sine and the cosine components:
[Figure: QPSK constellation with the sine and cosine components switched]

The phases are distributed about a circle.
sin ct 01 00 11 cos ct 10

We could start the phase at 45° instead of 0°.
sin ct 01 11 cos ct 00 10

The QPSK signal can be generated as a sum of two BPSK signals: one BPSK signal is sine-modulated, the other BPSK signal is cosine-modulated:

$$s(t) = o(t)\cos\omega_c t + e(t)\sin\omega_c t.$$

The signals $o(t)$ and $e(t)$ are the antipodal (bipolar) versions of the odd and even digital signals [$o(t), e(t) = \pm 1$].

Based upon the expression for $s(t)$, the QPSK modulator multiplies the odd bits $o(t)$ by $\cos\omega_c t$, multiplies the even bits $e(t)$ by $\sin\omega_c t$, and sums the two products.

Now, let us design a demodulator for QPSK.
Since QPSK modulation is like two BPSK modulators, we might guess that QPSK demodulation can be performed using two BPSK demodulators. Let us call the output of the two demodulators s1 and s2.

cos ct o(t) X s1 s(t) e(t) X s2 sin ct

The idea behind this demodulator is that the upper half demodulates the odd bits and the lower half demodulates the even bits. The question is: will the odd-bit portion of the signal $s(t)$ bleed through to the output of the even-bit (lower-half) portion of the demodulator? [Also, will the even-bit portion of the signal bleed through to the output of the odd-bit (upper-half) portion of the demodulator?] The answer to this question is negative, because of something called orthogonality.

We have found that $\int_0^T \cos^2\omega_c t\,dt = T/2$. In a similar fashion to the verification of the above statement, we also have (exercise) $\int_0^T \sin^2\omega_c t\,dt = T/2$.

Now, what happens when we take the integral of the product of the cosine term and the sine term: $\int_0^T \cos\omega_c t\,\sin\omega_c t\,dt$?

As it turns out (exercise), $\int_0^T \cos\omega_c t\,\sin\omega_c t\,dt = 0$. Sine and cosine are said to be orthogonal. Thus, the cosine portion of the signal will not bleed through to the sine portion of the demodulator, and vice-versa.

The outputs $s_1$ and $s_2$ are compared against zero in order to determine the odd bits and the even bits, respectively: $s_1 > 0$ gives an odd bit of one, and $s_2 > 0$ gives an even bit of one.
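The two-branch demodulator, and the fact that orthogonality keeps the branches from interfering, can be sketched numerically. The parameters (`fc`, sample count, integrating over full carrier cycles) are illustrative assumptions:

```python
import math

fc, N = 4, 1024                  # carrier cycles and samples per symbol
dt = 1.0 / N

def qpsk_symbol(o, e):
    """One QPSK symbol with antipodal amplitudes o, e = +/-1."""
    return [o * math.cos(2 * math.pi * fc * k * dt)
            + e * math.sin(2 * math.pi * fc * k * dt) for k in range(N)]

def qpsk_demodulate(samples):
    """Correlate with cos and sin; the cross-terms vanish by orthogonality."""
    s1 = sum(x * math.cos(2 * math.pi * fc * k * dt) * dt
             for k, x in enumerate(samples))
    s2 = sum(x * math.sin(2 * math.pi * fc * k * dt) * dt
             for k, x in enumerate(samples))
    return (1 if s1 > 0 else 0), (1 if s2 > 0 else 0)

print(qpsk_demodulate(qpsk_symbol(-1, +1)))   # recovers the bit pair (0, 1)
```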

Now, let us begin to calculate the bit error-rate (BER) for QPSK.
To calculate the bit error-rate, let us first see what happens to a noisy signal passing through the demodulator.

cos ct X r1=s1+n1 r(t)=s(t)+n(t) X r2=s2+n2 sin ct

The outputs r1 and r2 can be thought of as noisy versions of s1 and s2.
The values of s1 and s2 are ±1. The quantities r1 and r2 are Gaussian-distributed random variables whose means are ±1. The variances of r1 and r2 are N0/2 (see Slide 40). s1 and s2 can be thought of as horizontal and vertical coordinates. s1 is the output of the “cosine” demodulator and s2 is the output of the “sine” demodulator. Plotting s1 and s2 as “x” and “y” components, we have a familiar-looking diagram.

[Figure: constellation diagram in the $s_1$–$s_2$ plane, with points $(\pm 1, \pm 1)$ labeled 01, 11, 00, and 10]

This diagram is called a constellation diagram. The constellation diagram shows magnitudes and phases corresponding to amplitudes and phases of a modulated sinewave, as well as the outputs of the cosine and sine demodulators. [The term constellation diagram comes from its appearance as a star chart.] Since $r_1$ and $r_2$ are noisy versions of $s_1$ and $s_2$, the density functions for $r_1$ and $r_2$ are shown on the following slides.

The probability density functions for r1 and r2 can be considered to be horizontal and vertical “slices” of a two-dimensional joint probability density function p(r1,r2). The two-dimensional probability density function is shown on the following slide:

The one-dimensional “slices” corresponding to the probability density functions for r1 and r2 are shown on the following slides.

The peaks of the function p(r1,r2) correspond to the points on the constellation diagram.

By looking at the “slices” and the two-dimensional distributions, we see that there are two noise dimensions. Errors can occur due to “horizontal noise” n1 or due to “vertical noise” n2 . For example, if n1 is sufficiently negative, the constellation point 11 could drift into the 01 area. Similarly, if n2 is sufficiently negative, the constellation point 11 could drift into the 10 area.

[Figure: constellation point 11 with noise components $n_1$ and $n_2$ in the $r_1$–$r_2$ plane]

In order for there to be no error, the horizontal noise (n1) and the vertical noise (n2) must be less than the distance to the boundary.

Let $p$ be the probability that the noise in one dimension exceeds the distance (one) to the decision boundary: $p = Q\big(1/\sqrt{N_0/2}\big) = Q\big(\sqrt{2/N_0}\big)$.

So, we have $P_e = 1 - (1 - p)^2 = 2p - p^2$. This is the probability of error, not the bit error-rate! In the case of QPSK, whenever a horizontal or vertical error is made, only one out of two bits is in error.

So for QPSK, $\mathrm{BER} = \tfrac{1}{2}(2p - p^2) = p - \tfrac{p^2}{2}$. This BER has an interesting approximation: if $p$ is small, $p^2$ is smaller still and can be neglected, so $\mathrm{BER} \approx p$.
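The quality of this approximation is easy to check numerically; $N_0 = 0.5$ is an arbitrary illustrative value:

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

N0 = 0.5                       # illustrative noise level
p = Q(math.sqrt(2 / N0))       # one-dimensional error probability
ber_exact = p - p * p / 2      # half the symbol-error probability
ber_approx = p                 # neglecting the p**2 term
print(ber_exact, ber_approx)
```

For small $p$ the two values differ only by a relative amount of about $p/2$.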

By using similar geometric arguments, we can find the bit error-rates for other constellation diagrams. Example: Find the BER for the MODEM with the following constellation diagram: [Figure: constellation diagram in the $r_1$–$r_2$ plane]

Let

We can work with more complicated constellation diagrams in the same manner.
Example: Find the BER for the MODEM with the following constellation diagram: [Figure: constellation points spaced a distance $d$ apart along a horizontal line]. The only error is horizontal.

The probability of error is different for each constellation point. If we transmit a 0, an error could be made if the (horizontal) noise is greater than $d/2$. If we transmit a 1, an error could be made if the (horizontal) noise is greater than $d/2$ or if the (horizontal) noise is less than $-d/2$. Let $p = Q\big(\tfrac{d/2}{\sigma_n}\big)$.

If each of the constellation points is equally likely to be transmitted, then the endpoints err with probability $p$ and the interior points err with probability $2p$, so $P_e = \tfrac{3}{2}p$. Finally, the BER is $\tfrac{1}{2}P_e = \tfrac{3}{4}p$, since each (horizontal) error changes only one of the two bits when the points are appropriately coded.
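A Monte Carlo check of the error-averaging argument for four equally spaced, equally likely points on a line (the spacing, noise level, and seed below are illustrative): the endpoints cross one boundary, the interior points two, so the average symbol-error rate should come out near $\tfrac{3}{2}\,Q\big(\tfrac{d/2}{\sigma_n}\big)$.

```python
import math
import random

def Q(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

random.seed(7)
d, sigma = 2.0, 0.8            # point spacing and noise std (illustrative)
points = [0.0, d, 2 * d, 3 * d]
trials, errors = 200_000, 0
for _ in range(trials):
    i = random.randrange(4)                      # equally likely points
    r = points[i] + random.gauss(0.0, sigma)     # add horizontal noise
    decided = min(range(4), key=lambda j: abs(r - points[j]))
    if decided != i:
        errors += 1

p = Q(d / (2 * sigma))
print(errors / trials, 1.5 * p)   # simulated vs analytic symbol-error rate
```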

Example: Find the BER for the MODEM with the following constellation diagram:
[Figure: eight constellation points, labeled 0 through 7, with both horizontal and vertical neighbors]

Here, we have horizontal and vertical errors.
Let the horizontal error probabilities be defined as before. We also have the corresponding no-error probabilities for each point.

Now, for vertical errors, we have
For no error, we must have no horizontal error and no vertical error:

Similarly for the other points. If all of the constellation points are equally likely, then the overall probability of error is the average over the points.

Finally, we obtain the BER. The reason for the factor of 1/3 is that an error (horizontal or vertical) results in only one out of three bits being in error if appropriately coded (exercise).

Exercise: Find the BER for the MODEM with the following constellation diagram:
[Figure: sixteen constellation points, labeled 0 through 15, arranged with horizontal and vertical neighbors]