1
Digital communications: Fundamentals and Applications
by Bernard Sklar
2
Digital communication system
Important features of a DCS: The transmitter sends a waveform from a finite set of possible waveforms during a limited time The channel distorts, attenuates the transmitted signal The receiver decides which waveform was transmitted given the distorted/noisy received signal. There is a limit to the time it has to do this task. The probability of an erroneous decision is an important measure of system performance Lecture 1
3
Digital versus analog Advantages of digital communications:
Regenerator receiver Regenerative repeaters restore the original pulse shape before distortion accumulates with propagation distance (figure: original pulse vs. regenerated pulse vs. propagation distance). Different kinds of digital signals (voice, data, media) are treated identically: a bit is a bit! (Page 3-4) Lecture 1
4
Classification of signals
Deterministic and random signals Deterministic signal: No uncertainty with respect to the signal value at any time. Random signal: Some degree of uncertainty in signal values before it actually occurs. Thermal noise in electronic circuits due to the random movement of electrons. See my notes on Noise Reflection of radio waves from different layers of ionosphere Interference p14 Lecture 1
5
Classification of signals …
Periodic and non-periodic signals Analog and discrete signals A non-periodic signal A periodic signal A discrete signal Analog signals P14. Discrete exists only at discrete times Lecture 1
6
Classification of signals ..
Energy and power signals A signal is an energy signal if, and only if, it has nonzero but finite energy for all time: A signal is a power signal if, and only if, it has finite but nonzero power for all time: General rule: Periodic and random signals are power signals. Signals that are both deterministic and non-periodic are energy signals. P16. Check last item! Lecture 1
7
Random process Lecture 1
A random process is a collection (ensemble) of time functions, or signals, corresponding to various outcomes of a random experiment. For each outcome, there exists a deterministic function, which is called a sample function or a realization. Random variables time (t) Real number Sample functions or realizations (deterministic function) p22 Lecture 1
8
Random process … Lecture 1
Strictly stationary: If none of the statistics of the random process are affected by a shift in the time origin. Wide sense stationary (WSS): If the mean and autocorrelation functions do not change with a shift in the origin time. Cyclostationary: If the mean and autocorrelation functions are periodic in time. Ergodic process: A random process is ergodic in mean and autocorrelation, if and respectively. In other words, you get the same result from averaging over the ensemble or over all time. Strict? Lecture 1
9
Autocorrelation Autocorrelation of an energy signal
Autocorrelation of a power signal For a periodic signal: Autocorrelation of a random signal For a WSS process: Lecture 1
10
Spectral density
Energy signals are characterized by their energy spectral density (ESD); power signals by their power spectral density (PSD); a random process by the PSD of the process. Lecture 1
11
Properties of an autocorrelation function
For real-valued (and WSS in case of random signals): Autocorrelation and spectral density form a Fourier transform pair. – see Linear systems, noise Autocorrelation is symmetric around zero. Its maximum value occurs at the origin. Its value at the origin is equal to the average power or energy. Lecture 1
12
Noise in communication systems
Thermal noise is described by a zero-mean Gaussian random process, n(t). Its PSD is flat, hence it is called white noise. σ is the standard deviation and σ² is the variance of the random process. [W/Hz] (Figure: power spectral density, autocorrelation function, and probability density function of white Gaussian noise.) Lecture 1
13
Signal transmission through linear systems
Deterministic signals: Random signals: Ideal distortionless transmission: All the frequency components of the signal not only arrive with an identical time delay, but also are amplified or attenuated equally. AKA "linear phase" or "constant group delay". Input Output Linear system - see my notes Lecture 1
14
Signal transmission … - cont’d
Ideal filters (low-pass, band-pass, high-pass) are non-causal! Realizable filters: RC filters, Butterworth filters. Lecture 1
15
Bandwidth of signal Baseband versus bandpass: Bandwidth dilemma:
Bandlimited signals are not realizable! Realizable signals have infinite bandwidth! We approximate “Band-Limited” in our analysis! Baseband signal Bandpass Local oscillator Lecture 1
16
Bandwidth of signal … Different definitions of bandwidth: Lecture 1
(a) Half-power bandwidth (b) Noise-equivalent bandwidth (c) Null-to-null bandwidth (d) Fractional power containment bandwidth (e) Bounded power spectral density (e.g. down 50 dB) (f) Absolute bandwidth Lecture 1
17
Formatting and transmission of baseband signal
A Digital Communication System (block diagram): analog, textual, or digital information source → format → sample → quantize → encode → pulse modulate → transmit (the bit stream becomes baseband waveforms) → channel → receive → demodulate/detect → decode → low-pass filter → format → information sink. Lecture 2
18
Format analog signals To transform an analog waveform into a form that is compatible with a digital communication system, the following steps are taken: Sampling – See my notes on Sampling Quantization and encoding Baseband transmission Lecture 2
19
Sampling See my notes on Fourier Series, Fourier Transform and Sampling Time domain Frequency domain Lecture 2
20
Aliasing effect LP filter Nyquist rate aliasing Lecture 2
21
Sampling theorem
Sampling theorem Sampling theorem: A band-limited signal with no spectral components beyond fm Hz can be uniquely determined by values sampled at uniform intervals of Ts ≤ 1/(2 fm) seconds. The sampling rate fs = 2 fm is called the Nyquist rate. Sampling process Analog signal Pulse amplitude modulated (PAM) signal Lecture 2
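As a quick numerical check of the sampling theorem, the sketch below (Python; the single-tone signal at fm = 10 Hz and the sampling rate 2.5·fm are assumptions for illustration) reconstructs the signal from its uniform samples with the ideal sinc interpolation formula.

```python
import numpy as np

fm = 10.0              # highest frequency in the signal [Hz] (assumed)
fs = 2.5 * fm          # sampling rate, above the Nyquist rate 2*fm (assumed)
Ts = 1.0 / fs

t = np.linspace(0.0, 1.0, 10_000)                # dense "continuous" time axis
x = np.cos(2 * np.pi * fm * t)                   # band-limited test signal

k = np.arange(int(round(fs)))                    # one second of sample indices
x_k = np.cos(2 * np.pi * fm * k * Ts)            # uniformly spaced samples (a PAM sequence)

# Ideal reconstruction: x(t) = sum_k x[k] * sinc((t - k*Ts) / Ts)
x_rec = sum(xk * np.sinc((t - kk * Ts) / Ts) for kk, xk in zip(k, x_k))

mid = (t > 0.2) & (t < 0.8)                      # ignore edge effects of the truncated sinc sum
print("approximation error in the central region:", np.max(np.abs(x_rec - x)[mid]))
```

The residual error comes only from truncating the (infinite) interpolation sum to one second of samples; sampling below 2·fm instead would produce aliasing that no interpolator can undo.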
22
Quantization Amplitude quantizing: Mapping samples of a continuous amplitude waveform to a finite set of amplitudes. In Out Average quantization noise power Signal peak power Signal power to average quantization noise power Quantized values Lecture 2
23
Encoding (PCM) A uniform linear quantizer is called Pulse Code Modulation (PCM). Pulse code modulation (PCM): Encoding the quantized signals into a digital word (PCM word or codeword). Each quantized sample is digitally encoded into an l-bit codeword, where L is the number of quantization levels and l = log2 L. Lecture 2
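A minimal sketch of uniform quantization followed by PCM encoding (Python); the amplitude range, number of levels, and sample values are illustrative assumptions:

```python
import numpy as np

def pcm_encode(x, L, xmax):
    """Uniform quantization to L levels over [-xmax, xmax] and encoding to l-bit codewords."""
    l = int(np.ceil(np.log2(L)))          # bits per codeword, l = log2(L) when L is a power of 2
    q = 2 * xmax / L                      # quantization step size
    idx = np.clip(np.floor((x + xmax) / q), 0, L - 1).astype(int)   # level index 0..L-1
    xq = -xmax + (idx + 0.5) * q          # quantized amplitude (mid-rise levels)
    codewords = [format(i, f"0{l}b") for i in idx]
    return xq, codewords

x = np.array([-0.9, -0.2, 0.1, 0.65])     # sampled values (illustrative)
xq, words = pcm_encode(x, L=8, xmax=1.0)  # 8 levels -> 3-bit PCM words
print(xq)      # quantized values
print(words)   # e.g. ['000', '011', '100', '110']
```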
24
Quantization example Lecture 2 Quant. levels boundaries
amplitude x(t) x(nTs): sampled values xq(nTs): quantized values boundaries Quant. levels Ts: sampling time t PCM codeword PCM sequence Lecture 2
25
The Noise Model is an approximation!
Quantization error Quantizing error: The difference between the input and output of a quantizer + AGC Quantizer Process of quantizing noise Model of quantizing noise The Noise Model is an approximation! Lecture 2
26
Quantization error … Quantizing error: Quantization noise variance:
Granular or linear errors happen for inputs within the dynamic range of the quantizer. Saturation errors happen for inputs outside the dynamic range of the quantizer. Saturation errors are larger than linear errors (AKA "overflow" or "clipping"). Saturation errors can be avoided by proper tuning of the AGC, and need to be handled by overflow detection! Quantization noise variance for a uniform quantizer: σq² = q²/12, where q is the quantization step size. Lecture 2
27
Uniform and non-uniform quant.
Uniform (linear) quantizing: No assumption about amplitude statistics and correlation properties of the input. Not using the user-related specifications Robust to small changes in input statistics, since it is not finely tuned to a specific set of input parameters Simple implementation Application of linear quantizer: Signal processing, graphic and display applications, process control applications Non-uniform quantizing: Using the input statistics to tune quantizer parameters Larger SNR than uniform quantizing with same number of levels Non-uniform intervals in the dynamic range with same quantization noise variance Application of non-uniform quantizer: Commonly used for speech Examples are μ-law (US) and A-law (international) Lecture 2
28
Non-uniform quantization
It is achieved by uniformly quantizing the "compressed" signal. (actually, modern A/D converters use uniform quantizing at bits and compand digitally) At the receiver, an inverse compression characteristic, called "expansion", is employed to avoid signal distortion. compression + expansion = companding Compress Quantize Expand Channel Transmitter Receiver Lecture 2
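A small sketch of μ-law companding (Python). The value μ = 255 and the test samples are assumptions for illustration; in a real system the compressed signal would be uniformly quantized between the compressor and the expander.

```python
import numpy as np

def mu_law_compress(x, mu=255.0):
    """mu-law compression of a signal normalized to [-1, 1]."""
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_law_expand(y, mu=255.0):
    """Inverse (expansion) characteristic used at the receiver."""
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

x = np.array([-0.5, -0.01, 0.001, 0.02, 0.8])   # assumed test samples
y = mu_law_compress(x)                          # compress (then uniformly quantize in practice)
print(np.allclose(mu_law_expand(y), x))         # True: expansion undoes compression
```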
29
Statistics of speech amplitudes
In speech, weak signals are more frequent than strong ones. Using equal step sizes (uniform quantizer) gives low SNR for weak signals and high SNR for strong signals. Adjusting the step size of the quantizer by taking into account the speech statistics improves the average SNR for the input range. (Figure: probability density function of the normalized magnitude of a speech signal.) Lecture 2
30
Baseband transmission
To transmit information through physical channels, PCM sequences (codewords) are transformed to pulses (waveforms). Each waveform carries a symbol from a set of size M. Each transmitted symbol represents k = log2 M bits of the PCM words. PCM waveforms (line codes) are used for binary symbols (M=2). M-ary pulse modulation is used for non-binary symbols (M>2). Lecture 2
31
PCM waveforms PCM waveforms category: Nonreturn-to-zero (NRZ)
Return-to-zero (RZ) Phase encoded Multilevel binary (Figure: example waveforms with levels ±V over bit intervals T, 2T, ...: NRZ-L, Unipolar-RZ, Bipolar-RZ, Manchester, Miller, Dicode NRZ.) Lecture 2
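A sketch of two of these line codes (Python). The ±V levels and the half-bit convention used for Manchester are one common choice, assumed here for illustration.

```python
import numpy as np

def nrz_l(bits, V=1.0, sps=8):
    """NRZ-L: '1' -> +V, '0' -> -V for the whole bit interval (sps samples per bit)."""
    return np.repeat(np.where(np.array(bits) == 1, V, -V), sps)

def manchester(bits, V=1.0, sps=8):
    """Manchester (bi-phase-L), one common convention: '1' -> +V then -V, '0' -> -V then +V."""
    half = sps // 2
    out = []
    for b in bits:
        first, second = (V, -V) if b == 1 else (-V, V)
        out.extend([first] * half + [second] * (sps - half))
    return np.array(out)

bits = [1, 0, 1, 1, 0]
print(nrz_l(bits)[:16])        # first two bit intervals of the NRZ-L waveform
print(manchester(bits)[:16])
```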
32
PCM waveforms … Criteria for comparing and selecting PCM waveforms:
Spectral characteristics (power spectral density and bandwidth efficiency) Bit synchronization capability Error detection capability Interference and noise immunity Implementation cost and complexity Lecture 2
33
Spectra of PCM waveforms
Lecture 2
34
M-ary pulse modulation
M-ary pulse modulations category: M-ary pulse-amplitude modulation (PAM) M-ary pulse-position modulation (PPM) M-ary pulse-duration modulation (PDM) M-ary PAM is a multi-level signaling where each symbol takes one of the M allowable amplitude levels, each representing k = log2 M bits of the PCM words. For a given data rate, M-ary PAM (M>2) requires less bandwidth than binary PCM. For a given average pulse power, binary PCM is easier to detect than M-ary PAM (M>2). Lecture 2
35
PAM example Lecture 2
36
Formatting and transmission of baseband signal
Digital info. Bit stream (Data bits) Pulse waveforms (baseband signals) Information (data) rate: Rb = 1/Tb [bits/s]. Symbol rate: R = 1/T [symbols/s]. For real-time transmission: Rb = kR, where k = log2 M. Textual info. Format source Pulse modulate Sample Quantize Encode Analog info. Sampling at rate fs = 1/Ts (sampling time = Ts) Encoding each quantized value to l = log2 L bits (data bit duration Tb = Ts/l) Quantizing each sampled value to one of the L levels in the quantizer. Mapping every k data bits to a symbol out of M = 2^k symbols and transmitting a baseband waveform with duration T Lecture 3
37
Quantization example Lecture 3 Quant. levels boundaries
amplitude x(t) x(nTs): sampled values xq(nTs): quantized values boundaries Quant. levels Ts: sampling time t PCM codeword PCM sequence Lecture 3
38
Example of M-ary PAM Assuming real-time transmission and equal energy per transmitted data bit for binary PAM and 4-ary PAM: 4-ary: T = 2Tb; binary: T = Tb. (Figure: binary PAM with rectangular pulses of amplitudes ±A for '1' and '0'; 4-ary PAM with rectangular pulses at amplitudes ±B and ±3B mapped to '00', '01', '11', '10', each of duration T.) Lecture 3
39
Example of M-ary PAM … Lecture 3 2.2762 V 1.3657 V 1 1 0 1 0 1
Ts Ts V V 0 Tb 2Tb 3Tb 4Tb 5Tb 6Tb Rb=1/Tb=3/Ts R=1/T=1/Tb=3/Ts 0 T T 3T 4T 5T 6T Rb=1/Tb=3/Ts R=1/T=1/2Tb=3/2Ts=1.5/Ts T T T Lecture 3
40
Today we are going to talk about:
Receiver structure Demodulation (and sampling) Detection First step for designing the receiver Matched filter receiver Correlator receiver Lecture 3
41
Demodulation and detection
Format Pulse modulate Bandpass modulate M-ary modulation channel transmitted symbol Major sources of errors: Thermal noise (AWGN) disturbs the signal in an additive fashion (Additive) has flat spectral density for all frequencies of interest (White) is modeled by Gaussian random process (Gaussian Noise) Inter-Symbol Interference (ISI) Due to the filtering effect of transmitter, channel and receiver, symbols are “smeared”. estimated symbol Format Detect Demod. & sample Lecture 3
42
Example: Impact of the channel
Lecture 3
43
Example: Channel impact …
Lecture 3
44
Receiver tasks Demodulation and sampling: Detection:
Waveform recovery and preparing the received signal for detection: improving the signal-to-noise power ratio (SNR) using the matched filter, reducing ISI using an equalizer, and sampling the recovered waveform. Detection: Estimate the transmitted symbol based on the received sample. Lecture 3
45
Receiver structure Lecture 3
Step 1 – waveform to sample transformation Step 2 – decision making Demodulate & Sample Detect Threshold comparison Frequency down-conversion Receiving filter Equalizing filter Compensation for channel induced ISI For bandpass signals Received waveform Baseband pulse (possibly distorted) Baseband pulse Sample (test statistic) Lecture 3
46
Baseband and bandpass Bandpass model of detection process is equivalent to baseband model because: The received bandpass waveform is first transformed to a baseband waveform. Equivalence theorem: Performing bandpass linear signal processing followed by heterodyning the signal to the baseband, yields the same results as heterodyning the bandpass signal to the baseband , followed by a baseband linear signal processing. Lecture 3
47
Steps in designing the receiver
Find optimum solution for receiver design with the following goals: Maximize SNR Minimize ISI Steps in design: Model the received signal Find separate solutions for each of the goals. First, we focus on designing a receiver which maximizes the SNR. Lecture 3
48
Design the receiver filter to maximize the SNR
Model the received signal Simplify the model: for an ideal channel, the received signal is the transmitted signal plus AWGN. Lecture 3
49
Matched filter receiver
Problem: Design the receiver filter such that the SNR is maximized at the sampling time when s_i(t) is transmitted. Solution: The optimum filter is the matched filter, given by h_opt(t) = s_i*(T − t), which is the time-reversed and delayed version of the conjugate of the transmitted signal. Lecture 3
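A minimal sketch of the matched filter idea (Python), assuming a rectangular pulse and discrete-time processing: the filter is the time-reversed (conjugated) pulse, and sampling its output at t = T yields the pulse energy plus a noise term.

```python
import numpy as np

def matched_filter(s):
    """Matched filter impulse response h = time-reversed, conjugated pulse (h(t) = s*(T - t))."""
    return np.conj(s[::-1])

sps = 16
s = np.ones(sps)                      # rectangular transmit pulse of duration T (illustrative)
n = np.random.normal(0, 0.5, sps)     # AWGN samples
r = s + n                             # received waveform for one symbol

h = matched_filter(s)
z = np.convolve(r, h)                 # index sps-1 corresponds to the sampling time t = T
print("sample at t=T:", z[sps - 1], "; noise-free value =", np.sum(np.abs(s) ** 2), "(pulse energy)")
```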
50
Example of matched filter
(Figure: an example transmitted pulse of duration T, its matched filter impulse response, and the matched filter output, which peaks at the sampling time t = T.) Lecture 3
51
Properties of the matched filter
The Fourier transform of a matched filter output with the matched signal as input is, except for a time delay factor, proportional to the ESD of the input signal. The output signal of a matched filter is proportional to a shifted version of the autocorrelation function of the input signal to which the filter is matched. The output SNR of a matched filter depends only on the ratio of the signal energy to the PSD of the white noise at the filter input. Two matching conditions in the matched-filtering operation: spectral phase matching that gives the desired output peak at time T. spectral amplitude matching that gives optimum SNR to the peak value. Lecture 3
52
Correlator receiver The matched filter output at the sampling time t = T can be realized as the correlator output: z(T) = ∫_0^T r(t) s(t) dt. Lecture 3
53
Implementation of matched filter receiver
Bank of M matched filters Matched filter output: Observation vector Lecture 3
54
Implementation of correlator receiver
Bank of M correlators Correlators output: Observation vector Lecture 3
55
Implementation example of matched filter receivers
Bank of 2 matched filters T t T T T t Lecture 3
56
Receiver job Demodulation and sampling: Detection:
Waveform recovery and preparing the received signal for detection: improving the signal-to-noise power ratio (SNR) using the matched filter, reducing ISI using an equalizer, and sampling the recovered waveform. Detection: Estimate the transmitted symbol based on the received sample. Lecture 4
57
Receiver structure Digital Receiver Lecture 4
Step 1 – waveform to sample transformation Step 2 – decision making Demodulate & Sample Detect Threshold comparison Frequency down-conversion Receiving filter Equalizing filter Compensation for channel induced ISI For bandpass signals Received waveform Baseband pulse (possibly distorted) Baseband pulse Sample (test statistic) Lecture 4
58
Implementation of matched filter receiver
Bank of M matched filters Matched filter output: Observation vector Lecture 4
59
Implementation of correlator receiver
Bank of M correlators Correlators output: Observation vector Lecture 4
60
Today, we are going to talk about:
Detection: Estimate the transmitted symbol based on the received sample Signal space used for detection Orthogonal N-dimensional space Signal to waveform transformation and vice versa Lecture 4
61
Signal space What is a signal space? Why do we need a signal space?
Vector representations of signals in an N-dimensional orthogonal space Why do we need a signal space? It is a means to convert signals to vectors and vice versa. It is a means to calculate signal energies and Euclidean distances between signals. Why are we interested in Euclidean distances between signals? For detection purposes: The received signal is transformed to a received vector. The signal which has the minimum Euclidean distance to the received signal is estimated as the transmitted signal. Lecture 4
62
Schematic example of a signal space
Transmitted signal alternatives Received signal at matched filter output Lecture 4
63
Signal space To form a signal space, first we need to know the inner product between two signals (functions): Inner (scalar) product: <x(t), y(t)> = ∫ x(t) y*(t) dt = cross-correlation between x(t) and y(t). Properties of inner product: analogous to the "dot" product of discrete n-space vectors. Lecture 4
64
Signal space … The distance in signal space is measured by calculating the norm. What is a norm? Norm of a signal: ||x(t)|| = sqrt(<x(t), x(t)>) = sqrt(Ex) = "length" or amplitude of x(t). Norm between two signals: d_xy = ||x(t) − y(t)||. We refer to the norm between two signals as the Euclidean distance between the two signals. Lecture 4
65
Example of distances in signal space
The Euclidean distance between signals z(t) and s(t): Lecture 4
66
Orthogonal signal space
N-dimensional orthogonal signal space is characterized by N linearly independent functions {ψ_j(t)}, j = 1, ..., N, called basis functions. The basis functions must satisfy the orthogonality condition ∫_0^T ψ_j(t) ψ_k*(t) dt = K_j δ_jk, where 0 ≤ t < T and j, k = 1, ..., N. If all K_j = 1, the signal space is orthonormal. See my notes on Fourier Series Lecture 4
67
Example of an orthonormal basis
Example: 2-dimensional orthonormal signal space Example: 1-dimensional orthonormal signal space T t Lecture 4
68
Signal space … Any arbitrary finite set of waveforms
where each member of the set is of duration T, can be expressed as a linear combination of N orthogonal waveforms, where Vector representation of waveform Waveform energy Lecture 4
69
Signal space … Lecture 4 Waveform to vector conversion
Vector to waveform conversion Lecture 4
70
Example of projecting signals to an orthonormal signal space
Transmitted signal alternatives Lecture 4
71
Signal space – cont’d To find an orthonormal basis functions for a given set of signals, the Gram-Schmidt procedure can be used. Gram-Schmidt procedure: Given a signal set , compute an orthonormal basis 1. Define 2. For compute If let If , do not assign any basis function. 3. Renumber the basis functions such that basis is This is only necessary if for any i in step 2. Note that Lecture 4
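A sketch of the Gram-Schmidt procedure on sampled waveforms (Python). The three example signals are assumptions, chosen so that one of them is linearly dependent on the others and therefore contributes no new basis function.

```python
import numpy as np

def gram_schmidt(signals, dt):
    """Return an orthonormal basis (list of arrays) for a set of sampled waveforms."""
    basis = []
    for s in signals:
        residual = np.array(s, dtype=float)
        for psi in basis:
            residual -= np.sum(s * psi) * dt * psi     # subtract projection onto existing basis
        energy = np.sum(residual ** 2) * dt
        if energy > 1e-12:                             # skip signals already in the span
            basis.append(residual / np.sqrt(energy))
    return basis

T, N = 1.0, 100
dt = T / N
t = np.arange(N) * dt
s1 = np.ones(N)                        # illustrative signal set
s2 = np.where(t < T / 2, 1.0, -1.0)
s3 = s1 + s2                           # linearly dependent on s1 and s2

basis = gram_schmidt([s1, s2, s3], dt)
print(len(basis))                                                        # 2 basis functions
print([round(np.sum(a * b) * dt, 6) for a in basis for b in basis])      # ~[1, 0, 0, 1]
```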
72
Example of Gram-Schmidt procedure
Find the basis functions and plot the signal space for the following transmitted signals: Using Gram-Schmidt procedure: T t T t T t 1 2 -A A Lecture 4
73
Implementation of the matched filter receiver
Bank of N matched filters Observation vector Lecture 4
74
Implementation of the correlator receiver
Bank of N correlators Observation vector Lecture 4
75
Example of matched filter receivers using basis functions
T t T t 1 matched filter T t The number of matched filters (or correlators) is reduced by 1 compared with using filters (correlators) matched to the transmitted signals themselves. Reduced number of filters (or correlators) Lecture 4
76
White noise in the orthonormal signal space
AWGN, n(t), can be expressed as Noise projected on the signal space which impacts the detection process. Noise outside of the signal space Vector representation of independent zero-mean Gaussian random variables with variance Lecture 4
77
Detection of signal in AWGN
Detection problem: Given the observation vector , perform a mapping from to an estimate of the transmitted symbol, , such that the average probability of error in the decision is minimized. Modulator Decision rule Lecture 5
78
Statistics of the observation Vector
AWGN channel model: Signal vector is deterministic. Elements of noise vector are i.i.d Gaussian random variables with zero-mean and variance The noise vector pdf is The elements of observed vector are independent Gaussian random variables. Its pdf is Lecture 5
79
Detection Optimum decision rule (maximum a posteriori probability):
Applying Bayes’ rule gives: Lecture 5
80
Detection … Partition the signal space into M decision regions, such that Lecture 5
81
Detection (ML rule) For equally probable symbols, the optimum decision rule (maximum a posteriori probability) simplifies to the maximum likelihood (ML) rule: choose the symbol that maximizes the likelihood of the observation, or equivalently, in AWGN, the symbol whose signal vector is closest in Euclidean distance to the observation vector. Lecture 5
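A minimal sketch of the resulting minimum-distance (ML) detector (Python); the 2-D signal set and the observation vector are illustrative assumptions.

```python
import numpy as np

def ml_detect(z, constellation):
    """ML detection in AWGN with equally likely symbols: choose the signal vector
    closest to the observation vector z in Euclidean distance."""
    dists = [np.linalg.norm(z - s) for s in constellation]
    return int(np.argmin(dists))

# Illustrative QPSK-like signal set in a 2-D signal space
constellation = [np.array([1, 0]), np.array([0, 1]), np.array([-1, 0]), np.array([0, -1])]
z = np.array([0.8, -0.15])          # observation vector = transmitted vector + noise
print(ml_detect(z, constellation))  # -> 0 (closest to [1, 0])
```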
82
Detection (ML)… Partition the signal space into M decision regions, .
Restate the maximum likelihood decision rule as follows: Lecture 5
83
Detection rule (ML)… It can be simplified to: or equivalently:
Lecture 5
84
Maximum likelihood detector block diagram
Choose the largest Lecture 5
85
Schematic example of the ML decision regions
Lecture 5
86
Average probability of symbol error
Erroneous decision: For the transmitted symbol or equivalently signal vector , an error in decision occurs if the observation vector does not fall inside region . Probability of erroneous decision for a transmitted symbol or equivalently Probability of correct decision for a transmitted symbol Lecture 5
87
Av. prob. of symbol error …
Average probability of symbol error : For equally probable symbols: Lecture 5
88
Example for binary PAM Lecture 5
89
Union bound Union bound
The probability of a finite union of events is upper bounded by the sum of the probabilities of the individual events. Let A_ik denote the event that the observation vector z is closer to the symbol vector s_k than to s_i, when s_i is transmitted. The pairwise error probability P(A_ik) depends only on s_i and s_k. Applying the union bound yields Lecture 5
90
Example of union bound Union bound: Lecture 5
91
Upper bound based on minimum distance
Minimum distance in the signal space: Lecture 5
92
Example of upper bound on av. Symbol error prob. based on union bound
Lecture 5
93
Eb/No figure of merit in digital communications
SNR or S/N is the ratio of average signal power to average noise power. SNR should be modified in terms of bit-energy in a DCS, because: Signals are transmitted within a symbol duration and hence are energy signals (zero average power). A merit at bit level facilitates comparison of different DCSs transmitting different numbers of bits per symbol. Eb/N0 = (S/N)(W/Rb), where Rb is the bit rate and W is the bandwidth. Lecture 5
94
Example of Symbol error prob. For PAM signals
Binary PAM 4-ary PAM T t Lecture 5
95
Inter-Symbol Interference (ISI)
ISI arises in the detection process due to the filtering effects of the system: the overall equivalent system transfer function creates echoes and hence time dispersion, which causes ISI at the sampling time. Lecture 6
96
Inter-symbol interference
Baseband system model Equivalent model Tx filter Channel Rx. filter Detector Equivalent system Detector filtered noise Lecture 6
97
Nyquist bandwidth constraint
The theoretical minimum required system bandwidth to detect Rs [symbols/s] without ISI is Rs/2 [Hz]. Equivalently, a system with bandwidth W=1/2T=Rs/2 [Hz] can support a maximum transmission rate of 2W=1/T=Rs [symbols/s] without ISI. Bandwidth efficiency, R/W [bits/s/Hz] : An important measure in DCs representing data throughput per hertz of bandwidth. Showing how efficiently the bandwidth resources are used by signaling techniques. Lecture 6
98
Ideal Nyquist pulse (filter)
Ideal Nyquist filter Ideal Nyquist pulse Lecture 6
99
Nyquist pulses (filters)
Pulses (filters) which result in no ISI at the sampling time. Nyquist filter: Its transfer function in the frequency domain is obtained by convolving a rectangular function with any real even-symmetric frequency function. Nyquist pulse: Its shape can be represented by a sinc(t/T) function multiplied by another time function. Example of Nyquist filters: Raised-Cosine filter Lecture 6
100
Pulse shaping to reduce ISI
Goals and trade-off in pulse-shaping Reduce ISI Efficient bandwidth utilization Robustness to timing error (small side lobes) Lecture 6
101
The raised cosine filter
A Nyquist pulse (no ISI at the sampling time). Roll-off factor: r = (W − W0)/W0, with 0 ≤ r ≤ 1, where W0 = Rs/2 is the minimum (Nyquist) bandwidth and W is the absolute bandwidth. Excess bandwidth: W − W0 = rW0. Lecture 6
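A sketch of the raised-cosine pulse (Python), using the standard closed-form impulse response; evaluating it at the sampling instants kT shows the zero-ISI property. The roll-off r = 0.5 is an assumed example value.

```python
import numpy as np

def raised_cosine(t, T, r):
    """Raised-cosine pulse h(t) = sinc(t/T) * cos(pi*r*t/T) / (1 - (2*r*t/T)**2), 0 <= r <= 1."""
    t = np.asarray(t, dtype=float)
    denom = 1.0 - (2.0 * r * t / T) ** 2
    singular = np.abs(denom) < 1e-10
    h = np.sinc(t / T) * np.cos(np.pi * r * t / T) / np.where(singular, 1.0, denom)
    limit = (np.pi / 4.0) * np.sinc(1.0 / (2.0 * r)) if r > 0 else 1.0   # value at t = +/- T/(2r)
    return np.where(singular, limit, h)

T, r = 1.0, 0.5                                  # symbol duration and assumed roll-off factor
t = np.arange(-5, 6) * T                         # sampling instants kT
print(np.round(raised_cosine(t, T, r), 6))       # 1 at t = 0 and 0 at every other kT -> no ISI
```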
102
The Raised cosine filter – cont’d
(Figure: raised-cosine frequency response and impulse response for roll-off factors such as r = 0.5 and r = 1.) Lecture 6
103
Pulse shaping and equalization to remove ISI
No ISI at the sampling time Square-Root Raised Cosine (SRRC) filter and Equalizer Taking care of ISI caused by tr. filter Taking care of ISI caused by channel Lecture 6
104
Example of pulse shaping
Square-root Raised-Cosine (SRRC) pulse shaping Amp. [V] Baseband tr. Waveform Third pulse t/T First pulse Second pulse Data symbol Lecture 6
105
Example of pulse shaping …
Raised Cosine pulse at the output of matched filter Amp. [V] Baseband received waveform at the matched filter output (zero ISI) t/T Lecture 6
106
Eye pattern Eye pattern: Display on an oscilloscope which sweeps the system response to a baseband signal at the rate 1/T (T = symbol duration). The eye diagram reveals distortion due to ISI and the noise margin (amplitude scale), and the sensitivity to timing error and timing jitter (time scale). Lecture 6
107
Example of eye pattern: Binary-PAM, SRRC pulse
Perfect channel (no noise and no ISI) Lecture 6
108
Example of eye pattern: Binary-PAM, SRRC pulse …
AWGN (Eb/N0=20 dB) and no ISI Lecture 6
109
Example of eye pattern: Binary-PAM, SRRC pulse …
AWGN (Eb/N0=10 dB) and no ISI Lecture 6
110
Equalization – cont’d Lecture 6
Step 1 – waveform to sample transformation Step 2 – decision making Demodulate & Sample Detect Threshold comparison Frequency down-conversion Receiving filter Equalizing filter Compensation for channel induced ISI For bandpass signals Received waveform Baseband pulse (possibly distorted) Baseband pulse Sample (test statistic) Lecture 6
111
Non-constant amplitude
Equalization ISI due to the filtering effect of the communications channel (e.g. wireless channels): channels behave like band-limited filters. Non-constant amplitude causes amplitude distortion; non-linear phase causes phase distortion. Lecture 6
112
Equalization: Channel examples
Example of a frequency selective, slowly changing (slow fading) channel for a user at 35 km/h Lecture 6
113
Equalization: Channel examples …
Example of a frequency selective, fast changing (fast fading) channel for a user at 35 km/h Lecture 6
114
Example of eye pattern with ISI: Binary-PAM, SRRC pulse
Non-ideal channel and no noise Lecture 6
115
Example of eye pattern with ISI: Binary-PAM, SRRC pulse …
AWGN (Eb/N0=20 dB) and ISI Lecture 6
116
Example of eye pattern with ISI: Binary-PAM, SRRC pulse …
AWGN (Eb/N0=10 dB) and ISI Lecture 6
117
Equalizing filters … Baseband system model Equivalent model Lecture 6
Tx filter Channel Equalizer Rx. filter Detector Equivalent system Equalizer Detector filtered noise Lecture 6
118
Equalization – cont’d Equalization using
MLSE (Maximum likelihood sequence estimation) Filtering – See notes on z-Transform and Digital Filters Transversal filtering Zero-forcing equalizer Minimum mean square error (MSE) equalizer Decision feedback Using the past decisions to remove the ISI contributed by them Adaptive equalizer Lecture 6
119
Equalization by transversal filtering
A weighted tapped delay line that reduces the effect of ISI by proper adjustment of the filter taps. Coeff. adjustment Lecture 6
120
Transversal equalizing filter …
Zero-forcing equalizer: The filter taps are adjusted such that the equalizer output is forced to be zero at N sample points on each side of the peak. Mean Square Error (MSE) equalizer: The filter taps are adjusted such that the MSE of ISI and noise power at the equalizer output is minimized. Lecture 6
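A sketch of how the zero-forcing taps can be computed for a (2N+1)-tap transversal equalizer (Python). The distorted pulse samples are made-up values for illustration; residual ISI remains outside the forced-zero window.

```python
import numpy as np

def zero_forcing_taps(x, N):
    """Solve for the (2N+1) taps of a transversal zero-forcing equalizer.
    x: samples of the distorted system pulse at the symbol rate, centered on its peak.
    The taps force the equalized pulse to 1 at the center and 0 at N points on each side."""
    center = len(x) // 2
    # A[k, j] = x((k - j)T): equalized sample k is sum_j c_j * x(k - j)
    A = np.array([[x[center + k - j] for j in range(-N, N + 1)] for k in range(-N, N + 1)])
    d = np.zeros(2 * N + 1)
    d[N] = 1.0                                         # desired: a delta at the center
    return np.linalg.solve(A, d)

x = np.array([0.0, 0.1, -0.2, 1.0, 0.3, -0.1, 0.05])   # illustrative distorted pulse samples
c = zero_forcing_taps(x, N=1)
eq = np.convolve(x, c)                                 # equalized pulse
print(np.round(eq, 3))                                 # 1 at the center, 0 at +/- 1 symbol
```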
121
Example of equalizer Lecture 6
2-PAM with SRRC Non-ideal channel One-tap DFE Matched filter outputs at the sampling time ISI, no noise, no equalizer; ISI, no noise, DFE equalizer; ISI and noise, no equalizer; ISI and noise, DFE equalizer Lecture 6
122
Block diagram of a DCS Source encode Channel encode Pulse modulate
Bandpass modulate Format Digital modulation Channel Digital demodulation Source decode Channel decode Demod. Sample Format Detect Lecture 7
123
Bandpass modulation Bandpass modulation: The process of converting a data signal to a sinusoidal waveform whose amplitude, phase or frequency, or a combination of them, is varied in accordance with the transmitted data. Bandpass signal: where is the baseband pulse shape with energy We assume here (unless otherwise stated): is a rectangular pulse shape with unit energy. Gray coding is used for mapping bits to symbols. denotes the average symbol energy given by Lecture 7
124
Demodulation and detection
Demodulation: The received signal is converted to baseband, filtered and sampled. Detection: Sampled values are used for detection using a decision rule such as the ML detection rule. Decision circuits (ML detector) Lecture 7
125
Coherent detection Coherent detection
requires carrier phase recovery at the receiver and hence, circuits to perform phase estimation. Sources of carrier-phase mismatch at the receiver: Propagation delay causes carrier-phase offset in the received signal. The oscillators at the receiver which generate the carrier signal are not usually phase-locked to the transmitted carrier. Lecture 7
126
Coherent detection .. Circuits such as a Phase-Locked Loop (PLL) are implemented at the receiver for carrier phase estimation. I branch PLL Oscillator 90 deg. Used by correlators Q branch Lecture 7
127
Bandpass Modulation Schemes
One dimensional waveforms Amplitude Shift Keying (ASK) M-ary Pulse Amplitude Modulation (M-PAM) Two dimensional waveforms M-ary Phase Shift Keying (M-PSK) M-ary Quadrature Amplitude Modulation (M-QAM) Multidimensional waveforms M-ary Frequency Shift Keying (M-FSK) Lecture 7
128
One dimensional modulation, demodulation and detection
Amplitude Shift Keying (ASK) modulation: “0” “1” On-off keying (M=2): Lecture 7
129
One dimensional mod.,… M-ary Pulse Amplitude modulation (M-PAM)
“00” “01” “11” “10” Lecture 7
130
Example of bandpass modulation: Binary PAM
Lecture 7
131
One dimensional mod.,...–cont’d
Coherent detection of M-PAM ML detector (Compare with M-1 thresholds) Lecture 7
132
Two dimensional modulation, demodulation and detection (M-PSK)
M-ary Phase Shift Keying (M-PSK) Lecture 7
133
Two dimensional mod.,… (MPSK)
BPSK (M=2) “0” “1” 8PSK (M=8) “110” “000” “001” “011” “010” “101” “111” “100” QPSK (M=4) “00” “11” “10” “01” Lecture 7
134
Two dimensional mod.,…(MPSK)
Coherent detection of MPSK Compute Choose smallest Lecture 7
135
Two dimensional mod.,… (M-QAM)
M-ary Quadrature Amplitude Mod. (M-QAM) Lecture 7
136
Two dimensional mod.,… (M-QAM)
“0000” “0001” “0011” “0010” 1 3 -1 -3 “1000” “1001” “1011” “1010” “1100” “1101” “1111” “1110” “0100” “0101” “0111” “0110” 16-QAM Lecture 7
137
Two dimensional mod.,… (M-QAM)
Coherent detection of M-QAM ML detector Parallel-to-serial converter ML detector Lecture 7
138
Multi-dimensional modulation, demodulation & detection
M-ary Frequency Shift keying (M-FSK) Lecture 7
139
Multi-dimensional mod.,…(M-FSK)
ML detector: Choose the largest element in the observed vector Lecture 7
140
Non-coherent detection
No need for a reference in phase with the received carrier Less complexity compared to coherent detection at the price of higher error rate. Lecture 7
141
Non-coherent detection …
Differential coherent detection Differential encoding of the message The symbol phase changes if the current bit is different from the previous bit. Symbol index: Data bits: Diff. encoded bits Symbol phase: Lecture 7
142
Non-coherent detection …
Coherent detection for differentially encoded modulation: assumes slow variation in the carrier-phase mismatch during two symbol intervals; correlates the received signal with the basis functions; uses the phase difference between the current received vector and the previously estimated symbol. Lecture 7
143
Non-coherent detection …
Optimum differentially coherent detector Sub-optimum differentially coherent detector Performance degradation of about 3 dB results from using the sub-optimum detector. Decision Delay T Decision Delay T Lecture 7
144
Non-coherent detection …
Energy detection Non-coherent detection for orthogonal signals (e.g. M-FSK): Carrier-phase offset causes partial correlation between I and Q branches for each candidate signal. The received energy corresponding to each candidate signal is used for detection. Lecture 7
145
Non-coherent detection …
Non-coherent detection of BFSK Decision stage: + - Lecture 7
146
Example of two dim. modulation
“110” “000” “001” “011” “010” “101” “111” “100” 16QAM “0000” “0001” “0011” “0010” 1 3 -1 -3 “1000” “1001” “1011” “1010” “1100” “1101” “1111” “1110” “0100” “0101” “0111” “0110” 8PSK “00” “11” “10” “01” QPSK Lecture 8
147
Today, we are going to talk about:
How to calculate the average probability of symbol error for different modulation schemes that we studied? How to compare different modulation schemes based on their error performances? Lecture 8
148
Error probability of bandpass modulation
Before evaluating the error probability, it is important to remember that: The type of modulation and detection (coherent or non-coherent) determines the structure of the decision circuits and hence the decision variable, denoted by z. The decision variable, z, is compared with M-1 thresholds, corresponding to M decision regions for detection purposes. Decision Circuits Compare z with threshold. Lecture 8
149
Error probability … The matched filters output (observation vector= ) is the detector input and the decision variable is a function of , i.e. For MPAM, MQAM and MFSK with coherent detection For MPSK with coherent detection For non-coherent detection (M-FSK and DPSK), We know that for calculating the average probability of symbol error, we need to determine Hence, we need to know the statistics of z, which depends on the modulation scheme and the detection type. Lecture 8
150
Error probability … AWGN channel model:
The signal vector is deterministic. The elements of the noise vector are i.i.d Gaussian random variables with zero-mean and variance The noise vector's pdf is The elements of the observed vector are independent Gaussian random variables. Its pdf is Lecture 8
151
Error probability … BPSK and BFSK with coherent detection: Lecture 8
BPSK (antipodal signaling): PB = Q(sqrt(2Eb/N0)). Coherent BFSK (orthogonal signaling): PB = Q(sqrt(Eb/N0)). (Figure: BPSK and BFSK signal constellations with "0" and "1" decision regions.) Lecture 8
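These two expressions are easy to evaluate numerically; a small sketch (Python), with an assumed Eb/N0 of 8 dB:

```python
import math

def Q(x):
    """Gaussian tail function Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

EbN0_dB = 8.0                          # assumed operating point
EbN0 = 10 ** (EbN0_dB / 10.0)

Pb_bpsk = Q(math.sqrt(2.0 * EbN0))     # coherent BPSK (antipodal)
Pb_bfsk = Q(math.sqrt(EbN0))           # coherent BFSK (orthogonal)
print(f"Eb/N0 = {EbN0_dB} dB: BPSK Pb = {Pb_bpsk:.2e}, BFSK Pb = {Pb_bfsk:.2e}")
```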
152
Error probability … Non-coherent detection of BFSK Lecture 8
Decision variable: Difference of envelopes + - Decision rule: Lecture 8
153
Error probability – cont’d
Non-coherent detection of BFSK: PB = (1/2) exp(−Eb/(2N0)). Similarly, for non-coherent detection of DBPSK: PB = (1/2) exp(−Eb/N0). (The envelope-detector outputs follow Rician and Rayleigh pdfs.) Lecture 8
154
(Compare with M-1 thresholds)
Error probability …. Coherent detection of M-PAM Decision variable: “00” “01” “11” “10” 4-PAM ML detector (Compare with M-1 thresholds) Lecture 8
155
Error probability …. Coherent detection of M-PAM …. Lecture 8
Error happens if the noise, , exceeds in amplitude one-half of the distance between adjacent symbols. For symbols on the border, error can happen only in one direction. Hence: Gaussian pdf with zero mean and variance Lecture 8
156
Error probability … Coherent detection of M-QAM Lecture 8 ML detector
“0000” “0001” “0011” “0010” “1000” “1001” “1011” “1010” “1100” “1101” “1111” “1110” “0100” “0101” “0111” “0110” 16-QAM ML detector Parallel-to-serial converter Lecture 8
157
Average probability of
Error probability … Coherent detection of M-QAM … M-QAM can be viewed as the combination of two modulations on I and Q branches, respectively. No error occurs if no error is detected on either the I or the Q branch. Considering the symmetry of the signal space and the orthogonality of the I and Q branches: Average probability of symbol error for Lecture 8
158
Error probability … Coherent detection of MPSK Lecture 8 Compute
“110” “000” “001” “011” “010” “101” “111” “100” 8-PSK Compute Choose smallest Decision variable Lecture 8
159
Error probability … Coherent detection of MPSK … Lecture 8
The detector compares the phase of observation vector to M-1 thresholds. Due to the circular symmetry of the signal space, we have: where It can be shown that or Lecture 8
160
Error probability … Coherent detection of M-FSK Lecture 8 ML detector:
Choose the largest element in the observed vector Lecture 8
161
Error probability … Coherent detection of M-FSK …
The dimension of the signal space is M. An upper bound for the average symbol error probability can be obtained by using the union bound. Hence: or, equivalently Lecture 8
162
Bit error probability versus symbol error probability
Number of bits per symbol: k = log2 M. For orthogonal M-ary signaling (M-FSK): PB/PE = 2^(k−1)/(2^k − 1). For M-PSK, M-PAM and M-QAM with Gray coding: PB ≈ PE/k. Lecture 8
163
Probability of symbol error for binary modulation
Note! “The same average symbol energy for different sizes of signal space” Lecture 8
164
Probability of symbol error for M-PSK
Note! “The same average symbol energy for different sizes of signal space” Lecture 8
165
Probability of symbol error for M-FSK
Note! “The same average symbol energy for different sizes of signal space” Lecture 8
166
Probability of symbol error for M-PAM
Note! “The same average symbol energy for different sizes of signal space” Lecture 8
167
Probability of symbol error for M-QAM
Note! “The same average symbol energy for different sizes of signal space” Lecture 8
168
Example of samples of matched filter output for some bandpass modulation schemes
Lecture 8
169
Block diagram of a DCS Source encode Channel encode Pulse modulate
Bandpass modulate Format Digital modulation Channel Digital demodulation Source decode Channel decode Demod. Sample Format Detect Lecture 9
170
What is channel coding? Channel coding:
Transforming signals to improve communications performance by increasing the robustness against channel impairments (noise, interference, fading, ...) Waveform coding: Transforming waveforms to better waveforms Structured sequences: Transforming data sequences into better sequences, having structured redundancy. -“Better” in the sense of making the decision process less subject to errors. Lecture 9
171
Error control techniques
Automatic Repeat reQuest (ARQ) Full-duplex connection, error detection codes The receiver sends feedback to the transmitter indicating whether an error is detected in the received packet (Negative Acknowledgement, NACK) or not (Acknowledgement, ACK). The transmitter retransmits the previously sent packet if it receives a NACK. Forward Error Correction (FEC) Simplex connection, error correction codes The receiver tries to correct some errors Hybrid ARQ (ARQ+FEC) Full-duplex, error detection and correction codes Lecture 9
172
Why use error correction coding?
Error performance vs. bandwidth Power vs. bandwidth Data rate vs. bandwidth Capacity vs. bandwidth (Figure: coded vs. uncoded error-performance curves, operating points A–F.) Coding gain: For a given bit-error probability, the reduction in the Eb/N0 that can be realized through the use of the code: G [dB] = (Eb/N0)uncoded [dB] − (Eb/N0)coded [dB]. Lecture 9
173
Channel models Discrete memory-less channels Binary Symmetric channels
Discrete input, discrete output Binary Symmetric channels Binary input, binary output Gaussian channels Discrete input, continuous output Lecture 9
174
Linear block codes Let us review some basic definitions first that are useful in understanding Linear block codes. Lecture 9
175
Some definitions Binary field :
The set {0,1}, under modulo 2 binary addition and multiplication forms a field. Binary field is also called Galois field, GF(2). Addition Multiplication Lecture 9
176
Some definitions… Fields :
Let F be a set of objects on which two operations '+' and '.' are defined. F is said to be a field if and only if F forms a commutative group under the + operation. The additive identity element is labeled "0". F−{0} forms a commutative group under the '.' operation. The multiplicative identity element is labeled "1". The operations "+" and "." are distributive: a.(b+c) = (a.b)+(a.c). Lecture 9
177
Some definitions… Vector space:
Let V be a set of vectors and F a field of elements called scalars. V forms a vector space over F if: 1. Commutative: 2. 3. Distributive: 4. Associative: 5. Lecture 9
178
Some definitions… Vector subspace: Examples of vector spaces
The set of binary n-tuples, denoted by Vector subspace: A subset S of the vector space is called a subspace if: The all-zero vector is in S. The sum of any two vectors in S is also in S. Example: Lecture 9
179
Some definitions… Spanning set: Bases:
A collection of vectors , is said to be a spanning set for V or to span V if linear combinations of the vectors in G include all vectors in the vector space V, Example: Bases: The spanning set of V that has minimal cardinality is called the basis for V. Cardinality of a set is the number of objects in the set. Lecture 9
180
Linear block codes Linear block code (n,k)
A set C with cardinality 2^k is called a linear block code (n,k) if, and only if, it is a subspace of the vector space Vn of all binary n-tuples. Members of C are called codewords. The all-zero codeword is a codeword. Any linear combination of codewords is a codeword. Lecture 9
181
Linear block codes – cont’d
Bases of C mapping Lecture 9
182
Linear block codes – cont’d
The information bit stream is chopped into blocks of k bits. Each block is encoded to a larger block of n bits. The coded bits are modulated and sent over the channel. The reverse procedure is done at the receiver. Data block Channel encoder Codeword k bits n bits Lecture 9
183
Linear block codes – cont’d
The Hamming weight of the vector U, denoted by w(U), is the number of non-zero elements in U. The Hamming distance between two vectors U and V, denoted d(U,V), is the number of elements in which they differ. The minimum distance of a block code is the smallest Hamming distance between any pair of distinct codewords (for a linear code, this equals the minimum weight of its non-zero codewords). Lecture 9
184
Linear block codes – cont’d
Error detection capability is given by e = dmin − 1. Error-correcting capability t of a code is defined as the maximum number of guaranteed correctable errors per codeword, that is, t = floor((dmin − 1)/2). Lecture 9
185
Linear block codes – cont’d
For memoryless channels, the probability that the decoder commits an erroneous decoding is upper-bounded by PM ≤ sum from j = t+1 to n of C(n,j) p^j (1−p)^(n−j), where p is the transition probability or bit error probability over the channel. The decoded bit error probability is PB ≈ (1/n) sum from j = t+1 to n of j C(n,j) p^j (1−p)^(n−j). Lecture 9
186
Linear block codes – cont’d
Discrete, memoryless, symmetric channel model Note that for coded systems, the coded bits are modulated and transmitted over the channel. For example, for M-PSK modulation on AWGN channels (M>2), the transition probability is evaluated at the energy per coded bit Ec, given by Ec = (k/n)Eb. (Figure: binary symmetric channel with crossover probability p between Tx bits and Rx bits.) Lecture 9
187
Linear block codes –cont’d
A matrix G is constructed by taking as its rows the vectors of the basis, Bases of C mapping Lecture 9
188
Linear block codes – cont’d
Encoding in (n,k) block code: U = mG, where m = (m1, ..., mk) is the message block and G is the k-by-n generator matrix. The rows of G are linearly independent. Lecture 9
189
Linear block codes – cont’d
Example: Block code (6,3) Message vector Codeword Lecture 9
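A sketch of (6,3) block encoding over GF(2) (Python). The generator matrix below is a hypothetical systematic choice, since the slide's actual matrix is not reproduced in the text.

```python
import numpy as np

# Hypothetical systematic (6,3) generator matrix G = [P | I3]
G = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def encode(m, G):
    """Codeword U = m G over GF(2)."""
    return np.mod(np.array(m) @ G, 2)

for m in [(0, 0, 0), (1, 0, 0), (1, 0, 1), (1, 1, 1)]:
    print(m, "->", encode(m, G))
```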
190
Linear block codes – cont’d
Systematic block code (n,k) For a systematic code, the first (or last) k elements in the codeword are information bits; the generator matrix has the form G = [P Ik], where P is the k-by-(n−k) parity sub-matrix and Ik is the k-by-k identity matrix. Lecture 9
191
Linear block codes – cont’d
For any linear code we can find an (n−k)-by-n matrix H, such that its rows are orthogonal to the rows of G: GH^T = 0. H is called the parity check matrix and its rows are linearly independent. For systematic linear block codes: H = [I(n−k) P^T]. Lecture 9
192
Linear block codes – cont’d
Syndrome testing: the received vector is r = U + e, where e is the error pattern; S = rH^T = eH^T is the syndrome of r, corresponding to the error pattern e. (Figure: data source → format → channel encoding → modulation → channel → demodulation/detection → decoding → format → data sink.) Lecture 9
193
Linear block codes – cont’d
Standard array For row find a vector in of minimum weight that is not already listed in the array. Call this pattern and form the row as the corresponding coset zero codeword coset coset leaders Lecture 9
194
Linear block codes – cont’d
Standard array and syndrome table decoding Calculate S = rH^T. Find the coset leader ê corresponding to S. Calculate Û = r + ê and the corresponding message. Note that Û = U + (e + ê). If ê = e, the error is corrected. If ê ≠ e, an undetectable decoding error occurs. Lecture 9
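A sketch of syndrome-table decoding for the same hypothetical (6,3) systematic code (Python): build the coset-leader table from minimum-weight error patterns, then correct a single-bit error.

```python
import numpy as np
from itertools import product

# Hypothetical (6,3) code, as above: G = [P | I3], H = [I3 | P^T]
P = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1]])
G = np.hstack([P, np.eye(3, dtype=int)])
H = np.hstack([np.eye(3, dtype=int), P.T])

# Syndrome table: map each syndrome to a minimum-weight error pattern (the coset leader)
table = {}
for e in sorted(product([0, 1], repeat=6), key=sum):
    s = tuple(np.mod(np.array(e) @ H.T, 2))
    table.setdefault(s, np.array(e))

def decode(r):
    """Compute S = r H^T, look up the coset leader e_hat, and return U_hat = r + e_hat."""
    s = tuple(np.mod(r @ H.T, 2))
    return np.mod(r + table[s], 2)

U = np.mod(np.array([1, 0, 1]) @ G, 2)     # transmitted codeword
r = U.copy(); r[4] ^= 1                    # received vector with a single bit error
U_hat = decode(r)
print(U_hat, bool((U_hat == U).all()))     # corrected codeword, True
```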
195
Linear block codes – cont’d
Example: Standard array for the (6,3) code codewords coset Coset leaders Lecture 9
196
Linear block codes – cont’d
Error pattern Syndrome Lecture 9
197
Hamming codes Hamming codes
Hamming codes are a subclass of linear block codes and belong to the category of perfect codes. Hamming codes are expressed as a function of a single integer m ≥ 2: code length n = 2^m − 1, number of information bits k = 2^m − m − 1, number of parity bits n − k = m, and error-correcting capability t = 1. The columns of the parity-check matrix, H, consist of all non-zero binary m-tuples. Lecture 9
198
Hamming codes Example: Systematic Hamming code (7,4) Lecture 9
199
Cyclic block codes Cyclic codes are a subclass of linear block codes.
Encoding and syndrome calculation are easily performed using feedback shift-registers. Hence, relatively long block codes can be implemented with a reasonable complexity. BCH and Reed-Solomon codes are cyclic codes. Lecture 9
200
Cyclic block codes A linear (n,k) code is called a Cyclic code if all cyclic shifts of a codeword are also codewords. Example: “i” cyclic shifts of U Lecture 9
201
Cyclic block codes The algebraic structure of cyclic codes implies expressing codewords in polynomial form: U(X) = u0 + u1X + ... + u(n−1)X^(n−1). Relationship between a codeword and its cyclic shifts: U^(1)(X) = X U(X) mod (X^n + 1); hence, by extension, U^(i)(X) = X^i U(X) mod (X^n + 1). Lecture 9
202
Cyclic block codes Basic properties of Cyclic codes:
Let C be a binary (n,k) linear cyclic code. Within the set of code polynomials in C, there is a unique monic polynomial g(X) with minimal degree; g(X) is called the generator polynomial. Every code polynomial U(X) in C can be expressed uniquely as U(X) = m(X) g(X). The generator polynomial g(X) is a factor of X^n + 1. Lecture 9
203
Cyclic block codes Lecture 9
The orthogonality of G and H in polynomial form is expressed as g(X) h(X) = X^n + 1. This means h(X), the parity polynomial, is also a factor of X^n + 1. The i-th row of the generator matrix is formed by the coefficients of the (i−1)-th cyclic shift of the generator polynomial. Lecture 9
204
Cyclic block codes Systematic encoding algorithm for an (n,k) Cyclic code: Multiply the message polynomial m(X) by X^(n−k). Divide the result of Step 1 by the generator polynomial g(X); let p(X) be the remainder. Add p(X) to X^(n−k) m(X) to form the codeword U(X). Lecture 9
205
Cyclic block codes Example: For the systematic (7,4) Cyclic code with generator polynomial Find the codeword for the message Lecture 9
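A sketch of the systematic encoding steps (Python). The generator polynomial g(X) = 1 + X + X^3 and the message are assumed example values, since the slide's specific values are not shown in the text.

```python
def poly_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division; coefficient lists, lowest degree first."""
    r = list(dividend)
    for i in range(len(r) - len(divisor), -1, -1):
        if r[i + len(divisor) - 1]:              # reduce the current highest-degree term
            for j, d in enumerate(divisor):
                r[i + j] ^= d
    return r[:len(divisor) - 1]

def cyclic_encode(m, g):
    """Systematic cyclic encoding: U(X) = p(X) + X^(n-k) m(X), p(X) = X^(n-k) m(X) mod g(X)."""
    n_k = len(g) - 1                             # n - k = degree of g(X)
    shifted = [0] * n_k + list(m)                # coefficients of X^(n-k) * m(X)
    p = poly_mod(shifted, g)                     # parity polynomial
    return p + list(m)                           # parity bits followed by the message bits

g = [1, 1, 0, 1]        # assumed generator polynomial g(X) = 1 + X + X^3
m = [1, 0, 1, 1]        # assumed message m(X) = 1 + X^2 + X^3, lowest-order coefficient first
U = cyclic_encode(m, g)
print(U, poly_mod(U, g))   # 7-bit codeword and its zero remainder: g(X) divides U(X)
```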
206
Cyclic block codes Lecture 9
Find the generator and parity check matrices, G and H, respectively. Not in systematic form. We do the following: Lecture 9
207
Cyclic block codes Syndrome decoding for Cyclic codes:
The received codeword in polynomial form is given by r(X) = U(X) + e(X), where e(X) is the error pattern. The syndrome S(X) is the remainder obtained by dividing the received polynomial by the generator polynomial: r(X) = q(X)g(X) + S(X). With the syndrome and the standard array, the error is estimated. In cyclic codes, the size of the standard array is considerably reduced. Lecture 9
208
Example of the block codes
8PSK QPSK Lecture 9
209
Convolutional codes Convolutional codes offer an approach to error control coding substantially different from that of block codes. A convolutional encoder: encodes the entire data stream, into a single codeword. does not need to segment the data stream into blocks of fixed size (Convolutional codes are often forced to block structure by periodic truncation). is a machine with memory. This fundamental difference in approach imparts a different nature to the design and evaluation of the code. Block codes are based on algebraic/combinatorial techniques. Convolutional codes are based on construction techniques. Lecture 10
210
Convolutional codes-cont’d
A Convolutional code is specified by three parameters (n, k, K) or (k/n, K), where k/n is the coding rate, determining the number of data bits per coded bit. In practice, usually k = 1 is chosen, and we assume that from now on. K is the constraint length of the encoder, where the encoder has K−1 memory elements. There are different definitions of the constraint length in the literature. Lecture 10
211
Block diagram of the DCS
Information source Rate 1/n Conv. encoder Modulator Channel Information sink Rate 1/n Conv. decoder Demodulator Lecture 10
212
A Rate ½ Convolutional encoder
Convolutional encoder (rate ½, K=3) 3 shift-register stages, where the first one takes the incoming data bit and the rest form the memory of the encoder. Input data bits Output coded bits First coded bit Second coded bit (Branch word) Lecture 10
213
A Rate ½ Convolutional encoder
Message sequence: Time Output Time Output (Branch word) (Branch word) 1 1 1 1 Lecture 10
214
A Rate ½ Convolutional encoder
1 Time Output (Branch word) Encoder Lecture 10
215
Effective code rate Initialize the memory before encoding the first bit (all-zero) Clear out the memory after encoding the last bit (all-zero) Hence, a tail of zero-bits is appended to the data bits. Effective code rate: with L data bits and k = 1 assumed, R_eff = L / (n(L + K − 1)), which is less than the nominal rate 1/n. Encoder data tail codeword Lecture 10
216
Encoder representation
Vector representation: We define n binary vectors, each with K elements (one vector for each modulo-2 adder). The i-th element in each vector is "1" if the i-th stage in the shift register is connected to the corresponding modulo-2 adder, and "0" otherwise. Example: Lecture 10
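A sketch of a rate-1/2, K = 3 convolutional encoder (Python). The generator vectors g1 = 111 and g2 = 101 are assumed here (the classic textbook example), since the slide's vectors are not reproduced in the text.

```python
def conv_encode(bits, K=3, generators=((1, 1, 1), (1, 0, 1))):
    """Rate 1/2, K=3 convolutional encoder with assumed generator vectors g1=111, g2=101."""
    state = [0] * (K - 1)                          # K-1 memory elements, initialized to zero
    out = []
    for b in list(bits) + [0] * (K - 1):           # append a tail of K-1 zeros to flush the memory
        window = [b] + state                       # current input followed by the register contents
        for g in generators:
            out.append(sum(gi * wi for gi, wi in zip(g, window)) % 2)  # one modulo-2 adder per output
        state = [b] + state[:-1]                   # shift the register
    return out

m = [1, 0, 1]
print(conv_encode(m))    # branch words for the message plus tail bits: 11 10 00 10 11
```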
217
Encoder representation – cont’d
Impulse response representation: The response of the encoder to a single "one" bit that goes through it. Example: Branch word Register contents Output Input m Modulo-2 sum: Lecture 10
218
Encoder representation – cont’d
Polynomial representation: We define n generator polynomials, one for each modulo-2 adder. Each polynomial is of degree K-1 or less and describes the connection of the shift registers to the corresponding modulo-2 adder. Example: The output sequence is found as follows: Lecture 10
219
Encoder representation –cont’d
In more details: Lecture 10
220
State diagram A finite-state machine only encounters a finite number of states. State of a machine: the smallest amount of information that, together with a current input to the machine, can predict the output of the machine. In a Convolutional encoder, the state is represented by the content of the memory. Hence, there are 2^(K−1) states. Lecture 10
221
State diagram – cont’d A state diagram is a way to represent the encoder. A state diagram contains all the states and all possible transitions between them. Only two transitions initiating from a state Only two transitions ending up in a state Lecture 10
222
State diagram – cont’d 10 1 01 11 00 Lecture 10 0/00 1/11 0/11 1/00
output Next state input Current state 10 1 01 11 00 0/00 Output (Branch word) Input 00 1/11 0/11 1/00 10 01 0/10 1/01 11 0/01 1/10 Lecture 10
223
Trellis – cont’d Trellis diagram is an extension of the state diagram that shows the passage of time. Example of a section of trellis for the rate ½ code State 0/00 0/11 0/10 0/01 1/11 1/01 1/00 1/10 Time Lecture 10
224
Trellis –cont’d A trellis diagram for the example code Lecture 10 0/00
0/11 0/10 0/01 1/11 1/01 1/00 0/00 1 11 10 00 Input bits Output bits Tail bits Lecture 10
225
Trellis – cont’d Lecture 10 0/00 0/00 0/11 0/10 0/01 1/11 1/01 1/00
Input bits Tail bits 1 Output bits 11 10 00 0/00 0/00 0/11 0/10 0/01 1/11 1/01 1/00 0/00 0/00 0/00 1/11 1/11 0/11 0/11 0/10 0/10 1/01 0/01 Lecture 10
226
Block diagram of the DCS
Information source Rate 1/n Conv. encoder Modulator Channel Information sink Rate 1/n Conv. decoder Demodulator Lecture 11
227
State diagram A finite-state machine only encounters a finite number of states. State of a machine: the smallest amount of information that, together with a current input to the machine, can predict the output of the machine. In a Convolutional encoder, the state is represented by the content of the memory. Hence, there are 2^(K−1) states. Lecture 11
228
State diagram – cont’d A state diagram is a way to represent the encoder. A state diagram contains all the states and all possible transitions between them. There can be only two transitions initiating from a state. There can be only two transitions ending up in a state. Lecture 11
229
State diagram – cont’d 10 1 01 11 00 Lecture 11 0/00 1/11 0/11 1/00
output Next state input Current state 10 1 01 11 00 0/00 Output (Branch word) Input 00 1/11 0/11 1/00 10 01 0/10 1/01 11 0/01 1/10 Lecture 11
230
Trellis – cont’d The Trellis diagram is an extension of the state diagram that shows the passage of time. Example of a section of trellis for the rate ½ code State 0/00 0/11 0/10 0/01 1/11 1/01 1/00 1/10 Time Lecture 11
231
Trellis –cont’d A trellis diagram for the example code Lecture 11 0/00
0/11 0/10 0/01 1/11 1/01 1/00 0/00 1 11 10 00 Input bits Output bits Tail bits Lecture 11
232
Trellis – cont’d Lecture 11 0/00 0/00 0/11 0/10 0/01 1/11 1/01 1/00
Input bits Tail bits 1 Output bits 11 10 00 0/00 0/00 0/11 0/10 0/01 1/11 1/01 1/00 0/00 0/00 0/00 1/11 1/11 0/11 0/11 0/10 0/10 1/01 0/01 Lecture 11
233
Optimum decoding If the input sequence messages are equally likely, the optimum decoder which minimizes the probability of error is the Maximum likelihood decoder. The ML decoder selects a codeword among all the possible codewords which maximizes the likelihood function p(Z | U^(m)), where Z is the received sequence and U^(m) is one of the possible codewords: 2^L codewords to search!!! ML decoding rule: Lecture 11
234
ML decoding for memory-less channels
Due to the independent channel statistics for memoryless channels, the likelihood function becomes and equivalently, the log-likelihood function becomes The path metric up to time index , is called the partial path metric. Path metric Branch metric Bit metric ML decoding rule: Choose the path with maximum metric among all the paths in the trellis. This path is the “closest” path to the transmitted sequence. Lecture 11
235
Binary symmetric channels (BSC)
If is the Hamming distance between Z and U, then 1 1 p Modulator input Demodulator output p 1-p Size of coded sequence ML decoding rule: Choose the path with minimum Hamming distance from the received sequence. Lecture 11
236
Inner product or correlation
AWGN channels For BPSK modulation the transmitted sequence corresponding to the codeword is denoted by where and and The log-likelihood function becomes Maximizing the correlation is equivalent to minimizing the Euclidean distance. Inner product or correlation between Z and S ML decoding rule: Choose the path with minimum Euclidean distance to the received sequence. Lecture 11
237
Soft and hard decisions
In hard decision: The demodulator makes a firm or hard decision whether a one or a zero was transmitted and provides no other information for the decoder such as how reliable the decision is. Hence, its output is only zero or one (the output is quantized to only two levels); these are called "hard-bits". Decoding based on hard-bits is called "hard-decision decoding". Lecture 11
238
Soft and hard decision-cont’d
In soft decision: The demodulator provides the decoder with some side information together with the decision. The side information provides the decoder with a measure of confidence for the decision. The demodulator outputs, which are called soft-bits, are quantized to more than two levels. Decoding based on soft-bits is called "soft-decision decoding". On AWGN channels a gain of about 2 dB, and on fading channels about 6 dB, is obtained by using soft-decision instead of hard-decision decoding. Lecture 11
239
The Viterbi algorithm The Viterbi algorithm performs Maximum likelihood decoding. It finds a path through the trellis with the largest metric (maximum correlation or minimum distance). It processes the demodulator outputs in an iterative manner. At each step in the trellis, it compares the metric of all paths entering each state, and keeps only the path with the best metric (largest correlation or smallest distance), called the survivor, together with its metric. It proceeds in the trellis by eliminating the least likely paths. It reduces the decoding complexity from 2^L codeword comparisons to the order of 2^(K−1) metric computations per trellis stage! Lecture 11
240
The Viterbi algorithm - cont’d
Do the following set up: For a data block of L bits, form the trellis. The trellis has L+K−1 sections or levels; it starts at time t1 and ends at time t(L+K). Label all the branches in the trellis with their corresponding branch metric. For each state j in the trellis at time ti, define the parameter Γ(j, ti), the partial path metric of the survivor ending in that state. Then, do the following: Lecture 11
241
The Viterbi algorithm - cont’d
Set Γ(0, t1) = 0 and i = 2. At time ti, compute the partial path metrics for all the paths entering each state. Set Γ(j, ti) equal to the best partial path metric entering state j at time ti. Keep the survivor path and delete the dead paths from the trellis. If i < L + K, increase i by 1 and return to step 2. Start at state zero at time t(L+K) and follow the surviving branches backwards through the trellis. The path found is unique and corresponds to the ML codeword. Lecture 11
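A sketch of hard-decision Viterbi decoding for the rate-1/2, K = 3 encoder sketched earlier (Python, same assumed generators g1 = 111, g2 = 101); Hamming distance is used as the branch metric, so the survivor into each state is the path with the smallest partial metric.

```python
def viterbi_decode(received, K=3, generators=((1, 1, 1), (1, 0, 1))):
    """Hard-decision Viterbi decoding; 'received' is a list of 2-bit branch words."""
    n_states = 2 ** (K - 1)

    def branch(state, bit):
        # Encoder transition: output branch word and next state for (current state, input bit).
        window = [bit] + [(state >> j) & 1 for j in range(K - 1)]
        out = [sum(g[j] * window[j] for j in range(K)) % 2 for g in generators]
        next_state = bit | ((state << 1) & (n_states - 1))
        return next_state, out

    inf = float("inf")
    path_metric = [0] + [inf] * (n_states - 1)        # encoder starts in the all-zero state
    paths = [[] for _ in range(n_states)]
    for z in received:
        new_metric = [inf] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if path_metric[s] == inf:
                continue
            for bit in (0, 1):
                ns, out = branch(s, bit)
                metric = path_metric[s] + sum(a != b for a, b in zip(out, z))  # Hamming branch metric
                if metric < new_metric[ns]:                                    # keep only the survivor
                    new_metric[ns] = metric
                    new_paths[ns] = paths[s] + [bit]
        path_metric, paths = new_metric, new_paths
    decoded = paths[0]                 # the zero tail forces the encoder back to state 0
    return decoded[:-(K - 1)]          # drop the K-1 tail bits

rx = [[1, 1], [1, 0], [0, 1], [1, 0], [1, 1]]   # encoder output for 101 (+ tail) with one bit flipped
print(viterbi_decode(rx))                        # -> [1, 0, 1]
```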
242
Example of Hard decision Viterbi decoding
[Trellis diagram for the example code; branches are labeled input/output bits: 0/00, 1/11, 0/11, 1/00, 0/10, 1/01, 0/01.] Lecture 11
243
Example of Hard decision Viterbi decoding-cont’d
Label all the branches with the branch metric (Hamming distance between the branch output and the received bits). [Trellis diagram with branch metrics.] Lecture 11
244
Example of Hard decision Viterbi decoding-cont’d
[Trellis diagram: partial path metrics computed for this step.] Lecture 11
245
Example of Hard decision Viterbi decoding-cont’d
[Trellis diagram: partial path metrics updated at the next step.] Lecture 11
246
Example of Hard decision Viterbi decoding-cont’d
[Trellis diagram: partial path metrics updated at the next step.] Lecture 11
247
Example of Hard decision Viterbi decoding-cont’d
[Trellis diagram: partial path metrics updated at the next step.] Lecture 11
248
Example of Hard decision Viterbi decoding-cont’d
[Trellis diagram: partial path metrics after the final step, with survivors kept at each state.] Lecture 11
249
Example of Hard decision Viterbi decoding-cont’d
Trace back along the final survivor path to obtain the ML codeword and the corresponding decoded data bits. [Trellis diagram with the winning path traced back from the zero state.] Lecture 11
250
Example of soft-decision Viterbi decoding
[Trellis diagram for soft-decision decoding: branch metrics are correlation values in multiples of 1/3 (e.g. ±1/3, ±5/3), and partial path metrics are accumulated at each state.] Lecture 11
251
Trellis of an example ½ Conv. code
[Trellis diagram for the example rate-1/2 convolutional code: input bits followed by tail bits, with branches labeled input/output bits such as 0/00, 1/11, 0/10, 1/01, 0/11, 0/01, 1/00.] Lecture 12
252
Block diagram of the DCS
Information source → Rate 1/n Conv. encoder → Modulator → Channel → Demodulator → Conv. decoder → Information sink. Lecture 12
253
Soft and hard decision decoding
In hard decision: The demodulator makes a firm or hard decision on whether a one or a zero was transmitted and provides no other information to the decoder, such as how reliable the decision is. In soft decision: The demodulator provides the decoder with some side information together with the decision. The side information gives the decoder a measure of confidence in the decision. Lecture 12
254
Soft and hard decision decoding …
ML soft-decision decoding rule: Choose the path in the trellis with minimum Euclidean distance from the received sequence. ML hard-decision decoding rule: Choose the path in the trellis with minimum Hamming distance from the received sequence. Lecture 12
255
The Viterbi algorithm The Viterbi algorithm performs maximum likelihood decoding. It finds the path through the trellis with the best metric (maximum correlation or minimum distance). At each step in the trellis, it compares the partial metrics of all paths entering each state and keeps only the path with the best metric, called the survivor, together with its metric. Lecture 12
256
Example of hard-decision Viterbi decoding
[Trellis diagram: hard-decision branch metrics (Hamming distances) and partial path metrics.] Lecture 12
257
Example of soft-decision Viterbi decoding
[Trellis diagram for soft-decision decoding: branch metrics are correlation values in multiples of 1/3 (e.g. ±1/3, ±5/3), and partial path metrics are accumulated at each state.] Lecture 12
258
Today, we are going to talk about:
The properties of Convolutional codes: Free distance Transfer function Systematic Conv. codes Catastrophic Conv. codes Error performance Interleaving Concatenated codes Error correction scheme in Compact disc Lecture 12
259
Free distance of Convolutional codes
Distance properties: Since a convolutional encoder generates codewords of various lengths (as opposed to block codes), the following approach is used to find the minimum distance between all pairs of codewords: Since the code is linear, the minimum distance of the code is the minimum distance between each of the codewords and the all-zero codeword. This is the minimum distance in the set of all arbitrarily long paths along the trellis that diverge from and re-merge with the all-zero path. It is called the minimum free distance, or the free distance of the code, denoted by d_f. Lecture 12
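The free distance can also be found mechanically as a shortest-path search for the lightest path that leaves and re-joins the all-zero state. The sketch below assumes the rate-1/2, K = 3 code with generators (7, 5) octal, which is an illustrative choice and not necessarily the code on the slides.

```python
# Dijkstra-style search for the free distance of a rate-1/2 feed-forward
# convolutional code: the minimum-weight path that diverges from and
# re-merges with the all-zero path.
import heapq

def step(state, u, K=3, g=(0b111, 0b101)):
    reg = (u << (K - 1)) | state
    weight = sum(bin(reg & gi).count("1") & 1 for gi in g)  # branch Hamming weight
    return reg >> 1, weight

def free_distance(K=3, g=(0b111, 0b101)):
    start, w0 = step(0, 1, K, g)          # the first branch must diverge (input 1)
    best = {start: w0}
    heap = [(w0, start)]
    while heap:
        w, s = heapq.heappop(heap)
        if s == 0:                         # re-merged with the all-zero path
            return w
        if w > best.get(s, float("inf")):
            continue
        for u in (0, 1):
            ns, bw = step(s, u, K, g)
            if w + bw < best.get(ns, float("inf")):
                best[ns] = w + bw
                heapq.heappush(heap, (w + bw, ns))
    return None

print(free_distance())   # prints 5 for the (7, 5) code
```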
260
Free distance …
[Trellis diagram: the path that diverges from and re-merges with the all-zero path with minimum Hamming weight; each branch is labeled with its Hamming weight, and the all-zero path runs along the top.] Lecture 12
261
Transfer function of Convolutional codes
The transfer function (generating function) of a convolutional code is a tool that provides information about the weight distribution of the codewords. The weight distribution specifies the weights of the different paths in the trellis (codewords), together with their lengths and the corresponding amounts of input (data) weight. Lecture 12
262
Transfer function … Example of the transfer function for the rate ½ convolutional code. Redraw the state diagram such that the zero state is split into two nodes, the starting and ending nodes. Label each branch with the corresponding dummy variables D^i L N^j, where i is the Hamming weight of the branch output and j is the weight of the branch input; L marks the branch length. States: a = 00 (start), b = 10, c = 01, d = 11, e = 00 (end). Lecture 12
263
Transfer function … Write the state equations (X_a, X_b, X_c, X_d, X_e are dummy variables).
Solve for T(D, L, N) = X_e / X_a. The expansion shows: one path with weight 5, length 3 and data weight 1; one path with weight 6, length 4 and data weight 2; one path with weight 6, length 5 and data weight 2; and so on. Lecture 12
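For reference, assuming the example is the standard rate-1/2, K = 3 code with generators (7, 5) octal (an assumption that is consistent with the listed paths), the state equations and their solution can be written as:

```latex
\begin{align*}
X_b &= D^2 L N\,X_a + L N\,X_c \\
X_c &= D L\,X_b + D L\,X_d \\
X_d &= D L N\,X_b + D L N\,X_d \\
X_e &= D^2 L\,X_c \\
T(D,L,N) &= \frac{X_e}{X_a}
          = \frac{D^5 L^3 N}{1 - D L N (1 + L)}
          = D^5 L^3 N + D^6 L^4 N^2 + D^6 L^5 N^2 + \cdots
\end{align*}
```

The lowest-order term confirms a free distance of 5 for this code.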
264
Systematic Convolutional codes
A convolutional encoder at rate k/n is systematic if the k input bits appear as part of the n-bit branch word. Systematic codes in general have smaller free distance than non-systematic codes. [Figure: encoder whose input bit is passed directly to one of the outputs.] Lecture 12
265
Catastrophic Convolutional codes
Catastrophic error propagation in conv. codes: a finite number of errors in the coded bits causes an infinite number of errors in the decoded data bits. A convolutional code is catastrophic if there is a closed loop in the state diagram, other than the self-loop at the all-zero state, with zero output weight. Systematic codes are not catastrophic: at least one output bit per branch is generated directly by the input bits. Only a small fraction of non-systematic codes are catastrophic. Lecture 12
266
Catastrophic Conv. … Example of a catastrophic Conv. code: Lecture 12
Assume the all-zero codeword is transmitted. Three errors happen on the coded bits such that the decoder takes the wrong path a-b-d-d-…-d-c-e. This path has only 6 ones, no matter how many times it stays in the loop at node d, so it results in many erroneous decoded data bits. [Encoder and state diagram: states a = 00, b = 10, c = 01, d = 11, e = 00.] Lecture 12
267
Performance bounds for Conv. codes
Error performance of convolutional codes is analyzed based on the average bit error probability (not the average codeword error probability), because codewords have variable lengths due to the different input sizes, and for large blocks the codeword error probability may converge to one while the bit error probability may remain constant. Lecture 12
268
Performance bounds … Analysis is based on:
Assuming the all-zero codeword is transmitted, and evaluating the probability of an "error event" (usually using bounds such as the union bound). An "error event" occurs at a time instant in the trellis if a non-zero path leaves the all-zero path and re-merges with it at a later time. Lecture 12
269
Performance bounds … Bounds on the bit error probability for memoryless channels: Hard-decision decoding: P_B ≤ dT(D, N)/dN evaluated at N = 1 and D = 2√(p(1−p)), where p is the channel transition probability. Soft-decision decoding on AWGN channels using BPSK: the same bound evaluated at N = 1 and D = exp(−E_c/N_0), where E_c = r·E_b is the energy per coded bit. Lecture 12
270
Performance bounds … The error correction capability of convolutional codes, given roughly by t = ⌊(d_f − 1)/2⌋, depends on whether the decoding is performed over a long enough span (a decoding depth of about 3 to 5 times the constraint length) and on how the errors are distributed (bursty or random). For a given code rate, increasing the constraint length usually increases the free distance. For a given constraint length, decreasing the coding rate usually increases the free distance. The coding gain is upper bounded by 10·log10(r·d_f) dB. Lecture 12
271
Performance bounds … Basic coding gain (dB) for soft-decision Viterbi decoding Lecture 12
272
Interleaving Convolutional codes are suitable for memoryless channels with random error events. Some errors have a bursty nature: statistical dependence among successive error events (time correlation) due to the channel memory, such as errors in multipath fading channels in wireless communications or errors due to switching noise. "Interleaving" makes the channel look like a memoryless channel at the decoder. Lecture 12
273
Interleaving … Interleaving is achieved by spreading the coded symbols in time before transmission. The reverse is done at the receiver by deinterleaving the received sequence. "Interleaving" makes bursty errors look random, so conv. codes can be used. Types of interleaving: block interleaving, and convolutional or cross interleaving. Lecture 12
274
Interleaving … Consider a code with error-correction capability t = 1 and a block length of 3 coded bits.
A burst error of length 3 cannot be corrected by this code alone, since it puts more than one error into a single codeword. Using a 3×3 block interleaver, the codewords A1 A2 A3, B1 B2 B3, C1 C2 C3 are written in row by row and transmitted column by column as A1 B1 C1 A2 B2 C2 A3 B3 C3. After the channel burst and deinterleaving, each codeword sees at most 1 error, which the code can correct. Lecture 12
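A minimal sketch of the 3×3 block interleaver described above (write row by row, read column by column); the symbol names follow the slide's A/B/C codewords.

```python
# 3x3 block interleaver/deinterleaver: a channel burst of 3 consecutive symbols
# lands in different rows, so each codeword sees at most one error.
def interleave(symbols, rows=3, cols=3):
    # write row-wise, read column-wise
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows=3, cols=3):
    return [symbols[c * rows + r] for r in range(rows) for c in range(cols)]

tx = ["A1", "A2", "A3", "B1", "B2", "B3", "C1", "C2", "C3"]
on_channel = interleave(tx)               # A1 B1 C1 A2 B2 C2 A3 B3 C3
assert deinterleave(on_channel) == tx     # receiver restores the original order
```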
275
Concatenated codes A concatenated code uses two levels of coding: an inner code and an outer code (of higher rate). A popular concatenation uses a convolutional code with Viterbi decoding as the inner code and a Reed-Solomon code as the outer code. The purpose is to reduce the overall complexity while achieving the required error performance. Input data → Outer encoder → Interleaver → Inner encoder → Modulator → Channel → Demodulator → Inner decoder → Deinterleaver → Outer decoder → Output data. Lecture 12
276
Practical example: Compact disc
The channel in a CD playback system consists of a transmitting laser, a recorded disc and a photodetector. Sources of errors are manufacturing defects, fingerprints or scratches, so the errors are bursty in nature. Error correction and concealment is achieved by using a concatenated error control scheme called the cross-interleave Reed-Solomon code (CIRC). "Without error correcting codes, digital audio would not be technically feasible." Lecture 12
277
Compact disc – cont'd CIRC encoder and decoder:
[Block diagram of the CIRC encoder (encode and interleave stages) and decoder (deinterleave and decode stages).] Lecture 12
278
Goals in designing a DCS
Maximizing the transmission bit rate; minimizing the probability of bit error; minimizing the required power; minimizing the required system bandwidth; maximizing system utilization; minimizing system complexity. Lecture 13
279
Error probability plane (example for coherent MPSK and MFSK)
[Error probability plane: bit error probability versus Eb/N0 for coherent MPSK (bandwidth-efficient, k = 1…5) and coherent MFSK (power-efficient, k = 1…5).] Lecture 13
280
Limitations in designing a DCS
The Nyquist theoretical minimum bandwidth requirement The Shannon-Hartley capacity theorem (and the Shannon limit) Government regulations Technological limitations Other system requirements (e.g., satellite orbits) Lecture 13
281
Nyquist minimum bandwidth requirement
The theoretical minimum bandwidth needed for baseband transmission of Rs symbols per second is Rs/2 hertz. Lecture 13
282
Shannon limit Channel capacity: the maximum data rate at which error-free communication over the channel is possible. Channel capacity of the AWGN channel (Shannon-Hartley capacity theorem): C = W log2(1 + S/N) bits/s, where W is the bandwidth and S/N the signal-to-noise ratio. Lecture 13
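A quick numerical illustration of the Shannon-Hartley formula; the bandwidth and SNR values below are made up for the example, not taken from the slides.

```python
# Shannon-Hartley capacity C = W * log2(1 + S/N).
import math

W = 1e6                         # bandwidth in Hz (illustrative)
snr_db = 15.0                   # signal-to-noise ratio in dB (illustrative)
snr = 10 ** (snr_db / 10)       # linear SNR
C = W * math.log2(1 + snr)      # capacity in bits/s
print(f"C = {C / 1e6:.2f} Mbit/s")   # about 5.03 Mbit/s for these values
```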
283
Shannon limit … The Shannon theorem puts a limit on the transmission data rate, not on the error probability: it is theoretically possible to transmit information at any rate R_b ≤ C with an arbitrarily small error probability by using a sufficiently complicated coding scheme. For an information rate R_b > C, it is not possible to find a code that can achieve an arbitrarily small error probability. Lecture 13
284
Shannon limit …
[Plot of normalized capacity C/W [bits/s/Hz] versus SNR, showing the unattainable region above the capacity curve and the practical region below it.] Lecture 13
285
Shannon limit … Lecture 13
There exists a limiting value of Eb/N0 (the Shannon limit, about −1.6 dB) below which there can be no error-free communication at any information rate. By increasing the bandwidth alone, the capacity cannot be increased to any desired value. Lecture 13
286
Shannon limit …
[Plot of W/C [Hz/bits/s] versus Eb/N0: the Shannon limit at −1.6 dB separates the unattainable region from the practical region.] Lecture 13
287
Bandwidth efficiency plane
[Bandwidth-efficiency plane: R/W [bits/s/Hz] versus Eb/N0. The R = C curve (Shannon limit) separates the unattainable region (R > C) from the practical region (R < C); MPSK and MQAM points (M = 2, 4, 8, 16, 64, 256) lie toward the bandwidth-limited region, while MFSK points (M = 2, 4, 8, 16) lie toward the power-limited region.] Lecture 13
288
Power and bandwidth limited systems
Two major communication resources: Transmit power and channel bandwidth In many communication systems, one of these resources is more precious than the other. Hence, systems can be classified as: Power-limited systems: save power at the expense of bandwidth (for example by using coding schemes) Bandwidth-limited systems: save bandwidth at the expense of power (for example by using spectrally efficient modulation schemes) Lecture 13
289
M-ary signaling Bandwidth efficiency: R/W [bits/s/Hz].
Assuming Nyquist (ideal rectangular) filtering at baseband, the required passband bandwidth for M-PSK and M-QAM (bandwidth-limited systems) is W = Rs = R/log2 M, so the bandwidth efficiency R/W = log2 M increases as M increases. For MFSK (power-limited systems) the required bandwidth is approximately W = M·Rs = M·R/log2 M, so the bandwidth efficiency R/W = (log2 M)/M decreases as M increases. Lecture 13
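The opposite trends can be tabulated directly from these approximations (W ≈ R/log2 M for MPSK/MQAM and W ≈ M·R/log2 M for noncoherent orthogonal MFSK; the MFSK expression is the usual textbook approximation and is an assumption here).

```python
# Bandwidth efficiency R/W for M-ary schemes under ideal Nyquist filtering.
import math

for M in (2, 4, 8, 16, 64):
    k = math.log2(M)
    # MPSK/MQAM: R/W = log2(M) grows with M; MFSK: R/W = log2(M)/M shrinks with M.
    print(f"M={M:3d}  MPSK/MQAM R/W = {k:4.1f}   MFSK R/W = {k / M:5.3f}")
```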
290
Design example of uncoded systems
Design goals: The bit error probability at the demodulator output must meet the system error requirement. The transmission bandwidth must not exceed the available channel bandwidth. [Block diagram: input → M-ary modulator → channel → M-ary demodulator → output.] Lecture 13
291
Design example of uncoded systems …
Choose a modulation scheme that meets the following system requirements: Lecture 13
292
Design example of uncoded systems …
Choose a modulation scheme that meets the following system requirements: Lecture 13
293
Design example of coded systems
Design goals: The bit error probability at the decoder output must meet the system error requirement. The rate of the code must not expand the required transmission bandwidth beyond the available channel bandwidth. The code should be as simple as possible; generally, the shorter the code, the simpler its implementation. [Block diagram: input → encoder → M-ary modulator → channel → M-ary demodulator → decoder → output.] Lecture 13
294
Design example of coded systems …
Choose a modulation/coding scheme that meets the following system requirements: The requirements are similar to the bandwidth-limited uncoded system, except that the target bit error probability is much lower. Lecture 13
295
Design example of coded systems
Using 8-PSK satisfies the bandwidth constraint, but not the bit error probability constraint: much higher power would be required for uncoded 8-PSK. The solution is to use channel coding (block codes or convolutional codes) to save power at the expense of bandwidth while meeting the target bit error probability. Lecture 13
296
Design example of coded systems
For simplicity, we use BCH codes. The required coding gain is the difference (in dB) between the Eb/N0 needed by uncoded 8-PSK and the Eb/N0 available under the requirements. The maximum allowed bandwidth expansion due to coding is 25%: the bandwidth of uncoded 8-PSK can still be expanded by 25% and remain below the channel bandwidth. Among the BCH codes, we choose the one which provides the required coding gain and bandwidth expansion with the minimum amount of redundancy. Lecture 13
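As a quick check of the bandwidth-expansion constraint, a block code expands the symbol rate by n/k, and the (63, 51) BCH code chosen later stays under the 25% head-room stated above.

```python
# Bandwidth expansion of an (n, k) block code is n/k.
n, k = 63, 51
expansion = n / k
print(f"bandwidth expansion = {expansion:.3f}  ({(expansion - 1) * 100:.1f} %)")
# prints about 1.235, i.e. a 23.5 % expansion, below the 25 % limit.
```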
297
Design example of coded systems …
Bandwidth compatible BCH codes Coding gain in dB with MPSK Lecture 13
298
Design example of coded systems …
Check that the combination of 8-PSK and the (63,51) BCH code meets the requirements: Lecture 13
299
Effects of error-correcting codes on error performance
Error-correcting codes at fixed SNR influence the error performance in two ways. Improving effect: the larger the redundancy, the greater the error-correction capability. Degrading effect: for real-time applications, the energy per channel symbol (coded bit) is reduced because of the faster signaling. The degrading effect vanishes for non-real-time applications where delay is tolerable, since the channel symbol energy need not be reduced. Lecture 13
300
Bandwidth efficient modulation schemes
Offset QPSK (OQPSK) and minimum shift keying (MSK): bandwidth-efficient, constant-envelope modulations, suitable for non-linear amplifiers. M-QAM: a bandwidth-efficient modulation. Trellis coded modulation (TCM): a bandwidth-efficient modulation which improves performance without bandwidth expansion. Lecture 13
301
Course summary In the big picture, we studied:
Fundamental issues in designing a digital communication system (DCS); basic techniques: formatting, coding, modulation; design goals: probability of error and delay constraints; trade-offs between parameters: bandwidth- and power-limited systems, trading power for bandwidth and vice versa. Lecture 13
302
Block diagram of a DCS
Format → Source encode → Channel encode → Pulse modulate → Bandpass modulate (digital modulation) → Channel → Demodulate and sample → Detect (digital demodulation) → Channel decode → Source decode → Format. Lecture 13
303
Course summary – cont’d
In detail, we studied: Basic definitions and concepts Signal classification and linear systems Random processes and their statistics WSS, cyclostationary and ergodic processes Autocorrelation and power spectral density Power and energy spectral density Noise in communication systems (AWGN) Bandwidth of signals Formatting Continuous sources Nyquist sampling theorem and aliasing Uniform and non-uniform quantization Lecture 13
304
Course summary – cont’d
Channel coding Linear block codes (cyclic codes and Hamming codes) Encoding and decoding structure Generator and parity-check matrices (or polynomials), syndrome, standard array Code properties: linearity of the code, Hamming distance, minimum distance, error-correction capability, coding gain, bandwidth expansion due to redundant bits, systematic codes Lecture 13
305
Course summary – cont’d
Convolutional codes Encoder and decoder structure Encoder as a finite state machine, state diagram, trellis, transfer function Minimum free distance, catastrophic codes, systematic codes Maximum likelihood decoding: Viterbi decoding algorithm with soft and hard decisions Coding gain, Hamming distance, Euclidean distance, effects of free distance, code rate and encoder memory on the performance (probability of error and bandwidth) Lecture 13
306
Course summary – cont’d
Modulation Baseband modulation Signal space, Euclidean distance Orthogonal basis functions Matched filter to maximize SNR Equalization to reduce channel-induced ISI Pulse shaping to reduce ISI due to filtering at the transmitter and receiver Minimum Nyquist bandwidth, ideal Nyquist pulse shapes, raised cosine pulse shape Lecture 13
307
Course summary – cont’d
Baseband detection Structure of the optimum receiver Optimum detection (MAP) Maximum likelihood detection for equally likely symbols Average bit error probability Union bound on the error probability Upper bound on the error probability based on the minimum distance Lecture 13
308
Course summary – cont’d
Passband modulation Modulation schemes One dimensional waveforms (ASK, M-PAM) Two dimensional waveforms (M-PSK, M-QAM) Multidimensional waveforms (M-FSK) Coherent and non-coherent detection Average symbol and bit error probabilities Average symbol energy, symbol rate, bandwidth Comparison of modulation schemes in terms of error performance and bandwidth occupation (power and bandwidth) Lecture 13
309
Course summary – cont’d
Trade-off between modulation and coding Channel models Discrete inputs, discrete outputs Memoryless channels : BSC Channels with memory Discrete input, continuous output AWGN channels Shannon limits for information transmission rate Comparison between different modulation and coding schemes Probability of error, required bandwidth, delay Trade-offs between power and bandwidth Uncoded and coded systems Lecture 13
310
Information about the exam:
Exam date: 8th of March 2008 (Saturday). Allowed material: any calculator (no computers), mathematics handbook, Swedish-English dictionary. A list of formulae will be available with the exam. Lecture 13