Lecture 12: Equalization


1 Lecture 12: Equalization

2 Intersymbol Interference
With any practical channel, the inevitable filtering effect causes a spreading (or smearing) of individual data symbols as they pass through the channel.

3 For consecutive symbols this spreading causes part of the symbol energy to overlap with neighbouring symbols, causing intersymbol interference (ISI).

4 ISI can significantly degrade the ability of the data detector to distinguish the current symbol from the diffused energy of the adjacent symbols. Even with no noise present in the channel, this leads to detection errors known as the irreducible error rate; in the presence of noise it further degrades the bit and symbol error rate performance.

5 Pulse Shape for Zero ISI
Careful choice of the overall channel characteristic makes it possible to control the intersymbol interference so that it does not degrade the bit error rate performance of the link. This is achieved by ensuring that the overall channel filter transfer function has what is termed a Nyquist frequency response.

6 Nyquist Channel Response
The transfer function has a transition band between passband and stopband that is symmetrical about a frequency equal to 0.5 × 1/Ts (half the symbol rate).

7 For such a channel the data signals are still smeared, but the waveform passes through zero at multiples of the symbol period.

8 If we sample the symbol stream at the precise points where the ISI passes through zero, spread energy from adjacent symbols will not affect the value of the current symbol. This demands accurate sample-point timing, a major challenge in modem / data receiver design. Inaccuracy in symbol timing is referred to as timing jitter.
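The zero-ISI sampling property described above can be checked numerically. A minimal sketch in Python, using the ideal sinc pulse as the simplest Nyquist pulse (the pulse shape and the normalised symbol period Ts = 1 are illustrative assumptions, not taken from the lecture):

```python
import math

Ts = 1.0  # symbol period (normalised, assumed for illustration)

def nyquist_pulse(t, Ts=1.0):
    """Ideal sinc pulse: the simplest pulse satisfying the Nyquist zero-ISI criterion."""
    if t == 0.0:
        return 1.0
    x = math.pi * t / Ts
    return math.sin(x) / x

# Sampling at the symbol instants t = k*Ts: the pulse is 1 at k = 0 and
# passes through zero at every other multiple of the symbol period,
# so adjacent symbols contribute no ISI at those instants.
samples = [nyquist_pulse(k * Ts) for k in range(-3, 4)]
```

Any timing offset away from these exact zero crossings (timing jitter) re-introduces ISI, which is why sample-point timing is so critical.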

9 Achieving the Nyquist Channel
It is very unlikely that a communications channel will inherently exhibit a Nyquist transfer response. Modern systems therefore use adaptive channel equalisers to flatten the channel transfer function; adaptive equalisers rely on a training sequence.

10 Eye Diagrams
A visual method of diagnosing problems with data systems. The eye diagram is generated using a conventional oscilloscope connected to the demodulated, filtered symbol stream; the oscilloscope is re-triggered at every symbol period (or a multiple of symbol periods) using a timing recovery signal.
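The re-triggering procedure above can be sketched in code: the filtered waveform is cut into one-symbol-long segments, which an oscilloscope would overlay to form the eye. A minimal sketch, with an assumed random bipolar symbol stream and a simple moving-average filter standing in for the channel filtering:

```python
import random

random.seed(0)
sps = 8                                        # samples per symbol (assumed)
bits = [random.choice([-1, 1]) for _ in range(50)]

# Upsample and smooth with a short moving average to mimic channel filtering.
wave = [b for b in bits for _ in range(sps)]
smooth = [sum(wave[max(0, n - 3):n + 1]) / len(wave[max(0, n - 3):n + 1])
          for n in range(len(wave))]

# An eye diagram overlays the waveform re-triggered every symbol period:
# each trace is one symbol-period-long slice of the filtered stream.
traces = [smooth[i:i + sps] for i in range(0, len(smooth) - sps, sps)]
```

Plotting all `traces` on the same axes would produce the familiar eye opening; distortions narrow or close the eye.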

11 [Figure]

12 Example eye diagrams for different distortions; each has a distinctive effect on the appearance of the ‘eye opening’:

13 Example of a complex ‘eye’ for M-ary signalling:

14 Raised Cosine Filtering
A commonly used realisation of a Nyquist filter. The transition band (the zone between passband and stopband) is shaped like a cosine wave.

15 The sharpness of the filter is controlled by the parameter β, the filter roll-off factor. When β = 0 the filter conforms to the ideal brick-wall filter. The bandwidth B occupied by a raised cosine filtered data signal is thus increased from its minimum value, Bmin = 0.5 × 1/Ts, to the actual bandwidth B = Bmin (1 + β).
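The bandwidth relation B = Bmin(1 + β) is easy to evaluate. A small helper (the symbol rate and β values below are illustrative, not from the lecture):

```python
def rc_bandwidth(symbol_rate, beta):
    """Bandwidth occupied by a raised-cosine filtered data signal.

    Bmin = 0.5 * symbol_rate is the Nyquist minimum (brick-wall, beta = 0);
    the roll-off factor beta in [0, 1] widens this to Bmin * (1 + beta).
    """
    b_min = 0.5 * symbol_rate
    return b_min * (1.0 + beta)

# e.g. an assumed 9600-symbol/s stream with beta = 0.5:
bw = rc_bandwidth(9600, 0.5)
```

With β = 0.5 the occupied bandwidth is 1.5 times the Nyquist minimum, illustrating the bandwidth-efficiency cost of a gentler roll-off.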

16 Impulse Response of Filter
The impulse response of the raised cosine filter is a measure of its spreading effect. The amount of ‘ringing’ depends on the choice of β: the smaller the value of β (the nearer to a brick-wall filter), the more pronounced the ringing.

17 Choice of Filter Roll-off β
Benefits of small β: maximum bandwidth efficiency is achieved. Benefits of large β: a simpler filter with fewer stages (hence easier to implement), less signal overshoot, and less sensitivity to symbol timing accuracy.

18 Symbol Timing Recovery
Most symbol timing recovery systems obtain their timing information from the incoming message data itself, using ‘zero crossing’ information in the baseband signal.

19 Three kinds of systems
Narrowband system: flat fading channel, single-tap channel model. Notation: Tm = delay spread of the multipath channel, T = bit or symbol duration; for a narrowband system the delay spread Tm is small compared with T.

20 No intersymbol interference (ISI)
Narrowband system: adjacent symbols (bits) do not affect the decision process; in other words, there is no intersymbol interference. Received replicas of the same symbol overlap in the multipath channel, but there is no intersymbol interference at the decision time instant. However, fading (destructive interference) is still possible.

21 Decision circuit
In the binary case, the decision circuit compares the received signal with a decision threshold at specific decision time instants (usually somewhere in the middle of each bit period), turning noisy and distorted symbols into ‘clean’ symbols.

22 Three kinds of systems, Cont.
Wideband system (TDM, TDMA): frequency-selective fading channel, transversal filter channel model; good performance is possible through adaptive equalization. Intersymbol interference causes signal distortion. (Tm = delay spread of the multipath channel, T = bit or symbol duration.)

23 Receiver structure
The intersymbol interference of received symbols (bits) must be removed before decision making (illustrated below for a binary signal, where symbol = bit): symbols with ISI pass through an adaptive equalizer, yielding symbols with ISI removed; the decision circuit (with its decision threshold and decision time instant) then produces ‘clean’ symbols.

24 Three kinds of systems: BER performance
[Figure: BER versus S/N for an AWGN channel (no fading), a flat fading channel, a frequency-selective channel with equalization, and a frequency-selective channel without equalization; the last exhibits a ‘BER floor’.]

25 Three kinds of systems, Cont.
Wideband system (DS-CDMA): frequency-selective fading channel, transversal filter channel model; good performance is possible through use of a Rake receiver (this lecture). (Tm = delay spread of the multipath channel, Tc = chip duration, T = bit or symbol duration.)

26 Three basic equalization methods
1) Linear equalization (LE): performance is not very good when the frequency response of the frequency-selective channel contains deep fades. The zero-forcing algorithm aims to eliminate the intersymbol interference (ISI) at decision time instants (i.e. at the centre of the bit/symbol interval). The least-mean-square (LMS) algorithm will be investigated in greater detail in this presentation. The recursive least-squares (RLS) algorithm offers faster convergence, but is computationally more complex than LMS (since matrix inversion is required).

27 Three basic equalization methods
2) Decision feedback equalization (DFE): performance is better than LE, owing to cancellation of the ISI tails of previously received symbols. [Figure: DFE structure in which the input passes through a feed-forward filter (FFF), the symbol decision output feeds back through a feed-back filter (FBF), and the coefficients of both filters are adjusted adaptively.]

28 Three basic equalization methods, Cont.
3) Maximum likelihood sequence estimation using the Viterbi algorithm (MLSE-VA): best performance. Operation of the Viterbi algorithm can be visualized by means of a state trellis diagram with m^(K-1) states, where m is the symbol alphabet size and K is the length of the overall channel impulse response (in samples); allowed transitions between states occur at sample time instants.

29 Linear equalization, zero-forcing algorithm
Basic idea: the cascade of the transmitted symbol spectrum, the channel frequency response (including the transmit and receive filters), and the equalizer frequency response is forced to equal a raised cosine spectrum (fs = 1/T).

30 Zero-forcing equalizer
The transmitted impulse sequence passes through the communication channel (channel impulse response: an FIR filter with 2N+1 coefficients) and then through the equalizer (equalizer impulse response: an FIR filter with 2M+1 coefficients) before reaching the decision circuit. The overall channel is the convolution of the two responses; in fact the equivalent FIR filter consists of 2M+1+2N coefficients, but the equalizer can only “handle” 2M+1 equations.

31 Zero-forcing equalizer
We want the overall filter response (the convolution of the channel and equalizer impulse responses) to be unity at decision time k = 0 and zero at all other sampling times k ≠ 0. This leads to a set of 2M+1 equations, one for each k = -M, …, 0, …, M.
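The 2M+1 zero-forcing conditions form a small linear system that can be solved directly. A minimal sketch for M = 1 with an assumed toy channel response (the tap values are chosen for illustration only; note that zero-forcing only nulls ISI at the 2M+1 solved instants, residual ISI remains outside that window):

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (small systems only)."""
    n = len(b)
    M = [row[:] + [bv] for row, bv in zip(A, b)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Assumed toy channel: h(-1) = 0.1 (precursor), h(0) = 1.0, h(1) = 0.3 (postcursor)
h = {-1: 0.1, 0: 1.0, 1: 0.3}
M_taps = 1                                # equalizer has 2*M_taps + 1 = 3 taps
ks = range(-M_taps, M_taps + 1)

# One equation per sampling instant k: sum_n c_n * h(k - n) = delta(k)
A = [[h.get(k - n, 0.0) for n in ks] for k in ks]
b = [1.0 if k == 0 else 0.0 for k in ks]
c = solve(A, b)                           # zero-forcing tap gains

# The overall response is now a unit impulse over k = -1, 0, 1
overall = [sum(cn * h.get(k - n, 0.0) for cn, n in zip(c, ks)) for k in ks]
```

The solved taps force the overall response to 1 at k = 0 and 0 at k = ±1, exactly the zero-forcing criterion stated above.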

32 Equalization: Removing Residual ISI
Consider a tapped delay line equalizer with taps spaced at intervals Δ. Search for the tap gains c_n such that the sampled output equals zero at every sampling instant except the decision instant, when it should be unity (consider for instance the paths through c_-N, c_N or c_0).

33 Example of Equalization
Read the distorted pulse values from fig. (a) into a matrix; solving the resulting set of linear equations gives the tap gains, and the equalized output exhibits the zero-forced values at the sampling instants.

34 Minimum Mean Square Error (MMSE)
The aim is to minimize the mean square error between the equalizer output (the input to the decision circuit) and the estimate of the k:th symbol; the error is formed by subtracting the two after the channel and equalizer.

35 MSE vs. equalizer coefficients
The MSE J is a quadratic multi-dimensional function of the equalizer coefficient values; the illustration is for the case of two real-valued equalizer coefficients (or one complex-valued coefficient). MMSE aim: find the minimum value directly (Wiener solution), or use an algorithm that recursively changes the equalizer coefficients in the correct direction (towards the minimum value of J)!

36 Wiener solution
We start with the Wiener-Hopf equations in matrix form, R c_opt = p, where R = correlation matrix (M x M) of received (sampled) signal values, p = vector (of length M) indicating the cross-correlation between the received signal values and the desired symbol estimate, and c_opt = vector (of length M) consisting of the optimal equalizer coefficient values. (We assume here that the equalizer contains M taps, not 2M+1 taps as in other parts of this presentation.)

37 Correlation matrix R & vector p
R and p are defined as stochastic expectations over a window of M received samples. Before we can perform the expectation operation, we must know the stochastic properties of the transmitted signal (and of the channel, if it is changing). Usually we do not have this information, so a non-stochastic algorithm such as least-mean-square (LMS) must be used.

38 Algorithms
If the stochastic information (R and p) is available: 1. direct solution of the Wiener-Hopf equations, c_opt = R^(-1) p (inverting a large matrix is difficult!); 2. Newton's algorithm (a fast iterative algorithm); 3. the method of steepest descent (an iterative algorithm that is slow but easier to implement). If R and p are not available: use an algorithm that is based on the received signal sequence directly. One such algorithm is least-mean-square (LMS).
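The contrast between the direct Wiener solution and the method of steepest descent can be sketched for a toy 2-tap case (the R and p values below are assumed purely for illustration; steepest descent reaches the same optimum without inverting R):

```python
# Assumed toy statistics for a 2-tap equalizer
R = [[2.0, 0.5],
     [0.5, 1.0]]
p = [1.0, 0.4]

# 1. Direct Wiener solution c_opt = R^(-1) p (closed form for a 2x2 matrix)
det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
c_opt = [(R[1][1] * p[0] - R[0][1] * p[1]) / det,
         (R[0][0] * p[1] - R[1][0] * p[0]) / det]

# 3. Method of steepest descent: step down the quadratic MSE surface,
#    c <- c + mu * (p - R c), avoiding any matrix inversion.
mu = 0.2   # step size (must be below 2 / largest eigenvalue of R)
c = [0.0, 0.0]
for _ in range(500):
    grad = [p[i] - sum(R[i][j] * c[j] for j in range(2)) for i in range(2)]
    c = [c[i] + mu * grad[i] for i in range(2)]
```

Both routes land on the same coefficient vector; the iterative route is what LMS approximates when R and p are unknown.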

39 Conventional linear equalizer of LMS type
[Figure: conventional linear equalizer of LMS type (after Widrow). Received complex signal samples enter a transversal FIR filter with 2M+1 complex-valued tap coefficients; the LMS algorithm adjusts the tap coefficients using the error between the filter output and the estimate of the k:th symbol after the symbol decision.]

40 Joint optimization of coefficients and phase
The equalizer filter coefficients and the carrier phase are optimized jointly: coefficient updating and phase synchronization both act to minimize the same mean square error.

41 Least-mean-square (LMS) algorithm
(Derived from the method of steepest descent.) The algorithm converges towards the minimum mean square error (MMSE). Separate iteration equations update the real part of the n:th coefficient, the imaginary part of the n:th coefficient, and the phase; each is governed by an iteration index and a step size.

42 LMS algorithm (cont.) After some calculation, the recursion equations are obtained in the form given on the slide.
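Although the recursion equations themselves are not reproduced in this transcript, the standard LMS update c_n <- c_n + mu * e_k * x_(k-n) can be sketched for a real-valued toy case (the channel taps, step size and training length below are illustrative assumptions, and known training symbols play the role of the decision output):

```python
import random

random.seed(1)

# Assumed toy setup: BPSK training symbols through a known 2-tap channel
h = [1.0, 0.4]                          # channel impulse response (assumed)
symbols = [random.choice([-1.0, 1.0]) for _ in range(2000)]
received = [h[0] * symbols[k] + (h[1] * symbols[k - 1] if k else 0.0)
            for k in range(len(symbols))]

mu = 0.01                               # LMS step size
c = [0.0, 0.0]                          # 2-tap equalizer, initialised to zero
mse = 0.0                               # smoothed squared error
for k in range(1, len(symbols)):
    x = [received[k], received[k - 1]]          # equalizer input samples
    y = sum(cn * xn for cn, xn in zip(c, x))    # equalizer output
    e = symbols[k] - y                          # error vs known training symbol
    c = [cn + mu * e * xn for cn, xn in zip(c, x)]   # LMS tap update
    mse = 0.99 * mse + 0.01 * e * e             # exponentially smoothed MSE
```

After convergence the taps approximate the MMSE inverse of the assumed channel (roughly [1, -0.34] here) and the smoothed error settles near the residual-ISI floor.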

43 Effect of iteration step size
A smaller step size gives slow acquisition and poor tracking performance; a larger step size gives poor stability and large variation around the optimum value.

44 Decision feedback equalizer
[Figure: decision feedback equalizer with a tapped-delay-line feed-forward filter (FFF) and feed-back filter (FBF); the LMS algorithm adjusts the tap coefficients of both filters.]

45 Decision feedback equalizer (cont.)
The purpose is again to minimize the mean square error. The feedforward filter (FFF) is similar to the filter in a linear equalizer; tap spacing smaller than the symbol interval is allowed (a fractionally spaced equalizer; oversampling by a factor of 2 or 4 is common). The feedback filter (FBF) is used for either reducing or cancelling samples of previous symbols at decision time instants; its tap spacing must be equal to the symbol interval.

46 Decision feedback equalizer (cont.)
The coefficients of the feedback filter (FBF) can be obtained in either of two ways: recursively (using the LMS algorithm), in a similar fashion to the FFF coefficients, or by calculation from the FFF coefficients and the channel coefficients (exact ISI cancellation is achieved in this way, but channel estimation is necessary).
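The exact-cancellation case can be sketched with a toy channel whose coefficients are assumed known perfectly (values chosen for illustration): the feedback path subtracts the ISI tail of each previous decision before the current decision is made.

```python
# Assumed toy channel with one postcursor ISI tap: r_k = s_k + 0.5 * s_(k-1)
h = [1.0, 0.5]
symbols = [1.0, -1.0, -1.0, 1.0, -1.0, 1.0, 1.0, -1.0]
received = [h[0] * symbols[k] + (h[1] * symbols[k - 1] if k else 0.0)
            for k in range(len(symbols))]

# DFE sketch: the feedback filter (here a single tap equal to the known
# channel tail h[1]) cancels the ISI of the PREVIOUS decision exactly,
# provided the channel estimate and past decisions are correct.
decisions = []
for r in received:
    fb = h[1] * decisions[-1] if decisions else 0.0   # feedback filter output
    z = r - fb                                        # ISI-cancelled sample
    decisions.append(1.0 if z >= 0 else -1.0)         # threshold decision
```

With a correct channel estimate every decision recovers the transmitted symbol; a wrong past decision would instead propagate through the feedback path (error propagation, the known weakness of the DFE).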

47 [Figure]

48 Channel estimation circuit
[Figure: channel estimation circuit. Estimated symbols feed a transversal filter whose length equals the CIR length; the LMS algorithm adjusts the estimated channel coefficients by comparing the filter output with the k:th sample of the received signal.]

49 Channel estimation circuit (cont.)
1. Acquisition phase: uses a “training sequence”; the symbols are known at the receiver. 2. Tracking phase: uses estimated symbols (decision directed mode); symbol estimates are obtained from the decision circuit (note the delay in the feedback loop!). Since the estimation circuit is adaptive, time-varying channel coefficients can be tracked to some extent. Alternatively: blind estimation (no training sequence).

50 Channel estimation circuit in receiver
Mandatory for MLSE-VA, optional for DFE. [Figure: received signal samples enter the equalizer and decision circuit; training symbols (no errors) or symbol estimates (with errors) feed the channel estimation circuit, which supplies estimated channel coefficients back to the equalizer; the output is “clean” symbols.]

51 MLSE-VA receiver structure
Matched filter, NW (noise whitening) filter, MLSE (VA), and channel estimation circuit. The MLSE-VA circuit causes a delay of the estimated symbol sequence before it is available for channel estimation, so the channel estimates may be out of date (in a fast time-varying channel).

52 MLSE-VA receiver structure (cont.)
Objective: find the symbol sequence estimate that maximizes the probability of receiving the sample sequence y (note: vector form) of length N, conditioned on that symbol sequence estimate and the overall channel estimate. Since we have AWGN, and the noise samples are uncorrelated thanks to the NW (noise whitening) filter, this conditional probability factors into a product over the N samples, giving a metric to be minimized (the best sequence is selected using the VA).

53 MLSE-VA receiver structure (cont.)
We want to choose the symbol sequence estimate and overall channel estimate which maximize the conditional probability. Since a product of exponentials corresponds to a sum of exponents, the metric to be minimized is a sum expression. If the length of the overall channel impulse response in samples (channel coefficients) is K, in other words the time span of the channel is (K-1)T, the next step is to construct a state trellis where a state is defined as a certain combination of the K-1 previous symbols causing ISI on the k:th symbol. Note: this is the overall CIR, including the responses of the matched filter and the NW filter.

54 MLSE-VA receiver structure (cont.)
At adjacent time instants, the symbol sequences causing ISI are correlated. As an example (m = 2, K = 5): the K-1 = 4 bits causing ISI on the bit detected at a given time instant define 16 states, and the bit windows at times k-3, k-2, k-1 and k overlap, each window sliding one bit forward.

55 MLSE-VA receiver structure (cont.)
State trellis diagram: the number of states is m^(K-1), where m is the alphabet size. The “best” state sequence is estimated by means of the Viterbi algorithm (VA): of the transitions terminating in a certain state at a certain time instant, the VA selects the transition associated with the highest accumulated probability (up to that time instant) for further processing.

56 Rake receiver structure and operation
The Rake receiver is a signal processing example that illustrates some important concepts. The Rake receiver is used in DS-CDMA (Direct Sequence Code Division Multiple Access) systems. Rake “fingers” synchronize to signal components that are received via a wideband multipath channel. An important task of the Rake receiver is channel estimation. The output signals from the Rake fingers are combined, for instance using Maximum Ratio Combining (MRC).

57 Principle of RAKE Receiver

58 To start with: multipath channel
Suppose a signal s(t) is transmitted. A multipath channel with M physical paths can be represented (in the equivalent low-pass signal domain) in the form of its Channel Impulse Response (CIR), in which case the received (equivalent low-pass) signal is a sum of delayed and weighted replicas of s(t).

59 Sampled channel impulse response
The CIR can also be represented in sampled form, using N complex-valued samples uniformly spaced at most 1/W apart, where W is the RF system bandwidth. The CIR sampling rate is, for instance, the sampling rate used in the receiver during A/D conversion. [Figure: uniformly spaced channel samples versus delay.]

60 Rake finger selection
The channel estimation circuit of the Rake receiver selects the L strongest samples (components) to be processed in L Rake fingers; only these samples are constructively utilized in the Rake fingers. Only one sample is chosen per component, since adjacent samples may be correlated. In the Rake receiver example to follow, we assume L = 3.
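Finger selection amounts to picking the L largest-magnitude samples of the estimated CIR. A minimal sketch with assumed tap magnitudes (values for illustration only):

```python
# Assumed sampled CIR magnitudes (complex taps reduced to |h_n| for brevity)
cir = [0.1, 0.9, 0.2, 0.6, 0.05, 0.4, 0.15]
L = 3  # number of Rake fingers (as in the example to follow)

# Channel estimation picks the delays of the L strongest samples; only these
# delay positions are constructively utilized in the Rake fingers.
finger_delays = sorted(range(len(cir)), key=lambda n: cir[n], reverse=True)[:L]
finger_delays.sort()
```

The remaining (weaker) samples are not despread in any finger and so only contribute interference, matching the blue/green distinction in the next slide.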

61 Received multipath signal
The received signal consists of a sum of delayed (and weighted) replicas of the transmitted signal; the summation in the channel produces a “smeared” end result. Signal replicas are the same signal at different delays, with different amplitudes and phases. All signal replicas are contained in the received signal, but only some (the blue samples in the figure) are processed in Rake fingers; the others (green samples) only cause interference.

62 Rake receiver (generic structure, assuming 3 fingers)
[Figure: the received baseband multipath signal enters Fingers 1, 2 and 3; channel estimation supplies the weighting used in path combining; the combined output signal goes to the decision circuit.]

63 Channel estimation
(A) The amplitude, phase and delay of the signal components detected in the Rake fingers must be estimated. (B) Each Rake finger requires the delay (and often also the phase) of the signal component it is processing. (C) Maximum Ratio Combining (MRC) requires the amplitude (and the phase, if this is utilized in the Rake fingers) of the components processed in the Rake fingers.

64 Rake finger processing
Case 1: same code in I and Q branches (for purposes of easy demonstration only; no phase synchronization in the Rake fingers). Case 2: different codes in I and Q branches (the real case, e.g. in IS-95 and WCDMA; phase synchronization in the Rake fingers).

65  Rake finger processing (Case 1: same code in I and Q branches)
[Figure: the received signal is delayed, split into I and Q branches, and correlated with the stored code sequence in each branch before being passed to MRC. The output of a finger is a complex signal value for each detected bit.]

66 Correlation vs. matched filtering
Basic idea of correlation: the received code sequence is multiplied by the stored code sequence and integrated. The same end result is obtained (in theory) through matched filtering of the received code sequence and sampling at t = T.
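The correlation operation can be sketched directly in discrete form: multiply chip by chip and sum over one bit period. The length-7 code below is an assumed m-sequence in ±1 form, chosen because its cyclic autocorrelation at any non-zero offset is -1:

```python
# Assumed 7-chip m-sequence (illustrative spreading code, +/-1 chips)
code = [1, 1, 1, -1, 1, -1, -1]

def correlate(received, stored):
    """Multiply chip-by-chip and integrate (sum) over one bit period."""
    return sum(r * s for r, s in zip(received, stored))

aligned = correlate(code, code)        # correctly aligned: strong result
shifted = code[1:] + code[:1]          # same code, offset by one chip
misaligned = correlate(shifted, code)  # non-aligned: weak result
```

The aligned correlation equals the code length (7), while the one-chip-offset correlation is only -1, illustrating why a Rake finger locked to the right delay extracts its component while rejecting the others.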

67 Rake finger processing
Correlation with the stored code sequence has a different impact on different parts of the received signal: the desired signal component detected in the i:th Rake finger, other components of the same signal causing interference, and other codes causing interference (plus noise ...).

68 Rake finger processing
Illustration of correlation (in one quadrature branch) with the desired signal component (i.e. a correctly aligned code sequence): after multiplication by the stored sequence, integration yields a strong positive or negative “correlation result” for each “1” or “0” bit.

69 Rake finger processing
Illustration of correlation (in one quadrature branch) with some other signal component (i.e. a non-aligned code sequence): after multiplication by the stored sequence, integration yields only a weak “correlation result”.

70 Rake finger processing
Mathematically, the correlation result for a bit consists of three terms: the desired signal, interference from the same signal (other delayed replicas), and interference from other signals.

71 Rake finger processing
The set of codes must have both good autocorrelation properties (a large correlation result for the same code sequence when aligned, small otherwise) and good cross-correlation properties (a small correlation result between different sequences).

72 Rake finger processing
(Case 2: different codes in I and Q branches.) [Figure: the received signal is delayed and split into I and Q branches, which are correlated with the stored I and Q code sequences respectively and passed to MRC for the I and Q signals. Required: phase synchronization.]

73 Phase synchronization
When different codes are used in the quadrature branches (as in practical systems such as IS-95 and WCDMA), phase synchronization is necessary. Phase synchronization is based on information within the received signal (a pilot signal or pilot channel). Note: phase synchronization must be done for each finger separately! [Figure: I/Q constellation showing the pilot signal and the signals in the I and Q branches.]

74 Weighting (Case 1: same code in I and Q branches)
Maximum Ratio Combining (MRC) means weighting each Rake finger output with a complex number, after which the weighted components are summed “on the real axis”: each component is weighted and its phase is aligned (the Rake finger output is complex-valued). Instead of phase alignment one may simply take the absolute value of the finger outputs, which is real-valued.

75 Phase alignment
The complex-valued Rake finger outputs are phase-aligned by rotating each output back by its estimated phase. [Figure: phasors representing the complex-valued Rake finger outputs before and after phase alignment.]

76 Maximum Ratio Combining
(Case 1: same code in I and Q branches.) The signal value after Maximum Ratio Combining is the sum of the weighted, phase-aligned finger outputs. The idea of MRC: strong signal components are given more weight than weak signal components.
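MRC weighting can be sketched for the three-finger example: each complex finger output is weighted by the conjugate of its channel estimate, which simultaneously aligns the phases and gives strong components more weight. The finger and channel values below are assumed for illustration (a transmitted “+1” bit with perfect channel estimates):

```python
# Assumed complex Rake finger outputs for one "+1" bit: amplitude * unit phasor
finger_outputs = [0.9 * complex(0.8, 0.6),
                  0.5 * complex(0.0, 1.0),
                  0.3 * complex(-0.6, 0.8)]
# Assumed channel estimates per finger (identical here: estimation is perfect)
channel_est = [0.9 * complex(0.8, 0.6),
               0.5 * complex(0.0, 1.0),
               0.3 * complex(-0.6, 0.8)]

# MRC: weight each finger output by the conjugate of its channel estimate,
# cancelling each phase and scaling each finger by its amplitude, then sum.
combined = sum(y * h.conjugate() for y, h in zip(finger_outputs, channel_est))
```

The result lands on the real axis with magnitude equal to the sum of the squared finger amplitudes (0.81 + 0.25 + 0.09 = 1.15), showing how the strongest finger dominates the decision variable.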

77 Maximum Ratio Combining
(Case 2: different codes in I and Q branches.) The output signals from the Rake fingers are already phase aligned (this is a benefit of finger-wise phase synchronization). Consequently, the I and Q outputs are fed via separate MRC circuits to the decision circuit (e.g. a QPSK demodulator).
