Slide 1: Equalization
Slide 2: Intersymbol Interference – With any practical channel, the inevitable filtering effect causes a spreading (or smearing out) of individual data symbols as they pass through the channel.
Slide 3: For consecutive symbols, this spreading causes part of the symbol energy to overlap with neighbouring symbols, causing intersymbol interference (ISI).
Slide 4: ISI can significantly degrade the ability of the data detector to distinguish the current symbol from the diffused energy of adjacent symbols. Even with no noise present in the channel, this leads to detection errors known as the irreducible error rate. ISI also degrades the bit and symbol error rate performance in the presence of noise.
Slide 5: Pulse Shape for Zero ISI – Careful choice of the overall channel characteristic makes it possible to control the intersymbol interference so that it does not degrade the bit error rate performance of the link. This is achieved by ensuring that the overall channel filter transfer function has what is termed a Nyquist frequency response.
Slide 6: Nyquist Channel Response – The transfer function has a transition band between passband and stopband that is symmetrical about the frequency 0.5 × 1/Ts (half the symbol rate).
Slide 7: For such a channel the data signals are still smeared, but the waveform passes through zero at multiples of the symbol period.
Slide 8: If we sample the symbol stream at the precise points where the ISI passes through zero, spread energy from adjacent symbols will not affect the value of the current symbol. This demands accurate sample-point timing – a major challenge in modem/data receiver design. Inaccuracy in symbol timing is referred to as timing jitter.
Slide 9: Achieving the Nyquist Channel – It is very unlikely that a communications channel will inherently exhibit a Nyquist transfer response. Modern systems use adaptive channel equalisers to flatten the channel transfer function; adaptive equalisers use a training sequence.
Slide 10: Eye Diagrams – A visual method of diagnosing problems with data systems. Generated using a conventional oscilloscope connected to the demodulated, filtered symbol stream; the oscilloscope is re-triggered at every symbol period (or a multiple of symbol periods) using a timing recovery signal.
Slide 12: Example eye diagrams for different distortions; each distortion has a distinctive effect on the appearance of the eye opening (figure).
Slide 13: Example of a complex eye diagram for M-ary signalling (figure).
Slide 14: Raised Cosine Filtering – A commonly used realisation of a Nyquist filter. The transition band (the zone between passband and stopband) is shaped like a cosine wave.
Slide 15: The sharpness of the filter is controlled by the parameter β, the filter roll-off factor. When β = 0 the filter conforms to the ideal brick-wall filter. The bandwidth B occupied by a raised cosine filtered data signal is thus increased from its minimum value, Bmin = 0.5 × 1/Ts, to the actual bandwidth B = Bmin(1 + β).
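The bandwidth relation on this slide can be expressed as a small helper (a sketch; the function and parameter names are illustrative):

```python
def rc_bandwidth(symbol_rate, beta):
    """Occupied bandwidth of a raised-cosine-filtered data signal.
    The minimum (Nyquist) bandwidth is half the symbol rate 1/Ts;
    the roll-off factor beta (0..1) widens it to Bmin * (1 + beta)."""
    b_min = 0.5 * symbol_rate
    return b_min * (1.0 + beta)
```

For example, a 2 kbaud signal with β = 0.5 occupies 1.5 kHz rather than the 1 kHz minimum.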
Slide 16: Impulse Response of Filter – The impulse response of the raised cosine filter is a measure of its spreading effect. The amount of ringing depends on the choice of β: the smaller the value of β (nearer to a brick-wall filter), the more pronounced the ringing.
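The impulse response described here can be sketched from the standard raised cosine formula (illustrative; the function name is an assumption):

```python
import math

def rc_impulse(t, T, beta):
    """Raised cosine impulse response h(t) for symbol period T and
    roll-off factor beta. The response rings either side of the main
    peak but passes through zero at nonzero integer multiples of T,
    which is what gives zero ISI at the sampling instants."""
    if t == 0.0:
        return 1.0
    x = t / T
    denom = 1.0 - (2.0 * beta * x) ** 2
    if abs(denom) < 1e-12:
        # removable singularity at t = +/- T / (2 * beta)
        return (beta / 2.0) * math.sin(math.pi / (2.0 * beta))
    sinc = math.sin(math.pi * x) / (math.pi * x)
    return sinc * math.cos(math.pi * beta * x) / denom
```

Evaluating at t = T, 2T, ... shows the zero crossings at the neighbouring decision instants, regardless of β.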
Slide 17: Choice of Filter Roll-off β – Benefit of small β: maximum bandwidth efficiency is achieved. Benefits of large β: a simpler filter with fewer stages, hence easier to implement; less signal overshoot; less sensitivity to symbol timing accuracy.
Slide 18: Symbol Timing Recovery – Most symbol timing recovery systems obtain their information from the incoming message data, using zero-crossing information in the baseband signal.
Slide 19: Three kinds of systems – (1) Narrowband system: flat fading channel, single-tap channel model. Tm = delay spread of the multipath channel, T = bit or symbol duration (figure relating Tm, T and the system bandwidth).
Slide 20: Narrowband system – No intersymbol interference: adjacent symbols (bits) do not affect the decision process, because the received replicas of the same symbol overlap in the multipath channel but cause no intersymbol interference at the decision time instant. However, fading (destructive interference) is still possible.
Slide 21: Decision circuit – In the binary case, the decision circuit compares the received signal with a threshold at specific time instants (usually somewhere in the middle of each bit period), turning noisy and distorted symbols into clean symbols (figure showing decision time instant and decision threshold).
Slide 22: Three kinds of systems, cont. – (2) Wideband system (TDM, TDMA): frequency-selective fading channel, transversal filter channel model; good performance is possible through adaptive equalization. Intersymbol interference causes signal distortion when the delay spread Tm of the multipath channel is no longer small compared with the bit or symbol duration T.
Slide 23: Receiver structure – The intersymbol interference of received symbols (bits) must be removed before decision making (illustrated for a binary signal, where symbol = bit): symbols with ISI pass through an adaptive equalizer, yielding symbols with ISI removed, and the decision circuit (threshold at the decision time instant) then produces clean symbols.
Slide 24: Three kinds of systems: BER performance – BER vs. S/N curves (figure): the AWGN channel (no fading) performs best, followed by the frequency-selective channel with equalization and the flat fading channel; the frequency-selective channel with no equalization exhibits a BER floor.
Slide 25: Three kinds of systems, cont. – (3) Wideband system (DS-CDMA): frequency-selective fading channel, transversal filter channel model; good performance is possible through use of a Rake receiver (this lecture). Tm = delay spread of the multipath channel, T = bit (or symbol) duration, Tc = chip duration.
Slide 26: Three basic equalization methods – (1) Linear equalization (LE): performance is not very good when the frequency response of the frequency-selective channel contains deep fades. The zero-forcing algorithm aims to eliminate the intersymbol interference (ISI) at decision time instants (i.e. at the centre of the bit/symbol interval). The least-mean-square (LMS) algorithm will be investigated in greater detail in this presentation. The recursive least-squares (RLS) algorithm offers faster convergence, but is computationally more complex than LMS (since matrix inversion is required).
Slide 27: (2) Decision feedback equalization (DFE): performance better than LE, due to cancellation of the ISI tails of previously received symbols. The decision feedback equalizer structure consists of a feed-forward filter (FFF) and a feed-back filter (FBF), with adjustment of the filter coefficients and a symbol decision between them (figure).
Slide 28: (3) Maximum Likelihood Sequence Estimation using the Viterbi Algorithm (MLSE-VA): best performance. The operation of the Viterbi algorithm can be visualized by means of a state trellis diagram with m^(K−1) states, where m is the symbol alphabet size and K is the length of the overall channel impulse response (in samples); the trellis shows the allowed transitions between states at successive sample time instants.
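The state count quoted above is simply m^(K−1); a one-line helper makes the growth explicit (the function name is illustrative):

```python
def num_trellis_states(m, K):
    """Number of MLSE trellis states: symbol alphabet size m raised
    to the power K-1, where K is the overall CIR length in samples."""
    return m ** (K - 1)
```

With m = 2 and K = 5 (the example used on a later slide) this gives 16 states; the exponential growth in K is why MLSE-VA is practical only for fairly short channel responses.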
Slide 29: Linear equalization, zero-forcing algorithm – Basic idea: the transmitted symbol spectrum, multiplied by the channel frequency response (including transmit and receive filters) and by the equalizer frequency response, should equal a raised cosine spectrum over the band 0 ≤ f ≤ fs = 1/T; the equalizer thus inverts the channel.
Slide 30: Zero-forcing equalizer – The communication channel is modelled as an FIR filter with 2N+1 coefficients, and the equalizer as an FIR filter with 2M+1 coefficients. The cascade of the channel impulse response and the equalizer impulse response gives the overall channel, an equivalent FIR filter. (In fact the equivalent FIR filter consists of 2M+1+2N coefficients, but the equalizer can only satisfy 2M+1 equations.)
Slide 31: Zero-forcing equalizer – We want the overall filter response to be non-zero at the decision time k = 0 and zero at all other sampling times k ≠ 0. This leads to a set of 2M+1 equations, one for each k from −M to M (equations omitted in transcript).
Slide 32 (Helsinki University of Technology, Communications Laboratory, Timo O. Korhonen): Equalization: Removing Residual ISI – Consider a tapped delay line equalizer. Search for the tap gains cN such that the output equals zero at the sample intervals D, except at the decision instant where it should be unity. The output is the convolution of the distorted pulse with the tap gains (consider for instance the paths through c−N, cN or c0), sampled at the decision instants (equations omitted in transcript).
Slide 33: Example of Equalization – Read the distorted pulse values into a matrix from fig. (a), solve for the tap gains, and the equalized output exhibits the zero-forced values (equations and figure omitted in transcript).
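The matrix step in this example can be sketched in code: build the convolution matrix from the distorted pulse samples and solve for the tap gains. The dictionary-based channel representation and the plain-Python Gaussian elimination below are illustrative choices, not the slide's notation:

```python
def zero_forcing_taps(channel, M):
    """Solve for the 2M+1 equalizer tap gains c_n so that the equalized
    pulse is 1 at the decision instant (k = 0) and 0 at the 2M other
    sampling times. `channel` maps sample index k to the distorted pulse
    value; indices outside the dict are taken as zero."""
    N = 2 * M + 1
    # A[row k][col n] = channel sample at k - n; right-hand side is a
    # unit impulse at the decision instant k = 0
    A = [[channel.get(k - n, 0.0) for n in range(-M, M + 1)]
         for k in range(-M, M + 1)]
    b = [1.0 if k == 0 else 0.0 for k in range(-M, M + 1)]
    # Gaussian elimination with partial pivoting
    for i in range(N):
        piv = max(range(i, N), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, N):
            f = A[r][i] / A[i][i]
            for c in range(i, N):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    taps = [0.0] * N
    for i in range(N - 1, -1, -1):
        s = b[i] - sum(A[i][j] * taps[j] for j in range(i + 1, N))
        taps[i] = s / A[i][i]
    return taps
```

With the illustrative distorted pulse {−1: 0.1, 0: 1.0, 1: −0.2} and M = 1, the resulting three taps force the equalized pulse to 1 at k = 0 and 0 at k = ±1, exactly as on the slide.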
Slide 34: Minimum Mean Square Error (MMSE) – The aim is to minimize the mean square error J = E[|e_k|²] (the notation varies depending on the source), where the error e_k is the difference between the estimate of the k:th symbol and the input to the decision circuit, i.e. the equalizer output after the channel.
Slide 35: MSE vs. equalizer coefficients – J is a quadratic multi-dimensional function of the equalizer coefficient values. The MMSE aim: find the minimum value directly (Wiener solution), or use an algorithm that recursively changes the equalizer coefficients in the correct direction, towards the minimum value of J. (Figure: illustration for two real-valued equalizer coefficients, or one complex-valued coefficient.)
Slide 36: Wiener solution – We start with the Wiener–Hopf equations in matrix form, R c_opt = p, where R is the M × M correlation matrix of the received (sampled) signal values, p is the length-M vector of cross-correlations between the received signal values and the estimate of the received symbol, and c_opt is the length-M vector of optimal equalizer coefficient values. (We assume here that the equalizer contains M taps, not 2M+1 taps as in other parts of this presentation.)
Slide 37: Correlation matrix R and vector p – Before we can perform the statistical expectation operation, we must know the statistical properties of the transmitted signal (and of the channel, if it is changing). Usually we do not have this information, so a non-statistical algorithm such as least-mean-square (LMS) must be used.
Slide 38: Algorithms – If the statistical information (R and p) is available: 1. direct solution of the Wiener–Hopf equations (but inverting a large matrix is difficult!); 2. Newton's algorithm (a fast iterative algorithm); 3. the method of steepest descent (a slow iterative algorithm, but easier to implement). If R and p are not available: use an algorithm based directly on the received signal sequence, such as least-mean-square (LMS).
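As a sketch of option 3, the method of steepest descent iterates towards the Wiener solution of R c = p without inverting R (the list-of-lists matrix representation and the step size value are illustrative assumptions):

```python
def steepest_descent(R, p, mu, iters):
    """Method of steepest descent for the Wiener-Hopf equations R c = p.
    R is an M x M list of lists, p a length-M list. Starting from the
    zero vector, each iteration moves the coefficients against the
    gradient of the quadratic cost; for a small enough step size mu the
    iteration converges to the Wiener solution."""
    M = len(p)
    c = [0.0] * M
    for _ in range(iters):
        # residual R c - p is proportional to the gradient of J
        g = [sum(R[i][j] * c[j] for j in range(M)) - p[i] for i in range(M)]
        c = [c[i] - mu * g[i] for i in range(M)]
    return c
```

For a diagonal R the convergence factor per iteration is (1 − μλ) for each eigenvalue λ, which is why the method is slow when R has a large eigenvalue spread.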
Slide 39: Conventional linear equalizer of LMS type – A transversal FIR filter with 2M+1 taps and complex-valued tap coefficients processes the received complex signal samples; the LMS algorithm (Widrow) adjusts the tap coefficients using the error between the equalizer output and the estimate of the k:th symbol after the symbol decision (figure).
Slide 40: Joint optimization of coefficients and phase – The equalizer filter, the coefficient updating and the phase synchronization are optimized jointly by minimizing the same cost function J (figure).
Slide 41: Least-mean-square (LMS) algorithm – Derived from the method of steepest descent, it converges towards the minimum mean square error (MMSE). There is one update equation for the real part of each n:th coefficient, one for the imaginary part, and one for the phase; each involves the iteration index and the step size of the iteration (equations omitted in transcript).
Slide 42: LMS algorithm, cont. – After some calculation, the recursion equations are obtained (omitted in transcript).
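The recursion can be sketched in complex arithmetic, which updates the real and imaginary parts of each tap at once. This is a sketch of the training (acquisition) case; the names and the simple sample indexing are assumptions, and the joint carrier-phase update is omitted:

```python
def lms_equalizer(received, desired, num_taps, step):
    """Least-mean-square adaptation of a linear transversal equalizer.
    received: complex input samples; desired: known training symbols
    aligned with them (acquisition phase). Each step computes the
    equalizer output y, the error e = desired - y, and updates the
    complex taps with c <- c + step * e * conj(x). Returns the taps."""
    c = [0j] * num_taps
    for k in range(num_taps - 1, len(received)):
        window = received[k - num_taps + 1:k + 1][::-1]  # newest sample first
        y = sum(ci * xi for ci, xi in zip(c, window))    # equalizer output
        e = desired[k] - y                               # error vs training symbol
        c = [ci + step * e * xi.conjugate() for ci, xi in zip(c, window)]
    return c
```

For an undistorted channel driven by independent ±1 training symbols, the taps converge towards [1, 0, ..., 0], i.e. the equalizer learns that no correction is needed.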
Slide 43: Effect of iteration step size – A smaller step size gives slow acquisition and poor tracking performance; a larger step size gives poor stability and large variation around the optimum value.
Slide 44: Decision feedback equalizer – Block diagram: a feed-forward filter (FFF) and a feed-back filter (FBF), both transversal filters, with the LMS algorithm used for tap coefficient adjustment (figure).
Slide 45: Decision feedback equalizer, cont. – The purpose is again to minimize J. The feed-forward filter (FFF) is similar to the filter in a linear equalizer; tap spacing smaller than the symbol interval is allowed, giving a fractionally spaced equalizer (oversampling by a factor of 2 or 4 is common). The feed-back filter (FBF) is used for either reducing or cancelling the samples of previous symbols at the decision time instants; its tap spacing must be equal to the symbol interval.
Slide 46: Decision feedback equalizer, cont. – The coefficients of the feed-back filter (FBF) can be obtained in either of two ways: recursively (using the LMS algorithm), in a similar fashion to the FFF coefficients; or by calculation from the FFF coefficients and the channel coefficients (exact ISI cancellation is achieved this way, but channel estimation is necessary).
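A minimal DFE decision loop for real ±1 symbols can be sketched as follows; the one-tap FFF and the explicit two-tap channel in the usage example are illustrative assumptions, not the slide's configuration:

```python
def dfe_detect(received, ff_taps, fb_taps):
    """Decision feedback equalizer for real BPSK symbols (+1/-1).
    The feed-forward filter works on received samples; the feedback
    filter subtracts the ISI tails of already-decided symbols before
    the threshold decision. Returns the list of decisions."""
    decisions = []
    Nf, Nb = len(ff_taps), len(fb_taps)
    for k in range(len(received)):
        ff = sum(ff_taps[i] * received[k - i] for i in range(Nf) if k - i >= 0)
        fb = sum(fb_taps[j] * decisions[k - 1 - j]
                 for j in range(Nb) if k - 1 - j >= 0)
        z = ff - fb                      # ISI of past decisions cancelled
        decisions.append(1.0 if z >= 0 else -1.0)
    return decisions
```

For a channel with impulse response [1, 0.5], setting the FBF tap equal to the channel tail (0.5) cancels the ISI exactly, as the slide describes for the calculated-coefficients option.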
Slide 48: Channel estimation circuit – A transversal filter driven by the estimated symbols, with the LMS algorithm adapting the estimated channel coefficients against the k:th sample of the received signal; the filter length equals the CIR length (figure).
Slide 49: Channel estimation circuit, cont. – 1. Acquisition phase: uses a training sequence; the symbols are known at the receiver. 2. Tracking phase: uses estimated symbols (decision-directed mode); symbol estimates are obtained from the decision circuit (note the delay in the feedback loop!). Since the estimation circuit is adaptive, time-varying channel coefficients can be tracked to some extent. Alternatively: blind estimation (no training sequence).
Slide 50: Channel estimation circuit in receiver – The received signal samples feed both the channel estimation circuit and the equalizer & decision circuit. The estimation circuit is driven by training symbols (no errors) or symbol estimates (with errors) and delivers the estimated channel coefficients to the equalizer, which produces clean output symbols. Channel estimation is mandatory for MLSE-VA, optional for DFE.
Slide 51: MLSE-VA receiver structure – Matched filter, noise-whitening (NW) filter, MLSE (VA) and channel estimation circuit (figure). The MLSE-VA circuit causes a delay before the estimated symbol sequence is available for channel estimation, so the channel estimates may be out of date in a fast time-varying channel.
Slide 52: MLSE-VA receiver structure, cont. – Objective: find the symbol sequence that maximizes the probability of receiving the sample sequence y (in vector form, of length N), conditioned on a certain symbol sequence estimate and overall channel estimate. Since we have AWGN, and the noise samples are uncorrelated thanks to the NW (noise-whitening) filter, this probability factors into a product over the samples, and the metric to be minimized (selecting the best sequence using the VA) is a sum expression (equations omitted in transcript).
Slide 53: MLSE-VA receiver structure, cont. – We want to choose the symbol sequence estimate and overall channel estimate that maximize the conditional probability. Since a product of exponentials corresponds to a sum of exponents, the metric to be minimized is a sum expression. If the length of the overall channel impulse response in samples (channel coefficients) is K – in other words, the time span of the channel is (K−1)T – the next step is to construct a state trellis, where a state is defined as a certain combination of the K−1 previous symbols causing ISI on the k:th symbol. (Note: this is the overall CIR, including the responses of the matched filter and the NW filter.)
Slide 54: MLSE-VA receiver structure, cont. – At adjacent time instants, the symbol sequences causing ISI are correlated. As an example (m = 2, K = 5): the four previous bits causing ISI on the bit detected at time instant k give 16 states, and the corresponding bit windows at times k−1, k−2, k−3 overlap (figure).
Slide 55: MLSE-VA receiver structure, cont. – State trellis diagram: the number of states is the alphabet size raised to the power K−1. The best state sequence is estimated by means of the Viterbi algorithm (VA): of the transitions terminating in a certain state at a certain time instant, the VA selects the transition associated with the highest accumulated probability (up to that time instant) for further processing.
Slide 56: Rake receiver structure and operation – The Rake receiver is a signal processing example that illustrates some important concepts. It is used in DS-CDMA (Direct Sequence Code Division Multiple Access) systems. Rake fingers synchronize to signal components that are received via a wideband multipath channel. An important task of the Rake receiver is channel estimation. The output signals from the Rake fingers are combined, for instance by Maximum Ratio Combining (MRC).
Slide 57: Principle of RAKE Receiver (figure).
Slide 58: To start with: the multipath channel – Suppose a signal s(t) is transmitted. A multipath channel with M physical paths can be represented (in the equivalent low-pass signal domain) by its channel impulse response (CIR), in which case the received (equivalent low-pass) signal is a sum of delayed and weighted replicas of s(t) (equation omitted in transcript).
Slide 59: Sampled channel impulse response – The CIR can also be represented in sampled form, using N complex-valued samples uniformly spaced at most 1/W apart, where W is the RF system bandwidth. The CIR sampling rate is, for instance, the sampling rate used in the receiver during A/D conversion (figure: uniformly spaced channel samples vs. delay τ).
Slide 60: Rake finger selection – The channel estimation circuit of the Rake receiver selects the L strongest samples (components) to be processed in L Rake fingers; only these samples are constructively utilized. Only one sample is chosen per component, since adjacent samples may be correlated. In the Rake receiver example to follow, we assume L = 3.
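Finger selection as described here reduces to picking the L largest-magnitude taps of the sampled CIR (a sketch; a real receiver would also enforce minimum spacing between the chosen delays, which is only noted in a comment below):

```python
def select_rake_fingers(cir, L):
    """Pick the delay indices of the L strongest samples of a sampled
    complex-valued CIR, for processing in L Rake fingers. Note: this
    sketch ignores the adjacent-sample correlation issue mentioned on
    the slide. Returns the chosen indices in delay order."""
    ranked = sorted(range(len(cir)), key=lambda n: abs(cir[n]), reverse=True)
    return sorted(ranked[:L])
```

The magnitude |h(n)| of each complex sample determines the ranking, matching the "L strongest components" rule on the slide.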
Slide 61: Received multipath signal – The received signal consists of a sum of delayed (and weighted) replicas of the transmitted signal; all signal replicas are contained in the received signal. The replicas are the same signal at different delays, with different amplitudes and phases, and their summation in the channel smears the end result. (Figure: blue samples indicate signal replicas processed in Rake fingers; green samples only cause interference.)
Slide 63: Channel estimation – The amplitude, phase and delay of the signal components detected in the Rake fingers must be estimated. Each Rake finger requires the delay (and often also the phase) of the signal component it is processing. Maximum Ratio Combining (MRC) requires the amplitude (and the phase, if this is utilized in the Rake fingers) of the components processed in the Rake fingers.
Slide 64: Rake finger processing – Case 1: the same code in the I and Q branches (for purposes of easy demonstration only; no phase synchronization in the Rake fingers). Case 2: different codes in the I and Q branches (the real case, e.g. in IS-95 and WCDMA; phase synchronization in the Rake fingers).
Slide 65: Rake finger processing (Case 1: same code in I and Q branches) – The received signal is delayed, correlated with the stored code sequence in the I and Q branches, and passed to MRC. The output of a finger is a complex signal value for each detected bit (figure).
Slide 66: Correlation vs. matched filtering – Basic idea of correlation: multiply the received code sequence by the stored code sequence and integrate. The same result is obtained through matched filtering followed by sampling at t = T; the end result is the same (in theory).
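The equivalence stated on this slide, direct correlation versus matched filtering followed by sampling at t = T, can be checked numerically (illustrative ±1 chip sequences; the function names are assumptions):

```python
def correlate(received, code):
    """Despread by correlating the received chips with the stored code."""
    return sum(r * c for r, c in zip(received, code))

def matched_filter_output(received, code):
    """Filter with the time-reversed code (the matched filter impulse
    response) and take the convolution output at the final chip, i.e.
    sample at t = T. In theory this equals the direct correlation."""
    h = code[::-1]
    N = len(code)
    return sum(received[i] * h[N - 1 - i] for i in range(N))
```

Because h is the time-reversed code, the convolution sample at t = T reduces term by term to the correlation sum, which is why the two structures are interchangeable.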
Slide 67: Rake finger processing – Correlation with the stored code sequence has a different impact on different parts of the received signal: the desired signal component detected in the i:th Rake finger, other components of the same signal causing interference, and other codes causing interference (plus noise...).
Slide 68: Rake finger processing – Illustration of correlation (in one quadrature branch) with the desired signal component, i.e. a correctly aligned code sequence: after multiplication with the stored sequence, integration gives a strong positive (1 bit) or negative (0 bit) correlation result.
Slide 69: Rake finger processing – Illustration of correlation (in one quadrature branch) with some other signal component, i.e. a non-aligned code sequence: after multiplication with the stored sequence, integration gives only a weak correlation result.
Slide 70: Rake finger processing – Mathematically, the correlation result for a bit consists of the desired signal term, interference from the same signal, and interference from other signals (equation omitted in transcript).
Slide 71: Rake finger processing – The set of codes must have both good autocorrelation properties (large correlation of a code sequence with itself) and good cross-correlation properties (small correlation between different sequences).
Slide 72: Rake finger processing (Case 2: different codes in I and Q branches) – The received signal is delayed and correlated separately with the stored I code sequence and the stored Q code sequence; the results go to MRC for the I signal and for the Q signal. Required: phase synchronization (figure).
Slide 73: Phase synchronization – When different codes are used in the quadrature branches (as in practical systems such as IS-95 or WCDMA), phase synchronization is necessary. Phase synchronization is based on information within the received signal (a pilot signal or pilot channel) in the I and Q branches. Note: phase synchronization must be done for each finger separately!
Slide 74: Weighting (Case 1: same code in I and Q branches) – Maximum Ratio Combining (MRC) means weighting each complex-valued Rake finger output with a complex number, after which the weighted components are summed on the real axis: each component is weighted and its phase is aligned, yielding a real-valued result. Instead of phase alignment, one could take the absolute value of the finger outputs...
Slide 75: Phase alignment – The complex-valued Rake finger outputs are phase-aligned using a simple operation: each output phasor is rotated onto a common reference (figure: phasors before and after phase alignment; equation omitted in transcript).
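The phase alignment operation can be sketched as a rotation of each finger output by the conjugate phase of its channel estimate (a sketch; representing the per-finger channel as an explicit complex gain is an assumption):

```python
def phase_align(finger_outputs, channel_estimates):
    """Rotate each complex Rake finger output by the negative phase of
    its channel estimate (multiply by conj(a)/|a|), so that all finger
    outputs point along the real axis and add constructively."""
    return [z * (a.conjugate() / abs(a))
            for z, a in zip(finger_outputs, channel_estimates)]
```

After this rotation, a finger that received z = a·s for a real symbol s produces the purely real value |a|·s.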
Slide 76: Maximum Ratio Combining (Case 1: same code in I and Q branches) – The idea of MRC: strong signal components are given more weight than weak signal components. The signal value after Maximum Ratio Combining is the weighted sum of the phase-aligned finger outputs (equation omitted in transcript).
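MRC can then be sketched as a conjugate-weighted sum taken on the real axis (illustrative; this combines the weighting and the phase alignment of the previous slides in one step):

```python
def mrc_combine(finger_outputs, channel_estimates):
    """Maximum Ratio Combining: weight each complex finger output by the
    conjugate of its channel estimate, so that strong fingers contribute
    more and all phases align, then sum on the real axis."""
    return sum((z * a.conjugate()).real
               for z, a in zip(finger_outputs, channel_estimates))
```

For a transmitted symbol s = 1 seen through gains a_i, each term contributes |a_i|², so the strongest path dominates the decision variable, which is exactly the "more weight to strong components" idea of MRC.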
Slide 77: Maximum Ratio Combining (Case 2: different codes in I and Q branches) – The output signals from the Rake fingers are already phase-aligned (a benefit of finger-wise phase synchronization). Consequently, the I and Q outputs of each finger are fed via separate MRC circuits to the quaternary decision circuit (e.g. a QPSK demodulator) (figure).