Techniques to Mitigate Fading Effects


Techniques to Mitigate Fading Effects (Lecture 8) 5/21/2013 Omar Abu-Ella

Introduction Wireless communications require signal processing techniques that improve link performance. Equalization, diversity, and channel coding are three channel-impairment mitigation techniques, used independently or in tandem.

Equalization Equalization compensates for the intersymbol interference (ISI) created by multipath. An equalizer is a filter at the receiver whose impulse response is the inverse of the channel impulse response. Equalizers find their use in frequency-selective fading channels.

Diversity Diversity is another technique used to compensate for fast/slow fading, and is usually implemented using two or more receiving dimensions. Macro-diversity mitigates large-scale fading; micro-diversity mitigates small-scale fading. Common forms are space diversity, time diversity, frequency diversity, angular diversity, and polarization diversity.
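As a toy illustration of why extra receiving dimensions help, the sketch below (assumed parameters: two independent Rayleigh-fading branches) simulates selection diversity, where the receiver simply picks the branch with the highest instantaneous SNR:

```python
# Selection diversity sketch (hypothetical parameters): with M independent
# Rayleigh branches, the receiver keeps the branch with the highest
# instantaneous SNR, raising the average post-selection SNR.
import numpy as np

rng = np.random.default_rng(0)
M = 2                      # number of diversity branches (assumed)
n = 100_000                # number of fading realizations
# Rayleigh fading => instantaneous branch SNR is exponential, unit mean.
branch_snr = rng.exponential(scale=1.0, size=(n, M))
selected = branch_snr.max(axis=1)       # selection combining
print(branch_snr[:, 0].mean())          # single branch: ~1.0
print(selected.mean())                  # 2-branch selection: ~1.5
```

With two branches the mean selected SNR rises from 1 to 1.5, the expectation of the maximum of two unit-mean exponentials; the deep-fade probability drops correspondingly.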

Channel Coding Channel coding improves wireless link performance by adding redundant data bits to the transmitted message. At the baseband portion of the transmitter, a channel coder maps a digital message sequence into a code sequence containing a greater number of bits than the original message. Channel coding is used to combat deep fades and spectral nulls.
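A minimal, hypothetical illustration of the redundancy idea is a rate-1/3 repetition code: each message bit is sent three times and decoded by majority vote, so any single bit error per block is corrected (real systems use much stronger convolutional or block codes):

```python
# Toy channel code: rate-1/3 repetition with majority-vote decoding.
import numpy as np

def encode(bits):
    return np.repeat(bits, 3)                    # 3 coded bits per message bit

def decode(coded):
    blocks = coded.reshape(-1, 3)
    return (blocks.sum(axis=1) >= 2).astype(int)  # majority vote per block

msg = np.array([1, 0, 1, 1])
tx = encode(msg)
tx[4] ^= 1                                       # channel flips one coded bit
print(decode(tx))                                # -> [1 0 1 1], error corrected
```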

General Framework (diagram on slide): equalization targets frequency-selective fading, diversity targets fast/slow fading, and channel coding targets deep fades.

Equalization ISI has been identified as one of the major obstacles to high-speed data transmission over mobile radio channels. If the modulation bandwidth exceeds the coherence bandwidth of the radio channel (i.e., frequency-selective fading), the modulation pulses are spread in time, causing ISI. Classification: a time-varying wireless channel requires adaptive equalization. Adaptive equalizers fall into two major categories: non-blind and blind. A non-blind adaptive equalizer has two phases of operation: training and tracking.

Linear vs. Nonlinear Equalizers (diagram on slide)

Adaptive Equalizers

Classification of Equalizers: Non-blind vs. Blind Non-blind adaptive equalization algorithms rely on statistical knowledge about the transmitted signal in order to converge to a solution, i.e., the optimum filter coefficients ("weights"). This is typically accomplished by sending a pilot training sequence over the channel to help the receiver identify the desired signal.

Blind adaptive equalization algorithms do not require prior training, and hence are referred to as "blind" algorithms. These algorithms attempt to extract significant characteristics of the transmitted signal in order to separate it from other signals in the surrounding environment.

Training Sequence: Initially, a known, fixed-length training sequence is sent by the transmitter so that the receiver's equalizer can adapt to a proper setting. The training sequence is typically a pseudo-random binary signal or a fixed, prescribed bit pattern. It is designed to permit the equalizer at the receiver to acquire the proper filter coefficients under the worst possible channel conditions. An adaptive filter at the receiver thus uses a recursive algorithm to evaluate the channel and estimate the filter coefficients that compensate for it.

A Mathematical Framework The signal received by the equalizer is given by y(t) = d(t) * h(t) + n_b(t), where d(t) is the transmitted signal, h(t) is the combined impulse response of the transmitter, channel, and RF/IF section of the receiver, and n_b(t) denotes the baseband noise. The main goal of any equalization process is to recover d(t) optimally, i.e., to find an equalizer response h_eq(t) with h(t) * h_eq(t) = delta(t). In the frequency domain this can be written as H_eq(f) H_ch(f) = 1, which indicates that an equalizer is actually an inverse filter of the channel.

Zero-Forcing Equalization The zero-forcing equalizer applies the inverse of the channel, H_eq(f) = 1/H_ch(f). Disadvantage: since H_eq(f) is the inverse of H_ch(f), the inverse filter may excessively amplify the noise at frequencies where the channel spectrum has high attenuation, so it is rarely used for wireless links, except for static channels with high SNR.
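The noise-amplification problem can be sketched numerically. Assuming a toy two-path channel h[n] = delta[n] + 0.95 delta[n-1], which has a near-null around f = 1/2, the zero-forcing filter 1/H_ch(f) strongly boosts any noise near that frequency:

```python
# Zero-forcing noise amplification sketch (assumed toy channel).
import numpy as np

N = 256
f = np.fft.fftfreq(N)
# Two-path channel h[n] = d[n] + 0.95 d[n-1]: near-null at f = +/-0.5
Hch = 1 + 0.95 * np.exp(-2j * np.pi * f)
Heq = 1 / Hch                            # zero-forcing inverse filter

noise = np.random.default_rng(1).normal(size=N)
noise_out = np.fft.ifft(np.fft.fft(noise) * Heq).real
gain = noise_out.var() / noise.var()     # noise power amplification
print(abs(Heq).max())                    # large gain at the spectral null
print(gain)                              # > 1: ZF enhances the noise
```

The peak equalizer gain here is 1/(1 - 0.95) = 20, so noise components near the null are boosted twentyfold in amplitude, which is exactly why ZF is avoided on noisy wireless channels.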

A Generic Adaptive Equalizer

Adaptive Equalizer Denote the input to the equalizer at time k as the vector x_k, the tap-coefficient vector as w_k, and the output of the equalizer as y_k. The output is the inner product of x_k and w_k, i.e., y_k = x_k^T w_k. The error signal is defined as e_k = d_k - y_k, where d_k is the desired response.

Assuming d_k and x_k to be jointly stationary, the mean square error (MSE) is given as xi = E[e_k^2]. The MSE can be expanded as xi = sigma_d^2 + w_k^T R w_k - 2 p^T w_k, where the signal variance is sigma_d^2 = E[d_k^2], the cross-correlation vector between the desired response and the input signal is p = E[d_k x_k], and the input correlation matrix R = E[x_k x_k^T] is an (N + 1) x (N + 1) square matrix.

Clearly, the MSE is a quadratic function of w_k. Equating the gradient of the MSE with respect to w_k to 0 gives the condition for the minimum MSE (MMSE), known as the Wiener solution: w_opt = R^{-1} p. Hence, the MMSE is given by xi_min = sigma_d^2 - p^T R^{-1} p.
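A small numerical sketch of the Wiener solution (assumed BPSK data, a toy ISI channel, and a 3-tap equalizer, all hypothetical): estimate R and p from data, solve w_opt = R^{-1} p, and evaluate the MMSE sigma_d^2 - p^T w_opt:

```python
# Wiener solution sketch for a 3-tap equalizer (assumed toy scenario).
import numpy as np

rng = np.random.default_rng(2)
n, taps, delay = 50_000, 3, 1
d = rng.choice([-1.0, 1.0], size=n)               # BPSK symbols (desired)
x = np.convolve(d, [1.0, 0.4], mode="full")[:n]   # simple ISI channel
x += 0.1 * rng.normal(size=n)                     # additive noise

# Input vectors x_k = [x[k], x[k-1], x[k-2]] and delayed desired response
X = np.array([x[k - np.arange(taps)] for k in range(taps, n)])
dk = d[taps - delay : n - delay]

R = X.T @ X / len(X)                  # input correlation matrix
p = X.T @ dk / len(X)                 # cross-correlation vector
w_opt = np.linalg.solve(R, p)         # Wiener solution w = R^{-1} p
mmse = dk.var() - p @ w_opt           # sigma_d^2 - p^T R^{-1} p
print(w_opt, mmse)
```

With these assumed parameters the residual MMSE is small, and hard decisions sign(x_k^T w_opt) recover almost all symbols despite the ISI.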

Choice of Algorithms for Adaptive Equalization Factors that determine an algorithm's performance: Rate of convergence: the number of iterations required for the algorithm to converge close enough to the optimal solution. Computational complexity: the number of operations required for one complete iteration of the algorithm. Numerical properties: robustness against computation errors, which influence the stability of the algorithm.

Classic equalizer algorithms These classic algorithms are primitive for most of today's wireless standards: Zero-Forcing Algorithm (ZF), Least Mean Square Algorithm (LMS), Recursive Least Squares Algorithm (RLS), and the Constant Modulus Algorithm (CMA).

MSE Criterion (block diagram on slide) The unknown parameter is the equalizer filter response; the criterion is the mean square error between the desired signal and the received signal filtered by the equalizer, minimized either in batch form (LS algorithm) or recursively (LMS algorithm).

Least Mean Square (LMS) Algorithm Introduced by Widrow and Hoff in 1959. Simple: no matrix calculations are involved in the adaptation. It belongs to the family of stochastic gradient algorithms and is an approximation of the steepest-descent method, based on the minimum mean square error (MMSE) criterion. Adaptive process: recursive adjustment of the filter tap weights.

Least Mean Square (LMS) Algorithm In practice, the minimization of the MSE is carried out recursively, using the stochastic gradient algorithm. It is the simplest equalization algorithm and requires only 2N + 1 operations per iteration. The LMS weights are computed iteratively by w_k(n+1) = w_k(n) + mu e(n) x(n - k), where the subscript k denotes the kth delay stage in the equalizer and mu is the step size, which controls the convergence rate and stability of the algorithm.

Notations
Input signal (vector): u(n)
Autocorrelation matrix of the input signal: R_uu = E[u(n) u^H(n)]
Desired response: d(n)
Cross-correlation vector between u(n) and d(n): P_ud = E[u(n) d*(n)]
Filter tap weights: w(n)
Filter output: y(n) = w^H(n) u(n)
Estimation error: e(n) = d(n) - y(n)
Mean square error: J = E[|e(n)|^2] = E[e(n) e*(n)]

System Block Diagram Using LMS u(n): input signal from the channel; d(n): desired response, produced by a training sequence generator; e(n): error fed back between the desired response and the output of the equalizer FIR filter; w: FIR filter defined by the tap weight vector.

Steepest Descent Method The steepest descent algorithm is a gradient-based method that applies a recursive solution to a problem (cost function). If the current equalizer tap vector is w(n), the next tap vector w(n+1) can be estimated by the approximation w(n+1) = w(n) - mu grad J(n). The gradient is a vector pointing in the direction of the change in filter coefficients that causes the greatest increase in the error signal. Because the goal is to minimize the error, the filter coefficients are updated in the direction opposite to the gradient; that is why the gradient term is negated. The constant mu is the step size. After repeatedly adjusting each coefficient in the direction opposite to the gradient of the error, the adaptive filter should converge.

Steepest Descent Example Given the following function, we need to obtain the vector that gives the absolute minimum. It is obvious by inspection which point gives the minimum; now let us find the solution by the steepest descent method.

Steepest Descent Example We start by assuming (C1 = 5, C2 = 7) and select the constant mu. If it is too big, we miss the minimum; if it is too small, it takes a long time to reach the minimum. We select mu = 0.1. From the gradient vector we obtain the iterative update equation.

Steepest Descent Example The plot on the slide traces the trajectory from the initial guess to the minimum. As we can see, the vector [C1, C2] converges to the value that yields the function minimum, and the speed of this convergence depends on mu.
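The slide's function is not reproduced in this transcript; assuming the stand-in f(C1, C2) = C1^2 + C2^2 (minimum at the origin), the iteration from (5, 7) with mu = 0.1 looks like:

```python
# Steepest descent on an assumed quadratic f(c1, c2) = c1^2 + c2^2.
import numpy as np

mu = 0.1
c = np.array([5.0, 7.0])                     # initial guess (C1, C2)
for _ in range(100):
    grad = np.array([2 * c[0], 2 * c[1]])    # gradient of f at c
    c = c - mu * grad                        # step opposite the gradient
print(c)                                     # converges toward [0, 0]
```

Each step scales the vector by (1 - 2 mu) = 0.8, so the iterate shrinks geometrically toward the minimum; a larger mu converges faster up to the stability limit, beyond which it diverges.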

MMSE criterion for LMS MMSE: minimum mean square error. MSE = J = E[|e(n)|^2] = sigma_d^2 - w^H(n) P_ud - P_ud^H w(n) + w^H(n) R_uu w(n). To obtain the MMSE we differentiate the MSE with respect to the weights and equate the derivative to 0.

MMSE criterion for LMS Finally we get the gradient grad J = 2 R_uu w(n) - 2 P_ud. Equating the derivative to zero gives the MMSE (Wiener) solution: w_opt = R_uu^{-1} P_ud. This calculation is complicated for a DSP (it requires computing an inverse matrix) and can cause the system to become unstable: if there are nulls in the spectrum, we can get very large values in the inverse matrix. Also, we do not always know the autocorrelation matrix of the input and the cross-correlation vector, so we would like to approximate them.

LMS: Approximation of the Steepest Descent Method According to the MMSE criterion, w(n+1) = w(n) + 2 mu [P_ud - R_uu w(n)]. We make the following assumptions: the input vectors u(n), u(n-1), ..., u(1) are statistically independent; the input vector u(n) and desired response d(n) are statistically independent of d(n-1), ..., d(1); u(n) and d(n) are Gaussian-distributed random variables; the environment is wide-sense stationary. In LMS, the following instantaneous estimates are used: R^_uu = u(n) u^H(n) (autocorrelation matrix of the input signal) and P^_ud = u(n) d*(n) (cross-correlation vector between u(n) and d(n)). Equivalently, we calculate the gradient of |e(n)|^2 instead of E[|e(n)|^2].

LMS Algorithm Substituting these estimates, we get the final result: w(n+1) = w(n) + 2 mu u(n) e*(n).
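Putting the update to work, here is a sketch of an LMS-trained 3-tap equalizer over an assumed toy ISI channel (the factor 2 is absorbed into mu, and the signals are real, so e*(n) = e(n)):

```python
# LMS adaptive equalizer sketch: w(n+1) = w(n) + mu * u(n) * e(n).
import numpy as np

rng = np.random.default_rng(3)
n, taps, delay, mu = 20_000, 3, 1, 0.01
d = rng.choice([-1.0, 1.0], size=n)                # training symbols
u = np.convolve(d, [1.0, 0.4], mode="full")[:n]    # channel with ISI
u += 0.05 * rng.normal(size=n)                     # additive noise

w = np.zeros(taps)
for k in range(taps, n):
    uk = u[k - np.arange(taps)]       # equalizer input vector
    e = d[k - delay] - w @ uk         # error vs. delayed training symbol
    w = w + mu * uk * e               # LMS tap update, ~2N+1 multiplies

# After adaptation, decisions on the last samples are mostly correct.
errs = sum(np.sign(w @ u[k - np.arange(taps)]) != d[k - delay]
           for k in range(n - 1000, n))
print(w, errs)
```

Note that no matrix inversion appears anywhere: each iteration uses only the current input vector and scalar error, which is the whole appeal of LMS over the direct Wiener solution.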

LMS Step-size The convergence of the LMS algorithm is slow because only one parameter, the step size mu, controls the adaptation rate. To prevent the adaptation from becoming unstable, mu is chosen to satisfy 0 < mu < 2 / lambda_max, where lambda_max is the largest eigenvalue of the autocorrelation (covariance) matrix R.
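A quick numerical check of the bound, for an assumed example autocorrelation matrix:

```python
# Step-size bound 0 < mu < 2/lambda_max for an assumed 3x3 matrix R.
import numpy as np

R = np.array([[1.16, 0.40, 0.00],
              [0.40, 1.16, 0.40],
              [0.00, 0.40, 1.16]])       # example autocorrelation matrix
lam_max = np.linalg.eigvalsh(R).max()    # largest eigenvalue
mu_max = 2 / lam_max                     # upper bound for stable adaptation
print(lam_max, mu_max)
```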

LMS Stability The size of the step size determines the algorithm's convergence rate: too small a step size makes the algorithm take many iterations; too big a step size prevents the tap weights from converging. Rule of thumb: since the sum of the eigenvalues of R equals N * Pr, a conservative choice is mu < 2 / (N * Pr), where N is the equalizer length and Pr is the received power (signal + noise), which can be estimated at the receiver.

LMS Convergence using different μ

LMS: Pros and Cons Advantages: simplicity of implementation; does not neglect the noise, unlike the zero-forcing equalizer; avoids the need to calculate an inverse matrix. Disadvantages: slow convergence; requires a training sequence as a reference, thus decreasing the communication bandwidth.

Recursive Least Squares (RLS)
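The RLS slide content is not reproduced in this transcript; for comparison with LMS, below is a minimal sketch of the standard exponentially-weighted RLS recursion (the forgetting factor lam and regularization delta are assumed values), applied to the same kind of 3-tap equalization problem:

```python
# RLS adaptive equalizer sketch (standard exponentially-weighted form).
import numpy as np

rng = np.random.default_rng(4)
n, taps, delay = 5_000, 3, 1
d = rng.choice([-1.0, 1.0], size=n)
u = np.convolve(d, [1.0, 0.4], mode="full")[:n] + 0.05 * rng.normal(size=n)

lam, delta = 0.99, 100.0
w = np.zeros(taps)
P = delta * np.eye(taps)                  # inverse-correlation estimate
for k in range(taps, n):
    uk = u[k - np.arange(taps)]
    g = P @ uk / (lam + uk @ P @ uk)      # gain vector
    e = d[k - delay] - w @ uk             # a priori error
    w = w + g * e                         # tap update
    P = (P - np.outer(g, uk @ P)) / lam   # inverse-correlation update

errs = sum(np.sign(w @ u[k - np.arange(taps)]) != d[k - delay]
           for k in range(n - 500, n))
print(w, errs)
```

RLS converges in far fewer iterations than LMS because it tracks the full inverse correlation matrix, at the cost of O(N^2) operations per iteration instead of O(N).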

Blind Algorithms "Blind" adaptive algorithms are defined as those that do not need a reference or training sequence to determine the required complex weight vector. Instead, they try to restore some known property of the received input data vector. A general property of the complex envelopes of many digital signals is their constant modulus.

Constant Modulus Algorithm (CMA) Used for constant-envelope modulations.
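A sketch of the Godard/CMA update for an assumed QPSK signal and toy channel: the error term (R2 - |y|^2) penalizes deviation of the output modulus from a constant, so no training sequence is needed (step size, channel, and initialization are all assumed values):

```python
# Blind CMA equalizer sketch (Godard p=2) for a constant-modulus signal.
import numpy as np

rng = np.random.default_rng(5)
n, taps, mu = 30_000, 5, 1e-3
s = np.exp(2j * np.pi * rng.integers(0, 4, size=n) / 4)   # QPSK: |s| = 1
u = np.convolve(s, [1.0, 0.3 + 0.2j], mode="full")[:n]    # complex channel
u += 0.02 * (rng.normal(size=n) + 1j * rng.normal(size=n))

R2 = 1.0                                  # constant-modulus target E|s|^4/E|s|^2
w = np.zeros(taps, dtype=complex)
w[taps // 2] = 1.0                        # center-spike initialization
for k in range(taps, n):
    uk = u[k - np.arange(taps)]
    y = np.conj(w) @ uk                   # equalizer output y = w^H u
    w = w + mu * (R2 - abs(y)**2) * np.conj(y) * uk   # blind CMA update

# After convergence the output modulus sits near 1.
mods = [abs(np.conj(w) @ u[k - np.arange(taps)]) for k in range(n - 500, n)]
print(np.mean(mods))
```

Because CMA only constrains the modulus, the recovered constellation may carry an arbitrary phase rotation, which is typically resolved by differential encoding or a separate phase-tracking loop.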