Adaptive Signal Processing

Professor A G Constantinides

Adaptive Signal Processing
Problem: equalise, through an FIR filter, the distorting effect of a communication channel that may be changing with time. If the channel were fixed, a possible solution could be based on the Wiener filter approach: in that case we need to know the correlation matrix of the transmitted signal and the cross-correlation vector between the input and the desired response. When the filter is operating in an unknown environment, these required quantities need to be found from the accumulated data.

Adaptive Signal Processing
The problem is particularly acute when not only the environment is changing but the data involved are also non-stationary. In such cases we need to follow the behaviour of the signals in time and adapt the correlation parameters as the environment changes. This essentially produces a temporally adaptive filter.

Adaptive Signal Processing
A possible framework is: [Figure: block diagram of an adaptive filter whose coefficients are updated by an adaptive algorithm.]

Adaptive Signal Processing
Applications are many:
- Digital communications channel equalisation
- Adaptive noise cancellation
- Adaptive echo cancellation
- System identification
- Smart antenna systems
- Blind system equalisation
- And many, many others

Adaptive Signal Processing
Echo cancellers in local loops: [Figure: a hybrid couples the transmit path (Tx1) to the local loop; an echo canceller, adjusted by an adaptive algorithm, subtracts the echo of Tx1 from the receive path (Rx1, Rx2).]

Adaptive Signal Processing
Adaptive noise canceller: [Figure: the primary signal (signal + noise) has the output of an FIR filter subtracted from it; the filter operates on a reference noise signal and is adjusted by an adaptive algorithm.]

Adaptive Signal Processing
System identification: [Figure: the input signal drives both an unknown system and an adaptive FIR filter; the difference between their outputs is the error used by the adaptive algorithm.]

Adaptive Signal Processing
System equalisation: [Figure: the signal passes through the unknown system and then the adaptive FIR filter; a delayed version of the input serves as the desired response, and the difference drives the adaptive algorithm.]

Adaptive Signal Processing
Adaptive predictors: [Figure: the adaptive FIR filter operates on a delayed version of the signal and predicts its current value; the prediction error drives the adaptive algorithm.]

Adaptive Signal Processing
Adaptive arrays: [Figure: an array of sensors feeds a linear combiner that is adapted to suppress interference.]

Adaptive Signal Processing
Basic principles:
1) Form an objective function (performance criterion).
2) Find the gradient of the objective function with respect to the FIR filter weights.
3) Choose one of the several different approaches that can be used at this point.
4) Form a differential/difference equation from the gradient.

Adaptive Signal Processing
Let the desired signal be $d(n)$, the input signal $x(n)$ and the output $y(n)$. Now form the vectors $\mathbf{x}(n) = [x(n)\; x(n-1)\; \cdots\; x(n-m+1)]^T$ and $\mathbf{w} = [w_0\; w_1\; \cdots\; w_{m-1}]^T$, so that $y(n) = \mathbf{w}^T\mathbf{x}(n)$.

Adaptive Signal Processing
Then form the objective function $J(n) = E[e^2(n)]$, where $e(n) = d(n) - \mathbf{w}^T\mathbf{x}(n)$.

Adaptive Signal Processing
We wish to minimise this function at the instant $n$. Using steepest descent we write $\mathbf{w}(n+1) = \mathbf{w}(n) - \frac{\mu}{2}\,\frac{\partial J(n)}{\partial \mathbf{w}}$. But $\frac{\partial J(n)}{\partial \mathbf{w}} = -2E[e(n)\mathbf{x}(n)] = 2(\mathbf{R}\mathbf{w}(n) - \mathbf{p})$, where $\mathbf{R} = E[\mathbf{x}(n)\mathbf{x}^T(n)]$ and $\mathbf{p} = E[d(n)\mathbf{x}(n)]$.

Adaptive Signal Processing
So the "weights update equation" is $\mathbf{w}(n+1) = \mathbf{w}(n) + \mu(\mathbf{p} - \mathbf{R}\mathbf{w}(n))$. Since the objective function is quadratic, this expression converges to the unique minimum for a suitable step size. The equation is not practical: if we knew $\mathbf{R}$ and $\mathbf{p}$ a priori, we could find the required solution directly (the Wiener solution) as $\mathbf{w}_{\text{opt}} = \mathbf{R}^{-1}\mathbf{p}$.
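As a concrete illustration, here is a minimal Python/NumPy sketch of this idealised iteration (the function name and arguments are ours); it presumes the true $\mathbf{R}$ and $\mathbf{p}$ are available, which is precisely why it is not practical:

```python
import numpy as np

def steepest_descent(R, p, mu, iters=200):
    """Idealised steepest-descent weight iteration with known R and p.

    A sketch of w(n+1) = w(n) + mu (p - R w(n)); for a suitable step
    size it tends to the Wiener solution R^{-1} p.
    """
    w = np.zeros(len(p))
    for _ in range(iters):
        w = w + mu * (p - R @ w)   # deterministic gradient step
    return w
```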

Adaptive Signal Processing
However, these matrices are not known. Approximate expressions are obtained by ignoring the expectations in the earlier complete forms, i.e. using the instantaneous estimates $\hat{\mathbf{R}} = \mathbf{x}(n)\mathbf{x}^T(n)$ and $\hat{\mathbf{p}} = d(n)\mathbf{x}(n)$. This is very crude; however, because the update equation accumulates such quantities progressively, we expect the crude form to improve.

The LMS Algorithm
Thus we have $\mathbf{w}(n+1) = \mathbf{w}(n) + \mu\bigl(d(n) - \mathbf{w}^T(n)\mathbf{x}(n)\bigr)\mathbf{x}(n)$, where the error is $e(n) = d(n) - \mathbf{w}^T(n)\mathbf{x}(n)$, and hence we can write $\mathbf{w}(n+1) = \mathbf{w}(n) + \mu\,e(n)\mathbf{x}(n)$. This is sometimes called stochastic gradient descent.
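A minimal NumPy sketch of this update (the function and signal names are ours, not from the slides); the tap-input vector is the $\mathbf{x}(n)$ defined earlier:

```python
import numpy as np

def lms(x, d, m, mu):
    """LMS adaptive filter: w(n+1) = w(n) + mu e(n) x(n) (a sketch).

    x : input signal, d : desired signal, m : number of taps, mu : step size.
    Returns the final weights and the error signal.
    """
    N = len(x)
    w = np.zeros(m)
    e = np.zeros(N)
    for n in range(m - 1, N):
        xn = x[n - m + 1 : n + 1][::-1]   # x(n) = [x(n), ..., x(n-m+1)]^T
        e[n] = d[n] - w @ xn              # e(n) = d(n) - w^T(n) x(n)
        w = w + mu * e[n] * xn            # stochastic gradient step
    return w, e
```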

Convergence
The parameter $\mu$ is the step size, and it should be selected carefully: if it is too small, convergence takes too long; if it is too large, the algorithm can become unstable. Write the autocorrelation matrix in the eigen-factorised form $\mathbf{R} = \mathbf{Q}\boldsymbol{\Lambda}\mathbf{Q}^T$,

Convergence
where $\mathbf{Q}$ is orthogonal and $\boldsymbol{\Lambda}$ is diagonal, containing the eigenvalues of $\mathbf{R}$. The error in the weights with respect to their optimal values is $\mathbf{v}(n) = \mathbf{w}(n) - \mathbf{w}_{\text{opt}}$ (using the Wiener solution for $\mathbf{w}_{\text{opt}}$). Substituting into the update equation, we obtain $\mathbf{v}(n+1) = (\mathbf{I} - \mu\mathbf{R})\mathbf{v}(n)$.

Convergence
Or equivalently $\mathbf{v}(n+1) = (\mathbf{I} - \mu\mathbf{Q}\boldsymbol{\Lambda}\mathbf{Q}^T)\mathbf{v}(n)$, i.e. $\mathbf{Q}^T\mathbf{v}(n+1) = (\mathbf{I} - \mu\boldsymbol{\Lambda})\mathbf{Q}^T\mathbf{v}(n)$. Thus we have a decoupled set of first-order recursions. Form a new variable $\mathbf{u}(n) = \mathbf{Q}^T\mathbf{v}(n)$.

Convergence
So that $\mathbf{u}(n+1) = (\mathbf{I} - \mu\boldsymbol{\Lambda})\mathbf{u}(n)$, i.e. $u_k(n+1) = (1 - \mu\lambda_k)u_k(n)$. Thus each element of this new variable depends on its previous value via a scaling constant. The equation will therefore have an exponential form in the time domain, and the largest coefficient on the right-hand side will dominate.

Convergence
We require that $|1 - \mu\lambda_k| < 1$ for all $k$, or $0 < \mu < \dfrac{2}{\lambda_{\max}}$. In practice we take a much smaller value than this.
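To illustrate, a small sketch (names ours) that estimates the bound $2/\lambda_{\max}$ from data by forming a sample autocorrelation matrix; in practice one would then pick a step size well below the returned value:

```python
import numpy as np

def lms_step_bound(x, m):
    """Estimate the stability bound 2/lambda_max for an order-m LMS filter.

    Builds a sample autocorrelation matrix from time-delay vectors of x
    and returns 2 divided by its largest eigenvalue.
    """
    N = len(x)
    X = np.array([x[n - m + 1 : n + 1][::-1] for n in range(m - 1, N)])
    R = (X.T @ X) / X.shape[0]             # sample estimate of R
    lam_max = np.linalg.eigvalsh(R).max()  # largest eigenvalue
    return 2.0 / lam_max
```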

Estimates
Then it can be seen that, as $n \to \infty$, the weight update equation yields $\mathbf{w}(n+1) \simeq \mathbf{w}(n)$. On taking expectations of both sides of the update we have $E[\mathbf{w}(n+1)] = E[\mathbf{w}(n)] + \mu\bigl(\mathbf{p} - \mathbf{R}\,E[\mathbf{w}(n)]\bigr)$, or $\mathbf{p} - \mathbf{R}\,E[\mathbf{w}(\infty)] = \mathbf{0}$.

Limiting Forms
This indicates that the solution ultimately tends to the Wiener form $E[\mathbf{w}(\infty)] = \mathbf{R}^{-1}\mathbf{p}$, i.e. the estimate is unbiased.

Misadjustment
The excess mean square error in the objective function is due to gradient noise. Assuming uncorrelatedness, set $J_{\min} = \sigma_d^2 - \mathbf{p}^T\mathbf{R}^{-1}\mathbf{p}$, where $\sigma_d^2$ is the variance of the desired response and the second term is zero when input and desired response are uncorrelated. The misadjustment is then defined as $M = \dfrac{J(\infty) - J_{\min}}{J_{\min}}$.

Misadjustment
It can be shown that the misadjustment is given by $M \simeq \dfrac{\mu}{2}\,\mathrm{tr}(\mathbf{R}) = \dfrac{\mu}{2}\sum_k \lambda_k$.

Normalised LMS
To make the step size respond to the signal, one needs a data-dependent step $\mu(n) = \dfrac{\tilde{\mu}}{\alpha + \mathbf{x}^T(n)\mathbf{x}(n)}$. In this case the update becomes $\mathbf{w}(n+1) = \mathbf{w}(n) + \dfrac{\tilde{\mu}}{\alpha + \mathbf{x}^T(n)\mathbf{x}(n)}\,e(n)\mathbf{x}(n)$, and the misadjustment is proportional to the step size $\tilde{\mu}$.
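A sketch of the normalised update (the function name and the default values of $\tilde{\mu}$ and $\alpha$ are ours, chosen only for illustration):

```python
import numpy as np

def nlms(x, d, m, mu=0.5, alpha=1e-6):
    """Normalised LMS: the step size is scaled by the input energy (a sketch)."""
    N = len(x)
    w = np.zeros(m)
    e = np.zeros(N)
    for n in range(m - 1, N):
        xn = x[n - m + 1 : n + 1][::-1]
        e[n] = d[n] - w @ xn
        # alpha guards against division by a near-zero input norm
        w = w + (mu / (alpha + xn @ xn)) * e[n] * xn
    return w, e
```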

Transform-based LMS Algorithm
[Figure: block diagram with a transform at the input and an inverse transform at the output, the LMS adaptation being performed on the transform-domain coefficients.]

Least Squares Adaptive
With $\mathbf{R}(n) = \sum_{i=1}^{n}\mathbf{x}(i)\mathbf{x}^T(i)$ and $\mathbf{p}(n) = \sum_{i=1}^{n}d(i)\mathbf{x}(i)$, we have the least squares solution $\mathbf{w}(n) = \mathbf{R}^{-1}(n)\mathbf{p}(n)$. However, this is computationally very intensive to implement. Alternative forms make use of recursive estimates of the matrices involved.
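For comparison with the recursive form that follows, a direct (block) solution sketch (names ours); it rebuilds $\mathbf{R}(n)$ and $\mathbf{p}(n)$ from all the data and performs an $O(m^3)$ solve, which is the cost the recursions avoid:

```python
import numpy as np

def block_least_squares(x, d, m):
    """Direct least squares solution w = R^{-1}(n) p(n) (a sketch)."""
    N = len(x)
    X = np.array([x[n - m + 1 : n + 1][::-1] for n in range(m - 1, N)])
    R = X.T @ X             # R(n) = sum of x(i) x(i)^T
    p = X.T @ d[m - 1 : N]  # p(n) = sum of d(i) x(i)
    return np.linalg.solve(R, p)
```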

Recursive Least Squares
Firstly we note that $\mathbf{R}(n) = \mathbf{R}(n-1) + \mathbf{x}(n)\mathbf{x}^T(n)$ and $\mathbf{p}(n) = \mathbf{p}(n-1) + d(n)\mathbf{x}(n)$. We now use the matrix inversion lemma (or the Sherman-Morrison formula). Let $\mathbf{P}(n) = \mathbf{R}^{-1}(n)$.

Recursive Least Squares (RLS)
Let $\mathbf{k}(n) = \dfrac{\mathbf{P}(n-1)\mathbf{x}(n)}{1 + \mathbf{x}^T(n)\mathbf{P}(n-1)\mathbf{x}(n)}$. Then $\mathbf{P}(n) = \mathbf{P}(n-1) - \mathbf{k}(n)\mathbf{x}^T(n)\mathbf{P}(n-1)$. The quantity $\mathbf{k}(n)$ is known as the Kalman gain.

Recursive Least Squares
Now use $\mathbf{P}(n)$ in the computation of the filter weights, $\mathbf{w}(n) = \mathbf{P}(n)\mathbf{p}(n)$. From the earlier expression for the updates we have $\mathbf{p}(n) = \mathbf{p}(n-1) + d(n)\mathbf{x}(n)$, and hence $\mathbf{w}(n) = \mathbf{w}(n-1) + \mathbf{k}(n)\bigl(d(n) - \mathbf{x}^T(n)\mathbf{w}(n-1)\bigr)$.
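Putting the three recursions together, a minimal RLS sketch (the names and the initialisation $\mathbf{P}(0) = \delta\mathbf{I}$ are ours; with the forgetting factor set to 1 it matches the growing-window form above):

```python
import numpy as np

def rls(x, d, m, lam=1.0, delta=100.0):
    """Recursive least squares (a sketch of the updates above).

    lam   : forgetting factor (1.0 reproduces the growing-window recursions)
    delta : P(0) = delta * I, a standard way to start the inverse estimate
    """
    N = len(x)
    w = np.zeros(m)
    P = delta * np.eye(m)                 # P(n), the estimate of R^{-1}(n)
    e = np.zeros(N)
    for n in range(m - 1, N):
        xn = x[n - m + 1 : n + 1][::-1]
        Px = P @ xn
        k = Px / (lam + xn @ Px)          # Kalman gain k(n)
        e[n] = d[n] - w @ xn              # a priori error
        w = w + k * e[n]                  # weight update
        P = (P - np.outer(k, Px)) / lam   # inversion-lemma update of P(n)
    return w, e
```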

Kalman Filters
The Kalman filter solves a sequential estimation problem and is normally derived from either the Bayes approach or the innovations approach. Essentially both lead to the same equations as RLS, but the underlying assumptions are different.

Kalman Filters
The problem is normally stated as: given a sequence of noisy observations, estimate the sequence of state vectors of a linear system driven by noise. Standard formulation: $\mathbf{x}(n+1) = \mathbf{A}(n)\mathbf{x}(n) + \mathbf{w}(n)$ (state equation) and $\mathbf{y}(n) = \mathbf{C}(n)\mathbf{x}(n) + \mathbf{v}(n)$ (observation equation).

Kalman Filters
Kalman filters may be seen as RLS with the following correspondence:

State space                           | RLS
State-update matrix $\mathbf{A}(n)$   | $\mathbf{I}$
State-noise variance                  | $0$
Observation matrix $\mathbf{C}(n)$    | $\mathbf{x}^T(n)$
Observations $\mathbf{y}(n)$          | $d(n)$
State estimate $\hat{\mathbf{x}}(n)$  | $\mathbf{w}(n)$
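For reference, a minimal sketch of one predict/update step of the standard formulation above (the function and variable names are ours):

```python
import numpy as np

def kalman_step(x_est, P, y, A, C, Q, Rv):
    """One Kalman filter step for x(n+1) = A x(n) + w(n), y(n) = C x(n) + v(n).

    x_est, P : current state estimate and its covariance
    Q, Rv    : state-noise and observation-noise covariances
    """
    # Predict through the state equation
    x_pred = A @ x_est
    P_pred = A @ P @ A.T + Q
    # Update with the new observation via the innovation and Kalman gain
    S = C @ P_pred @ C.T + Rv             # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x_est)) - K @ C) @ P_pred
    return x_new, P_new
```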

Cholesky Factorisation
In situations where storage and, to some extent, computational demand are at a premium, one can use the Cholesky factorisation technique for a positive definite matrix: express $\mathbf{R} = \mathbf{L}\mathbf{L}^T$, where $\mathbf{L}$ is lower triangular. There are many techniques for determining the factorisation.
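As an illustration, a sketch (names ours) that uses the factorisation to solve the normal equations $\mathbf{R}\mathbf{w} = \mathbf{p}$ with two triangular solves instead of a general matrix inverse:

```python
import numpy as np
from scipy.linalg import solve_triangular

def solve_via_cholesky(R, p):
    """Solve R w = p for positive definite R using R = L L^T (a sketch)."""
    L = np.linalg.cholesky(R)                     # R = L L^T, L lower triangular
    z = solve_triangular(L, p, lower=True)        # forward solve: L z = p
    return solve_triangular(L.T, z, lower=False)  # back solve: L^T w = z
```

The two triangular solves cost $O(m^2)$ each once $\mathbf{L}$ is known, and only the triangular factor needs to be stored.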