LECTURE 05: LMS ADAPTIVE FILTERS
ECE 8443 – Pattern Recognition / ECE 8423 – Adaptive Signal Processing
Objectives: The FIR Adaptive Filter; The LMS Adaptive Filter; Stability and Convergence; Covariance Matrix Diagonalization
Resources: PP: LMS Tutorial; CNX: LMS Adaptive Filter; DM: DSP Echo Cancellation; WIKI: Echo Cancellation; ISIP: Echo Cancellation
URL: .../publications/courses/ece_8423/lectures/current/lecture_05.ppt
MP3: .../publications/courses/ece_8423/lectures/current/lecture_05.mp3

ECE 8423: Lecture 05, Slide 1
The Basic Adaptive Filter
We apply our fixed, finite-length LMS filter to the problem of a continuously adaptive filter that can track changes in a signal. Specification of the filter involves three essential elements: the structure of the filter, the overall system configuration, and the performance criterion for adaptation.
Definitions:
Data vector: $x_n = [x(n), x(n-1), \ldots, x(n-L+1)]^T$
Filter coefficients: $f_n = [f_n(0), f_n(1), \ldots, f_n(L-1)]^T$
Convolution (filter output): $y(n) = f_n^T x_n = \sum_{i=0}^{L-1} f_n(i)\, x(n-i)$
Error signal: $e(n) = d(n) - y(n)$, where $d(n)$ is the desired signal
To optimize the filter coefficients sample by sample, we need an iterative solution to the normal equations. Alternatively, we could compute a batch solution for every new sample, but this wastes computational resources.
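The following is a minimal Python sketch of these definitions, assuming an L-tap filter and one-dimensional NumPy signals; the function name fir_error and the indexing convention are illustrative, not from the slides.

```python
import numpy as np

def fir_error(f, x, d, n):
    """Return y(n) = f^T x_n and e(n) = d(n) - y(n) for an L-tap filter f (n >= L-1)."""
    L = len(f)
    x_n = x[n - L + 1:n + 1][::-1]   # data vector [x(n), x(n-1), ..., x(n-L+1)]
    y_n = f @ x_n                    # convolution output at sample n
    return y_n, d[n] - y_n           # filter output and error signal
```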

ECE 8423: Lecture 05, Slide 2
Iterative Solutions to the Normal Equations
Recall our objective function: $J = E[e^2(n)]$. This is a quadratic performance surface for which a global minimum can be found using a gradient descent approach (following the derivative from an initial guess, $f_0$).
Recall our optimal solution was: $f^* = R^{-1} g$, where $R = E[x_n x_n^T]$ is the autocorrelation matrix and $g = E[d(n)\, x_n]$ is the cross-correlation vector.
A general iterative solution is given by: $f_{n+1} = f_n + \mu_n p_n$. The constant $\mu_n$ is a step-size parameter and $p_n$ defines the search direction.
A general class of iterative algorithms comprises those that iterate based on the gradient of the mean-squared error: $f_{n+1} = f_n - \mu_n D_n \nabla J(f_n)$, where $D_n$ is an $(L \times L)$ weighting matrix. A large number of possible algorithms exist based on the selection of $D_n$ and $\mu_n$.

ECE 8423: Lecture 05, Slide 3
The Method of Steepest Descent
Principle: move in a direction opposite to the gradient of the performance index, $J$, and by a distance proportional to the magnitude of that gradient: $f_{n+1} = f_n - \mu \nabla J(f_n)$ (i.e., $D_n = I$ with a fixed step size $\mu$).
If we differentiate $J$ with respect to $f$: $\nabla J = 2(R f_n - g)$.
Combining expressions, we can write our update equation: $f_{n+1} = f_n + 2\mu (g - R f_n)$.
The term steepest descent arises from the fact that the gradient is normal to lines of equal cost; for a parabolic surface, this is the direction of steepest descent.
The following property can be proven: the iteration converges to $f^*$ if and only if $0 < \mu < 1/\lambda_{\max}$, where $\lambda_{\max}$ is the largest eigenvalue of the autocorrelation matrix.
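A sketch of this iteration in Python, under the $f_{n+1} = f_n + 2\mu(g - R f_n)$ convention used above; the matrix R, vector g, and iteration count are illustrative test values, not from the slides.

```python
import numpy as np

def steepest_descent(R, g, mu, n_iter=100, f0=None):
    """Iterate f <- f + 2*mu*(g - R f); converges to R^{-1} g if 0 < mu < 1/lambda_max."""
    f = np.zeros(len(g)) if f0 is None else f0.copy()
    for _ in range(n_iter):
        f = f + 2 * mu * (g - R @ f)   # step opposite the gradient 2*(R f - g)
    return f

# Example: recover f* = R^{-1} g for a small synthetic problem.
R = np.array([[2.0, 0.5], [0.5, 1.0]])
g = np.array([1.0, 0.3])
mu = 0.9 / np.max(np.linalg.eigvalsh(R))   # safely inside 0 < mu < 1/lambda_max
print(steepest_descent(R, g, mu), np.linalg.solve(R, g))
```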

ECE 8423: Lecture 05, Slide 4
The LMS Adaptive Filter
To design a filter that is responsive to changes in the signal, we need an iterative structure that depends on the local properties of the signal. We could use: $f_{n+1} = f_n + 2\mu (g_n - R_n f_n)$, where the autocorrelation and cross-correlation are computed with respect to the $n$th sample. But this is not computationally efficient.
Instead, we can estimate the gradient from the instantaneous error: $\hat{\nabla} J_n = \nabla e^2(n) = -2\, e(n)\, x_n$.
We can write this in terms of the error signal and the input signal: $f_{n+1} = f_n + 2\mu\, e(n)\, x_n$.
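A minimal Python sketch of this update; the function name lms, the zero initialization, and the signal layout are illustrative assumptions consistent with the slides.

```python
import numpy as np

def lms(x, d, L, mu):
    """Adapt an L-tap filter so its output tracks d; return the filter f and error e."""
    f = np.zeros(L)                     # zero initialization (see the observations below)
    e = np.zeros(len(x))
    for n in range(L - 1, len(x)):
        x_n = x[n - L + 1:n + 1][::-1]  # data vector [x(n), ..., x(n-L+1)]
        e[n] = d[n] - f @ x_n           # instantaneous error e(n)
        f = f + 2 * mu * e[n] * x_n     # LMS update: ~L multiply-adds per step
    return f, e
```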

ECE 8423: Lecture 05, Slide 5
Observations
• This is known as the LMS adaptive filter update. The filter impulse response at each sample is equal to its previous value plus a term proportional to the product of the error and the signal (recall the orthogonality principle).
• This approach is widely used (e.g., modems, acoustic echo cancellers). A group delay can be applied to the filter for applications where there is a known fixed delay.
• It is a computationally simple update: approximately L multiplications and additions per step. This is significant, since the filter sometimes has to be long (e.g., in echo cancellation applications).
• The filter tracks instantaneous variations in the signal, which can be good (channel switching) and bad (nonstationary noise). Hence, control of the adaptation speed becomes critical.
• The filter is often initialized to a value of zero, which is safe (why?).
• For an excellent treatment of an echo cancellation application, including a discussion of many important DSP issues, see DM: DSP Echo Cancellation. A toy simulation follows below.
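As an illustration of the echo-cancellation use case, here is a toy example using the lms() sketch from the previous slide; the echo path h and all signal parameters are made-up test values, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
h = np.array([0.6, -0.3, 0.1])       # unknown echo path (hypothetical)
x = rng.standard_normal(5000)        # far-end signal driving the echo
d = np.convolve(x, h)[:len(x)]       # echo observed at the near end
f, e = lms(x, d, L=3, mu=0.01)
print(f)                             # f approaches h; e(n) decays toward zero
```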

ECE 8423: Lecture 05, Slide 6
Performance Considerations
There are two major concerns about the performance of the filter: (1) stability and convergence, and (2) the mean-squared error. One approach to analyzing the performance of the filter is to examine its behavior on a stationary signal.
Recall we can analyze the long-term solution of the normal equations in terms of the Fourier transform: $F^*(e^{j\omega}) = S_{dx}(e^{j\omega}) / S_{xx}(e^{j\omega})$, the ratio of the cross-power spectrum to the input power spectrum.
Convergence of the filter can be obtained by applying the so-called independence assumption: successive data vectors are statistically independent, so that $x_n$ is independent of $f_n$ (which depends only on past data). This condition is stronger than requiring the input to be white noise.
We begin with our expression for the filter coefficients: $f_{n+1} = f_n + 2\mu\, e(n)\, x_n = f_n + 2\mu\, (d(n) - x_n^T f_n)\, x_n$.
We can take the expectation: $E[f_{n+1}] = E[f_n] + 2\mu\, (g - R\, E[f_n])$.
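A Monte Carlo sketch of this expectation in Python: averaging the LMS filter over many independent runs should track the deterministic recursion $E[f_{n+1}] = E[f_n] + 2\mu(g - R\,E[f_n])$. The white-input setup (so that $R = I$ and $g = h$) and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
L, mu, n_runs, n_steps = 2, 0.02, 2000, 50
h = np.array([0.5, -0.2])              # true system: d(n) = h^T x_n
R, g = np.eye(L), h                    # white unit-power input: R = I, g = R h
fs = np.zeros((n_runs, L))             # one filter per independent run
f_theory = np.zeros(L)
for n in range(n_steps):
    x = rng.standard_normal((n_runs, L))   # fresh, independent data vectors
    d = x @ h
    e = d - np.sum(fs * x, axis=1)
    fs += 2 * mu * e[:, None] * x          # LMS update on every run
    f_theory += 2 * mu * (g - R @ f_theory)
print(fs.mean(axis=0), f_theory)       # ensemble mean tracks the theory
```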

ECE 8423: Lecture 05, Slide 7
Convergence of the Mean
We can define an error vector: $v_n = f_n - f^*$.
Subtracting $f^*$ from both sides of our update equation and taking the expectation: $E[v_{n+1}] = (I - 2\mu R)\, E[v_n]$.
We can prove convergence in the mean if we can prove that: $\lim_{n \to \infty} E[v_n] = 0$.
We can decouple the update equation using eigenvalue analysis: $R = Q \Lambda Q^T$.
We can define a rotated error vector: $u_n = Q^T E[v_n]$.
Multiply both sides by $Q^T$ and note $Q^T Q = I$: $u_{n+1} = (I - 2\mu \Lambda)\, u_n$.
Since $\Lambda$ is diagonal, the components evolve independently: $u_{n+1}(j) = (1 - 2\mu \lambda_j)\, u_n(j)$.
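A numerical sketch of this decoupling: in the rotated coordinates, the mean error decays independently in each mode by the factor $(1 - 2\mu\lambda_j)$ per step. The matrix R, the step size, and the initial error are illustrative values, not from the slides.

```python
import numpy as np

R = np.array([[2.0, 0.5], [0.5, 1.0]])
lam, Q = np.linalg.eigh(R)            # R = Q diag(lam) Q^T
mu, v = 0.1, np.array([1.0, -1.0])    # step size and initial mean error E[v_0]
u = Q.T @ v                           # rotated error vector u_0
for n in range(1, 6):
    u = (1 - 2 * mu * lam) * u        # u_{n+1}(j) = (1 - 2*mu*lam_j) u_n(j)
    print(n, u, (1 - 2 * mu * lam) ** n * (Q.T @ v))   # matches the closed form
```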

ECE 8423: Lecture 05, Slide 8
Convergence of the Mean (Cont.)
This equation: $u_{n+1}(j) = (1 - 2\mu \lambda_j)\, u_n(j)$ is a first-order difference equation, whose solution in terms of the initial condition $u_0(j)$ is: $u_n(j) = (1 - 2\mu \lambda_j)^n\, u_0(j)$.
Consequently: $u_n(j) \rightarrow 0$ if and only if $|1 - 2\mu \lambda_j| < 1$.
The eigenvalues of $R$ are all real and positive, since $R$ is symmetric and positive definite. Therefore: $0 < \mu < 1/\lambda_j$.
This must be satisfied for all $j$, and hence for all eigenvalues $\lambda_j$: $0 < \mu < 1/\lambda_{\max}$.
In practice, this bound is too high, and $\lambda_{\max}$ is too difficult to estimate online. A stronger (more conservative) condition can be derived: $0 < \mu < 1/\mathrm{tr}(R)$. But $\mathrm{tr}(R) = \sum_j \lambda_j = L\, E[x^2(n)]$, so: $0 < \mu < 1/(L\, E[x^2(n)])$.
This is a very important result because it tells us how to set the adaptation speed in terms of something we can measure.
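A sketch of this rule in practice, reusing the lms() sketch from Slide 4: the input power is measured from the data and the step size is set inside the $1/(L\,E[x^2(n)])$ bound. The 0.1 safety factor and the unknown system are illustrative assumptions, not values from the slides.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(20000)                  # unit-power white input
d = np.convolve(x, [0.4, 0.25, -0.1])[:len(x)]  # hypothetical unknown system
L = 3
mu = 0.1 / (L * np.mean(x ** 2))                # safely inside the bound
f, e = lms(x, d, L, mu)
print(f)                                        # approaches [0.4, 0.25, -0.1]
print(np.mean(e[-1000:] ** 2))                  # steady-state error is near zero
```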

ECE 8423: Lecture 05, Slide 9
Diagonalization of a Correlation Matrix
Consider a correlation matrix, $R$, with eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_L$. Define a matrix $Q = [q_1\ q_2\ \cdots\ q_L]$, where the $q_i$ form a set of real, orthonormal eigenvectors satisfying $R q_i = \lambda_i q_i$ or $q_i^T q_j = \delta_{ij}$. Define a spectral matrix, $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_L)$.
We can write the following relations:
$R q_i = \lambda_i q_i, \quad i = 1, 2, \ldots, L$
$\lambda_i = q_i^T R q_i$
$R Q = Q \Lambda$
$R = Q \Lambda Q^T$
The first two equations follow from the definition of an eigenvalue (and the orthonormality of the $q_i$). The third equation collects the eigenvalue equations in matrix form. The last equation follows by postmultiplying by $Q^T$ and noting that $Q Q^T = I$.
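A quick numerical check of these relations in Python, assuming a small synthetic correlation matrix; numpy.linalg.eigh returns the real eigenvalues and orthonormal eigenvectors of a symmetric matrix.

```python
import numpy as np

R = np.array([[2.0, 0.5], [0.5, 1.0]])   # symmetric correlation matrix
lam, Q = np.linalg.eigh(R)               # columns of Q satisfy R q_i = lam_i q_i
Lam = np.diag(lam)                       # spectral matrix
print(np.allclose(R @ Q, Q @ Lam))       # R Q = Q Lambda
print(np.allclose(Q.T @ Q, np.eye(2)))   # Q^T Q = I (orthonormality)
print(np.allclose(Q @ Lam @ Q.T, R))     # R = Q Lambda Q^T
```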

ECE 8423: Lecture 05, Slide 10
Summary
We introduced the FIR adaptive filter and the LMS adaptive filter, and we derived the LMS update equations. We also derived the filter's stability and convergence properties for stationary signals, which led to a simple method for setting the adaptation speed in terms of a quantity we can easily measure.
Things we did not discuss, but that are treated extensively in the textbook:
• The eigenvalue disparity problem: convergence is not uniform and depends on the spread of the eigenvalues of the autocorrelation matrix.
• Time constants for convergence: the speed of convergence creates an overshoot/undershoot type of problem.
• Estimation of the time delay or group delay of the filter: often set a priori in practice, based on application knowledge or constraints.
• Steady-state value of the mean-squared error: related to the power of the input signal and the adaptation speed.
• Transfer function analysis for deterministic signals: of historical significance, but not of great practical value, since we are most concerned with stochastic signals.

ECE 8423: Lecture 05, Slide 11
Echo Cancellation for Analog Telephony