ELE 774 - Adaptive Signal Processing: Recursive Least-Squares (RLS) Adaptive Filters


1 Recursive Least-Squares (RLS) Adaptive Filters

2 RLS Definition
With the arrival of new data samples, the estimates are updated recursively. Introduce a weighting factor β(n,i) into the sum-of-error-squares definition.
Weighting factor: β(n,i) = λ^(n−i).
Forgetting factor λ: real, positive, λ < 1 and λ → 1; λ = 1 gives the ordinary LS solution.
1/(1−λ): memory of the algorithm (ordinary LS has infinite memory).
w(n) is kept fixed during the observation interval 1 ≤ i ≤ n over which the cost function E(n) is defined; there are two time indices, n (outer) and i (inner). The weighted cost function is reconstructed below.
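The weighting and the cost function referred to above were lost in the transcript; a reconstruction in standard exponentially weighted least-squares notation (Haykin-style, which this course appears to follow) is:

```latex
% Exponentially weighted sum of error squares (reconstruction)
\mathcal{E}(n) = \sum_{i=1}^{n} \beta(n,i)\,\lvert e(i)\rvert^{2},
\qquad \beta(n,i) = \lambda^{\,n-i},\quad 0 < \lambda \le 1,
\qquad e(i) = d(i) - \mathbf{w}^{H}(n)\,\mathbf{u}(i).
```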

3 RLS Definition

4 RLS Regularisation
The LS cost function can be ill-posed: there may be insufficient information in the input data to reconstruct the input-output mapping uniquely, and there is uncertainty in the mapping due to measurement noise. To overcome the problem, take 'prior information' into account by adding a regularisation term to the cost function. Prewindowing is assumed (not the covariance method). The regularisation term smooths and stabilises the solution; δ is the regularisation parameter. A reconstruction of the regularised cost function is given below.
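The regularised cost function itself was not transcribed; its standard form, consistent with the normal equations on the next slide, is:

```latex
% Exponentially weighted, regularised cost function (reconstruction)
\mathcal{E}(n) = \sum_{i=1}^{n} \lambda^{\,n-i}\,\bigl\lvert d(i) - \mathbf{w}^{H}(n)\,\mathbf{u}(i)\bigr\rvert^{2}
               + \delta\,\lambda^{n}\,\lVert \mathbf{w}(n) \rVert^{2},
\qquad \delta > 0 .
```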

5 RLS Normal Equations
From the method of least-squares we know the normal equations. With the exponential weighting and the regularisation term, the time-average autocorrelation matrix of the input u(n) becomes Φ(n), given below. Similarly, the time-average cross-correlation vector between the tap inputs and the desired response is z(n) (unaffected by regularisation). Hence, the optimum (in the LS sense) filter coefficients must satisfy Φ(n)ŵ(n) = z(n). The autocorrelation matrix is always non-singular due to the regularisation term, so Φ⁻¹(n) always exists.
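The matrix and vector definitions are missing from the transcript; the standard expressions they refer to are:

```latex
% Time-average correlation quantities and normal equations (reconstruction)
\boldsymbol{\Phi}(n) = \sum_{i=1}^{n} \lambda^{\,n-i}\,\mathbf{u}(i)\,\mathbf{u}^{H}(i) + \delta\,\lambda^{n}\,\mathbf{I},
\qquad
\mathbf{z}(n) = \sum_{i=1}^{n} \lambda^{\,n-i}\,\mathbf{u}(i)\,d^{*}(i),
\qquad
\boldsymbol{\Phi}(n)\,\hat{\mathbf{w}}(n) = \mathbf{z}(n).
```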

6 RLS Recursive Computation
Isolate the last term (i = n) of the sum defining Φ(n); similarly for z(n). This gives the rank-one update recursions reconstructed below. We need to calculate Φ⁻¹(n) to find ŵ(n), and direct inversion can be costly, so use the Matrix Inversion Lemma (MIL).
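The recursions obtained by isolating the i = n term, as described above (reconstruction):

```latex
% Rank-one time updates of the correlation quantities (reconstruction)
\boldsymbol{\Phi}(n) = \lambda\,\boldsymbol{\Phi}(n-1) + \mathbf{u}(n)\,\mathbf{u}^{H}(n),
\qquad
\mathbf{z}(n) = \lambda\,\mathbf{z}(n-1) + \mathbf{u}(n)\,d^{*}(n).
```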

7 RLS Recursive Least-Squares Algorithm
Apply the Matrix Inversion Lemma to the rank-one update of Φ(n). Letting P(n) = Φ⁻¹(n) denote the inverse correlation matrix and k(n) the gain vector, we obtain a recursion for P(n) known as the Riccati equation of the RLS algorithm; see the reconstruction below.
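The gain vector and Riccati equation referred to on this slide are, in their standard form (reconstruction):

```latex
% Inverse correlation matrix, gain vector, and Riccati equation (reconstruction)
\mathbf{P}(n) = \boldsymbol{\Phi}^{-1}(n),
\qquad
\mathbf{k}(n) = \frac{\lambda^{-1}\,\mathbf{P}(n-1)\,\mathbf{u}(n)}
                     {1 + \lambda^{-1}\,\mathbf{u}^{H}(n)\,\mathbf{P}(n-1)\,\mathbf{u}(n)},
\qquad
\mathbf{P}(n) = \lambda^{-1}\,\mathbf{P}(n-1) - \lambda^{-1}\,\mathbf{k}(n)\,\mathbf{u}^{H}(n)\,\mathbf{P}(n-1).
```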

8 RLS Recursive Least-Squares Algorithm
Rearranging the gain-vector expression shows that k(n) = P(n)u(n). How can ŵ(n) be calculated recursively? Write ŵ(n) = P(n)z(n) and use the recursion z(n) = λz(n−1) + u(n)d*(n). After substituting the recursion for P(n) into the first term, and using P(n)u(n) = k(n), the weight-update equation of the next slide follows.

9 RLS Recursive Least-Squares Algorithm
The term ξ(n) = d(n) − ŵ^H(n−1)u(n) is called the a priori estimation error, whereas e(n) = d(n) − ŵ^H(n)u(n) is called the a posteriori estimation error. (Why? The former uses the old weight vector, the latter the updated one.)
Summary of the update equation: ŵ(n) = ŵ(n−1) + k(n)ξ*(n), with gain vector k(n) and a priori error ξ(n). Φ⁻¹(n) is calculated recursively, and the only division required is by a scalar.
Initialisation (n = 0): P(0) = δ⁻¹I, with δ the regularisation parameter, and ŵ(0) = 0 if no a priori information exists. A sketch of the complete recursion in code is given below.
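A minimal Python/NumPy sketch of the recursion summarised above, assuming real-valued signals; the function name rls and its arguments are illustrative, not taken from the slides:

```python
import numpy as np

def rls(u, d, M, lam=0.99, delta=1e-2):
    """Exponentially weighted RLS for a length-M FIR filter (real-valued sketch).

    u, d  : input and desired-response sequences (1-D arrays, same length)
    lam   : forgetting factor lambda, 0 < lam <= 1
    delta : regularisation parameter; P(0) = I / delta, w(0) = 0
    Returns the final weight vector and the a priori error sequence xi(n).
    """
    w = np.zeros(M)                 # w(0) = 0 (no a priori information)
    P = np.eye(M) / delta           # P(0) = delta^-1 * I
    xi = np.zeros(len(u))
    for n in range(M - 1, len(u)):
        u_n = u[n - M + 1:n + 1][::-1]         # tap-input vector u(n)
        Pu = P @ u_n
        k = Pu / (lam + u_n @ Pu)              # gain vector k(n)
        xi[n] = d[n] - w @ u_n                 # a priori error xi(n)
        w = w + k * xi[n]                      # weight update
        P = (P - np.outer(k, u_n @ P)) / lam   # Riccati equation for P(n)
    return w, xi
```

Running this on data generated by the regression model of slide 13 (d equal to u filtered by some w_o plus white noise) should show the squared a priori error settling near the noise variance within roughly 2M iterations, in line with the observations on slide 20.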

10 RLS Recursive Least-Squares Algorithm

11 RLS Recursive Least-Squares Algorithm

12 RLS Recursion for the Sum-of-Weighted-Error-Squares
From LS we know that the minimum sum of weighted error squares can be written in terms of the energy of the desired response and the cross-correlation vector z(n). Substituting the recursions for z(n) and ŵ(n) and simplifying yields the scalar recursion reconstructed below.
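The expressions on this slide were not transcribed; the standard result they lead to (Haykin) is the scalar recursion below, in which ξ(n) is the a priori and e(n) the a posteriori estimation error (their product is real, so the placement of the conjugate is immaterial):

```latex
% Recursion for the minimum sum of weighted error squares (reconstruction)
\mathcal{E}_{\min}(n) = \lambda\,\mathcal{E}_{\min}(n-1) + \xi(n)\,e^{*}(n).
```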

13 RLS Convergence Analysis
Assume a stationary environment and λ = 1. To avoid transient effects, consider times n > M.
Assumption I: the desired response d(n) and the tap-input vector u(n) are related by the linear regression model given below, where w_o is the regression parameter vector and e_o(n) is the measurement noise. The noise e_o(n) is white with zero mean and variance σ_o², and it is independent of the regressor u(n).
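The regression model of Assumption I, written out (reconstruction):

```latex
% Linear regression model relating d(n) and u(n) (Assumption I, reconstruction)
d(n) = \mathbf{w}_{o}^{H}\,\mathbf{u}(n) + e_{o}(n),
\qquad
\mathbb{E}\bigl[e_{o}(n)\bigr] = 0,
\qquad
\mathbb{E}\bigl[\lvert e_{o}(n)\rvert^{2}\bigr] = \sigma_{o}^{2}.
```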

14 RLS Convergence Analysis
Assumption II: the input vector u(n) is drawn from a stochastic process that is ergodic in the autocorrelation function, so the ensemble-average and time-average autocorrelation matrices coincide for large n: R (ensemble average) ≈ Φ(n)/n (time average).
Assumption III: the fluctuations in the weight-error vector ε(n) = ŵ(n) − w_o are slow compared with those of the input signal vector u(n). Justification: ε(n) is an accumulation of the a priori errors, and hence of the input, which has a smoothing (low-pass filtering) effect. Consequence: in expectations, ε(n) may be treated as approximately independent of u(n), so products of the two factor.

15 RLS Convergence in Mean Value
Take λ = 1. Substituting the regression model into the expression ŵ(n) = P(n)z(n) and taking the expectation, then applying Assumptions I and II, the expression simplifies to the result reconstructed below: ŵ(n) is a biased estimate of w_o because of the initialisation (the δI term in Φ(n)), but the bias → 0 as n → ∞.
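The simplified expression was not transcribed; the standard result under Assumptions I and II (with Φ(n) ≈ nR for λ = 1) is approximately:

```latex
% Convergence in the mean (lambda = 1, reconstruction)
\mathbb{E}\bigl[\hat{\mathbf{w}}(n)\bigr] \;\approx\; \mathbf{w}_{o} - \frac{\delta}{n}\,\mathbf{R}^{-1}\,\mathbf{w}_{o},
```

so the bias is of order 1/n and vanishes as n → ∞.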

16 RLS Mean-Square Deviation
Define the weight-error correlation matrix K(n) = E[ε(n)ε^H(n)]. Invoking Assumption I and simplifying, an expression for K(n) is obtained; the mean-square deviation is then D(n) = E[‖ε(n)‖²] = tr K(n). The resulting approximations are reconstructed below.
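The expressions on this slide were lost; the standard results for λ = 1 and n > M, on which the next slide's observations rely, are approximately:

```latex
% Weight-error correlation matrix and mean-square deviation (lambda = 1, reconstruction)
\mathbf{K}(n) = \mathbb{E}\bigl[\boldsymbol{\varepsilon}(n)\,\boldsymbol{\varepsilon}^{H}(n)\bigr]
            \;\approx\; \frac{\sigma_{o}^{2}}{n}\,\mathbf{R}^{-1},
\qquad
\mathcal{D}(n) = \operatorname{tr}\mathbf{K}(n)
            \;\approx\; \frac{\sigma_{o}^{2}}{n}\,\sum_{i=1}^{M}\frac{1}{\lambda_{i}},
```

where λ_1, ..., λ_M are the eigenvalues of R.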

17 RLS Mean-Square Deviation
Observations:
The mean-square deviation D(n) is proportional to the sum of the reciprocals of the eigenvalues of R. The sensitivity of the RLS algorithm to eigenvalue spread is therefore determined by the reciprocal of the smallest eigenvalue, so ill-conditioned LS problems may lead to poor convergence behaviour.
D(n) decays almost linearly with the number of iterations, so ŵ(n) converges to the Wiener solution w_o as n grows.

18 RLS Ensemble-Average Learning Curve
There are two error terms: the a priori error ξ(n) and the a posteriori error e(n). The learning curve based on ξ(n) has the same general shape as that of the LMS algorithm, so the RLS and LMS learning curves can be compared with this choice. The learning curve for RLS analysed on the following slides is J'(n) = E[|ξ(n)|²]. The definitions are reconstructed below; expanding ξ(n) with the regression model gives the terms evaluated on the next slide.
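The error definitions and learning-curve expression on this slide are missing from the transcript; in standard notation they are:

```latex
% A priori and a posteriori errors, and the learning curve used for RLS (reconstruction)
\xi(n) = d(n) - \hat{\mathbf{w}}^{H}(n-1)\,\mathbf{u}(n) \quad \text{(a priori)},
\qquad
e(n) = d(n) - \hat{\mathbf{w}}^{H}(n)\,\mathbf{u}(n) \quad \text{(a posteriori)},
\qquad
J'(n) = \mathbb{E}\bigl[\lvert \xi(n)\rvert^{2}\bigr].
```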

19 RLS Ensemble-Average Learning Curve
Substituting the regression model into ξ(n) and expanding |ξ(n)|² yields four terms: the 1st term is evaluated using Assumption I, the 2nd term using Assumption III, and the 3rd and 4th (cross) terms vanish by Assumption I. The expansion is reconstructed below.
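A reconstruction of the expansion described above, with ε(n−1) = ŵ(n−1) − w_o the weight-error vector:

```latex
% Expansion of the a priori error and of the learning curve (reconstruction)
\xi(n) = e_{o}(n) - \boldsymbol{\varepsilon}^{H}(n-1)\,\mathbf{u}(n),
\qquad
J'(n) = \mathbb{E}\bigl[\lvert e_{o}(n)\rvert^{2}\bigr]
      + \mathbb{E}\bigl[\boldsymbol{\varepsilon}^{H}(n-1)\,\mathbf{u}(n)\,\mathbf{u}^{H}(n)\,\boldsymbol{\varepsilon}(n-1)\bigr]
      - 2\,\operatorname{Re}\,\mathbb{E}\bigl[e_{o}(n)\,\mathbf{u}^{H}(n)\,\boldsymbol{\varepsilon}(n-1)\bigr].
```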

20 RLS Ensemble-Average Learning Curve
Combining all the terms gives the learning curve reconstructed below.
Observations:
The ensemble-average learning curve of the RLS algorithm converges in about 2M iterations, typically an order of magnitude faster than LMS.
As the number of iterations n → ∞, the MSE J'(n) approaches the final value σ_o², the variance of the measurement error e_o(n); in theory, RLS produces zero excess MSE.
Convergence of the RLS algorithm in the mean square is independent of the eigenvalues of the ensemble-average correlation matrix R of the input vector u(n).
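The combined expression, consistent with the observations above (reconstruction for λ = 1 and n > M):

```latex
% Ensemble-average learning curve of RLS (lambda = 1, n > M, reconstruction)
J'(n) \;\approx\; \sigma_{o}^{2} + \frac{M\,\sigma_{o}^{2}}{n}.
```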

21 RLS Ensemble-Average Learning Curve

