2 WIENER FILTER
The Wiener filter is an optimum filter based on minimizing the mean square error between the filter output and a desired signal.

[Block diagram: the signal s(n) is corrupted by additive noise v(n), the interference, to give the observed input x(n); x(n) passes through the filter with impulse response h(n) to produce the output y(n); the error signal is e(n) = s_d(n) - y(n), where s_d(n) is the desired sequence. Ideally y(n) equals s_d(n).]

Assuming that h(n) is of length N, the mean square error is defined as

  \mathcal{E} = E\{|e(n)|^2\} = E\Big\{\Big|s_d(n) - \sum_{k=0}^{N-1} h(k)\,x(n-k)\Big|^2\Big\}

By taking the derivative of the mean square error with respect to each coefficient and setting it equal to zero, h(n) is solved from the normal equations

  \sum_{k=0}^{N-1} h(k)\,E\{x(n-k)\,x(n-l)\} = E\{s_d(n)\,x(n-l)\}, \quad l = 0, 1, \dots, N-1
3 WIENER FILTER
When expressed in terms of the autocorrelation function and the crosscorrelation function, the normal equations become

  \sum_{k=0}^{N-1} h(k)\,R_{xx}(l-k) = R_{s_d x}(l), \quad l = 0, 1, \dots, N-1

Since x(n) is the sum of the true signal s(n) and the noise v(n), and s_d(n) is not correlated with the noise v(n),

  R_{xx}(l) = R_{ss}(l) + R_{vv}(l), \qquad R_{s_d x}(l) = R_{s_d s}(l)
4 WIENER FILTER
The matrix representation of the normal equations is

  R_{xx}\,\mathbf{h} = \mathbf{r}_{s_d x}

where R_{xx} is the N x N Toeplitz autocorrelation matrix and r_{s_d x} is the crosscorrelation vector. The coefficients h(n) are obtained by taking the inverse of the autocorrelation matrix and multiplying it with the crosscorrelation vector:

  \mathbf{h} = R_{xx}^{-1}\,\mathbf{r}_{s_d x}

The minimum mean square error is

  \mathcal{E}_{\min} = \sigma_{s_d}^2 - \sum_{k=0}^{N-1} h(k)\,R_{s_d x}(k)
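The matrix solution above maps directly to a few lines of code. A minimal sketch, assuming the correlation sequences are already known (the function names `wiener_filter` and `wiener_mmse` are illustrative, not from the slides); note that a linear solve is preferred over forming the explicit inverse:

```python
import numpy as np
from scipy.linalg import toeplitz

def wiener_filter(r_xx, r_dx):
    """Solve the normal equations R_xx h = r_dx for the FIR Wiener filter.

    r_xx : autocorrelation of the input x(n) at lags 0..N-1
    r_dx : crosscorrelation between s_d(n) and x(n) at lags 0..N-1
    """
    R = toeplitz(r_xx)                 # N x N symmetric Toeplitz matrix
    return np.linalg.solve(R, np.asarray(r_dx))

def wiener_mmse(sigma_d2, r_dx, h):
    """Minimum MSE: sigma_d^2 - sum_k h(k) R_dx(k)."""
    return sigma_d2 - h @ np.asarray(r_dx)
```

With an uncorrelated input (identity R_xx) the filter simply copies the crosscorrelation vector, which is a quick sanity check on the solver.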
5 WIENER FILTER EXAMPLE
A signal is defined as follows, where v(n) is additive white Gaussian noise with zero mean and variance 0.1. [Signal definition equation not recovered.] The Wiener filter length is 4.
6 WIENER FILTER EXAMPLE
The matrix representation is [...]. By taking the inverse matrix, the filter coefficients are [...]. The minimum mean square error is [...].
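The procedure of the example can be reproduced numerically. Since the slide's signal definition was lost, the sketch below assumes a stand-in AR(1) signal s(n) = 0.6 s(n-1) + w(n) observed in white noise of variance 0.1 (matching the slide), estimates the correlations by time averaging, and solves for the length-4 filter and its MMSE:

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
N = 4                      # filter length from the slide
n_samples = 50_000

# Assumed signal model (the slide's definition was not recovered):
# AR(1) process s(n) = 0.6 s(n-1) + w(n), w(n) unit-variance white noise.
w = rng.standard_normal(n_samples)
s = np.zeros(n_samples)
for n in range(1, n_samples):
    s[n] = 0.6 * s[n - 1] + w[n]
v = np.sqrt(0.1) * rng.standard_normal(n_samples)   # noise, variance 0.1
x = s + v                                           # observed signal

# Biased time-average correlations for lags 0..N-1
r_xx = np.array([np.mean(x[l:] * x[:n_samples - l]) for l in range(N)])
r_dx = np.array([np.mean(s[l:] * x[:n_samples - l]) for l in range(N)])

h = np.linalg.solve(toeplitz(r_xx), r_dx)   # Wiener filter coefficients
mmse = np.mean(s * s) - h @ r_dx            # minimum mean square error
```

Because the noise variance is 0.1, the residual MMSE of a reasonable filter should come out below that figure; the exact numbers depend on the assumed signal model.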
7 WIENER FILTER CONFIGURATION
The various configurations of the Wiener filter are often referred to as the linear estimation problem:
- s_d(n) = s(n): filtering
- s_d(n) = s(n+D), D > 0: signal prediction
- s_d(n) = s(n-D), D > 0: signal smoothing
The material presented here focuses only on filtering and prediction.
8 WOLD REPRESENTATION
[Block diagram: white noise v(n) drives a shaping filter H(z) to produce the random process x(n); conversely, passing x(n) through the inverse (whitening) filter 1/H(z) recovers the white noise v(n).]

Depending on the form of H(z), the process x(n) is:
- all-pole H(z): AR process
- all-zero H(z): MA process
- pole-zero H(z): ARMA process
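The shaping/whitening pair in the diagram can be demonstrated directly: filter white noise through an all-pole H(z) to get an AR process, then apply the inverse filter 1/H(z) to recover the noise. A minimal sketch (the specific pole 0.9 is an arbitrary choice for illustration):

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(1)
v = rng.standard_normal(10_000)      # white noise v(n)

# All-pole shaping filter H(z) = 1 / (1 - 0.9 z^-1): produces an AR(1) process
a = [1.0, -0.9]
x = lfilter([1.0], a, v)             # colour the white noise -> x(n)

# The inverse filter 1/H(z) = 1 - 0.9 z^-1 whitens x(n) back to v(n)
v_rec = lfilter(a, [1.0], x)
```

Since both filters start from zero initial conditions, the whitened output matches the original noise sample for sample.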
9 AUTOREGRESSIVE PROCESS
The difference equation for an M-th order AR process is

  x(n) = -\sum_{k=1}^{M} a_k\,x(n-k) + v(n)

where v(n) is white noise with zero mean and variance \sigma_v^2. Multiplying by x(n-m) and taking expectations, the autocorrelation function satisfies

  R_{xx}(m) = -\sum_{k=1}^{M} a_k\,R_{xx}(m-k) + R_{vx}(m)

Since R_{vv}(m) = \sigma_v^2\,\delta(m), the autocorrelation function for the AR process is

  R_{xx}(m) = \begin{cases} -\sum_{k=1}^{M} a_k\,R_{xx}(m-k), & m > 0 \\[4pt] -\sum_{k=1}^{M} a_k\,R_{xx}(-k) + \sigma_v^2, & m = 0 \end{cases}
10 AUTOREGRESSIVE PROCESS
Expanding the autocorrelation function for m = 1, ..., M results in a set of linear equations. The matrix representation is

  \begin{bmatrix} R_{xx}(0) & R_{xx}(1) & \cdots & R_{xx}(M-1) \\ R_{xx}(1) & R_{xx}(0) & \cdots & R_{xx}(M-2) \\ \vdots & & \ddots & \vdots \\ R_{xx}(M-1) & R_{xx}(M-2) & \cdots & R_{xx}(0) \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_M \end{bmatrix} = -\begin{bmatrix} R_{xx}(1) \\ R_{xx}(2) \\ \vdots \\ R_{xx}(M) \end{bmatrix}

The result is known as the Yule-Walker equations.
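Because the Yule-Walker system is Toeplitz, it can be solved efficiently with `scipy.linalg.solve_toeplitz`. A minimal sketch (the helper name `yule_walker` is illustrative), following the sign convention x(n) = -sum_k a_k x(n-k) + v(n) used above:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def yule_walker(r, order):
    """Solve the Yule-Walker equations for the AR coefficients a_1..a_M.

    r : autocorrelation sequence R_xx(0), R_xx(1), ..., at least order+1 lags
    Solves sum_k a_k R(m-k) = -R(m) for m = 1..M, then recovers
    the driving-noise variance from the m = 0 equation.
    """
    r = np.asarray(r, dtype=float)
    a = solve_toeplitz((r[:order], r[:order]), -r[1:order + 1])
    sigma_v2 = r[0] + a @ r[1:order + 1]   # noise variance sigma_v^2
    return a, sigma_v2
```

For an AR(1) process with a_1 = -0.9 and unit noise variance, R_xx(m) = 0.9^|m| / (1 - 0.81), and the solver recovers both parameters exactly.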
11 LINEAR PREDICTION
The linear predictive filter is used to predict the model of the underlying random process.

[Block diagram: x(n) passes through the predictor filter, whose impulse response uses past samples only, to produce the estimate \hat{x}(n); the error signal is e(n) = x(n) - \hat{x}(n). Ideally the output equals the desired sequence x(n).]

Assuming that the predictor is of order P, the one-step forward predictor is defined as

  \hat{x}(n) = -\sum_{k=1}^{P} a_P(k)\,x(n-k)
12 LINEAR PREDICTION
The forward prediction error is

  e(n) = x(n) - \hat{x}(n) = x(n) + \sum_{k=1}^{P} a_P(k)\,x(n-k)

The mean square prediction error is

  \mathcal{E}_P = E\{|e(n)|^2\}

By taking the derivative of the mean square error with respect to each coefficient and setting it equal to zero, a_P(n) is solved from

  \sum_{k=1}^{P} a_P(k)\,E\{x(n-k)\,x(n-l)\} = -E\{x(n)\,x(n-l)\}, \quad l = 1, \dots, P
13 LINEAR PREDICTION
When expressed in terms of the autocorrelation function, the normal equations are

  \sum_{k=1}^{P} a_P(k)\,R_{xx}(l-k) = -R_{xx}(l), \quad l = 1, \dots, P

The minimum mean square prediction error is

  \mathcal{E}_P^{\min} = R_{xx}(0) + \sum_{k=1}^{P} a_P(k)\,R_{xx}(k)

Combining the above two equations results in the augmented normal equations, from which the coefficients are derived.
14 LINEAR PREDICTION
The matrix representation of the augmented normal equations is

  \begin{bmatrix} R_{xx}(0) & R_{xx}(1) & \cdots & R_{xx}(P) \\ R_{xx}(1) & R_{xx}(0) & \cdots & R_{xx}(P-1) \\ \vdots & & \ddots & \vdots \\ R_{xx}(P) & R_{xx}(P-1) & \cdots & R_{xx}(0) \end{bmatrix} \begin{bmatrix} 1 \\ a_P(1) \\ \vdots \\ a_P(P) \end{bmatrix} = \begin{bmatrix} \mathcal{E}_P^{\min} \\ 0 \\ \vdots \\ 0 \end{bmatrix}

where a_P(0) = 1. For sample functions, the time-averaged autocorrelation function is used. The solution is calculated by taking the inverse matrix.
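The full path from a sample function to the predictor coefficients can be sketched as follows: estimate the biased time-average autocorrelation, solve the normal equations, and evaluate the minimum prediction error (the helper name `forward_predictor` is illustrative):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def forward_predictor(x, p):
    """p-th order forward linear predictor from a sample function x(n).

    Uses the biased time-average autocorrelation
    R(l) = (1/N) sum_n x(n) x(n-l), then solves the normal equations
    sum_k a_p(k) R(l-k) = -R(l), l = 1..p, with a_p(0) = 1.
    Returns ([1, a_p(1), ..., a_p(p)], minimum prediction error).
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    r = np.array([x[l:] @ x[:N - l] for l in range(p + 1)]) / N
    a = solve_toeplitz((r[:p], r[:p]), -r[1:p + 1])
    eps = r[0] + a @ r[1:p + 1]          # minimum mean square prediction error
    return np.concatenate(([1.0], a)), eps
```

Applied to a long realization of an AR(1) process x(n) = 0.9 x(n-1) + v(n), the estimated coefficient comes out close to -0.9 and the prediction error close to the driving-noise variance, as the theory predicts.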
15 LINEAR PREDICTION EXAMPLE
A sample function x(n) is defined as follows:
x(n) = [7.0718, , , , , 2.011, , 1.001, , ]
The biased time-average autocorrelation function is
Rxx(l) = [24.39, , , 1.662, , , , 1.617, , ]
For a 4th-order predictor, the augmented normal equation is [...]
16 LINEAR PREDICTION EXAMPLE
The solution for the predictor filter is
ap(n) = [0.1104, , , ]
The normalized solution is
ap(l) = [1, , , ]