ECE 8443 – Pattern Recognition / ECE 8423 – Adaptive Signal Processing
LECTURE 03: OPTIMAL LEAST SQUARES FILTER DESIGN
Objectives: Normal Equations, The Orthogonality Principle, Solution of the Normal Equations, Error Energy, Normalized Mean-Squared Error
Resources: ECE 8463: Lectures 15 and 16; Markel and Gray: Linear Prediction; Deller: DT Processing of Speech; Wiki: Linear Prediction
URL: .../publications/courses/ece_8423/lectures/current/lecture_03.ppt
MP3: .../publications/courses/ece_8423/lectures/current/lecture_03.mp3

ECE 8423: Lecture 03, Slide 1 The Filtering Problem: An input signal, $x(n)$, is filtered using a linear filter with impulse response, $f(n)$, in such a way that the output, $y(n) = \sum_{l} f(l)\,x(n-l)$, is as close as possible to some desired signal, $d(n)$. The performance is measured in terms of the energy of the error, $e(n)$. [Block diagram: $x(n)$ drives the filter; the filter output $y(n)$ is subtracted from $d(n)$ at a summing junction (+/–) to form the error $e(n)$.] We can define an objective function: $J = \sum_{n} e^2(n)$. We can write an expression for the error: $e(n) = d(n) - y(n) = d(n) - \sum_{l} f(l)\,x(n-l)$. We can differentiate with respect to each coefficient, $f(i)$: $\partial J/\partial f(i) = 2\sum_{n} e(n)\,\partial e(n)/\partial f(i)$. Substituting our expression for the error: $\partial J/\partial f(i) = -2\sum_{n} e(n)\,x(n-i)$.
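As a concrete illustration of the objective function above, the short Python sketch below evaluates the error signal and its energy for a candidate filter; the names x, d, and f are placeholders for the input, desired signal, and filter coefficients, not notation taken from the original slides.

```python
import numpy as np

def filter_error_and_objective(f, x, d):
    """Return the error e(n) = d(n) - sum_l f(l) x(n-l) and the objective J = sum_n e^2(n)."""
    y = np.convolve(x, f)[:len(d)]   # filter output y(n), truncated to the length of d
    e = d - y                        # error between the desired and actual output
    J = np.sum(e ** 2)               # error energy: the objective function we minimize
    return e, J
```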

ECE 8423: Lecture 03, Slide 2 The Normal Equations: Combining results: $\partial J/\partial f(i) = -2\sum_{n}\left[d(n) - \sum_{l} f(l)\,x(n-l)\right]x(n-i)$. Equating to zero gives a set of linear algebraic equations known as the normal equations: $\sum_{l} f(l)\sum_{n} x(n-l)\,x(n-i) = \sum_{n} d(n)\,x(n-i)$.
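A minimal numerical sketch of these equations: the delayed-input matrix below accumulates the correlation sums, and the normal equations are then solved directly. The data-matrix construction and the zero-padding at the start of the record are illustrative choices, not prescribed by the slides.

```python
import numpy as np

def solve_normal_equations(x, d, L):
    """Solve sum_l f(l) sum_n x(n-l) x(n-i) = sum_n d(n) x(n-i) for f(0)..f(L-1)."""
    N = len(x)
    X = np.zeros((N, L))             # row n holds [x(n), x(n-1), ..., x(n-L+1)]
    for i in range(L):
        X[i:, i] = x[:N - i]         # delayed copies of the input (zeros before n = 0)
    R = X.T @ X                      # matrix of input correlations, sum_n x(n-l) x(n-i)
    g = X.T @ d                      # cross-correlation vector, sum_n d(n) x(n-i)
    return np.linalg.solve(R, g)     # filter coefficients f
```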

ECE 8423: Lecture 03, Slide 3 The Orthogonality Principle: The term normal equations arises from the orthogonality of the input signal and the error, which results from equating the derivatives of J to zero: $\sum_{n} e(n)\,x(n-i) = 0$. It is also common that we “debias”, or remove the DC value of, the input, $x(n)$, such that $\sum_{n} x(n) = 0$, which implies $e(n)$ and $x(n)$ are uncorrelated. The error and output are also uncorrelated: $\sum_{n} e(n)\,y(n) = \sum_{l} f(l)\sum_{n} e(n)\,x(n-l) = 0$. Minimization of a quadratic form produces a single, global minimum. One way to verify this is to examine the second derivative. In addition, the minimization of a quadratic form using a linear filter guarantees that linear equations will result. This is attractive because it produces a computationally efficient solution. We will soon see that such solutions can be derived using basic principles of linear algebra.
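The orthogonality principle is easy to verify numerically. The sketch below, which reuses the solve_normal_equations helper from the previous example (an assumption), fits a short filter to a synthetic desired signal and checks that the resulting error is essentially orthogonal to every delayed input and to the filter output.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)                                   # zero-mean input
d = np.convolve(x, [0.5, -0.3, 0.2])[:1000] + 0.01 * rng.standard_normal(1000)

L = 3
f = solve_normal_equations(x, d, L)                             # from the sketch above
X = np.column_stack([np.concatenate([np.zeros(i), x[:1000 - i]]) for i in range(L)])
e = d - X @ f                                                   # least-squares error signal

print(X.T @ e)      # ~0 at every lag: the error is orthogonal to the delayed inputs
print(e @ (X @ f))  # ~0: the error is therefore orthogonal to the filter output as well
```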

ECE 8423: Lecture 03, Slide 4 Autocorrelation and Autocovariance Solutions: For stationary inputs, we can convert the correlation to a traditional autocorrelation: $\sum_{n} x(n-l)\,x(n-i) \rightarrow r_x(i-l)$. We can also convert the right-hand side: $\sum_{n} d(n)\,x(n-i) \rightarrow r_{dx}(i)$. The normal equations reduce to: $\sum_{l} f(l)\,r_x(i-l) = r_{dx}(i)$. The solution to this equation is known as the autocorrelation solution. This equation can be written in matrix form using an autocorrelation matrix that is Toeplitz and is very stable. An alternate form of the solution exists if we use the original normal equation: $\sum_{l} f(l)\,c_x(i,l) = c_{dx}(i)$, where $c_x(i,l) = \sum_{n} x(n-l)\,x(n-i)$. This is known as the autocovariance solution because the matrix form of this equation involves a covariance matrix. We now need to consider the limits on these summations. In a traditional, frame-based approach to signal processing, we have a finite amount of data with which to estimate these functions. In some implementations, data from previous and future frames are used to better estimate these functions at the boundaries of the analysis window.
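A hedged sketch of the autocorrelation solution: biased autocorrelation estimates populate a symmetric Toeplitz matrix, and the resulting system is solved for the filter. The estimator and the 1/N normalization are one common choice, not necessarily the ones used in the course.

```python
import numpy as np
from scipy.linalg import toeplitz

def autocorrelation_solution(x, d, L):
    """Solve sum_l f(l) r_x(i-l) = r_dx(i) using a Toeplitz autocorrelation matrix."""
    N = len(x)
    r_x = np.array([x[:N - k] @ x[k:] / N for k in range(L)])    # r_x(0), ..., r_x(L-1)
    r_dx = np.array([d[k:] @ x[:N - k] / N for k in range(L)])   # r_dx(0), ..., r_dx(L-1)
    R = toeplitz(r_x)                                            # symmetric Toeplitz matrix
    return np.linalg.solve(R, r_dx)                              # autocorrelation solution
```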

ECE 8423: Lecture 03, Slide 5 Solutions of the Normal Equations: If we consider the filter to be an infinite-length, two-sided (acausal) filter: $\sum_{l=-\infty}^{\infty} f(l)\,r_x(i-l) = r_{dx}(i)$. We recognize the term on the left as a convolution, and can apply a z-Transform to compute the filter as a ratio of z-Transforms: $F(z) = S_{dx}(z)/S_x(z)$. If we consider the filter to be of finite length, L: $\sum_{l=0}^{L-1} f(l)\,r_x(i-l) = r_{dx}(i)$, for $i = 0, 1, \ldots, L-1$. We can define the filter as a vector of coefficients: $\mathbf{f} = [f(0), f(1), \ldots, f(L-1)]^T$. We can define a data vector: $\mathbf{x}(n) = [x(n), x(n-1), \ldots, x(n-L+1)]^T$. The convolution can be written as a dot product: $y(n) = \mathbf{f}^T\mathbf{x}(n)$. The gradient of J can be written as $\nabla J = -2(\mathbf{g} - \mathbf{R}\mathbf{f})$, where $\mathbf{R} = \sum_{n}\mathbf{x}(n)\mathbf{x}^T(n)$ and $\mathbf{g} = \sum_{n} d(n)\,\mathbf{x}(n)$, and setting it to zero gives $\mathbf{f} = \mathbf{R}^{-1}\mathbf{g}$.

ECE 8423: Lecture 03, Slide 6 Computation of the Least-Squares Solution: The correlation functions can be estimated in many ways. In typical applications, the data is presented using a temporal windowing approach in which we use an overlapping window. The most popular method for computing the autocorrelation function is: $r_x(k) = \frac{1}{N}\sum_{n=0}^{N-1-k} x(n)\,x(n+k)$. Other common forms differ in the normalization (for example, dividing by $N-k$ rather than $N$). Correlation matrices are in general well-behaved (e.g., positive semi-definite), but matrix inversion is computationally costly. Fortunately, highly efficient recursions exist to solve these equations. The covariance matrix can be efficiently inverted using the Cholesky decomposition. The autocorrelation method can be solved recursively using the Levinson recursion.
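The sketch below illustrates both efficient solvers mentioned above on synthetic data: scipy's solve_toeplitz applies a Levinson-type recursion for the autocorrelation method, and a Cholesky factorization handles the covariance-style system. The test signal, filter length, and estimator details are arbitrary choices for illustration.

```python
import numpy as np
from scipy.linalg import solve_toeplitz, cho_factor, cho_solve

rng = np.random.default_rng(1)
x = rng.standard_normal(2000)
d = np.convolve(x, [1.0, -0.6, 0.25])[:2000]
L, N = 3, 2000

# Autocorrelation method: the Toeplitz system is solved with a Levinson-type recursion.
r_x = np.array([x[:N - k] @ x[k:] / N for k in range(L)])
r_dx = np.array([d[k:] @ x[:N - k] / N for k in range(L)])
f_auto = solve_toeplitz(r_x, r_dx)

# Covariance method: no Toeplitz structure, but the matrix is positive (semi-)definite,
# so a Cholesky factorization gives an efficient and stable solution.
X = np.column_stack([np.concatenate([np.zeros(k), x[:N - k]]) for k in range(L)])
f_cov = cho_solve(cho_factor(X.T @ X), X.T @ d)

print(f_auto, f_cov)   # both should be close to the true filter [1.0, -0.6, 0.25]
```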

ECE 8423: Lecture 03, Slide 7 Error Energy: The error energy is a measure of the performance of the filter. The minimum error is achieved when $\mathbf{f} = \mathbf{f}_{opt} = \mathbf{R}^{-1}\mathbf{g}$, which simplifies to: $J_{min} = \sum_{n} d^2(n) - \mathbf{g}^T\mathbf{R}^{-1}\mathbf{g}$. We can define an error vector: $\mathbf{v} = \mathbf{f} - \mathbf{f}_{opt}$. We can derive an expression for the error in terms of the minimum error: $J = J_{min} + \mathbf{v}^T\mathbf{R}\mathbf{v}$. This shows that the minimum error is achieved by $\mathbf{f} = \mathbf{f}_{opt}$ because $\mathbf{v}^T\mathbf{R}\mathbf{v} \ge 0$, a consequence of the autocorrelation matrix being positive semi-definite. This also shows that the error energy is a monotonically non-increasing function of L, the length of the filter, because $J_{min}(L+1) \le J_{min}(L)$. This last relation is important because it demonstrates that the accuracy of the model increases as we increase the order of the filter. However, we also risk overfitting the model when the data are noisy.
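A quick numerical check of the monotonicity claim: the minimum error energy is computed for increasing filter lengths on a synthetic signal, and the printed values should be non-increasing. The helper name and the test signal are illustrative assumptions, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(4000)
d = np.convolve(x, [0.8, -0.5, 0.3, -0.1])[:4000] + 0.05 * rng.standard_normal(4000)

def min_error_energy(x, d, L):
    """Minimum error energy J_min for a length-L least-squares filter."""
    N = len(x)
    X = np.column_stack([np.concatenate([np.zeros(k), x[:N - k]]) for k in range(L)])
    f = np.linalg.lstsq(X, d, rcond=None)[0]    # optimal length-L filter
    e = d - X @ f
    return e @ e

print([round(min_error_energy(x, d, L), 2) for L in range(1, 8)])  # non-increasing in L
```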

ECE 8423: Lecture 03, Slide 8 Normalized Mean-Square Error and Performance: We can define a normalized version of the error by dividing by the variance of the desired signal, $\sigma_d^2$: $V = J_{min}/\sigma_d^2$. This bounds the normalized error: $0 \le V \le 1$. We can define a performance measure in terms of this normalized error: $P = 1 - V$. P is also bounded: $0 \le P \le 1$.
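A small sketch of these two measures, assuming the normalization is by the energy of the desired signal (equivalently, N times its variance for a zero-mean signal); the slide's exact normalization could differ by that constant factor.

```python
import numpy as np

def normalized_error_and_performance(e, d):
    """Normalized error V = J_min / (energy of d) and performance P = 1 - V."""
    V = (e @ e) / (d @ d)    # 0 <= V <= 1 when e is the least-squares error
    return V, 1.0 - V        # P = 1 - V, so 0 <= P <= 1
```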

ECE 8423: Lecture 03, Slide 9 Summary: We have introduced the concept of linear prediction in a generalized sense using an adaptive filtering scenario. We derived equations to estimate the coefficients of the model and to evaluate the error. Issue: what happens if our desired signal is white noise? Next: we will reformulate linear prediction for the specific case of a time series predicted from delayed samples of itself, and view linear prediction as a least-squares digital filter.