Instructor: Dr. Aamer Iqbal Bhatti

Presentation transcript:

ADAPTIVE FILTERS I
Chapter 4: Searching the Performance Surface
Instructor: Dr. Aamer Iqbal Bhatti

Introduction
The performance surface of the Adaptive Linear Combiner is quadratic for stationary signals.
In most applications the parameters of the error surface are not known and have to be estimated.
The problem is to devise algorithms that search the estimated performance surface.
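The quadratic surface itself is not reproduced in the transcript; in the standard Adaptive Linear Combiner notation (W the weight vector, R = E[X_k X_k^T] the input correlation matrix, P = E[d_k X_k] the cross-correlation vector), it has the familiar form

\xi(\mathbf{W}) = E[d_k^2] - 2\,\mathbf{P}^T\mathbf{W} + \mathbf{W}^T\mathbf{R}\,\mathbf{W}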

Methods of Searching the Performance Surface
Two descent algorithms for searching for the optimal weight solution will be followed:
a) Newton's Method
b) Method of Steepest Descent
Newton's method is a fundamental root-finding procedure in mathematics.
It is comparatively difficult to implement.
The weights are varied in each iteration.

Methods of Searching the Performance Surface
The changes in the weights are always in the direction of the minimum of the performance surface.
The Method of Steepest Descent is easy to implement.
In each iteration, the weights are moved in the direction of the negative gradient of the performance surface.
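The update equations themselves do not appear in the transcript; in the usual notation (\mathbf{W}_k the weight vector at iteration k, \nabla_k the gradient of the mean square error, \mathbf{R} the input correlation matrix, \mu the step size), the two methods take the standard forms

Newton's method:    \mathbf{W}_{k+1} = \mathbf{W}_k - \mu\,\mathbf{R}^{-1}\nabla_k
Steepest descent:   \mathbf{W}_{k+1} = \mathbf{W}_k + \mu\,(-\nabla_k)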

Basic Ideas of the Gradient Search Method
We consider the simplest case, in which there is only one weight to be adjusted.
The one-weight performance surface is a parabola.

Basic Ideas of the Gradient Search Method
The performance surface for a single weight is represented by the quadratic expression shown below.
The problem is to find the weight adjustment that minimizes the mean square error.
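The expression is not reproduced in the transcript; assuming the standard single-weight notation (\xi_{\min} the minimum mean square error, w^* the optimal weight, \lambda the input signal power), the surface is

\xi(w) = \xi_{\min} + \lambda\,(w - w^*)^2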

Methods of Searching the Performance Surface
An iterative method is used to find the value of the weight that minimizes the error:
Start with an initial guess.
Measure the slope of the performance surface at this point.
Choose a new value equal to the initial guess plus an increment proportional to the negative of the slope.

Methods of Searching the Performance Surface
A new value is then obtained, and the above procedure is repeated until the minimum is reached.
The values obtained by measuring the slope at discrete points in time are called the "gradient estimate".
Use of the negative of the gradient is necessary to proceed "downhill".
A minimal sketch of this procedure is given below.
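The sketch below is a minimal Python illustration of the procedure, assuming the single-weight quadratic surface introduced above with illustrative values for xi_min, lam (the input power) and w_opt (the optimal weight); the slope is measured with a finite difference, playing the role of the "gradient estimate".

# Gradient search on a single-weight quadratic performance surface (illustrative values).
xi_min, lam, w_opt = 0.1, 2.0, 3.0                 # assumed surface parameters
xi = lambda w: xi_min + lam * (w - w_opt) ** 2     # single-weight performance surface

mu = 0.1      # step size (learning rate)
w = 0.0       # initial guess
delta = 1e-4  # perturbation for the finite-difference slope estimate

for k in range(50):
    slope = (xi(w + delta) - xi(w - delta)) / (2 * delta)  # measured slope ("gradient estimate")
    w = w + mu * (-slope)                                  # step proportional to the negative slope

print(w)  # converges toward w_opt = 3.0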

Gradient Search Algorithm and Solution
For a single variable, the repetitive gradient search procedure can be represented by the update equation shown below.
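The equation is missing from the transcript; a plausible reconstruction, with \mu the step-size parameter and \nabla_k the gradient of \xi evaluated at w_k, is

w_{k+1} = w_k + \mu\,(-\nabla_k)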

Gradient Search Algorithm and Solution
The gradient for the single weight is given by the derivative shown below.
The transients and the rate of convergence can be analyzed by substituting this gradient into the update equation.
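Assuming the quadratic surface \xi(w) = \xi_{\min} + \lambda\,(w - w^*)^2 used above, the gradient and the resulting update are

\nabla_k = \left.\frac{d\xi}{dw}\right|_{w = w_k} = 2\lambda\,(w_k - w^*)
w_{k+1} = w_k - 2\mu\lambda\,(w_k - w^*)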

Gradient Search Algorithm and Solution
The above equation is a constant-coefficient linear difference equation.
It can be solved by induction from the first few iterations.

Gradient Search Algorithm and Solution
The generalized result for the kth iteration is shown below.
The result gives the weight variable at any point in the search procedure and is thus a solution to the gradient search algorithm.
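The closed-form result does not appear in the transcript; iterating the difference equation above gives the standard solution

w_k = w^* + (1 - 2\mu\lambda)^k\,(w_0 - w^*)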

Stability and Rate of Convergence
In the solution of the gradient search equation, the geometric ratio is given by the expression below.
The search equation is stable if and only if the condition below holds.
If the condition is met, the algorithm converges to the optimum solution.
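Based on the closed-form solution above, the geometric ratio and the stability condition are

r = 1 - 2\mu\lambda
|r| < 1 \;\Longleftrightarrow\; 0 < \mu < \frac{1}{\lambda}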

Stability and Rate of Convergence
The figure depicts the gradient search for different values of r.

Stability and Rate of Convergence
The effect of the choice of μ on r, and on the weight iteration, is summarized in the following table.
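The table itself is not reproduced in the transcript; a plausible reconstruction, following the standard single-weight analysis with r = 1 - 2μλ, is:

0 < μ < 1/(2λ)        0 < r < 1      overdamped, monotonic convergence
μ = 1/(2λ)            r = 0          convergence in a single step
1/(2λ) < μ < 1/λ      -1 < r < 0     underdamped, oscillatory convergence
μ < 0 or μ > 1/λ      |r| > 1        unstable, the weight diverges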

The Learning Curve
The effect of variations in the adjustment of the weight on the mean square error can be observed from the expression below.
For continuous update of the weights, the mean square error becomes the geometric sequence shown below.
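The two expressions are missing from the transcript; assuming the single-weight surface and the closed-form weight solution given earlier, they are

\xi_k = \xi_{\min} + \lambda\,(w_k - w^*)^2
\xi_k = \xi_{\min} + \lambda\,r^{2k}\,(w_0 - w^*)^2

so the excess mean square error \xi_k - \xi_{\min} shrinks by the factor r^2 at each iteration.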

The Learning Curve
The mean square error undergoes a geometric progression.
The geometric ratio of the progression is r², as shown above.
Since this ratio can never be negative, the mean square progression can never be oscillatory.

The Learning Curve
For the single-weight system, the figure shows the relaxation of the mean square error from its initial value toward the optimal value.
The learning curve shown is for the value r = 0.5.
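The figure is not included in the transcript; the short Python sketch below generates the same kind of learning curve for r = 0.5, with illustrative (assumed) values for the minimum and initial mean square error.

# Learning curve xi_k = xi_min + r**(2*k) * (xi_0 - xi_min) for r = 0.5 (illustrative values).
r = 0.5        # geometric ratio of the weight iteration
xi_min = 0.1   # assumed minimum mean square error
xi_0 = 2.0     # assumed initial mean square error

learning_curve = [xi_min + (r ** (2 * k)) * (xi_0 - xi_min) for k in range(10)]
print(learning_curve)  # relaxes geometrically from xi_0 toward xi_min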

The Learning Curve