CHAPTER 10 Widrow-Hoff Learning Ming-Feng Yeh.

Objectives Widrow-Hoff learning is an approximate steepest descent algorithm in which the performance index is the mean square error. It is widely used today in many signal processing applications, and it is the precursor to the backpropagation algorithm for multilayer networks.

ADALINE Network The ADALINE (ADAptive LInear NEuron) network and its learning rule, the LMS (Least Mean Square) algorithm, were proposed by Bernard Widrow and Marcian Hoff in 1960. Both the ADALINE network and the perceptron suffer from the same inherent limitation: they can only solve linearly separable problems. The LMS algorithm minimizes the mean square error (MSE), and therefore tries to move the decision boundaries as far from the training patterns as possible.

ADALINE Network [Figure: single-layer ADALINE in abbreviated notation; the input p is R x 1, the weight matrix W is S x R, and the bias b is S x 1.] The net input and output are n = Wp + b and a = purelin(Wp + b) = Wp + b, so the ADALINE has the same structure as the single-layer perceptron but with a linear transfer function.
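To make the forward computation concrete, here is a minimal NumPy sketch of the ADALINE response a = purelin(Wp + b) = Wp + b; the dimensions and the particular values of W, b, and p below are illustrative assumptions, not values from the chapter.

```python
import numpy as np

def adaline_forward(W, b, p):
    """ADALINE output: a = purelin(Wp + b) = Wp + b (linear transfer function)."""
    return W @ p + b

# Illustrative sizes: S = 1 neuron, R = 2 inputs (assumed values).
W = np.array([[1.0, 1.0]])       # S x R weight matrix
b = np.array([-1.0])             # bias vector
p = np.array([2.0, 0.5])         # input vector
print(adaline_forward(W, b, p))  # -> [1.5]
```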

Single ADALINE Setting n = 0 gives Wp + b = 0, which specifies a decision boundary. The ADALINE can be used to classify objects into two categories if they are linearly separable.

Mean Square Error The LMS algorithm is an example of supervised training. The LMS algorithm adjusts the weights and biases of the ADALINE in order to minimize the mean square error, where the error is the difference between the target output $t_q$ and the network output $a_q$. MSE: $F(\mathbf{x}) = E[e^2] = E[(t-a)^2]$, where $E[\cdot]$ denotes the expected value.

Performance Optimization Develop algorithms to optimize a performance index F(x), where “optimize” means to find the value of x that minimizes F(x). The optimization algorithms are iterative: $\mathbf{x}_{k+1} = \mathbf{x}_k + \alpha_k \mathbf{p}_k$, or equivalently $\Delta\mathbf{x}_k = \mathbf{x}_{k+1} - \mathbf{x}_k = \alpha_k \mathbf{p}_k$, where $\mathbf{p}_k$ is a search direction, $\alpha_k$ is a positive learning rate that determines the length of the step, and $\mathbf{x}_0$ is an initial guess.

Taylor Series Expansion Vector case: $F(\mathbf{x}) = F(\mathbf{x}^*) + \nabla F(\mathbf{x})^T \big|_{\mathbf{x}=\mathbf{x}^*} (\mathbf{x}-\mathbf{x}^*) + \tfrac{1}{2} (\mathbf{x}-\mathbf{x}^*)^T \nabla^2 F(\mathbf{x}) \big|_{\mathbf{x}=\mathbf{x}^*} (\mathbf{x}-\mathbf{x}^*) + \cdots$

Gradient & Hessian Gradient: $\nabla F(\mathbf{x}) = \left[ \partial F/\partial x_1 \;\; \partial F/\partial x_2 \;\cdots\; \partial F/\partial x_n \right]^T$. Hessian: $\nabla^2 F(\mathbf{x})$ is the matrix of second derivatives, with elements $[\nabla^2 F(\mathbf{x})]_{i,j} = \partial^2 F / \partial x_i \partial x_j$.

Directional Derivative The $i$th element of the gradient, $\partial F(\mathbf{x})/\partial x_i$, is the first derivative of the performance index F along the $x_i$ axis. Let p be a vector in the direction along which we wish to know the derivative. Directional derivative: $\dfrac{\mathbf{p}^T \nabla F(\mathbf{x})}{\|\mathbf{p}\|}$. Example: find the derivative of F(x) at a specified point in a specified direction.

Steepest Descent Goal: the function F(x) should decrease at each iteration, i.e., $F(\mathbf{x}_{k+1}) < F(\mathbf{x}_k)$. Central idea: the first-order Taylor series expansion $F(\mathbf{x}_{k+1}) = F(\mathbf{x}_k + \Delta\mathbf{x}_k) \approx F(\mathbf{x}_k) + \mathbf{g}_k^T \Delta\mathbf{x}_k$, where $\mathbf{g}_k = \nabla F(\mathbf{x})\big|_{\mathbf{x}=\mathbf{x}_k}$. Any vector $\mathbf{p}_k$ that satisfies $\mathbf{g}_k^T \mathbf{p}_k < 0$ is called a descent direction. A vector that points in the steepest descent direction is $\mathbf{p}_k = -\mathbf{g}_k$. Steepest descent: $\mathbf{x}_{k+1} = \mathbf{x}_k - \alpha_k \mathbf{g}_k$.
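As a small illustration of the steepest descent iteration above, the sketch below minimizes an assumed quadratic F(x) = ½xᵀAx + dᵀx with a constant learning rate; A, d, α, and the starting point are arbitrary choices for demonstration, not values from the slides.

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 4.0]])     # assumed (positive definite) Hessian
d = np.array([-2.0, -4.0])     # assumed linear term
alpha = 0.1                    # constant learning rate
x = np.zeros(2)                # initial guess x_0

for k in range(50):
    g = A @ x + d              # gradient of F(x) = 0.5 x'Ax + d'x
    x = x - alpha * g          # steepest descent step
print(x)                       # approaches the minimum -inv(A) d = [1, 1]
```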

Approximation-Based Formulation Given input/output training data {p1, t1}, {p2, t2}, …, {pQ, tQ}, the objective of network training is to find the optimal weights that minimize the least-squares error between the target values and the actual responses. Model (network) function: $a_q = f(\mathbf{p}_q, \mathbf{x})$. Least-squares-error function: $E(\mathbf{x}) = \sum_{q=1}^{Q} (t_q - a_q)^2$. The weight vector x can be trained by minimizing the error function along the gradient-descent direction: $\mathbf{x}_{k+1} = \mathbf{x}_k - \alpha \nabla E(\mathbf{x}_k)$.

Delta Learning Rule ADALINE: $a = \mathbf{w}^T \mathbf{p} + b$. Least-squares-error criterion: minimize $E(\mathbf{x}) = \sum_q (t_q - a_q)^2$. Gradient: $\partial E/\partial \mathbf{w} = -2 \sum_q (t_q - a_q)\, \mathbf{p}_q$ and $\partial E/\partial b = -2 \sum_q (t_q - a_q)$. Delta learning rule: $\Delta\mathbf{w} = \alpha\, e\, \mathbf{p}$ and $\Delta b = \alpha\, e$, where $e = t - a$ and the factor of 2 from the gradient is absorbed into the learning rate α.

Mean Square Error Collect the weights and bias into $\mathbf{x} = [\mathbf{w}^T \; b]^T$ and augment the input as $\mathbf{z} = [\mathbf{p}^T \; 1]^T$, so that the network output is $a = \mathbf{x}^T \mathbf{z}$. The mean square error is then $F(\mathbf{x}) = E[e^2] = E[(t - \mathbf{x}^T\mathbf{z})^2] = c - 2\mathbf{x}^T\mathbf{h} + \mathbf{x}^T\mathbf{R}\mathbf{x}$, where $c = E[t^2]$, $\mathbf{h} = E[t\mathbf{z}]$ is the cross-correlation between the input and its target, and $\mathbf{R} = E[\mathbf{z}\mathbf{z}^T]$ is the input correlation matrix.

Mean Square Error If the correlation matrix R is positive definite, there will be a unique stationary point $\mathbf{x}^* = \mathbf{R}^{-1}\mathbf{h}$, which will be a strong minimum. Strong minimum: the point $\mathbf{x}^*$ is a strong minimum of F(x) if a scalar $\delta > 0$ exists such that $F(\mathbf{x}^*) < F(\mathbf{x}^* + \Delta\mathbf{x})$ for all $\Delta\mathbf{x}$ with $0 < \|\Delta\mathbf{x}\| < \delta$. Global minimum: the point $\mathbf{x}^*$ is a unique global minimum of F(x) if $F(\mathbf{x}^*) < F(\mathbf{x}^* + \Delta\mathbf{x})$ for all $\Delta\mathbf{x} \neq \mathbf{0}$. Weak minimum: the point $\mathbf{x}^*$ is a weak minimum of F(x) if it is not a strong minimum and a scalar $\delta > 0$ exists such that $F(\mathbf{x}^*) \le F(\mathbf{x}^* + \Delta\mathbf{x})$ for all $\Delta\mathbf{x}$ with $0 < \|\Delta\mathbf{x}\| < \delta$.

LMS Algorithm The LMS algorithm locates the minimum point by using an approximate steepest descent procedure in which the gradient is estimated. Estimate the mean square error F(x) by the squared error at iteration k: $\hat{F}(\mathbf{x}) = (t(k) - a(k))^2 = e^2(k)$. Estimated gradient: $\hat{\nabla} F(\mathbf{x}) = \nabla e^2(k)$.

LMS Algorithm Since $e(k) = t(k) - (\mathbf{w}^T\mathbf{p}(k) + b)$, the elements of the estimated gradient are $[\nabla e^2(k)]_j = 2e(k)\, \partial e(k)/\partial w_{1,j} = -2e(k)\, p_j(k)$ for the weights and $-2e(k)$ for the bias. In compact form, $\hat{\nabla} F(\mathbf{x}) = \nabla e^2(k) = -2e(k)\, \mathbf{z}(k)$.

LMS Algorithm The steepest descent algorithm with constant learning rate α is $\mathbf{x}_{k+1} = \mathbf{x}_k - \alpha \nabla F(\mathbf{x})\big|_{\mathbf{x}=\mathbf{x}_k}$. Substituting the estimated gradient gives the LMS update $\mathbf{x}_{k+1} = \mathbf{x}_k + 2\alpha\, e(k)\, \mathbf{z}(k)$. Matrix notation of the LMS algorithm: $\mathbf{W}(k+1) = \mathbf{W}(k) + 2\alpha\, \mathbf{e}(k)\, \mathbf{p}^T(k)$ and $\mathbf{b}(k+1) = \mathbf{b}(k) + 2\alpha\, \mathbf{e}(k)$. The LMS algorithm is also referred to as the delta rule or the Widrow-Hoff learning algorithm.
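The updates above translate directly into a short training loop. The sketch below is a minimal LMS implementation for a single-output ADALINE; the training pairs, learning rate, and epoch count are placeholders, not data from the chapter.

```python
import numpy as np

def lms_train(P, T, alpha=0.04, epochs=20):
    """Train a single-output ADALINE with the LMS (Widrow-Hoff) rule:
    w(k+1) = w(k) + 2*alpha*e(k)*p(k),  b(k+1) = b(k) + 2*alpha*e(k)."""
    w = np.zeros(len(P[0]))
    b = 0.0
    for _ in range(epochs):
        for p, t in zip(P, T):
            a = w @ p + b                 # network output a(k)
            e = t - a                     # error e(k) = t(k) - a(k)
            w = w + 2 * alpha * e * p     # weight update
            b = b + 2 * alpha * e         # bias update
    return w, b

# Placeholder training set (assumed, for illustration only).
P = [np.array([1.0, 1.0]), np.array([-1.0, 1.0])]
T = [1.0, -1.0]
print(lms_train(P, T))
```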

Quadratic Functions General form of a quadratic function: $F(\mathbf{x}) = \tfrac{1}{2}\mathbf{x}^T\mathbf{A}\mathbf{x} + \mathbf{d}^T\mathbf{x} + c$ (A: Hessian matrix). If the eigenvalues of the Hessian matrix are all positive, then the quadratic function will have one unique global minimum. ADALINE network mean square error: $F(\mathbf{x}) = c - 2\mathbf{x}^T\mathbf{h} + \mathbf{x}^T\mathbf{R}\mathbf{x}$, so the gradient is $\nabla F(\mathbf{x}) = -2\mathbf{h} + 2\mathbf{R}\mathbf{x}$ and the Hessian matrix is $2\mathbf{R}$.

Stable Learning Rates Suppose that the performance index is a quadratic function: $F(\mathbf{x}) = \tfrac{1}{2}\mathbf{x}^T\mathbf{A}\mathbf{x} + \mathbf{d}^T\mathbf{x} + c$. The steepest descent algorithm with constant learning rate becomes $\mathbf{x}_{k+1} = \mathbf{x}_k - \alpha(\mathbf{A}\mathbf{x}_k + \mathbf{d}) = [\mathbf{I} - \alpha\mathbf{A}]\mathbf{x}_k - \alpha\mathbf{d}$. This linear dynamic system will be stable if the eigenvalues of the matrix $[\mathbf{I} - \alpha\mathbf{A}]$ are less than one in magnitude.

Stable Learning Rates Let {λ1, λ2, …, λn} and {z1, z2, …, zn} be the eigenvalues and eigenvectors of the Hessian matrix A. Then $[\mathbf{I} - \alpha\mathbf{A}]\mathbf{z}_i = \mathbf{z}_i - \alpha\lambda_i\mathbf{z}_i = (1 - \alpha\lambda_i)\mathbf{z}_i$, so the eigenvalues of $[\mathbf{I} - \alpha\mathbf{A}]$ are $1 - \alpha\lambda_i$. The condition for the stability of the steepest descent algorithm is then $|1 - \alpha\lambda_i| < 1$. Assume that the quadratic function has a strong minimum point; then its eigenvalues must be positive numbers, and hence $\alpha < 2/\lambda_i$. This must be true for all eigenvalues: $\alpha < 2/\lambda_{\max}$.
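The bound α < 2/λ_max is easy to check numerically; the correlation matrix below is an assumed example, and the Hessian of the mean square error is taken as 2R, as derived earlier.

```python
import numpy as np

R = np.array([[3.0, 1.0],
              [1.0, 3.0]])        # assumed input correlation matrix E[zz']
A = 2 * R                         # Hessian of the mean square error
lam = np.linalg.eigvalsh(A)       # eigenvalues of the Hessian (here 4 and 8)
alpha_max = 2.0 / lam.max()       # stability bound: alpha < 2 / lambda_max
print(lam, alpha_max)             # -> [4. 8.] 0.25
```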

Analysis of Convergence In the LMS algorithm, $\mathbf{x}_k$ is a function only of z(k−1), z(k−2), …, z(0). Assume that successive input vectors are statistically independent; then $\mathbf{x}_k$ is independent of z(k). The expected value of the weight vector will converge to $\mathbf{x}^* = \mathbf{R}^{-1}\mathbf{h}$, which is the minimum MSE solution. The condition on stability is $0 < \alpha < 1/\lambda_{\max}$, where $\lambda_{\max}$ is the largest eigenvalue of R. The steady-state solution is $E[\mathbf{x}_k] \to \mathbf{R}^{-1}\mathbf{h} = \mathbf{x}^*$.

Orange/Apple Example In practical applications it may not be practical to calculate R in order to find the stable learning rate, so α is often selected by trial and error.

Orange/Apple Example Start, arbitrarily, with all the weights set to zero, and then apply the inputs p1, p2, p1, p2, etc., in that order, calculating the new weights after each input is presented.
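A sketch of this iteration, assuming the orange/apple prototype patterns from the Chapter 3 illustrative example (orange p1 = [1, −1, −1]ᵀ with target −1, apple p2 = [1, 1, −1]ᵀ with target 1), a no-bias ADALINE, and a learning rate of 0.2; treat these specific values as assumptions.

```python
import numpy as np

# Assumed prototype patterns (orange / apple) and targets.
p1, t1 = np.array([1.0, -1.0, -1.0]), -1.0
p2, t2 = np.array([1.0,  1.0, -1.0]),  1.0

alpha = 0.2                               # assumed learning rate
w = np.zeros(3)                           # start with all weights set to zero
for p, t in [(p1, t1), (p2, t2)] * 20:    # present p1, p2, p1, p2, ...
    e = t - w @ p                         # error for the current presentation
    w = w + 2 * alpha * e * p             # LMS weight update
print(w)                                  # converges toward [0, 1, 0]
```

With these assumed patterns the weights settle near [0, 1, 0], which produces the halfway decision boundary discussed on the next slide.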

Orange/Apple Example This decision boundary falls halfway between the two reference patterns. The perceptron rule did NOT produce such a boundary: the perceptron rule stops as soon as the patterns are correctly classified, even though some patterns may be close to the boundaries, whereas the LMS algorithm minimizes the mean square error.

Solved Problem P10.2 Categories I and II: since they are linearly separable, we can design an ADALINE network to make such a distinction, as shown in the figure. Categories III and IV: they are NOT linearly separable, so an ADALINE network CANNOT distinguish between them.

Solved Problem P10.3 These patterns occur with equal probability, and they are used to train an ADALINE network with no bias. What does the MSE performance surface look like?

Solved Problem P10.3 The Hessian matrix of F(x), 2R, has both eigenvalues equal to 2, so the contours of the performance surface will be circular. The center of the contours (the minimum point) is $\mathbf{x}^* = \mathbf{R}^{-1}\mathbf{h}$. [Figure: circular contour plot of the MSE surface.]
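The quantities that define the performance surface can be computed mechanically for any equiprobable training set; the two pattern/target pairs below are placeholders and are not necessarily the P10.3 data.

```python
import numpy as np

# Placeholder training pairs (equal probability); not necessarily the P10.3 data.
pairs = [(np.array([1.0, 1.0]),  1.0),
         (np.array([-1.0, 1.0]), -1.0)]

R = sum(np.outer(p, p) for p, _ in pairs) / len(pairs)   # R = E[pp']
h = sum(t * p for p, t in pairs) / len(pairs)            # h = E[tp]
x_star = np.linalg.solve(R, h)                           # minimum point x* = R^{-1} h
eigs = np.linalg.eigvalsh(2 * R)                         # eigenvalues of the Hessian 2R
print(R, h, x_star, eigs, sep="\n")
```

Equal Hessian eigenvalues, as in this placeholder case, are what make the contours circular.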

Solved Problem P10.4 Train the network using the LMS algorithm, with the initial guess set to zero and a learning rate α = 0.25.

Tapped Delay Line At the output of the tapped delay line we have an R-dimensional vector consisting of the input signal at the current time and at delays of 1 to R−1 time steps.

Adaptive Filter Combining a tapped delay line with an ADALINE gives an adaptive filter, whose output is $a(k) = \mathrm{purelin}(\mathbf{W}\mathbf{p} + b) = \sum_{i=1}^{R} w_{1,i}\, y(k-i+1) + b$.
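A sketch of the tapped-delay-line filter output just described; the filter weights and the short input sequence are illustrative assumptions.

```python
import numpy as np

w = np.array([0.5, -0.25, 0.1])        # assumed filter weights w_{1,1..3}
b = 0.0
y = [0.0, 0.0, 1.0, 2.0, 0.0]          # assumed input samples y(0)..y(4)

taps = np.zeros(3)                     # contents of the tapped delay line
for k, yk in enumerate(y):
    taps = np.roll(taps, 1)            # older samples move one delay further
    taps[0] = yk                       # current input y(k) enters the line
    a = w @ taps + b                   # a(k) = sum_i w_{1,i} y(k-i+1) + b
    print(k, a)
```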

Solved Problem P10.1 Just prior to k = 0 (k < 0): three zeros have entered the filter, i.e., y(−1) = y(−2) = y(−3) = 0, so the output just prior to k = 0 is zero. At k = 0 the filter output is the weighted sum of the values now in the delay line.

Solved Problem P10.1 The filter outputs at k = 1, 2, 3, and 4 follow from the same weighted sum as the delay line contents shift.

Solved Problem P10.1 The effect of y(0) lasts from k = 0 through k = 2, so it will have an influence for three time intervals. This corresponds to the length of the impulse response of this filter.

Solved Problem P10.6 Application of the ADALINE: an adaptive predictor. The purpose of this filter is to predict the next value of the input signal from the two previous values. Suppose that the input signal is a stationary random process with a given autocorrelation function.
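A hedged sketch of such a predictor trained online with LMS: a two-tap ADALINE predicts y(k) from y(k−1) and y(k−2). The test signal, learning rate, and absence of a bias are assumptions for illustration; the problem's actual autocorrelation function is not used here.

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.sin(0.3 * np.arange(500)) + 0.05 * rng.standard_normal(500)  # assumed test signal

alpha = 0.02                             # assumed learning rate
w = np.zeros(2)                          # two-tap predictor, no bias (assumed)
for k in range(2, len(y)):
    z = np.array([y[k - 1], y[k - 2]])   # the two previous values
    a = w @ z                            # prediction of y(k)
    e = y[k] - a                         # prediction error
    w = w + 2 * alpha * e * z            # LMS update
print(w)                                 # learned predictor coefficients
```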

Solved Problem P10.6 i. Sketch the contour plot of the performance index (MSE).

Solved Problem P10.6 Performance index (MSE): the optimal weights are $\mathbf{x}^* = \mathbf{R}^{-1}\mathbf{h}$, and the Hessian matrix is $2\mathbf{R}$, with eigenvalues λ1 = 4 and λ2 = 8 and corresponding eigenvectors z1 and z2. The contours of F(x) will be elliptical, with the long axis of each ellipse along the first eigenvector, since the first eigenvalue has the smallest magnitude. The ellipses will be centered at $\mathbf{x}^*$.

Solved Problem P10.6 ii. The maximum stable value of the learning rate for the LMS algorithm: $\alpha < 2/\lambda_{\max} = 2/8 = 0.25$. iii. The LMS algorithm is approximate steepest descent, so the trajectory for small learning rates will move perpendicular to the contour lines.

Applications Noise cancellation system to remove 60-Hz noise from an EEG signal (Fig. 10.6). Echo cancellation system in long-distance telephone lines (Fig. 10.10). Filtering engine noise from the pilot's voice signal (Fig. P10.8).
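As a rough illustration of the first application, the sketch below implements adaptive noise cancellation with LMS: the filter sees only a reference of the interference, and the cancellation error is the restored signal. All signals, the filter length, and the learning rate are synthetic assumptions, not the setup of Fig. 10.6.

```python
import numpy as np

n = 2000
t = np.arange(n)
signal = np.sin(2 * np.pi * t / 200)              # desired signal (assumed)
noise_ref = np.sin(2 * np.pi * t / 60 + 0.5)      # reference of the interference (assumed)
contaminated = signal + 1.2 * noise_ref           # measured signal plus leaked interference

alpha, taps = 0.01, 4                             # assumed learning rate and filter length
w = np.zeros(taps)
restored = np.zeros(n)
for k in range(taps, n):
    z = noise_ref[k - taps + 1:k + 1][::-1]       # tapped delay line of the reference
    a = w @ z                                     # filter's estimate of the interference
    e = contaminated[k] - a                       # error = restored signal estimate
    restored[k] = e
    w = w + 2 * alpha * e * z                     # LMS update driven by the error
print(np.mean((restored[1000:] - signal[1000:]) ** 2))  # small residual after adaptation
```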