Hopfield Network.

Presentation transcript:

Hopfield Network

Hopfield Model

Equations of Operation

n_i - input voltage to the ith amplifier
a_i - output voltage of the ith amplifier
C - amplifier input capacitance
I_i - fixed input current to the ith amplifier
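The operating equations themselves appear only as images in the original slides. A plausible reconstruction, assuming the standard Hopfield analog-circuit model (with T_{i,j} the conductance connecting amplifier j to amplifier i and R_i the total input resistance of amplifier i, both notation assumptions here), is:

$$C\,\frac{dn_i(t)}{dt} = \sum_{j=1}^{S} T_{i,j}\,a_j(t) \;-\; \frac{n_i(t)}{R_i} \;+\; I_i, \qquad a_i(t) = f(n_i(t))$$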

Network Format

Define:
Vector Form:
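Both expressions appeared as images in the original slides. Under the circuit-model reconstruction above (a notation assumption), the definitions are presumably

$$\epsilon = R_i C, \qquad w_{i,j} = R_i\,T_{i,j}, \qquad b_i = R_i\,I_i$$

and the vector form is

$$\epsilon\,\frac{d\mathbf{n}(t)}{dt} = -\mathbf{n}(t) + \mathbf{W}\,\mathbf{a}(t) + \mathbf{b}, \qquad \mathbf{a}(t) = \mathbf{f}(\mathbf{n}(t))$$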

Hopfield Network

Lyapunov Function

$$V(\mathbf{a}) = -\tfrac{1}{2}\,\mathbf{a}^T \mathbf{W}\,\mathbf{a} + \sum_{i=1}^{S} \left\{ \int_{0}^{a_i} f^{-1}(u)\,du \right\} - \mathbf{b}^T \mathbf{a}$$

Individual Derivatives

First Term (using W = W^T):
$$\frac{d}{dt}\left[-\tfrac{1}{2}\,\mathbf{a}^T \mathbf{W}\,\mathbf{a}\right] = -\mathbf{a}^T \mathbf{W}\,\frac{d\mathbf{a}}{dt}$$

Second Term:
$$\frac{d}{dt}\left[\sum_{i=1}^{S} \int_{0}^{a_i} f^{-1}(u)\,du\right] = \sum_{i=1}^{S} f^{-1}(a_i)\,\frac{da_i}{dt} = \mathbf{n}^T \frac{d\mathbf{a}}{dt}$$

Third Term:
$$\frac{d}{dt}\left[-\mathbf{b}^T \mathbf{a}\right] = -\mathbf{b}^T \frac{d\mathbf{a}}{dt}$$

Complete Lyapunov Derivative

$$\frac{dV(\mathbf{a})}{dt} = -\mathbf{a}^T \mathbf{W}\,\frac{d\mathbf{a}}{dt} + \mathbf{n}^T \frac{d\mathbf{a}}{dt} - \mathbf{b}^T \frac{d\mathbf{a}}{dt} = \left[-\mathbf{W}\mathbf{a} + \mathbf{n} - \mathbf{b}\right]^T \frac{d\mathbf{a}}{dt}$$

From the system equations we know:
$$\epsilon\,\frac{d\mathbf{n}}{dt} = -\mathbf{n} + \mathbf{W}\mathbf{a} + \mathbf{b} \quad\Rightarrow\quad -\mathbf{W}\mathbf{a} + \mathbf{n} - \mathbf{b} = -\epsilon\,\frac{d\mathbf{n}}{dt}$$

So the derivative can be written:
$$\frac{dV(\mathbf{a})}{dt} = -\epsilon\left[\frac{d\mathbf{n}}{dt}\right]^T \frac{d\mathbf{a}}{dt} = -\epsilon \sum_{i=1}^{S} \frac{dn_i}{dt}\,\frac{da_i}{dt} = -\epsilon \sum_{i=1}^{S} \left[\frac{d}{da_i} f^{-1}(a_i)\right] \left(\frac{da_i}{dt}\right)^2$$

If $\dfrac{d}{da_i} f^{-1}(a_i) > 0$, then $\dfrac{dV(\mathbf{a})}{dt} \le 0$.

Invariant Sets

This will be zero only if the neuron outputs are not changing:
$$\frac{d\mathbf{a}}{dt} = \mathbf{0}$$
Therefore, the system energy is not changing only at the equilibrium points of the circuit. Thus, all points in Z, the set of points where da/dt = 0 (the equilibrium points of the circuit), are potential attractors.

Example

Example Lyapunov Function

$$V(\mathbf{a}) = -\tfrac{1}{2}\,\mathbf{a}^T \mathbf{W}\,\mathbf{a} + \sum_{i=1}^{S} \left\{ \int_{0}^{a_i} f^{-1}(u)\,du \right\} - \mathbf{b}^T \mathbf{a}$$

Example Network Equations
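The network equations and parameters of this example appear only as images in the slides. The Python sketch below simulates a two-neuron continuous Hopfield network under stated assumptions (arctangent amplifier characteristic, W = [[0, 1], [1, 0]], b = 0, gain 1.4, unit time constant); it illustrates the dynamics and the decreasing Lyapunov function, and is not a reproduction of the slide's actual numbers.

```python
# Minimal simulation sketch of a two-neuron continuous Hopfield network.
# The weight matrix, bias, gain, time constant, and initial condition below
# are illustrative assumptions; the example's actual values are not
# recoverable from this transcript.
import numpy as np

W = np.array([[0.0, 1.0],
              [1.0, 0.0]])       # assumed symmetric weight matrix
b = np.zeros(2)                  # assumed zero bias
gamma, eps, dt = 1.4, 1.0, 0.01  # assumed gain, time constant, Euler step

def f(n):
    # amplifier characteristic a = (2/pi) * arctan(gamma*pi*n/2)
    return (2.0 / np.pi) * np.arctan(gamma * np.pi * n / 2.0)

def f_inv(a):
    # inverse characteristic n = (2/(gamma*pi)) * tan(pi*a/2)
    return (2.0 / (gamma * np.pi)) * np.tan(np.pi * a / 2.0)

def lyapunov(a):
    # V(a) = -1/2 a'Wa + sum_i integral_0^{a_i} f^{-1}(u) du - b'a
    integral = -4.0 / (gamma * np.pi**2) * np.sum(np.log(np.cos(np.pi * a / 2.0)))
    return -0.5 * a @ W @ a + integral - b @ a

n = f_inv(np.array([0.1, -0.3]))           # assumed initial output (0.1, -0.3)
for _ in range(2000):
    a = f(n)
    n = n + (dt / eps) * (-n + W @ a + b)  # Euler step of eps*dn/dt = -n + Wa + b

a = f(n)
print("final output a =", a)
print("final V(a)     =", lyapunov(a))
```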

Lyapunov Function and Trajectory

(figure: surface and contour plots of the Lyapunov function V(a) over the a_1–a_2 plane, with a network trajectory)

Time Response

(figure: a_1, a_2, and V(a) plotted versus time t)

Convergence to a Saddle Point

(figure: a trajectory in the a_1–a_2 plane converging to a saddle point)

Hopfield Attractors

The potential attractors of the Hopfield network satisfy:
$$\frac{d\mathbf{a}}{dt} = \mathbf{0}$$
How are these points related to the minima of V(a)? The minima must satisfy:
$$\nabla V(\mathbf{a}) = \left[\frac{\partial V}{\partial a_1}\ \ \frac{\partial V}{\partial a_2}\ \ \cdots\ \ \frac{\partial V}{\partial a_S}\right]^T = \mathbf{0}$$
where the Lyapunov function is given by:
$$V(\mathbf{a}) = -\tfrac{1}{2}\,\mathbf{a}^T \mathbf{W}\,\mathbf{a} + \sum_{i=1}^{S} \left\{ \int_{0}^{a_i} f^{-1}(u)\,du \right\} - \mathbf{b}^T \mathbf{a}$$

Hopfield Attractors

Using previous results, we can show that:
$$\nabla V(\mathbf{a}) = -\mathbf{W}\mathbf{a} + \mathbf{n} - \mathbf{b} = -\epsilon\,\frac{d\mathbf{n}}{dt}$$
The ith element of the gradient is therefore:
$$\frac{\partial V(\mathbf{a})}{\partial a_i} = -\epsilon\,\frac{dn_i}{dt} = -\epsilon\,\frac{d}{dt}\left[f^{-1}(a_i)\right] = -\epsilon\left[\frac{d}{da_i} f^{-1}(a_i)\right]\frac{da_i}{dt}$$
Since the transfer function and its inverse are monotonic increasing:
$$\frac{d}{da_i} f^{-1}(a_i) > 0$$
All points for which da_i/dt = 0 will also satisfy ∂V(a)/∂a_i = 0. Therefore all attractors will be stationary points of V(a).

Effect of Gain

(figure: amplifier characteristic, output a versus input n, for several values of the gain)
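The transfer function plotted on this slide is not recoverable from the transcript; the log-cosine integral on the next slide is consistent with the arctangent characteristic (an inference, not taken directly from the transcript):

$$a = f(n) = \frac{2}{\pi}\tan^{-1}\!\left(\frac{\gamma \pi n}{2}\right), \qquad n = f^{-1}(a) = \frac{2}{\gamma\pi}\tan\!\left(\frac{\pi a}{2}\right)$$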

Lyapunov Function

$$V(\mathbf{a}) = -\tfrac{1}{2}\,\mathbf{a}^T \mathbf{W}\,\mathbf{a} + \sum_{i=1}^{S}\left\{\int_{0}^{a_i} f^{-1}(u)\,du\right\} - \mathbf{b}^T \mathbf{a}$$

$$\int_{0}^{a_i} f^{-1}(u)\,du = -\frac{4}{\gamma \pi^2}\,\log\!\left[\cos\!\left(\frac{\pi a_i}{2}\right)\right]$$

(figure: the integral term plotted versus a for gains γ = 0.14, 1.4, and 14)
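A quick numerical sanity check of the reconstructed closed form (assuming the arctangent characteristic noted above):

```python
# Verify: integral_0^a f^{-1}(u) du == -4/(gamma*pi^2) * log(cos(pi*a/2)),
# assuming the arctangent amplifier characteristic.
import numpy as np
from scipy.integrate import quad

gamma = 1.4

def f_inv(u):
    return (2.0 / (gamma * np.pi)) * np.tan(np.pi * u / 2.0)

for a in (0.2, 0.5, 0.9):
    numeric, _ = quad(f_inv, 0.0, a)
    closed = -4.0 / (gamma * np.pi**2) * np.log(np.cos(np.pi * a / 2.0))
    print(f"a={a}: numeric={numeric:.6f}, closed-form={closed:.6f}")
```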

High Gain Lyapunov Function

As γ → ∞ the integral term vanishes and the Lyapunov function reduces to:
$$V(\mathbf{a}) = -\tfrac{1}{2}\,\mathbf{a}^T \mathbf{W}\,\mathbf{a} - \mathbf{b}^T \mathbf{a}$$
The high gain Lyapunov function is quadratic:
$$V(\mathbf{a}) = \tfrac{1}{2}\,\mathbf{a}^T \mathbf{A}\,\mathbf{a} + \mathbf{d}^T \mathbf{a}, \qquad \text{where } \mathbf{A} = -\mathbf{W} \text{ and } \mathbf{d} = -\mathbf{b}$$

Example

(figure: surface and contour plots of the high-gain Lyapunov function V(a) over the a_1–a_2 plane)

Hopfield Design

The Hopfield network will minimize the following (high-gain) Lyapunov function:
$$V(\mathbf{a}) = -\tfrac{1}{2}\,\mathbf{a}^T \mathbf{W}\,\mathbf{a} - \mathbf{b}^T \mathbf{a}$$
Choose the weight matrix W and the bias vector b so that V takes on the form of a function you want to minimize.

Content-Addressable Memory

Content-addressable memory retrieves stored memories on the basis of part of the contents.
Prototype Patterns: (bipolar vectors)
Proposed Performance Index:
For orthogonal prototypes, if we evaluate the performance index at a prototype:
J(a) will be largest when a is not close to any prototype pattern, and smallest when a is equal to a prototype pattern.
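The pattern set and performance index appear as images in the original slides. Based on the Hebb-rule equivalence stated two slides later, they are presumably: prototype patterns {p_1, p_2, ..., p_Q}, each an S-element bipolar (±1) vector, with

$$J(\mathbf{a}) = -\tfrac{1}{2}\sum_{q=1}^{Q}\left([\mathbf{p}_q]^T \mathbf{a}\right)^2$$

and, for orthogonal prototypes evaluated at a prototype,

$$J(\mathbf{p}_j) = -\tfrac{1}{2}\sum_{q=1}^{Q}\left([\mathbf{p}_q]^T \mathbf{p}_j\right)^2 = -\tfrac{1}{2}\left([\mathbf{p}_j]^T \mathbf{p}_j\right)^2 = -\tfrac{1}{2}S^2$$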

Hebb Rule

If we use the supervised Hebb rule to compute the weight matrix: the Lyapunov function will be: This can be rewritten: Therefore the Lyapunov function is equal to our performance index for the content-addressable memory.
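The equations on this slide were images; the standard supervised-Hebb construction consistent with the surrounding discussion is presumably

$$\mathbf{W} = \sum_{q=1}^{Q} \mathbf{p}_q \mathbf{p}_q^T$$

so that the high-gain Lyapunov function becomes

$$V(\mathbf{a}) = -\tfrac{1}{2}\,\mathbf{a}^T\left[\sum_{q=1}^{Q}\mathbf{p}_q\mathbf{p}_q^T\right]\mathbf{a} = -\tfrac{1}{2}\sum_{q=1}^{Q}\left([\mathbf{p}_q]^T\mathbf{a}\right)^2 = J(\mathbf{a})$$

(taking b = 0, an assumption consistent with the performance index above).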

Hebb Rule Analysis

If we apply prototype p_j to the network:
$$\mathbf{W}\mathbf{p}_j = \left[\sum_{q=1}^{Q}\mathbf{p}_q\mathbf{p}_q^T\right]\mathbf{p}_j = \sum_{q=1}^{Q}\mathbf{p}_q\left([\mathbf{p}_q]^T\mathbf{p}_j\right) = S\,\mathbf{p}_j \quad\text{(orthogonal prototypes)}$$
Therefore each prototype is an eigenvector, and they have a common eigenvalue of S. The eigenspace for the eigenvalue λ = S is therefore X, the Q-dimensional space of all vectors which can be written as linear combinations of the prototype vectors.
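A small numeric check of these eigenvalue claims (the two orthogonal prototypes below are made up for illustration):

```python
# Verify the Hebb-rule eigenstructure numerically with two made-up
# orthogonal bipolar prototypes in S = 4 dimensions.
import numpy as np

p1 = np.array([1,  1, 1,  1], dtype=float)
p2 = np.array([1, -1, 1, -1], dtype=float)   # orthogonal to p1
S, Q = len(p1), 2

W = np.outer(p1, p1) + np.outer(p2, p2)      # supervised Hebb rule

print(W @ p1)                  # equals S * p1 -> eigenvalue S = 4
print(W @ p2)                  # equals S * p2 -> eigenvalue S = 4
print(np.linalg.eigvalsh(W))   # eigenvalues: S (multiplicity Q) and 0 (multiplicity S-Q)

# Zero-diagonal variant: prototypes keep eigenvalue S - Q, the orthogonal
# complement gets eigenvalue -Q (see the "Zero Diagonal Elements" slide below).
Wp = W - Q * np.eye(S)
print(np.linalg.eigvalsh(Wp))  # eigenvalues: S-Q = 2 and -Q = -2
```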

Weight Matrix Eigenspace

The entire input space can be divided into two disjoint sets:
$$R^S = X \cup X^{\perp}$$
where X⊥ is the orthogonal complement of X. For vectors a in the orthogonal complement we have:
$$[\mathbf{p}_q]^T\mathbf{a} = 0, \qquad q = 1, 2, \dots, Q$$
Therefore,
$$\mathbf{W}\mathbf{a} = \left[\sum_{q=1}^{Q}\mathbf{p}_q\mathbf{p}_q^T\right]\mathbf{a} = \sum_{q=1}^{Q}\mathbf{p}_q\left([\mathbf{p}_q]^T\mathbf{a}\right) = \mathbf{0} = 0\cdot\mathbf{a}$$
The eigenvalues of W are S and 0, with corresponding eigenspaces X and X⊥. For the Hessian matrix of the high-gain Lyapunov function, ∇²V(a) = -W, the eigenvalues are -S and 0, with the same eigenspaces.

Lyapunov Surface

The high-gain Lyapunov function is a quadratic function. Therefore, the eigenvalues of the Hessian matrix determine its shape. Because the first eigenvalue is negative, V will have negative curvature in X. Because the second eigenvalue is zero, V will have zero curvature in X⊥. Because V has negative curvature in X, the trajectories of the Hopfield network will tend to fall into the corners of the hypercube {a : -1 ≤ a_i ≤ 1} that are contained in X.
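A short worked consequence of the formulas above (using the Hebb-rule W and b = 0 assumed earlier): write a vector in X as a combination of the orthogonal prototypes, a = Σ_q α_q p_q; then

$$V(\mathbf{a}) = -\tfrac{1}{2}\sum_{q=1}^{Q}\left([\mathbf{p}_q]^T\mathbf{a}\right)^2 = -\tfrac{1}{2}\sum_{q=1}^{Q}(\alpha_q S)^2 = -\frac{S^2}{2}\sum_{q=1}^{Q}\alpha_q^2$$

so V keeps decreasing as the trajectory moves outward within X, and the motion is stopped only by the hypercube boundary.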

Example

Zero Diagonal Elements

We can zero the diagonal elements of the weight matrix:
The prototypes remain eigenvectors of this new matrix, but the corresponding eigenvalue is now (S-Q):
The elements of X⊥ also remain eigenvectors of this new matrix, with a corresponding eigenvalue of (-Q):
The Lyapunov surface will have negative curvature in X and positive curvature in X⊥, in contrast with the original Lyapunov function, which had negative curvature in X and zero curvature in X⊥.
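The equations on this slide were images; the construction consistent with the stated eigenvalues is presumably (each diagonal element of the Hebb-rule W equals Q, since [p_q]_i² = 1):

$$\mathbf{W}' = \mathbf{W} - Q\,\mathbf{I} = \sum_{q=1}^{Q}\mathbf{p}_q\mathbf{p}_q^T - Q\,\mathbf{I}$$

$$\mathbf{W}'\mathbf{p}_j = (S - Q)\,\mathbf{p}_j, \qquad \mathbf{W}'\mathbf{a} = -Q\,\mathbf{a}\ \ \text{for } \mathbf{a} \in X^{\perp}$$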

Example

If the initial condition falls exactly on the line a_1 = -a_2 and the weight matrix W is used, then the network output will remain constant. If the initial condition falls exactly on the line a_1 = -a_2 and the weight matrix W' is used, then the network output will converge to the saddle point at the origin.