Presentation transcript:

18 1 Hopfield Network

18 2 Hopfield Model

18 3 Equations of Operation

n_i - input voltage to the ith amplifier
a_i - output voltage of the ith amplifier
C - amplifier input capacitance
I_i - fixed input current to the ith amplifier

The circuit obeys the standard Hopfield operating equation

$$C \frac{dn_i(t)}{dt} = \sum_{j=1}^{S} T_{i,j}\, a_j(t) - \frac{n_i(t)}{R_i} + I_i, \qquad a_i(t) = f(n_i(t)),$$

where T_{i,j} is the conductance connecting the output of amplifier j to the input of amplifier i, and R_i is the total resistance seen at the input of amplifier i.

18 4 Network Format

Define:

$$b_i = R_i I_i, \qquad w_{i,j} = R_i T_{i,j}, \qquad \epsilon = R_i C \ \ (\text{assumed equal for every amplifier})$$

so that

$$\epsilon \frac{dn_i(t)}{dt} = -n_i(t) + \sum_{j=1}^{S} w_{i,j}\, a_j(t) + b_i$$

Vector Form:

$$\epsilon \frac{d\mathbf{n}(t)}{dt} = -\mathbf{n}(t) + \mathbf{W}\mathbf{a}(t) + \mathbf{b}, \qquad \mathbf{a}(t) = \mathbf{f}(\mathbf{n}(t))$$
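As an illustration of the vector-form operating equation, the following is a minimal simulation sketch (not part of the original slides). It integrates eps*dn/dt = -n + Wa + b with a simple Euler step; the inverse-tangent transfer function, the gain gamma, and all numerical values are assumptions.

```python
import numpy as np

def f(n, gamma=1.4):
    """Assumed amplifier transfer function: a = (2/pi) * arctan(gamma*pi*n/2)."""
    return (2.0 / np.pi) * np.arctan(gamma * np.pi * n / 2.0)

def simulate_hopfield(W, b, n0, gamma=1.4, eps=1.0, dt=0.01, steps=5000):
    """Euler integration of eps * dn/dt = -n + W a + b, with a = f(n)."""
    n = np.array(n0, dtype=float)
    for _ in range(steps):
        a = f(n, gamma)                       # amplifier outputs
        n = n + dt * (-n + W @ a + b) / eps   # update the input voltages
    return f(n, gamma)                        # final outputs

# Hypothetical two-neuron example: symmetric weights, no bias.
W = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(simulate_hopfield(W, b=np.zeros(2), n0=[0.6, -0.2]))
```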

18 5 Hopfield Network

18 6 Lyapunov Function

$$V(\mathbf{a}) = -\frac{1}{2}\mathbf{a}^T \mathbf{W} \mathbf{a} + \sum_{i=1}^{S} \left[ \int_0^{a_i} f^{-1}(u)\, du \right] - \mathbf{b}^T \mathbf{a}$$
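The helper below (a sketch, not from the slides) evaluates this Lyapunov function numerically, using quadrature for the integral term; the particular inverse transfer function and the example values are assumptions.

```python
import numpy as np
from scipy.integrate import quad

def lyapunov(a, W, b, f_inv):
    """V(a) = -1/2 a'Wa + sum_i integral_0^{a_i} f^{-1}(u) du - b'a."""
    integral_term = sum(quad(f_inv, 0.0, a_i)[0] for a_i in a)
    return -0.5 * a @ W @ a + integral_term - b @ a

# Example: the inverse of a = (2/pi)*arctan(gamma*pi*n/2) is n = (2/(gamma*pi))*tan(pi*a/2).
gamma = 1.4
f_inv = lambda u: (2.0 / (gamma * np.pi)) * np.tan(np.pi * u / 2.0)
W = np.array([[0.0, 1.0], [1.0, 0.0]])
print(lyapunov(np.array([0.4, 0.3]), W, np.zeros(2), f_inv))
```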

18 7 Individual Derivatives

First Term:

$$\frac{d}{dt}\left[-\frac{1}{2}\mathbf{a}^T \mathbf{W} \mathbf{a}\right] = -\mathbf{a}^T \mathbf{W} \frac{d\mathbf{a}}{dt}$$

Second Term:

$$\frac{d}{dt}\left[\int_0^{a_i} f^{-1}(u)\, du\right] = \frac{d}{da_i}\left[\int_0^{a_i} f^{-1}(u)\, du\right] \frac{da_i}{dt} = f^{-1}(a_i)\,\frac{da_i}{dt} = n_i\,\frac{da_i}{dt}$$

so that

$$\frac{d}{dt}\left[\sum_{i=1}^{S} \int_0^{a_i} f^{-1}(u)\, du\right] = \mathbf{n}^T \frac{d\mathbf{a}}{dt}$$

Third Term:

$$\frac{d}{dt}\left[-\mathbf{b}^T \mathbf{a}\right] = -\mathbf{b}^T \frac{d\mathbf{a}}{dt}$$

18 8 Complete Lyapunov Derivative

$$\frac{dV(\mathbf{a})}{dt} = -\mathbf{a}^T \mathbf{W} \frac{d\mathbf{a}}{dt} + \mathbf{n}^T \frac{d\mathbf{a}}{dt} - \mathbf{b}^T \frac{d\mathbf{a}}{dt} = \left[-\mathbf{a}^T \mathbf{W} + \mathbf{n}^T - \mathbf{b}^T\right]\frac{d\mathbf{a}}{dt}$$

From the system equations we know:

$$\epsilon \frac{d\mathbf{n}}{dt} = -\mathbf{n} + \mathbf{W}\mathbf{a} + \mathbf{b} \quad\Rightarrow\quad -\mathbf{a}^T\mathbf{W} + \mathbf{n}^T - \mathbf{b}^T = -\epsilon \left[\frac{d\mathbf{n}}{dt}\right]^T$$

So the derivative can be written:

$$\frac{dV(\mathbf{a})}{dt} = -\epsilon \left[\frac{d\mathbf{n}}{dt}\right]^T \frac{d\mathbf{a}}{dt} = -\epsilon \sum_{i=1}^{S} \frac{d}{da_i}\!\left[f^{-1}(a_i)\right] \left(\frac{da_i}{dt}\right)^2$$

If $\frac{d}{da_i}\!\left[f^{-1}(a_i)\right] > 0$, then $\frac{dV(\mathbf{a})}{dt} \le 0$.
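A quick numerical check of this result (a sketch with assumed values: two neurons, W = [[0, 1], [1, 0]], b = 0, the inverse-tangent transfer function with gamma = 1.4, and eps = 1): V(a) evaluated along a simulated trajectory should be non-increasing.

```python
import numpy as np

gamma, eps, dt = 1.4, 1.0, 0.01
W = np.array([[0.0, 1.0], [1.0, 0.0]])
b = np.zeros(2)

f = lambda n: (2.0 / np.pi) * np.arctan(gamma * np.pi * n / 2.0)   # assumed transfer function
int_f_inv = lambda a: -4.0 / (gamma * np.pi ** 2) * np.log(np.cos(np.pi * a / 2.0))  # integral of f^{-1}

def V(a):
    return -0.5 * a @ W @ a + np.sum(int_f_inv(a)) - b @ a

n = np.array([0.5, -0.3])                     # arbitrary initial input voltages
values = []
for _ in range(2000):
    values.append(V(f(n)))
    n = n + dt * (-n + W @ f(n) + b) / eps    # Euler step of the operating equation

diffs = np.diff(values)
print("V non-increasing along the trajectory:", bool(np.all(diffs <= 1e-9)))
print("V(start) =", values[0], " V(end) =", values[-1])
```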

18 9 Invariant Sets

This will be zero only if the neuron outputs are not changing:

$$\frac{d\mathbf{a}}{dt} = \mathbf{0}$$

Therefore, the system energy is not changing only at the equilibrium points of the circuit. Thus, all points in Z are potential attractors:

$$Z = \left\{ \mathbf{a} : \frac{d\mathbf{a}}{dt} = \mathbf{0} \right\}$$

18 10 Example

18 11 Example Lyapunov Function

$$V(\mathbf{a}) = -\frac{1}{2}\mathbf{a}^T \mathbf{W} \mathbf{a} + \sum_{i=1}^{S} \left[ \int_0^{a_i} f^{-1}(u)\, du \right] - \mathbf{b}^T \mathbf{a}$$

18 12 Example Network Equations

18 13 Lyapunov Function and Trajectory

[Figure: Lyapunov function surface V(a) and a network trajectory in the (a_1, a_2) plane]

18 14 Time Response

[Figure: time response of the outputs a_1, a_2 and of V(a)]

18 15 Convergence to a Saddle Point

[Figure: trajectory in the (a_1, a_2) plane converging to the saddle point]

18 16 Hopfield Attractors

The potential attractors of the Hopfield network satisfy:

$$\frac{d\mathbf{a}}{dt} = \mathbf{0}$$

How are these points related to the minima of V(a)? The minima must satisfy:

$$\nabla V(\mathbf{a}) = \left[ \frac{\partial V}{\partial a_1} \;\; \frac{\partial V}{\partial a_2} \;\; \cdots \;\; \frac{\partial V}{\partial a_S} \right]^T = \mathbf{0}$$

where the Lyapunov function is given by:

$$V(\mathbf{a}) = -\frac{1}{2}\mathbf{a}^T \mathbf{W} \mathbf{a} + \sum_{i=1}^{S} \left[ \int_0^{a_i} f^{-1}(u)\, du \right] - \mathbf{b}^T \mathbf{a}$$

18 17 Hopfield Attractors

Using previous results, we can show that:

$$\nabla V(\mathbf{a}) = -\epsilon \frac{d\mathbf{n}}{dt}$$

The ith element of the gradient is therefore:

$$\frac{\partial V(\mathbf{a})}{\partial a_i} = -\epsilon \frac{dn_i}{dt} = -\epsilon \frac{d}{dt}\!\left[f^{-1}(a_i)\right] = -\epsilon \frac{d}{da_i}\!\left[f^{-1}(a_i)\right] \frac{da_i}{dt}$$

Since the transfer function and its inverse are monotonic increasing:

$$\frac{d}{da_i}\!\left[f^{-1}(a_i)\right] > 0$$

All points for which $\frac{d\mathbf{a}}{dt} = \mathbf{0}$ will also satisfy $\nabla V(\mathbf{a}) = \mathbf{0}$. Therefore all attractors will be stationary points of V(a).

18 18 Effect of Gain

The amplifier transfer function is

$$a = f(n) = \frac{2}{\pi} \tan^{-1}\!\left(\frac{\gamma \pi n}{2}\right)$$

[Figure: transfer function a versus n for several values of the gain γ]

18 19 Lyapunov Function

$$V(\mathbf{a}) = -\frac{1}{2}\mathbf{a}^T \mathbf{W} \mathbf{a} + \sum_{i=1}^{S} \left[ \int_0^{a_i} f^{-1}(u)\, du \right] - \mathbf{b}^T \mathbf{a}$$

For this transfer function the inverse is $f^{-1}(u) = \dfrac{2}{\gamma\pi}\tan\!\left(\dfrac{\pi u}{2}\right)$, so

$$\int_0^{a_i} f^{-1}(u)\, du = -\frac{4}{\gamma\pi^2} \log\!\left[\cos\!\left(\frac{\pi a_i}{2}\right)\right]$$

[Figure: the integral term plotted for γ = 0.14, γ = 1.4, and γ = 14]
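For completeness, the closed form above follows from a one-line integration, assuming $f^{-1}(u) = \frac{2}{\gamma\pi}\tan(\frac{\pi u}{2})$ as stated:

$$\int_0^{a_i} f^{-1}(u)\,du = \frac{2}{\gamma\pi}\int_0^{a_i}\tan\!\left(\frac{\pi u}{2}\right)du = \frac{2}{\gamma\pi}\cdot\frac{2}{\pi}\left[-\log\cos\!\left(\frac{\pi u}{2}\right)\right]_{0}^{a_i} = -\frac{4}{\gamma\pi^{2}}\log\!\left[\cos\!\left(\frac{\pi a_i}{2}\right)\right]$$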

18 20 High Gain Lyapunov Function

As γ → ∞ the integral term goes to zero for -1 < a_i < 1, and the Lyapunov function reduces to:

$$V(\mathbf{a}) = -\frac{1}{2}\mathbf{a}^T \mathbf{W} \mathbf{a} - \mathbf{b}^T \mathbf{a}$$

The high gain Lyapunov function is quadratic:

$$\nabla^2 V(\mathbf{a}) = -\mathbf{W}$$

18 21 Example

[Figure: Lyapunov function surface V(a) and trajectories in the (a_1, a_2) plane]

18 22 Hopfield Design

Choose the weight matrix W and the bias vector b so that V takes on the form of a function you want to minimize. The Hopfield network will minimize the following (high gain) Lyapunov function:

$$V(\mathbf{a}) = -\frac{1}{2}\mathbf{a}^T \mathbf{W} \mathbf{a} - \mathbf{b}^T \mathbf{a}$$

18 23 Content-Addressable Memory

Content-Addressable Memory - retrieves stored memories on the basis of part of the contents.

Prototype Patterns (bipolar vectors):

$$\{\mathbf{p}_1, \mathbf{p}_2, \ldots, \mathbf{p}_Q\}, \qquad \mathbf{p}_q \in \{-1, 1\}^S$$

Proposed Performance Index:

$$J(\mathbf{a}) = -\frac{1}{2}\sum_{q=1}^{Q} \left( \left[\mathbf{p}_q\right]^T \mathbf{a} \right)^2$$

For orthogonal prototypes, if we evaluate the performance index at a prototype:

$$J(\mathbf{p}_j) = -\frac{1}{2}\sum_{q=1}^{Q} \left( \left[\mathbf{p}_q\right]^T \mathbf{p}_j \right)^2 = -\frac{1}{2}\left( \left[\mathbf{p}_j\right]^T \mathbf{p}_j \right)^2 = -\frac{S^2}{2}$$

J(a) will be largest when a is not close to any prototype pattern, and smallest when a is equal to a prototype pattern.
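A tiny sketch of this performance index (the prototype values below are assumptions, not from the slides); with S = 4 and two orthogonal bipolar prototypes, J evaluated at a prototype equals -S^2/2 = -8.

```python
import numpy as np

def J(a, P):
    """Performance index J(a) = -1/2 * sum_q (p_q' a)^2; prototypes are the columns of P."""
    return -0.5 * np.sum((P.T @ a) ** 2)

# Two orthogonal bipolar prototypes in R^4 (assumed example).
P = np.array([[ 1,  1],
              [ 1, -1],
              [ 1,  1],
              [ 1, -1]])
print(J(P[:, 0], P))                   # -8.0 = -S^2/2 at a prototype (S = 4)
print(J(np.array([1, -1, -1, 1]), P))  # -0.0, much larger: this point is orthogonal to both prototypes
```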

18 24 Hebb Rule

If we use the supervised Hebb rule to compute the weight matrix (with b = 0):

$$\mathbf{W} = \sum_{q=1}^{Q} \mathbf{p}_q \mathbf{p}_q^T$$

the Lyapunov function will be:

$$V(\mathbf{a}) = -\frac{1}{2}\mathbf{a}^T \left[ \sum_{q=1}^{Q} \mathbf{p}_q \mathbf{p}_q^T \right] \mathbf{a} = -\frac{1}{2}\sum_{q=1}^{Q} \mathbf{a}^T \mathbf{p}_q \mathbf{p}_q^T \mathbf{a}$$

This can be rewritten:

$$V(\mathbf{a}) = -\frac{1}{2}\sum_{q=1}^{Q} \left( \left[\mathbf{p}_q\right]^T \mathbf{a} \right)^2 = J(\mathbf{a})$$

Therefore the Lyapunov function is equal to our performance index for the content addressable memory.
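The equality V(a) = J(a) can be spot-checked numerically (a sketch using the same assumed prototypes as above and the high-gain Lyapunov function with b = 0):

```python
import numpy as np

# Assumed orthogonal bipolar prototypes (columns of P), S = 4, Q = 2.
P = np.array([[ 1,  1],
              [ 1, -1],
              [ 1,  1],
              [ 1, -1]])

W = P @ P.T                                  # supervised Hebb rule: W = sum_q p_q p_q'
J = lambda a: -0.5 * np.sum((P.T @ a) ** 2)  # performance index
V = lambda a: -0.5 * a @ W @ a               # high-gain Lyapunov function, b = 0

a = np.random.uniform(-1, 1, size=4)         # arbitrary test point
print(np.isclose(V(a), J(a)))                # True: the two functions coincide
```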

18 25 Hebb Rule Analysis

If we apply prototype p_j to the network:

$$\mathbf{W}\mathbf{p}_j = \sum_{q=1}^{Q} \mathbf{p}_q \left(\mathbf{p}_q^T \mathbf{p}_j\right) = \mathbf{p}_j \left(\mathbf{p}_j^T \mathbf{p}_j\right) = S\,\mathbf{p}_j$$

since the prototypes are orthogonal and each has S bipolar elements. Therefore each prototype is an eigenvector, and they have a common eigenvalue of S. The eigenspace for the eigenvalue λ = S is therefore:

$$X = \operatorname{span}\{\mathbf{p}_1, \mathbf{p}_2, \ldots, \mathbf{p}_Q\}$$

a Q-dimensional space of all vectors which can be written as linear combinations of the prototype vectors.

18 26 Weight Matrix Eigenspace

The entire input space can be divided into two disjoint sets:

$$\mathbb{R}^S = X \cup X^{\perp}$$

where X⊥ is the orthogonal complement of X. For vectors a in the orthogonal complement we have:

$$\mathbf{p}_q^T \mathbf{a} = 0, \qquad q = 1, 2, \ldots, Q$$

Therefore,

$$\mathbf{W}\mathbf{a} = \sum_{q=1}^{Q} \mathbf{p}_q \left(\mathbf{p}_q^T \mathbf{a}\right) = \mathbf{0} = 0 \cdot \mathbf{a}$$

The eigenvalues of W are S and 0, with corresponding eigenspaces of X and X⊥. For the Hessian matrix $\nabla^2 V(\mathbf{a}) = -\mathbf{W}$ the eigenvalues are -S and 0, with the same eigenspaces.
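These eigenvalues can be verified directly (same assumed prototypes as above):

```python
import numpy as np

# Assumed orthogonal bipolar prototypes (columns of P), S = 4, Q = 2.
P = np.array([[ 1,  1],
              [ 1, -1],
              [ 1,  1],
              [ 1, -1]])
S, Q = P.shape

W = P @ P.T
print(np.sort(np.linalg.eigvalsh(W)))         # [0. 0. 4. 4.]: eigenvalue S with multiplicity Q, 0 with multiplicity S-Q
print(np.allclose(W @ P[:, 0], S * P[:, 0]))  # True: each prototype is an eigenvector with eigenvalue S
```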

18 27 Lyapunov Surface

The high-gain Lyapunov function is a quadratic function. Therefore, the eigenvalues of the Hessian matrix determine its shape. Because the first eigenvalue is negative, V will have negative curvature in X. Because the second eigenvalue is zero, V will have zero curvature in X⊥. Because V has negative curvature in X, the trajectories of the Hopfield network will tend to fall into the corners of the hypercube {a : -1 ≤ a_i ≤ 1} that are contained in X.
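A small end-to-end sketch of this behavior (every value below is an assumption): starting from a corrupted version of a stored pattern, a simulated network with Hebb-rule weights and a fairly high gain settles near the hypercube corner corresponding to that pattern.

```python
import numpy as np

# Assumed orthogonal bipolar prototypes (columns of P).
P = np.array([[ 1,  1],
              [ 1, -1],
              [ 1,  1],
              [ 1, -1]])
S, Q = P.shape
W = P @ P.T                                   # Hebb-rule weight matrix
b = np.zeros(S)

gamma, eps, dt = 14.0, 1.0, 0.005             # fairly high gain
f = lambda n: (2.0 / np.pi) * np.arctan(gamma * np.pi * n / 2.0)
f_inv = lambda a: (2.0 / (gamma * np.pi)) * np.tan(np.pi * a / 2.0)

a0 = np.array([0.9, 0.8, 0.7, -0.3])          # noisy version of prototype p_1 = [1, 1, 1, 1]
n = f_inv(a0)
for _ in range(4000):
    n = n + dt * (-n + W @ f(n) + b) / eps    # Euler step of the operating equation

print("recovered pattern:", np.sign(f(n)))    # expected: [1. 1. 1. 1.], i.e. p_1
```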

18 28 Example

18 29 Zero Diagonal Elements

We can zero the diagonal elements of the weight matrix (each diagonal element of W equals Q, since the prototypes are bipolar):

$$\mathbf{W}' = \mathbf{W} - Q\,\mathbf{I}$$

The prototypes remain eigenvectors of this new matrix, but the corresponding eigenvalue is now (S - Q):

$$\mathbf{W}'\mathbf{p}_j = (\mathbf{W} - Q\,\mathbf{I})\,\mathbf{p}_j = S\,\mathbf{p}_j - Q\,\mathbf{p}_j = (S - Q)\,\mathbf{p}_j$$

The elements of X⊥ also remain eigenvectors of this new matrix, with a corresponding eigenvalue of (-Q):

$$\mathbf{W}'\mathbf{a} = (\mathbf{W} - Q\,\mathbf{I})\,\mathbf{a} = \mathbf{0} - Q\,\mathbf{a} = -Q\,\mathbf{a}, \qquad \mathbf{a} \in X^{\perp}$$

The Lyapunov surface will have negative curvature in X and positive curvature in X⊥, in contrast with the original Lyapunov function, which had negative curvature in X and zero curvature in X⊥.
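This too can be checked numerically (same assumed prototypes as before):

```python
import numpy as np

# Assumed orthogonal bipolar prototypes (columns of P), S = 4, Q = 2.
P = np.array([[ 1,  1],
              [ 1, -1],
              [ 1,  1],
              [ 1, -1]])
S, Q = P.shape

W = P @ P.T
Wp = W - Q * np.eye(S)                   # W' = W - Q*I
print(np.diag(Wp))                       # [0. 0. 0. 0.]: diagonal has been zeroed
print(np.sort(np.linalg.eigvalsh(Wp)))   # [-2. -2.  2.  2.]: eigenvalues -Q on X-perp and S-Q on X
```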

18 30 Example If the initial condition falls exactly on the line a 1 = -a 2, and the weight matrix W is used, then the network output will remain constant. If the initial condition falls exactly on the line a 1 = -a 2, and the weight matrix W’ is used, then the network output will converge to the saddle point at the origin.