
1 Hopfield Networks Presentation By Utkarsh Trivedi Y8544

2 Topics Covered
- What is a Hopfield network?
- Some interesting facts
- Major applications
- Mathematical model of HNs
- Learning HNs through examples
Hopfield Network 2

3 What is a Hopfield Network? According to Wikipedia, a Hopfield net is a form of recurrent artificial neural network invented by John Hopfield. Hopfield nets serve as content-addressable memory systems with binary threshold units. They are guaranteed to converge to a local minimum, but convergence to one of the stored patterns is not guaranteed.

4 What is an HN (informally)?
- These are single-layer recurrent networks.
- Every neuron in the network receives feedback from all the other neurons.
- The state of each neuron is +1 or -1 (rather than 1 and 0) in order for the analysis to work correctly.
- The number of input nodes is always equal to the number of output nodes.
The following figure shows a Hopfield network with four nodes.

5 Some Interesting Facts
- The recall behaviour of a Hopfield network is similar to our own recall mechanism: both are based on content-addressable memory.
- If some of the neurons in the network are destroyed, performance degrades, but some network capabilities may be retained even with major damage. Just like our brains.

6 Major Applications
- Recalling or reconstructing corrupted patterns
- Large-scale computational intelligence systems
- Handwriting recognition software
Practical applications of HNs are limited because the number of training patterns can be at most about 14% of the number of nodes in the network. If the network is overloaded (trained with more than the maximum acceptable number of attractors), it will not converge to clearly defined attractors.
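This capacity limit is easy to see empirically. Below is a minimal sketch of my own (not from the slides), using the standard one-shot/Hebbian weights described later in this deck, that measures how many stored random patterns remain exact fixed points when the network is loaded well under versus well over the ~0.14N limit:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100  # number of neurons

def stable_fraction(n_patterns):
    """Train a Hopfield net with the one-shot (Hebbian) rule and return the
    fraction of stored patterns that are exact fixed points of the dynamics."""
    P = rng.choice([-1, 1], size=(n_patterns, N))
    W = P.T @ P                 # w_ij = sum over patterns of p_i * p_j
    np.fill_diagonal(W, 0)      # no self-connections
    # A pattern p is a fixed point when every neuron agrees with the sign
    # of its net input, i.e. p_i * h_i > 0 for all i.
    return np.mean([np.all(p * (W @ p) > 0) for p in P])

# Well under the ~0.14N capacity limit vs. far over it
print(stable_fraction(5), stable_fraction(30))
```

With 5 patterns (5% of N) essentially all stored patterns are stable; with 30 patterns (30% of N) almost none are, which is the overloading failure the slide describes.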

7 Mathematical Modeling of HNs
Each neuron i computes a net input h_i = Σ_j w_ij v_j from the states v_j of the other neurons, where w_ij is the symmetric connection weight between neurons i and j (with w_ii = 0).

8 Mathematical Modeling of HNs

9 Mathematical Modeling of HNs
Take the signum function to be each neuron's activation function, i.e.
v_i = +1 if h_i > 0
v_i = -1 if h_i < 0
where h_i is the net input to neuron i.
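The update rule above can be sketched in code (a minimal illustration of my own; the function and variable names are not from the slides):

```python
import numpy as np

def update_async(v, W, rng):
    """One asynchronous sweep: each neuron recomputes its state from its
    net input h_i = sum_j w_ij v_j using the signum activation."""
    v = v.copy()
    for i in rng.permutation(len(v)):
        h = W[i] @ v
        if h != 0:                     # h == 0: leave the state unchanged
            v[i] = 1 if h > 0 else -1  # signum: +1 if h > 0, -1 if h < 0
    return v

# A stored pattern is a fixed point of the dynamics
p = np.array([1, -1, 1, -1])
W = np.outer(p, p)                     # one-shot weights that store p
np.fill_diagonal(W, 0)
rng = np.random.default_rng(0)
print(np.array_equal(update_async(p, W, rng), p))  # → True
```

Updating asynchronously (one neuron at a time, in random order) is what makes the convergence guarantee of the next slide hold.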

10 Lyapunov Energy Function
E = -1/2 Σ_i Σ_j w_ij v_i v_j
Each asynchronous update can only decrease E or leave it unchanged, which is why the network is guaranteed to converge to a local minimum.
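A small sketch (my own, with a made-up 4-neuron pattern) of the energy function, showing that a stored pattern sits at a lower energy than a corrupted version of it:

```python
import numpy as np

def energy(v, W):
    """Lyapunov energy E = -1/2 * sum_ij w_ij v_i v_j (thresholds omitted)."""
    return -0.5 * v @ W @ v

# Store one pattern with the one-shot rule, then compare energies
p = np.array([1, -1, 1, -1])
W = np.outer(p, p)
np.fill_diagonal(W, 0)

corrupted = p.copy()
corrupted[0] = -corrupted[0]               # flip one neuron
print(energy(p, W), energy(corrupted, W))  # → -6.0 0.0
```

The stored pattern is a local minimum of E, so the dynamics roll corrupted states downhill toward it.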

11 Power of Hopfield Networks
We want to understand how to achieve this kind of performance from simple Hopfield networks.

12 Learning HNs Through a Simple Example
[Figure: a three-neuron network with outputs Oa, Ob, Oc and weights w_i,j between every pair of nodes]
There are various ways to train these kinds of networks, such as the back-propagation algorithm, recurrent learning algorithms, and genetic algorithms, but there is one very simple algorithm for training these small networks, called the 'one-shot method'. I will use this algorithm to train the network.

13 Learning HNs Through a Simple Example
Let's train this network on the following patterns:
Pattern 1: Oa(1) = -1, Ob(1) = -1, Oc(1) = 1
Pattern 2: Oa(2) = 1, Ob(2) = -1, Oc(2) = -1
Pattern 3: Oa(3) = -1, Ob(3) = 1, Oc(3) = 1

w1,1 = 0
w1,2 = Oa(1)×Ob(1) + Oa(2)×Ob(2) + Oa(3)×Ob(3) = (-1)×(-1) + 1×(-1) + (-1)×1 = -1
w1,3 = Oa(1)×Oc(1) + Oa(2)×Oc(2) + Oa(3)×Oc(3) = (-1)×1 + 1×(-1) + (-1)×1 = -3
w2,2 = 0
w2,1 = Ob(1)×Oa(1) + Ob(2)×Oa(2) + Ob(3)×Oa(3) = (-1)×(-1) + (-1)×1 + 1×(-1) = -1
w2,3 = Ob(1)×Oc(1) + Ob(2)×Oc(2) + Ob(3)×Oc(3) = (-1)×1 + (-1)×(-1) + 1×1 = 1
w3,3 = 0
w3,1 = Oc(1)×Oa(1) + Oc(2)×Oa(2) + Oc(3)×Oa(3) = 1×(-1) + (-1)×1 + 1×(-1) = -3
w3,2 = Oc(1)×Ob(1) + Oc(2)×Ob(2) + Oc(3)×Ob(3) = 1×(-1) + (-1)×(-1) + 1×1 = 1
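The one-shot sums above are just an outer-product accumulation, so they can be checked in a few lines (my own sketch; the pattern values are taken from this slide):

```python
import numpy as np

# The three training patterns (Oa, Ob, Oc) from the worked example
patterns = np.array([
    [-1, -1,  1],   # pattern 1
    [ 1, -1, -1],   # pattern 2
    [-1,  1,  1],   # pattern 3
])

# One-shot method: w_ij = sum over patterns of O_i * O_j, with w_ii = 0
W = patterns.T @ patterns
np.fill_diagonal(W, 0)

print(W)
```

The resulting matrix matches the hand computation (w1,2 = w2,1 = -1, w1,3 = w3,1 = -3, w2,3 = w3,2 = 1), and its symmetry is what the convergence guarantee relies on.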

14 Learning HNs Through an Example
Moving on to a slightly more complex problem, described in Haykin's Neural Networks book. The book used N = 120 neurons and trained the network with 120-pixel images, where each pixel was represented by one neuron. The following 8 patterns were used to train the network.

15 Learning HNs Through an Example
To test the recognition power of HNs, they needed corrupted images, so they flipped the value of each pixel with probability p = 0.25. The trained HN was then run on these corrupted images, and after a certain number of iterations the outputs converged to one of the learned patterns. The next slides show the results they obtained.

16 Learning HNs Through an Example
[Figure: corrupted input images converging to the stored patterns]

17 Learning HNs Through an Example
[Figure: further recall results on corrupted images]

18 Flow Chart Summarizing the Overall Process
1. Train the HN using standard patterns
2. Update the weight vectors of the network
3. Run the trained network on a corrupted pattern
4. The network returns the recovered pattern
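The four steps above can be sketched end to end (a minimal illustration of my own, using small random patterns rather than the 120-pixel images from the book):

```python
import numpy as np

rng = np.random.default_rng(1)

# Steps 1-2: train -- one-shot (Hebbian) weights from a few random bipolar patterns
N, n_patterns = 64, 3
patterns = rng.choice([-1, 1], size=(n_patterns, N))
W = patterns.T @ patterns
np.fill_diagonal(W, 0)

# Step 3: corrupt the first stored pattern (flip each unit with probability 0.25)
corrupted = patterns[0] * np.where(rng.random(N) < 0.25, -1, 1)

# Step 4: run asynchronous updates until the state stops changing
v = corrupted.copy()
for _ in range(20):
    prev = v.copy()
    for i in rng.permutation(N):
        h = W[i] @ v
        if h != 0:
            v[i] = 1 if h > 0 else -1
    if np.array_equal(v, prev):
        break

print(np.array_equal(v, patterns[0]))  # True if the stored pattern was recovered
```

With only 3 patterns in a 64-neuron network (well under the capacity limit), the corrupted input almost always settles back onto the stored attractor.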

19 Shortcomings of HNs
- Training patterns can number at most about 14% of the number of nodes in the network. If more patterns are used, then:
  - the stored patterns become unstable;
  - spurious stable states appear (i.e., stable states which do not correspond to stored patterns).
- The network can sometimes misinterpret a corrupted pattern.
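One family of spurious states is easy to demonstrate: storing a pattern automatically makes its inverse stable too, because the energy E = -1/2 Σ w_ij v_i v_j is unchanged when every v_i is negated. A small sketch of my own:

```python
import numpy as np

# Store a single pattern with the one-shot rule
p = np.array([1, -1, 1, 1, -1])
W = np.outer(p, p)
np.fill_diagonal(W, 0)

# Both p and -p are fixed points: every neuron already agrees
# with the sign of its net input h = W @ state.
for state in (p, -p):
    h = W @ state
    print(np.all(state * h > 0))  # → True for both
```

So even a perfectly trained network has attractors that were never explicitly stored, which is one source of the misinterpretation mentioned above.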

20 Shortcomings of HNs

21 References
- J. M. Zurada, Introduction to Artificial Neural Systems
- S. Haykin, Neural Networks: A Comprehensive Foundation
- J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities", 1982
- R. Rojas, Neural Networks, Springer-Verlag, Berlin, 1996
- Wikipedia: Hopfield network

22 Thank you!

23 Questions
- What is the major difference between an HN and fully recurrent networks?
- What is content-addressable memory, and how is it different from RAM?
- What will happen if we train an HN on only one pattern?
- If we train an HN on a pattern, will it automatically be trained on its inverse?
- Why can't we increase the number of nodes in the network to overcome its shortcomings? (Ignore the increased computational complexity and time.)
