Recurrent Neural Networks or Associative Memories
Ranga Rodrigo, February 24, 2014


1 Recurrent Neural Networks or Associative Memories

2 Introduction
In a recurrent network, the signal produced at the output is sent back to the network input. Such circulation of the signal is called feedback, and neural networks that use it are called recurrent neural networks. Examples of recurrent neural networks:
– Hopfield neural network
– Hamming neural network
– Real-Time Recurrent Network (RTRN)
– Elman neural network
– Bidirectional Associative Memory (BAM)

3 Hopfield Neural Network

4 [Figure: Hopfield network structure. n neurons with weights w_kj between neuron outputs and inputs, bias weights w_k0 fed by a constant input 1, outputs y_1, ..., y_n, and a one-step delay z^{-1} in the feedback path.]

5 Hopfield Structure
It is a one-layer network with a regular structure, made of many neurons connected to one another. The output is fed back to the input after a one-step time delay. There is no feedback within the same neuron (no self-connections). During learning, the weights w_kj are modified depending on the values of the learning vectors x. In retrieval mode, the input signal stimulates the network, which, through the feedback, repeatedly receives its own output signal at its input until the answer stabilizes.
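The retrieval mode described above can be sketched in NumPy as a simple feedback loop (a minimal sketch, assuming bipolar ±1 signals and a symmetric, zero-diagonal weight matrix W; `hopfield_recall` is a hypothetical helper name, not from the slides):

```python
import numpy as np

def hopfield_recall(W, x, max_steps=100):
    """Feed the output back to the input through a one-step delay
    until the answer stabilizes. W is the (symmetric, zero-diagonal)
    Hopfield weight matrix; x is the stimulating input vector."""
    y = x.astype(float)
    for _ in range(max_steps):
        y_new = np.sign(W @ y)
        y_new[y_new == 0] = 1.0        # break ties toward +1 by convention
        if np.array_equal(y_new, y):   # output has stabilized: done
            return y_new
        y = y_new
    return y                           # return last state if no fixed point found
```

With weights trained on a pattern, recall from a noisy version of that pattern converges back to the stored vector.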

6 [Figure: Hopfield network structure, as on slide 4, with the summation nodes of the neurons shown.]

7 Learning Methods for Hopfield: Hebb
The generalized Hebbian rule for M learning vectors of the form x^{(i)} = [x_1^{(i)}, ..., x_n^{(i)}]^T, i = 1, ..., M, sets the weights as
w_{kj} = \frac{1}{n} \sum_{i=1}^{M} x_k^{(i)} x_j^{(i)}, \quad k \neq j, \qquad w_{kk} = 0.
The maximum number of patterns which the network is able to memorize using this rule is only about 13.8% of the number of neurons.
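This rule is a single matrix product in NumPy (a sketch; `hebb_weights` is a hypothetical name, and the learning vectors are assumed bipolar ±1, stored as the columns of X):

```python
import numpy as np

def hebb_weights(X):
    """Generalized Hebbian rule: W = (1/n) * sum_i x^(i) x^(i)^T,
    with the diagonal zeroed (no self-feedback in a Hopfield network).
    X is an n x M matrix whose columns are the M learning vectors."""
    n = X.shape[0]
    W = (X @ X.T) / n              # accumulates all M outer products at once
    np.fill_diagonal(W, 0.0)       # enforce w_kk = 0
    return W
```

The resulting matrix is symmetric with a zero diagonal, and each stored pattern (well below the ~0.138 n capacity) satisfies sign(W x) = x.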

8 Learning Methods for Hopfield: Pseudoinverse
For M learning vectors of the form x^{(i)} = [x_1^{(i)}, ..., x_n^{(i)}]^T, form the n x M matrix of learning vectors X = [x^{(1)}, x^{(2)}, ..., x^{(M)}]. The n x n weight matrix W is found as W = X X^+, where X^+ is the Moore-Penrose pseudoinverse of X. In MATLAB: W = X*pinv(X).
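The same computation in NumPy (a sketch; `pseudoinverse_weights` is a hypothetical name, with `np.linalg.pinv` playing the role of MATLAB's `pinv`):

```python
import numpy as np

def pseudoinverse_weights(X):
    """Pseudoinverse (projection) rule: W = X X^+.
    Because X X^+ X = X by the defining property of the pseudoinverse,
    every stored column of X is a fixed point of the linear map W."""
    return X @ np.linalg.pinv(X)
```

Unlike the Hebbian rule, this rule makes the stored patterns exact fixed points even when they are correlated, at the cost of computing a pseudoinverse.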

9 Hamming Neural Network

10 [Figure: Hamming neural network structure.]

11 Operation of the Hamming NN
In the first layer, there are p neurons, which determine the Hamming distance between the input vector and each of the p desired vectors coded in the weights of this layer. The second layer is called MAXNET. It is a layer corresponding to the Hopfield network; however, in this layer, feedback connections from each neuron to itself are added, and the weights of these self-feedbacks are equal to 1. The weights between different neurons of this layer are selected to be inhibitory, so that in the MAXNET layer all outputs are extinguished except the one which was strongest in the first layer. The neuron of this layer identified with the winner then, through the weights of the output neurons with a linear activation function, retrieves the output vector associated with the vector coded in the first layer.
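The two stages above can be sketched as follows (a minimal sketch, assuming bipolar ±1 prototypes stored as rows of the first-layer weight matrix; `hamming_net` and the inhibition constant `eps` are illustrative choices, not from the slides):

```python
import numpy as np

def hamming_net(prototypes, x, eps=None, max_steps=1000):
    """Two-layer Hamming network sketch.
    prototypes: p x n matrix of bipolar stored vectors (first-layer weights).
    The first layer scores each prototype by the number of components that
    agree with x, i.e. n minus the Hamming distance. The MAXNET layer
    (self-weight 1, mutual inhibition -eps) then extinguishes every output
    except the strongest one."""
    p, n = prototypes.shape
    a = (prototypes @ x + n) / 2.0        # first-layer scores: n - Hamming distance
    if eps is None:
        eps = 1.0 / (p + 1)               # inhibition chosen so that eps < 1/p
    for _ in range(max_steps):            # MAXNET competition
        if np.count_nonzero(a > 0) <= 1:  # only the winner remains active
            break
        # each neuron keeps its own activation (self-weight 1) and is
        # inhibited by the sum of all the other activations
        a = np.maximum(0.0, a - eps * (a.sum() - a))
    return int(np.argmax(a))              # index of the winning prototype
```

The loop assumes the first-layer scores have a unique maximum; with ties, `max_steps` bounds the iteration.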

