Hebbian Coincidence Learning


Hebbian Coincidence Learning When one neuron contributes to the firing of another neuron, the pathway between them is strengthened. That is, if the output of neuron i is an input to neuron j, then the weight on that connection is adjusted by a quantity proportional to c * (oi * oj).
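
A minimal sketch of this update for a single connection; the learning constant c, the two outputs, and the starting weight are assumed illustrative values:

# Hebbian coincidence update for a single connection i -> j: the weight grows
# in proportion to the product of the two activations (assumed values below).
c = 0.1               # learning-rate constant (assumed)
o_i, o_j = 1.0, 1.0   # outputs of neurons i and j, both firing (assumed)
w_ij = 0.5            # current weight on the pathway i -> j (assumed)

w_ij += c * (o_i * o_j)
print(w_ij)           # 0.6: coincident firing strengthened the pathway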

Unsupervised Hebbian Learning The rule is ΔW = c * f(X, W) * X. An example of unsupervised Hebbian learning is to simulate the transfer of a response from a primary (unconditioned) stimulus to a conditioned stimulus.

Example

Example (cont'd) In this example, the first three inputs represent the unconditioned stimuli and the last three inputs represent the new stimuli.
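
A small sketch of this conditioning example, under assumed values: a learning constant of 0.2, a six-component stimulus pattern, an initial weight vector that responds only to the first three inputs, and sign(W·X) as the output function f.

import numpy as np

c = 0.2
X = np.array([1., -1., 1., -1., 1., -1.])   # unconditioned + new stimulus presented together (assumed)
W = np.array([1., -1., 1.,  0.,  0.,  0.])  # initially sensitive only to the unconditioned stimulus (assumed)

for _ in range(5):
    out = np.sign(W @ X)       # f(X, W): the network's response, +1 or -1
    W = W + c * out * X        # delta W = c * f(X, W) * X

new_only = np.array([0., 0., 0., -1., 1., -1.])   # the new stimulus by itself
print(W)                       # weights on the last three inputs have grown
print(np.sign(W @ new_only))   # +1: the response has transferred to the new stimulus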

Supervised Hebbian Learning In supervised Hebbian learning, instead of using the actual output of a neuron, we use the desired output as supplied by the instructor. The rule becomes ΔW = c * D * X.

Example Recognizing associations between sets of patterns: {<X1, Y1>, <X2, Y2>, ... <Xt, Yt>}. The input to the network is a pattern Xi and the output should be the associated pattern Yi. The network consists of an input layer with n neurons (where n is the number of components in each input pattern) and an output layer of size m (where m is the number of components in each output pattern). The network is fully connected.

Example (cont'd) In this example, the learning rule becomes ΔW = c * Y * X, where Y * X is the outer vector product. We cycle through the pairs in the training set, adjusting the weights each time. This kind of network (one that maps input vectors to output vectors using this rule) is called a linear associator.
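
A minimal linear-associator sketch using this outer-product rule; the two training pairs (chosen orthogonal) and c = 1 are assumed illustrative values:

import numpy as np

c = 1.0
pairs = [
    (np.array([1, -1,  1, -1]), np.array([ 1,  1, -1])),   # <X1, Y1> (assumed)
    (np.array([1,  1, -1, -1]), np.array([-1,  1,  1])),   # <X2, Y2> (assumed)
]

n = len(pairs[0][0])          # input-layer size (components of X)
m = len(pairs[0][1])          # output-layer size (components of Y)
W = np.zeros((m, n))

for X, Y in pairs:            # cycle through the training pairs
    W += c * np.outer(Y, X)   # delta W = c * Y * X (outer product)

print(np.sign(W @ pairs[0][0]))   # presenting X1 recalls Y1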

Associative Memory Used for memory retrieval: returning one pattern given another. There are three types of associative memory. Heteroassociative: a mapping from X to Y such that if an arbitrary vector is closer to Xi than to any other Xj, the vector Yi associated with Xi is returned. Autoassociative: the same as above, except that Xi = Yi for all exemplar pairs; useful for retrieving a full pattern from a degraded one.

Associative Memory (cont'd) Interpolative: If X differs from the exemplar Xi by an amount Δ, then the retrieved vector Y differs from Yi by some function of Δ. A linear associative network (one input layer, one output layer, fully connected) can be used to implement interpolative memory.

Representation of Vectors Hamming vectors are vectors whose components are just the numbers +1 and -1. Assume all vectors are of size n. The Hamming distance between two vectors is just the number of components in which they differ. An orthonormal set of vectors is a set of vectors that are all of unit length and in which each pair of distinct vectors is orthogonal (their dot product is 0).
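
A short illustration of both definitions on two assumed Hamming vectors:

import numpy as np

a = np.array([1, -1,  1, -1])   # assumed Hamming vectors, n = 4
b = np.array([1,  1, -1, -1])

print(int(np.sum(a != b)))      # Hamming distance: 2 components differ

# Orthonormality check: unit length and zero dot (inner) product for each pair.
a_unit = a / np.linalg.norm(a)
b_unit = b / np.linalg.norm(b)
print(np.isclose(a_unit @ a_unit, 1.0))   # unit length
print(np.isclose(a_unit @ b_unit, 0.0))   # orthogonal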

Properties of a LAN If the input set of vectors is orthonormal, then a linear associative network implements interpolative memory. The output is the weighted sum of the input vectors (we assume a trained network). If the input pattern matches one of the exemplars, Xi, then the output will be Yi. If the input pattern is Xi + Δi, then the output will be Yi + Φ(Δi) where Φ is the mapping function of the network.
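
A quick check of these properties, using assumed exemplar pairs whose X vectors are scaled to unit length and are mutually orthogonal (an orthonormal set):

import numpy as np

X1 = np.array([1., -1.,  1., -1.]) / 2.0   # assumed, unit length
X2 = np.array([1.,  1., -1., -1.]) / 2.0   # assumed, orthogonal to X1
Y1 = np.array([1., 0.])
Y2 = np.array([0., 1.])

W = np.outer(Y1, X1) + np.outer(Y2, X2)    # trained linear associative network

print(W @ X1)                              # exactly Y1: exemplar in, exemplar out
delta = np.array([0.1, 0., 0., 0.])
print(W @ (X1 + delta))                    # Y1 + Phi(delta): a shifted output for a shifted input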

Problems with LANs If the exemplars do not form an orthonormal set, then there may be interference between the stored patterns. This is known as crosstalk. The number of patterns that may be stored is limited by the dimensionality of the vector space. The mapping from real-life situations to orthonormal sets may not be clear.
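
The same construction as in the previous sketch, but with assumed X vectors that are not orthogonal, shows the crosstalk:

import numpy as np

X1 = np.array([1., 1., 1., -1.]) / 2.0
X2 = np.array([1., 1., 1.,  1.]) / 2.0     # X1 . X2 = 0.5, not orthogonal
Y1 = np.array([1., 0.])
Y2 = np.array([0., 1.])

W = np.outer(Y1, X1) + np.outer(Y2, X2)
print(W @ X1)    # [1.0, 0.5] instead of [1.0, 0.0]: Y2 bleeds into the recall (crosstalk)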

Attractor Network Instead of returning an interpolation, we may wish to return the vector associated with the closest exemplar. We can create such a network (an attractor network) by using feedback instead of a strictly feed-forward architecture.

Feedback Network Feedback networks have the following properties: there are feedback connections between the nodes; there is a time delay in the signals, i.e., signal propagation is not instantaneous; the output of the network depends on the network state upon convergence of the signals; and their usefulness depends on that convergence.

Feedback Network (cont'd) A feedback network is initialized with an input pattern. The network then processes the input, passing signals between nodes, going through various states until it (hopefully) reaches equilibrium. The equilibrium state of the network supplies the output. Feedback networks can be used for heteroassociative and autoassociative memories.

Attractors An attractor is a state toward which other states in the region evolve in time. The region associated with an attractor is called a basin.

Bi-Directional Associative Memory A bi-directional associative memory (BAM) network is one with two fully connected layers, in which the links are all bi-directional. There can also be a feedback link connecting a node to itself. A BAM network may be trained, or its weights may be worked out in advance. It is used to map a set of vectors Xi (input layer) to a set of vectors Yi (output layer).

BAM for autoassociative memory If a BAM network is used to implement an autoassociative memory, then the input layer is the same as the output layer; i.e., there is just one layer, with feedback links connecting nodes to themselves in addition to the links between nodes. This network can be used to retrieve a pattern given a noisy or incomplete pattern.

BAM Processing Apply an initial vector pair (X,Y) to the processing elements. X is the pattern we wish to retrieve and Y is random. Propagate the information from the X layer to the Y layer and update the values at the Y layer. Send the information back to the X layer, updating those nodes. Continue until equilibrium is reached.
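
A sketch of this procedure on assumed bipolar pattern pairs. The weights are worked out in advance as a sum of outer products, and Y starts from a fixed all-ones vector for reproducibility, where the slide allows a random starting Y:

import numpy as np

X_pats = [np.array([1, -1,  1, -1]), np.array([1,  1, -1, -1])]   # assumed X patterns
Y_pats = [np.array([1,  1, -1]),     np.array([-1, 1,  1])]       # assumed associated Y patterns
W = sum(np.outer(Y, X) for X, Y in zip(X_pats, Y_pats))

def threshold(net, prev):
    # +1 / -1 on either side of 0; keep the previous value when net is exactly 0
    return np.where(net > 0, 1, np.where(net < 0, -1, prev))

X = np.array([1, -1, 1, 1])        # X1 with one flipped component: the pattern to retrieve
Y = np.ones(3, dtype=int)          # arbitrary starting Y
for _ in range(10):                # in practice, stop when X and Y stop changing
    Y = threshold(W @ X, Y)        # propagate X layer -> Y layer, update Y
    X = threshold(W.T @ Y, X)      # send the information back to the X layer, update X
print(X, Y)                        # equilibrium: the stored pair <X1, Y1>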

Hopfield Networks Two goals: guarantee that the network converges to a stable state, no matter what input is given, and that the stable state is the closest one to the input state according to some distance metric.

Hopfield Network (cont'd) A Hopfield network is identical in structure to an autoassociative BAM network: one layer of fully connected neurons. The activation function for node i is
xi(new) = +1 if neti > Ti,
xi(new) = xi(old) if neti = Ti,
xi(new) = -1 if neti < Ti,
where neti = Σj wij * xj.

More on Hopfield Nets There are restrictions on the weights: wii = 0 for all i, and wij = wji for i ≠ j. Usually the weights are calculated in advance, rather than having the net trained. The behavior of the net is characterized by an energy function, H(X) = - Σi Σj wij xi xj + 2 Σi Ti xi, which decreases with every network transition.
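
A sketch of a small Hopfield net with assumed stored patterns, implementing the update rule from the previous slide and the energy function above; the patterns, their size, and the zero thresholds are all assumed illustrative choices:

import numpy as np

patterns = [np.array([1, -1,  1, -1, 1, -1]),    # assumed stored patterns
            np.array([1,  1, -1, -1, 1,  1])]
n = 6
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)               # wii = 0; wij = wji holds by construction
T = np.zeros(n)                      # thresholds Ti = 0 (assumed)

def energy(x):
    # H(X) = - sum_i sum_j wij * xi * xj + 2 * sum_i Ti * xi
    return -x @ W @ x + 2 * T @ x

x = np.array([1, -1, 1, -1, 1, 1])   # first stored pattern with its last component flipped
print(energy(x))                     # -8.0
for i in range(n):                   # one pass of asynchronous updates
    net = W[i] @ x
    if net > T[i]:
        x[i] = 1
    elif net < T[i]:
        x[i] = -1                    # when net == Ti, x[i] keeps its old value
print(x, energy(x))                  # converged to the first pattern; energy fell to -24.0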

Hopfield Nets Thus, the network must converge, and it converges to a local energy minimum, but there is no guarantee that it converges to a state near the input state. Hopfield nets can be used for optimization problems such as the TSP (by mapping the cost function of the optimization problem onto the energy function of the Hopfield net).