Presentation transcript:

CHAPTER 13 Associative Learning
Ming-Feng Yeh

Objectives Neural networks trained in a supervised manner require a target signal to define correct network behavior. Unsupervised learning rules, by contrast, give networks the ability to learn associations between patterns that occur together frequently. Associative learning allows networks to perform useful tasks such as pattern recognition (instar) and pattern recall (outstar).

What is an Association? An association is any link between a system's input and output such that when a pattern A is presented to the system, it responds with pattern B. When two patterns are linked by an association, the input pattern is referred to as the stimulus and the output pattern is referred to as the response.

Classic Experiments Ivan Pavlov: he trained a dog to salivate at the sound of a bell by ringing the bell whenever food was presented. When the bell is repeatedly paired with the food, the dog is conditioned to salivate at the sound of the bell, even when no food is present. B. F. Skinner: he trained a rat to press a bar in order to obtain a food pellet.

Associative Learning Anderson and Kohonen independently developed the linear associator in the late 1960s and early 1970s. Grossberg introduced nonlinear continuous-time associative networks during the same time period.

Simple Associative Network Single-input hard limit associator: a = hardlim(wp + b). Restrict the value of p to be either 0 or 1, indicating whether a stimulus is absent or present. The output a indicates the presence or absence of the network's response.
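A minimal sketch of the single-input hard limit associator in Python (the weight of 1 and bias of -0.5 are illustrative values, chosen so that a present stimulus produces a response and an absent one does not):

```python
def hardlim(n):
    """Hard limit transfer function: 1 if n >= 0, else 0."""
    return 1 if n >= 0 else 0

def single_input_associator(p, w=1.0, b=-0.5):
    """Single-input hard limit associator: a = hardlim(w*p + b).
    p is 0 (stimulus absent) or 1 (stimulus present)."""
    return hardlim(w * p + b)

print(single_input_associator(0))  # 0 -> stimulus absent, no response
print(single_input_associator(1))  # 1 -> stimulus present, response
```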

Two Types of Inputs Unconditioned stimulus: analogous to the food presented to the dog in Pavlov's experiment. Conditioned stimulus: analogous to the bell in Pavlov's experiment. Initially the dog salivates only when food is presented; this is an innate response that does not have to be learned.

Banana Associator An unconditioned stimulus (banana shape) and a conditioned stimulus (banana smell). Initially the network will respond to the shape of a banana, but not to its smell.

Associative Learning Both animals and humans tend to associate things that occur simultaneously. If a banana smell stimulus occurs simultaneously with a banana concept response (activated by some other stimulus, such as the sight of a banana shape), the network should strengthen the connection between them so that later it can activate its banana concept in response to the banana smell alone.

Unsupervised Hebb Rule Increase the weight w_ij between a neuron's input p_j and output a_i in proportion to their product: w_ij(q) = w_ij(q-1) + α a_i(q) p_j(q). The Hebb rule uses only signals available within the layer containing the weight being updated, so it is a local learning rule. Vector form: W(q) = W(q-1) + α a(q) p(q)^T. Learning is performed in response to the training sequence p(1), p(2), ..., p(Q).
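A minimal sketch of the vector-form update in Python (function and argument names are illustrative, not from the slides):

```python
import numpy as np

def hebb_update(W, p, a, alpha=1.0):
    """Unsupervised Hebb rule, vector form: W <- W + alpha * a(q) * p(q)^T.
    W is S x R, p is the input vector (length R), a is the output vector (length S)."""
    return W + alpha * np.outer(a, p)
```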

Ex: Banana Associator Initial weights: w0 = 1 (unconditioned shape weight, fixed) and w(0) = 0 (conditioned smell weight, to be learned), with a = hardlim(w0 p0 + w p - 0.5). Training sequence: the banana smell is present on every presentation (p(q) = 1), while the shape detector works only intermittently: {p0(1)=0, p(1)=1}, {p0(2)=1, p(2)=1}, {p0(3)=0, p(3)=1}. Learning rule (unsupervised Hebb, α = 1): w(q) = w(q-1) + a(q) p(q). (Network diagram: sight and smell inputs feeding the fruit network; output = banana?)

Ex: Banana Associator First iteration (sight fails): a(1) = hardlim(1*0 + 0*1 - 0.5) = 0 (no response), so w(1) = w(0) + a(1) p(1) = 0. Second iteration (sight works): a(2) = hardlim(1*1 + 0*1 - 0.5) = 1 (banana), so w(2) = w(1) + a(2) p(2) = 1.

Ex: Banana Associator Third iteration (sight fails): a(3) = hardlim(1*0 + 1*1 - 0.5) = 1 (banana), so w(3) = w(2) + a(3) p(3) = 2. From now on, the network is capable of responding to bananas that are detected by either sight or smell. Even if both detection systems suffer intermittent faults, the network will be correct most of the time.
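The three iterations can be reproduced with a short script using the setup above (the numeric values are the standard textbook choices and should be read as assumptions):

```python
def hardlim(n):
    return 1 if n >= 0 else 0

w0, b = 1.0, -0.5        # unconditioned (shape) weight and bias, assumed values
w, alpha = 0.0, 1.0      # conditioned (smell) weight, learned; Hebb learning rate

# training sequence: (shape p0, smell p); sight fails on iterations 1 and 3
sequence = [(0, 1), (1, 1), (0, 1)]

for q, (p0, p) in enumerate(sequence, start=1):
    a = hardlim(w0 * p0 + w * p + b)   # network response
    w = w + alpha * a * p              # unsupervised Hebb update
    print(f"q={q}: response={'banana' if a else 'none'}, w={w}")
```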

Problems of the Hebb Rule Weights will become arbitrarily large, but real synapses cannot grow without bound. There is also no mechanism for weights to decrease: if the inputs or outputs of a Hebb network experience any noise, every weight will grow (however slowly) until the network responds to any stimulus.

Hebb Rule with Decay W(q) = W(q-1) + α a(q) p(q)^T - γ W(q-1), where γ, the decay rate, is a positive constant less than one. This keeps the weight matrix from growing without bound. The maximum weight value can be found by setting both a_i and p_j to 1 at steady state, i.e., w_ij^max = α / γ. The maximum weight value is determined by the decay rate γ.
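A sketch of the decay update (γ = 0.1 is an illustrative choice):

```python
import numpy as np

def hebb_decay_update(W, p, a, alpha=1.0, gamma=0.1):
    """Hebb rule with decay: W <- W + alpha * a * p^T - gamma * W.
    Setting a_i = p_j = 1 at steady state gives the bound w_max = alpha / gamma."""
    return W + alpha * np.outer(a, p) - gamma * W
```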

Ex: Banana Associator Repeating the example with the Hebb rule with decay. First iteration (sight fails): no response. Second iteration (sight works): banana. Third iteration (sight fails): banana.

Ex: Banana Associator (Figure: evolution of the conditioned weight under the Hebb rule, which grows without bound, vs. the Hebb rule with decay, which saturates at the maximum weight value.)

Problem of the Hebb Rule with Decay Associations will decay away if stimuli are not occasionally presented. If a_i = 0, then w_ij(q) = (1 - γ) w_ij(q-1). If γ = 0.1, this reduces to w_ij(q) = 0.9 w_ij(q-1): the weight decays by 10% at each iteration for which a_i = 0 (no stimulus).

Instar (Recognition Network) A neuron that has a vector input and a scalar output is referred to as an instar. This neuron is capable of pattern recognition. The instar is similar to the perceptron, the ADALINE, and the linear associator.

Instar Operation Input-output expression: a = hardlim(w^T p + b), where w is the instar's weight vector. The instar is active when w^T p >= -b, or ||w|| ||p|| cos θ >= -b, where θ is the angle between the two vectors. For inputs of fixed length ||p||, the inner product is maximized when the angle θ is 0, i.e., when p points in the same direction as w. Assume that all input vectors have the same length (norm).
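A small check of the activation condition (the weight vector, inputs, and bias below are illustrative):

```python
import numpy as np

def instar_active(w, p, b):
    """Instar fires when w^T p >= -b, i.e. ||w|| ||p|| cos(theta) >= -b."""
    return float(np.dot(w, p)) >= -b

w = np.array([1.0, -1.0, -1.0])
print(instar_active(w, np.array([1.0, -1.0, -1.0]), b=-2.0))  # aligned input    -> True
print(instar_active(w, np.array([1.0,  1.0,  1.0]), b=-2.0))  # misaligned input -> False
```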

Vector Recognition If b = -||w|| ||p||, then the instar will only be active when θ = 0. If b > -||w|| ||p||, then the instar will be active for a range of angles. The larger the value of b, the more patterns there will be that can activate the instar, thus making it less discriminatory.

Instar Rule Hebb rule: w_ij(q) = w_ij(q-1) + α a_i(q) p_j(q). Hebb rule with decay: w_ij(q) = w_ij(q-1) + α a_i(q) p_j(q) - γ w_ij(q-1). Instar rule: to limit the forgetting problem, a decay term is added that is proportional to the output a_i(q): w_ij(q) = w_ij(q-1) + α a_i(q) p_j(q) - γ a_i(q) w_ij(q-1). If γ = α, this can be written in vector form as w_i(q) = w_i(q-1) + α a_i(q) ( p(q) - w_i(q-1) ), where w_i is neuron i's weight vector (the ith row of W).
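A sketch of the instar update (names are illustrative):

```python
import numpy as np

def instar_update(w_i, p, a_i, alpha=1.0):
    """Instar rule: w_i <- w_i + alpha * a_i * (p - w_i).
    The decay is gated by the output a_i, so the weight vector moves toward
    the input p only when the neuron is active; inactive neurons keep their weights."""
    return w_i + alpha * a_i * (p - w_i)
```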

Graphical Representation For the case where the instar is active (a_i = 1): w_i(q) = w_i(q-1) + α ( p(q) - w_i(q-1) ) = (1 - α) w_i(q-1) + α p(q), so the weight vector moves along a line toward the input vector. For the case where the instar is inactive (a_i = 0): w_i(q) = w_i(q-1), and the weight vector remains unchanged.

Ex: Orange Recognizer The elements of p will be constrained to ±1 values. (Network diagram: sight input and measurement inputs feeding the fruit network; output = orange?)

Initialization & Training Initial weights: the instar (measurement) weights start at zero, so at first only the unconditioned sight input can activate the network. The instar rule (α = 1): w(q) = w(q-1) + a(q) ( p(q) - w(q-1) ). Training sequence: the orange's measurement vector is presented on every iteration, while the sight detector operates only intermittently. First iteration (sight fails): the instar is not activated, so the weights remain at zero.

Second Training Iteration Second iteration (sight works): the instar is activated, and with α = 1 its weight vector moves all the way to the measurement vector. The network can now recognize the orange by its measurements.

Third Training Iteration Third iteration (sight fails): the measurements alone now activate the instar. The orange will be detected if either set of sensors works.
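The orange example can be reproduced in a few lines. The specific numbers below (unconditioned sight weight 3, bias -2, measurement vector [1, -1, -1]) follow the standard textbook setup and should be read as assumptions:

```python
import numpy as np

def hardlim(n):
    return 1 if n >= 0 else 0

w0, b = 3.0, -2.0                     # unconditioned sight weight and bias (assumed)
p = np.array([1.0, -1.0, -1.0])       # orange measurements: shape, texture, weight (assumed)
w = np.zeros(3)                       # instar weights, initially zero

sights = [0, 1, 0]                    # sight fails, works, fails; measurements always present
for q, p0 in enumerate(sights, start=1):
    a = hardlim(w0 * p0 + w @ p + b)  # instar response
    w = w + a * (p - w)               # instar rule with alpha = 1
    print(f"q={q}: response={'orange' if a else 'none'}, w={w}")
```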

Kohonen Rule Kohonen rule: w_i(q) = w_i(q-1) + α ( p(q) - w_i(q-1) ), for i in X(q). Learning occurs when the neuron's index i is a member of the set X(q). The Kohonen rule can be made equivalent to the instar rule by defining X(q) as the set of all i such that a_i(q) = 1. The Kohonen rule allows the weights of a neuron to learn an input vector and is therefore suitable for recognition applications.
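A sketch of the Kohonen update over an arbitrary active set X(q) (names and learning rate are illustrative):

```python
import numpy as np

def kohonen_update(W, p, active_set, alpha=0.5):
    """Kohonen rule: for each neuron index i in X(q) (here `active_set`),
    move its weight row toward the input: W[i] <- W[i] + alpha * (p - W[i])."""
    W = W.copy()
    for i in active_set:
        W[i] += alpha * (p - W[i])
    return W
```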

Outstar (Recall Network) The outstar network has a scalar input and a vector output. It can perform pattern recall by associating a stimulus with a vector response.

Outstar Operation Input-output expression: a = satlins(Wp). If we want the outstar network to associate a stimulus (an input of 1) with a particular output vector a*, set W = a*. Then, if p = 1, a = satlins(Wp) = satlins(a* p) = a*, and the pattern is correctly recalled. The column of the weight matrix represents the pattern to be recalled.
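A tiny recall demo (the stored pattern is illustrative):

```python
import numpy as np

def satlins(n):
    """Symmetric saturating linear transfer function: clips values to [-1, 1]."""
    return np.clip(n, -1.0, 1.0)

a_star = np.array([1.0, -1.0, 1.0])   # pattern to be recalled
W = a_star.reshape(-1, 1)             # store the pattern as the weight column
p = np.array([1.0])                   # stimulus present
print(satlins(W @ p))                 # recalls a_star: [ 1. -1.  1.]
```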

Outstar Rule In the instar rule, the weight decay term of the Hebb rule is proportional to the output of the network, a_i. In the outstar rule, the decay term is proportional to the input of the network, p_j: w_ij(q) = w_ij(q-1) + α a_i(q) p_j(q) - γ p_j(q) w_ij(q-1). If γ = α, the update for column j can be written as w_j(q) = w_j(q-1) + α p_j(q) ( a(q) - w_j(q-1) ). Learning occurs whenever p_j is nonzero (instead of a_i). When learning occurs, column w_j moves toward the output vector (complementary to the instar rule).
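A sketch of the outstar update (names are illustrative):

```python
import numpy as np

def outstar_update(w_j, a, p_j, alpha=1.0):
    """Outstar rule for column j: w_j <- w_j + alpha * p_j * (a - w_j).
    Learning occurs whenever the input p_j is nonzero; the column then moves
    toward the current output vector a."""
    return w_j + alpha * p_j * (a - w_j)
```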

Ex: Pineapple Recaller Any set of measurements p0 (with ±1 values) will be copied straight through to the output a. (Network diagram: measurement inputs and a sight input feeding the fruit network; output = recalled measurements?)

Initialization The outstar rule (α = 1): w(q) = w(q-1) + p(q) ( a(q) - w(q-1) ), where w is the column of conditioned (sight) weights, initially zero. Training sequence: the pineapple is seen on every iteration (p(q) = 1), while the measurement system operates only intermittently. Pineapple measurements: a fixed ±1-valued vector (shape, texture, weight).

First Training Iteration First iteration (measurements unavailable): with no stored association, the sight stimulus alone produces no recall, and the weights remain at zero.

Second Training Iteration Second iteration (measurements available): the measurements pass through to the output, and because the sight stimulus is also present, the outstar rule moves the weight column to the measurement vector. The network forms an association between the sight and the measurements.

Third Training Iteration Third iteration: even if the measurement system fails, the network is now able to recall the measurements of the pineapple when it sees it.