1 CHAPTER 13 Associative Learning

2 Objectives
Neural networks trained in a supervised manner require a target signal to define correct network behavior. Unsupervised learning rules instead give networks the ability to learn associations between patterns that occur together frequently. Associative learning lets networks perform useful tasks such as pattern recognition (instar) and pattern recall (outstar).

3 What is an Association?
An association is any link between a system's input and output such that when pattern A is presented to the system, it responds with pattern B. When two patterns are linked by an association, the input pattern is referred to as the stimulus and the output pattern is referred to as the response.

4 Classic Experiments
- Ivan Pavlov trained a dog to salivate at the sound of a bell by ringing the bell whenever food was presented. Once the bell had repeatedly been paired with the food, the dog was conditioned to salivate at the sound of the bell alone, even when no food was present.
- B. F. Skinner trained a rat to press a bar in order to obtain a food pellet.

5 Associative Learning
Anderson and Kohonen independently developed the linear associator in the late 1960s and early 1970s. Grossberg introduced nonlinear continuous-time associative networks during the same period.

6 Simple Associative Network
The single-input hard limit associator computes a = hardlim(wp + b). Restrict the value of p to be either 0 or 1, indicating whether a stimulus is absent or present. The output a indicates the presence or absence of the network's response.
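A minimal sketch in Python (the weight w = 1 and bias b = -0.5 are illustrative choices that make the neuron fire exactly when the stimulus is present):

```python
def hardlim(n):
    """Hard limit transfer function: 1 if n >= 0, else 0."""
    return 1 if n >= 0 else 0

def associator(p, w=1.0, b=-0.5):
    """Single-input hard limit associator: a = hardlim(w*p + b)."""
    return hardlim(w * p + b)

print(associator(0))  # 0: stimulus absent, no response
print(associator(1))  # 1: stimulus present, network responds
```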

7 Two Types of Inputs
- Unconditioned stimulus: analogous to the food presented to the dog in Pavlov's experiment.
- Conditioned stimulus: analogous to the bell in Pavlov's experiment.
Initially the dog salivates only when food is presented; this is an innate response that does not have to be learned.

8 Banana Associator
The network has an unconditioned stimulus (banana shape) and a conditioned stimulus (banana smell). Initially it responds to the shape of a banana, but not to the smell; the goal is for it to learn to respond to the smell as well.

9 Associative Learning
Both animals and humans tend to associate things that occur simultaneously. If a banana smell stimulus occurs simultaneously with a banana concept response (activated by some other stimulus, such as the sight of a banana shape), the network should strengthen the connection between them so that later it can activate its banana concept in response to the banana smell alone.

10 Unsupervised Hebb Rule
Increase the weight w_ij between a neuron's input p_j and output a_i in proportion to their product:
w_ij(q) = w_ij(q-1) + α a_i(q) p_j(q),
where α is the learning rate. The Hebb rule uses only signals available within the layer containing the weight being updated, so it is a local learning rule. Vector form:
W(q) = W(q-1) + α a(q) p(q)^T.
Learning is performed in response to the training sequence p(1), p(2), ..., p(Q).
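As a sketch, one Hebb step in Python with NumPy (the learning rate α = 1 matches the banana example that follows):

```python
import numpy as np

def hebb_update(W, p, a, alpha=1.0):
    """One unsupervised Hebb step: W(q) = W(q-1) + alpha * a(q) * p(q)^T."""
    return W + alpha * np.outer(a, p)

# Each presentation strengthens w_ij in proportion to the product a_i * p_j.
W = np.zeros((1, 2))
W = hebb_update(W, p=np.array([1.0, 0.0]), a=np.array([1.0]))
print(W)  # [[1. 0.]]
```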

11 Ex: Banana Associator
Initial weights: w0 = 1 (shape, unconditioned, fixed) and w(0) = 0 (smell, conditioned).
Training sequence: the smell sensor reports p(q) = 1 every iteration, while the shape sensor works only intermittently (p0 = 0, 1, 0, ...).
Learning rule: the unsupervised Hebb rule with α = 1, w(q) = w(q-1) + a(q) p(q).
[Network diagram: sight (shape) and smell inputs feed the fruit network, which answers "banana?"]

12 Ex: Banana Associator
First iteration (sight fails): a(1) = hardlim(1·0 + 0·1 - 0.5) = 0 (no response), so w(1) = w(0) + a(1) p(1) = 0.
Second iteration (sight works): a(2) = hardlim(1·1 + 0·1 - 0.5) = 1 (banana), so w(2) = w(1) + a(2) p(2) = 1.

13 Ex: Banana Associator
Third iteration (sight fails): a(3) = hardlim(1·0 + 1·1 - 0.5) = 1 (banana), so w(3) = w(2) + a(3) p(3) = 2.
From now on, the network is capable of responding to bananas detected by either sight or smell, as the script below confirms. Even if both detection systems suffer intermittent faults, the network will be correct most of the time.
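The three iterations above can be checked with a short script (a sketch assuming the setup reconstructed earlier: fixed w0 = 1, bias -0.5, Hebb rule with α = 1):

```python
def hardlim(n):
    return 1 if n >= 0 else 0

w0, w = 1.0, 0.0                     # unconditioned (fixed) and conditioned weights
sequence = [(0, 1), (1, 1), (0, 1)]  # (p0, p): shape sensor fails, works, fails; smell always 1
for q, (p0, p) in enumerate(sequence, start=1):
    a = hardlim(w0 * p0 + w * p - 0.5)  # network response
    w = w + a * p                       # unsupervised Hebb update on the smell weight
    print(f"iteration {q}: a = {a}, w = {w}")
# Prints a = 0, 1, 1: after iteration 2 the smell alone triggers the banana response.
```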

14 Problems of the Hebb Rule
Weights will become arbitrarily large, but real synapses cannot grow without bound. There is also no mechanism for weights to decrease: if the inputs or outputs of a Hebb network experience any noise, every weight will grow (however slowly) until the network responds to any stimulus.

15 Hebb Rule with Decay
W(q) = W(q-1) + α a(q) p(q)^T - γ W(q-1),
where γ, the decay rate, is a positive constant less than one. This keeps the weight matrix from growing without bound. The maximum weight value can be found by setting both a_i and p_j to 1 and solving for the steady state:
w_max = α / γ.
The maximum weight value is therefore determined by the decay rate γ.
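A sketch of the rule and its bound (with the illustrative values α = 1, γ = 0.1, so w_max = 10):

```python
import numpy as np

def hebb_decay_update(W, p, a, alpha=1.0, gamma=0.1):
    """W(q) = W(q-1) + alpha * a p^T - gamma * W(q-1)."""
    return W + alpha * np.outer(a, p) - gamma * W

W = np.zeros((1, 1))
for _ in range(100):                  # constant stimulus: a_i = p_j = 1
    W = hebb_decay_update(W, np.array([1.0]), np.array([1.0]))
print(W)                              # approaches w_max = alpha/gamma = 10
```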

16 Ex: Banana Associator
First iteration (sight fails): a(1) = 0 (no response), so w(1) = 0.
Second iteration (sight works): a(2) = 1 (banana), so w(2) = 0 + 1 - 0.1·0 = 1.
Third iteration (sight fails): a(3) = hardlim(1·0 + 1·1 - 0.5) = 1 (banana), so w(3) = 1 + 1 - 0.1·1 = 1.9.

17 Ex: Banana Associator
[Figure: weight growth over time under the plain Hebb rule compared with the Hebb rule with decay.]

18 Problem of the Hebb Rule with Decay
Associations will decay away if stimuli are not occasionally presented. If a_i = 0, then
w_ij(q) = (1 - γ) w_ij(q-1).
If γ = 0.1, this reduces to w_ij(q) = 0.9 w_ij(q-1), so the weight decays by 10% at each iteration for which a_i = 0 (no stimulus).
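A quick numeric check of the forgetting problem:

```python
# With gamma = 0.1 and no stimulus (a_i = 0), the update reduces to
# w(q) = 0.9 * w(q-1): the association fades by 10% per iteration.
w = 10.0
for q in range(1, 4):
    w = 0.9 * w
    print(q, w)  # 9.0, 8.1, 7.29 (up to floating-point rounding)
```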

19 Instar (Recognition Network)
A neuron that has a vector input and a scalar output is referred to as an instar. This neuron is capable of pattern recognition. The instar is similar in structure to the perceptron, the ADALINE, and the linear associator.

20 Instar Operation
Input-output expression: a = hardlim(w^T p + b).
The instar is active when w^T p >= -b, or equivalently when ||w|| ||p|| cos θ >= -b, where θ is the angle between the weight vector and the input vector. For inputs of a given length, the inner product is maximized when the angle θ is 0, i.e., when the input points in the same direction as the weight vector. Assume that all input vectors have the same length (norm).

21 Vector Recognition
If b = -||w|| ||p||, then the instar will be active only when θ = 0, i.e., only for an input pointing exactly in the direction of w. If b > -||w|| ||p||, then the instar will be active for a range of angles around w. The larger the value of b, the more patterns there will be that can activate the instar, making it less discriminatory.
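A sketch of this activation test in Python (the prototype vector and bias values below are illustrative):

```python
import numpy as np

def instar_active(w, p, b):
    """The instar fires when w.p + b >= 0, i.e. when p is close enough in angle to w."""
    return np.dot(w, p) + b >= 0

w = np.array([1.0, -1.0, -1.0])                               # stored prototype
print(instar_active(w, np.array([1.0, -1.0, -1.0]), b=-3.0))  # True: theta = 0
print(instar_active(w, np.array([1.0, 1.0, -1.0]), b=-3.0))   # False: off-angle input
print(instar_active(w, np.array([1.0, 1.0, -1.0]), b=-1.0))   # True: larger b accepts a wider range
```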

22 Instar Rule
Hebb rule: w_ij(q) = w_ij(q-1) + α a_i(q) p_j(q).
Hebb rule with decay: w_ij(q) = w_ij(q-1) + α a_i(q) p_j(q) - γ w_ij(q-1).
Instar rule: to fix the forgetting problem, a decay term is added that is proportional to the output a_i(q):
w_ij(q) = w_ij(q-1) + α a_i(q) p_j(q) - γ a_i(q) w_ij(q-1).
Setting γ = α gives, in vector form, w(q) = w(q-1) + α a_i(q) (p(q) - w(q-1)).
If a_i(q) = 1, then w(q) = (1 - α) w(q-1) + α p(q): the weight vector moves toward the input vector.
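A sketch of the instar update:

```python
import numpy as np

def instar_update(w, p, a, alpha=0.5):
    """w(q) = w(q-1) + alpha * a * (p - w(q-1)): active neurons move toward the input."""
    return w + alpha * a * (p - w)

w = np.zeros(3)
p = np.array([1.0, -1.0, 1.0])
for _ in range(5):
    w = instar_update(w, p, a=1.0)  # when a = 0, the weights would not change
print(w)                            # converges toward p
```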

23 Graphical Representation
For the case where the instar is active (a_i = 1):
w(q) = (1 - α) w(q-1) + α p(q),
so the weight vector moves along a line toward the input vector.
For the case where the instar is inactive (a_i = 0):
w(q) = w(q-1),
so the weight vector does not change.

24 Ex: Orange Recognizer
The elements of p will be constrained to ±1 values.
[Network diagram: sight and measurement inputs feed the fruit network, which answers "orange?"]

25 Initialization & Training
Initial weights: w0 = 3 (sight, unconditioned, fixed) and w(0) = [0 0 0]^T (measurements, conditioned), with a = hardlim(w0 p0 + w^T p - 2).
The instar rule (α = 1): w(q) = w(q-1) + a(q) (p(q) - w(q-1)).
Training sequence: the measurement sensors report p(q) = [1 -1 -1]^T every iteration, while the sight sensor works only intermittently (p0 = 0, 1, 0, ...).
First iteration (sight fails): a(1) = hardlim(3·0 + [0 0 0][1 -1 -1]^T - 2) = 0 (no response), so w(1) = w(0) = [0 0 0]^T.

26 Second Training Iteration
Second iteration (sight works): a(2) = hardlim(3·1 + [0 0 0][1 -1 -1]^T - 2) = 1 (orange), so w(2) = w(1) + a(2)(p(2) - w(1)) = [1 -1 -1]^T.
The network can now recognize the orange by its measurements alone.

27 Third Training Iteration
Third iteration (sight fails): a(3) = hardlim(3·0 + [1 -1 -1][1 -1 -1]^T - 2) = hardlim(3 - 2) = 1 (orange), and w(3) = w(2) = [1 -1 -1]^T.
The orange will now be detected if either set of sensors works.
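The three orange-recognizer iterations as a script (a sketch assuming the setup reconstructed above: w0 = 3, bias -2, instar rule with α = 1):

```python
import numpy as np

def hardlim(n):
    return 1 if n >= 0 else 0

w0, b = 3.0, -2.0
w = np.zeros(3)
p_orange = np.array([1.0, -1.0, -1.0])                    # shape, texture, weight sensors
sequence = [(0, p_orange), (1, p_orange), (0, p_orange)]  # sight fails, works, fails
for q, (p0, p) in enumerate(sequence, start=1):
    a = hardlim(w0 * p0 + np.dot(w, p) + b)
    w = w + a * (p - w)                                   # instar rule, alpha = 1
    print(f"iteration {q}: a = {a}, w = {w}")
# Prints a = 0, 1, 1: by iteration 3 the measurements alone identify the orange.
```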

28 Kohonen Rule
Kohonen rule: w_i(q) = w_i(q-1) + α (p(q) - w_i(q-1)), for i ∈ X(q), where w_i is the weight (row) vector of neuron i.
Learning occurs when the neuron's index i is a member of the set X(q). The Kohonen rule can be made equivalent to the instar rule by defining X(q) as the set of all i such that a_i(q) = 1. The Kohonen rule allows the weights of a neuron to learn an input vector, and it is therefore suitable for recognition applications.
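A sketch of the Kohonen update for a small layer (the active set below is chosen by hand for illustration):

```python
import numpy as np

def kohonen_update(W, p, active, alpha=0.5):
    """Move the weight rows of neurons in the active set X(q) toward the input p."""
    W = W.copy()
    for i in active:
        W[i] += alpha * (p - W[i])
    return W

W = np.zeros((2, 3))                 # one weight row per neuron
p = np.array([1.0, -1.0, 1.0])
W = kohonen_update(W, p, active=[0])  # only neuron 0 is in X(q) and learns
print(W)
```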

29 Outstar (Recall Network)
The outstar network has a scalar input and a vector output. It can perform pattern recall by associating a stimulus with a vector response.

30 Outstar Operation
Input-output expression: a = satlins(Wp).
If we would like the outstar network to associate a stimulus (an input of 1) with a particular output vector a* (whose elements lie in [-1, 1]), we can simply set W = a*. Then, if p = 1,
a = satlins(Wp) = satlins(a* p) = a*,
and the pattern is correctly recalled. Each column of the weight matrix represents a pattern to be recalled.
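A sketch of the recall step:

```python
import numpy as np

def satlins(n):
    """Symmetric saturating linear transfer function, clipped to [-1, 1]."""
    return np.clip(n, -1.0, 1.0)

a_star = np.array([1.0, -1.0, 1.0])  # pattern to be recalled (elements in [-1, 1])
W = a_star.reshape(-1, 1)            # one column per (scalar) stimulus
p = np.array([1.0])                  # stimulus present
print(satlins(W @ p))                # recalls a_star exactly
```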

31 Outstar Rule
In the instar rule, the weight decay term of the Hebb rule is proportional to the output of the network, a_i. In the outstar rule, the decay term is proportional to the input of the network, p_j:
w_ij(q) = w_ij(q-1) + α a_i(q) p_j(q) - γ w_ij(q-1) p_j(q).
If γ = α, this becomes, in column-vector form:
w_j(q) = w_j(q-1) + α (a(q) - w_j(q-1)) p_j(q).
Learning occurs whenever p_j is nonzero (instead of a_i). When learning occurs, column w_j moves toward the output vector (complementary to the instar rule).
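A sketch of the outstar update for a single stimulus:

```python
import numpy as np

def outstar_update(w_col, a, p_j, alpha=0.5):
    """w_j(q) = w_j(q-1) + alpha * (a - w_j(q-1)) * p_j: learning only when p_j != 0."""
    return w_col + alpha * (a - w_col) * p_j

w = np.zeros(3)
a = np.array([1.0, -1.0, 1.0])      # desired response
for _ in range(5):
    w = outstar_update(w, a, p_j=1.0)
print(w)                             # column moves toward the output vector a
```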

32 Ex: Pineapple Recaller
Any set of measurements p0 (with ±1 values) will be copied to the output a.
[Network diagram: sight and measurement inputs feed the fruit network, which recalls the measurements.]

33 Initialization
The outstar rule (α = 1): w_j(q) = w_j(q-1) + (a(q) - w_j(q-1)) p_j(q), with a = satlins(W0 p0 + W p), W0 = I, and W(0) = [0 0 0]^T.
Training sequence: the sight sensor reports p(q) = 1 every iteration, while the measurement system works only intermittently.
Pineapple measurements: p_pineapple = [-1 -1 1]^T.

34 First Training Iteration
First iteration (measurements fail): a(1) = satlins([0 0 0]^T + [0 0 0]^T · 1) = [0 0 0]^T (no response), so w(1) = w(0) = [0 0 0]^T.

35 Second Training Iteration
Second iteration (measurements work): a(2) = satlins([-1 -1 1]^T + [0 0 0]^T · 1) = [-1 -1 1]^T, so w(2) = w(1) + (a(2) - w(1)) · 1 = [-1 -1 1]^T.
The network forms an association between the sight and the measurements.

36 Third Training Iteration
Third iteration (measurements fail): a(3) = satlins([0 0 0]^T + [-1 -1 1]^T · 1) = [-1 -1 1]^T.
Even if the measurement system fails, the network is now able to recall the measurements of the pineapple when it sees one.
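The three pineapple-recaller iterations as a script (a sketch assuming the setup reconstructed above: W0 = I on the measurements, outstar rule with α = 1, measurements absent on iterations 1 and 3):

```python
import numpy as np

def satlins(n):
    return np.clip(n, -1.0, 1.0)

w = np.zeros(3)                            # conditioned column for the sight input
p_pineapple = np.array([-1.0, -1.0, 1.0])  # shape, texture, weight
zero = np.zeros(3)
sequence = [(zero, 1.0), (p_pineapple, 1.0), (zero, 1.0)]  # (p0 measurements, p sight)
for q, (p0, p) in enumerate(sequence, start=1):
    a = satlins(p0 + w * p)    # W0 = I, so W0 @ p0 is just p0
    w = w + (a - w) * p        # outstar rule, alpha = 1
    print(f"iteration {q}: a = {a}")
# By iteration 3, sight alone recalls the pineapple measurements.
```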

