Adaptive Resonance Theory

Adaptive Resonance Theory (ART)

Basic ART Architecture

ART Subsystems
Layer 1: Normalization; comparison of the input pattern and the expectation.
L1-L2 Connections (Instars): Perform the clustering operation. Each row of W1:2 is a prototype pattern.
Layer 2: Competition, contrast enhancement.
L2-L1 Connections (Outstars): Expectation; perform pattern recall. Each column of W2:1 is a prototype pattern.
Orienting Subsystem: Causes a reset when the expectation does not match the input; disables the current winning neuron.

Layer 1

Layer 1 Operation (Shunting Model): $\varepsilon \frac{dn^1(t)}{dt} = -n^1(t) + ({}^{+}b^1 - n^1(t))\{{}^{+}W^1 p + W^{2:1}a^2\} - (n^1(t) + {}^{-}b^1)[{}^{-}W^1 a^2]$. The excitatory input $\{{}^{+}W^1 p + W^{2:1}a^2\}$ provides the comparison with the expectation; the inhibitory input $[{}^{-}W^1 a^2]$ provides the gain control.

Excitatory Input to Layer 1. Suppose that neuron $j$ in Layer 2 has won the competition. Then $W^{2:1}a^2 = w_j^{2:1}$ (the $j$th column of $W^{2:1}$). Therefore the excitatory input is the sum of the input pattern and the L2-L1 expectation: ${}^{+}W^1 p + W^{2:1}a^2 = p + w_j^{2:1}$.

Inhibitory Input to Layer 1 (Gain Control). The gain control term is ${}^{-}W^1 a^2$, where ${}^{-}W^1$ is a matrix of all 1's, so ${}^{-}W^1 a^2 = \sum_j a_j^2$. The gain control will be one when Layer 2 is active (one neuron has won the competition), and zero when Layer 2 is inactive (all neurons have zero output).

Steady State Analysis: Case I. Layer 2 inactive (each $a_j^2 = 0$). In steady state: $0 = -n_i^1 + ({}^{+}b^1 - n_i^1)p_i$, so $n_i^1(1 + p_i) = {}^{+}b^1 p_i$ and $n_i^1 = \frac{{}^{+}b^1 p_i}{1 + p_i}$. Therefore, if Layer 2 is inactive: $a^1 = p$.

Steady State Analysis: Case II. Layer 2 active (one $a_j^2 = 1$). In steady state: $0 = -n_i^1 + ({}^{+}b^1 - n_i^1)(p_i + w_{i,j}^{2:1}) - (n_i^1 + {}^{-}b^1)$, so $n_i^1 = \frac{{}^{+}b^1(p_i + w_{i,j}^{2:1}) - {}^{-}b^1}{2 + p_i + w_{i,j}^{2:1}}$. We want Layer 1 to combine the input vector with the expectation from Layer 2, using a logical AND operation: $n_i^1 < 0$ if either $w_{i,j}^{2:1}$ or $p_i$ is equal to zero; $n_i^1 > 0$ if both $w_{i,j}^{2:1}$ and $p_i$ are equal to one. This requires ${}^{+}b^1 < {}^{-}b^1 < 2\,{}^{+}b^1$. Therefore, if Layer 2 is active and the biases satisfy these conditions: $a^1 = p \cap w_j^{2:1}$.

Layer 1 Summary. If Layer 2 is inactive (each $a_j^2 = 0$): $a^1 = p$. If Layer 2 is active (one $a_j^2 = 1$): $a^1 = p \cap w_j^{2:1}$.
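A minimal NumPy sketch of this Layer 1 behavior (not from the slides; the function and argument names are my own):

    import numpy as np

    def layer1_output(p, W21=None, j=None):
        """Steady-state Layer 1 output for binary ART1 patterns.

        If Layer 2 is inactive (no winner j), a1 = p.
        If Layer 2 is active with winner j, a1 = p AND (column j of W21).
        """
        p = np.asarray(p, dtype=int)
        if W21 is None or j is None:                      # Layer 2 inactive
            return p.copy()
        expectation = W21[:, j].astype(int)               # L2-L1 expectation w_j^{2:1}
        return np.logical_and(p, expectation).astype(int) # element-wise AND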

Layer 1 Example: $\varepsilon = 1$, ${}^{+}b^1 = 1$ and ${}^{-}b^1 = 1.5$. Assume that Layer 2 is active, and neuron 2 won the competition.

Example Response

Layer 2

Layer 2 Operation (Shunting Model): $\varepsilon \frac{dn^2(t)}{dt} = -n^2(t) + ({}^{+}b^2 - n^2(t))\{[{}^{+}W^2]f^2(n^2(t)) + W^{1:2}a^1\} - (n^2(t) + {}^{-}b^2)[{}^{-}W^2]f^2(n^2(t))$. The excitatory input consists of the on-center feedback $[{}^{+}W^2]f^2(n^2(t))$ and the adaptive instars $W^{1:2}a^1$; the inhibitory input is the off-surround feedback $[{}^{-}W^2]f^2(n^2(t))$.

Layer 2 Example (Faster than linear, winner-take-all)

Example Response (plotted versus $t$)

Layer 2 Summary. In steady state, Layer 2 performs a winner-take-all competition: $a_i^2 = 1$ if $({}_i w^{1:2})^T a^1 = \max_k ({}_k w^{1:2})^T a^1$, and $a_i^2 = 0$ otherwise.
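A small sketch of that winner-take-all step (NumPy; names are my own, and ties go to the smallest index, as in the algorithm summary below):

    import numpy as np

    def layer2_output(W12, a1, inhibited=()):
        """Winner-take-all Layer 2 output: 1 for the neuron whose row of W12
        has the largest inner product with a1, 0 elsewhere."""
        net = (W12 @ np.asarray(a1, dtype=float)).astype(float)  # W^{1:2} a^1
        net[list(inhibited)] = -np.inf                           # neurons disabled by a reset
        a2 = np.zeros(W12.shape[0], dtype=int)
        a2[int(np.argmax(net))] = 1                              # argmax breaks ties at the smallest index
        return a2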

Orienting Subsystem Purpose: Determine if there is a sufficient match between the L2-L1 expectation (a1) and the input pattern (p).

Orienting Subsystem Operation. Excitatory input: ${}^{+}W^0 p = \alpha \sum_{j=1}^{S^1} p_j = \alpha \|p\|^2$. Inhibitory input: ${}^{-}W^0 a^1(t) = \beta \sum_{j=1}^{S^1} a_j^1(t) = \beta \|a^1(t)\|^2$. When the excitatory input is larger than the inhibitory input, the Orienting Subsystem will be driven on.

Steady State Operation. In steady state: $0 = -n^0 + ({}^{+}b^0 - n^0)\{\alpha\|p\|^2\} - (n^0 + {}^{-}b^0)\{\beta\|a^1\|^2\}$, so $n^0(1 + \alpha\|p\|^2 + \beta\|a^1\|^2) = {}^{+}b^0\alpha\|p\|^2 - {}^{-}b^0\beta\|a^1\|^2$ and $n^0 = \frac{{}^{+}b^0\alpha\|p\|^2 - {}^{-}b^0\beta\|a^1\|^2}{1 + \alpha\|p\|^2 + \beta\|a^1\|^2}$. Let ${}^{+}b^0 = {}^{-}b^0 = 1$. Then $n^0 > 0$ if $\frac{\|a^1\|^2}{\|p\|^2} < \frac{\alpha}{\beta} = \rho$ (the vigilance), which causes a RESET. Since $a^1 = p \cap w_j^{2:1}$, a reset will occur when there is enough of a mismatch between $p$ and $w_j^{2:1}$.

Orienting Subsystem Example: $\varepsilon = 0.1$, $\alpha = 3$, $\beta = 4$ ($\rho = \alpha/\beta = 0.75$).

Orienting Subsystem Summary: $a^0 = 1$ if $\|a^1\|^2 / \|p\|^2 < \rho$, and $a^0 = 0$ otherwise.
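The vigilance test written out as a small check (a sketch, assuming binary vectors; the default rho matches the example above):

    import numpy as np

    def reset(p, a1, rho=0.75):
        """Orienting subsystem: return 1 (reset) when the match between the
        input p and the Layer 1 output a1 falls below the vigilance rho."""
        p = np.asarray(p, dtype=float)
        a1 = np.asarray(a1, dtype=float)
        match = np.sum(a1**2) / np.sum(p**2)   # ||a1||^2 / ||p||^2
        return 1 if match < rho else 0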

Learning Laws: L1-L2 and L2-L1. The ART1 network has two separate learning laws: one for the L1-L2 connections (instars) and one for the L2-L1 connections (outstars). Both sets of connections are updated at the same time, when the input and the expectation have an adequate match. The process of matching and subsequent adaptation is referred to as resonance.

Subset/Superset Dilemma. Suppose that $W^{1:2} = \begin{bmatrix} 1 & 1 & 0 \\ 1 & 1 & 1 \end{bmatrix}$, so the prototypes are ${}_1w^{1:2} = \begin{bmatrix} 1 & 1 & 0 \end{bmatrix}^T$ and ${}_2w^{1:2} = \begin{bmatrix} 1 & 1 & 1 \end{bmatrix}^T$. We say that ${}_1w^{1:2}$ is a subset of ${}_2w^{1:2}$, because ${}_2w^{1:2}$ has a 1 wherever ${}_1w^{1:2}$ has a 1. If the output of Layer 1 is $a^1 = \begin{bmatrix} 1 & 1 & 0 \end{bmatrix}^T$, then the input to Layer 2 will be $W^{1:2}a^1 = \begin{bmatrix} 2 & 2 \end{bmatrix}^T$. Both prototype vectors have the same inner product with $a^1$, even though the first prototype is identical to $a^1$ and the second prototype is not. This is called the Subset/Superset dilemma.

Subset/Superset Solution: Normalize the prototype patterns. With $W^{1:2} = \begin{bmatrix} 1/2 & 1/2 & 0 \\ 1/3 & 1/3 & 1/3 \end{bmatrix}$, the input to Layer 2 becomes $W^{1:2}a^1 = \begin{bmatrix} 1 & 2/3 \end{bmatrix}^T$. Now we have the desired result; the first prototype has the largest inner product with the input.
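A quick numerical check of this fix, using the vectors from the example (variable names are my own):

    import numpy as np

    a1 = np.array([1, 1, 0])
    W12_raw = np.array([[1, 1, 0],
                        [1, 1, 1]], dtype=float)
    # Normalize each prototype row so its elements sum to 1.
    W12_norm = W12_raw / W12_raw.sum(axis=1, keepdims=True)

    print(W12_raw @ a1)    # [2. 2.]      -> subset/superset tie
    print(W12_norm @ a1)   # [1.  0.667]  -> the matching prototype now wins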

L1-L2 Learning Law (Instar Learning with Competition): $\frac{d[{}_i w^{1:2}(t)]}{dt} = a_i^2(t)\left\{({}^{+}b - {}_i w^{1:2}(t))\,\zeta[{}^{+}W]a^1(t) - ({}_i w^{1:2}(t) + {}^{-}b)[{}^{-}W]a^1(t)\right\}$, where ${}^{+}b = \begin{bmatrix} 1 & 1 & \cdots & 1 \end{bmatrix}^T$ is the upper limit bias, ${}^{-}b = \begin{bmatrix} 0 & 0 & \cdots & 0 \end{bmatrix}^T$ is the lower limit bias, $[{}^{+}W] = I$ provides the on-center connections (with gain $\zeta > 1$), and $[{}^{-}W]$ (zeros on the diagonal, ones elsewhere) provides the off-surround connections. When neuron $i$ of Layer 2 is active, ${}_i w^{1:2}$ is moved in the direction of $a^1$. The elements of ${}_i w^{1:2}$ compete, and therefore ${}_i w^{1:2}$ is normalized.

Fast Learning. For fast learning we assume that the outputs of Layer 1 and Layer 2 remain constant until the weights reach steady state. Assume that $a_i^2(t) = 1$, and solve for the steady state weight. Case I ($a_j^1 = 1$): $0 = (1 - w_{i,j}^{1:2})\zeta - w_{i,j}^{1:2}(\|a^1\|^2 - 1)$, so $w_{i,j}^{1:2} = \frac{\zeta}{\zeta + \|a^1\|^2 - 1}$. Case II ($a_j^1 = 0$): $0 = -w_{i,j}^{1:2}\|a^1\|^2$, so $w_{i,j}^{1:2} = 0$. Summary: ${}_i w^{1:2} = \frac{\zeta a^1}{\zeta + \|a^1\|^2 - 1}$.
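A sketch of that fast-learning update for the winning row of W1:2 (NumPy; the default zeta is an arbitrary choice for illustration):

    import numpy as np

    def update_instar_row(W12, j, a1, zeta=2.0):
        """Fast-learning L1-L2 update: row j of W12 becomes
        zeta * a1 / (zeta + ||a1||^2 - 1)."""
        a1 = np.asarray(a1, dtype=float)
        norm_sq = np.sum(a1**2)           # ||a1||^2 (the number of 1's for binary a1)
        W12[j, :] = zeta * a1 / (zeta + norm_sq - 1.0)
        return W12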

Learning Law: L2-L1 (Outstar). Fast Learning: assume that $a_j^2(t) = 1$, and solve for the steady state weight: $0 = -w_j^{2:1} + a^1$, or $w_j^{2:1} = a^1$. Column $j$ of $W^{2:1}$ converges to the output of Layer 1, which is a combination of the input pattern and the previous prototype pattern. The prototype pattern is modified to incorporate the current input pattern.
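The corresponding outstar update is even simpler (a sketch; the winning column just copies the Layer 1 output):

    import numpy as np

    def update_outstar_column(W21, j, a1):
        """Fast-learning L2-L1 update: column j of W21 becomes a1, so the
        prototype is intersected with the current input pattern."""
        W21[:, j] = np.asarray(a1, dtype=float)
        return W21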

ART1 Algorithm Summary
0) All elements of the initial $W^{2:1}$ matrix are set to 1. All elements of the initial $W^{1:2}$ matrix are set to $\zeta/(\zeta + S^1 - 1)$.
1) The input pattern is presented. Since Layer 2 is not active, $a^1 = p$.
2) The input to Layer 2, $W^{1:2}a^1$, is computed, and the neuron with the largest input is activated. In case of a tie, the neuron with the smallest index is the winner.
3) The L2-L1 expectation is computed: $W^{2:1}a^2 = w_j^{2:1}$.

Summary Continued
4) The Layer 1 output is adjusted to include the L2-L1 expectation: $a^1 = p \cap w_j^{2:1}$.
5) The orienting subsystem determines the degree of match between the expectation and the input pattern: $a^0 = 1$ if $\|a^1\|^2/\|p\|^2 < \rho$, and $a^0 = 0$ otherwise.
6) If $a^0 = 1$, then set $a_j^2 = 0$, inhibit that neuron until an adequate match occurs (resonance), and return to Step 1. If $a^0 = 0$, then continue with Step 7.
7) Resonance has occurred. Update row $j$ of $W^{1:2}$: ${}_j w^{1:2} = \frac{\zeta a^1}{\zeta + \|a^1\|^2 - 1}$.
8) Update column $j$ of $W^{2:1}$: $w_j^{2:1} = a^1$.
9) Remove the input, restore all inhibited neurons, and return to Step 1 with a new input pattern.
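Putting the steps together, here is a compact NumPy sketch of the full ART1 loop following Steps 0-9 above. The class name, the default zeta and rho values, and the tiny usage example at the end are my own choices, not part of the slides:

    import numpy as np

    class ART1:
        """Minimal ART1 sketch: binary inputs, fast learning (Steps 0-9)."""

        def __init__(self, n_inputs, n_categories, zeta=2.0, rho=0.75):
            self.zeta, self.rho = zeta, rho
            # Step 0: initial weights.
            self.W21 = np.ones((n_inputs, n_categories))       # L2-L1 expectations (all 1's)
            self.W12 = np.full((n_categories, n_inputs),
                               zeta / (zeta + n_inputs - 1.0))  # L1-L2 instar prototypes

        def present(self, p):
            """Present one binary pattern; return the index of the resonating neuron."""
            p = np.asarray(p, dtype=float)
            inhibited = set()
            while True:
                a1 = p.copy()                                   # Step 1: Layer 2 inactive, a1 = p
                net = self.W12 @ a1                             # Step 2: input to Layer 2
                net[list(inhibited)] = -np.inf
                j = int(np.argmax(net))                         # largest input wins (ties -> smallest index)
                expectation = self.W21[:, j]                    # Step 3: L2-L1 expectation w_j
                a1 = np.minimum(p, expectation)                 # Step 4: a1 = p AND w_j
                match = a1.sum() / p.sum()                      # Step 5: ||a1||^2 / ||p||^2
                if match < self.rho:                            # Step 6: reset, inhibit neuron j
                    inhibited.add(j)
                    if len(inhibited) >= self.W12.shape[0]:
                        raise RuntimeError("all categories inhibited; add more neurons")
                    continue
                # Step 7: resonance -- update row j of W12.
                self.W12[j, :] = self.zeta * a1 / (self.zeta + a1.sum() - 1.0)
                self.W21[:, j] = a1                             # Step 8: update column j of W21
                return j                                        # Step 9: done with this input

    # Usage example on a few assumed 4-bit patterns:
    if __name__ == "__main__":
        net = ART1(n_inputs=4, n_categories=3, rho=0.6)
        for pattern in ([1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1]):
            print(pattern, "-> category", net.present(pattern))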