Lecture 8. Artificial neural networks: Unsupervised learning

- Introduction
- Hebbian learning
- Generalised Hebbian learning algorithm
- Competitive learning
- Self-organising computational map: the Kohonen network
- Summary

Introduction

The main property of a neural network is its ability to learn from its environment and to improve its performance through learning. So far we have considered supervised, or active, learning: learning with an external "teacher", or supervisor, who presents a training set to the network. But another type of learning also exists: unsupervised learning.

In contrast to supervised learning, unsupervised or self-organised learning does not require an external teacher. During the training session, the neural network receives a number of different input patterns, discovers significant features in these patterns, and learns how to classify input data into appropriate categories. Unsupervised learning tends to follow the neuro-biological organisation of the brain.

Unsupervised learning algorithms aim to learn rapidly and can be used in real time.

Hebbian learning

In 1949, Donald Hebb proposed one of the key ideas in biological learning, commonly known as Hebb's Law. Hebb's Law states that if neuron i is near enough to excite neuron j and repeatedly participates in its activation, the synaptic connection between the two neurons is strengthened, and neuron j becomes more sensitive to stimuli from neuron i.

Hebb's Law can be represented in the form of two rules:

1. If two neurons on either side of a connection are activated synchronously, then the weight of that connection is increased.
2. If two neurons on either side of a connection are activated asynchronously, then the weight of that connection is decreased.

Hebb's Law provides the basis for learning without a teacher. Learning here is a local phenomenon occurring without feedback from the environment.

Figure: Hebbian learning in a neural network (input signals feeding into output signals).

Using Hebb's Law, we can express the adjustment applied to the weight w_ij at iteration p in the following general form:

Δw_ij(p) = F[ y_j(p), x_i(p) ]

where F is a function of both the postsynaptic output y_j(p) and the presynaptic input x_i(p). As a special case, we can represent Hebb's Law as follows:

Δw_ij(p) = α y_j(p) x_i(p)

where α is the learning rate parameter. This equation is referred to as the activity product rule.
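As a quick numeric illustration (the values here are chosen arbitrarily, not taken from the lecture): with α = 0.1 and both neurons active, x_i(p) = 1 and y_j(p) = 1, the rule gives Δw_ij(p) = 0.1 × 1 × 1 = 0.1, so that connection is strengthened by 0.1 on that iteration.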

Hebbian learning implies that weights can only increase. To resolve this problem, we might impose a limit on the growth of synaptic weights. This can be done by introducing a non-linear forgetting factor into Hebb's Law:

Δw_ij(p) = α y_j(p) x_i(p) − φ y_j(p) w_ij(p)

where φ is the forgetting factor. The forgetting factor usually falls in the interval between 0 and 1, typically between 0.01 and 0.1, to allow only a little "forgetting" while limiting the weight growth.
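A minimal sketch of this update in Python, showing how the forgetting term keeps the weight bounded; the function name hebb_update and the parameter values α = 0.1, φ = 0.05 are illustrative choices, not part of the lecture:

```python
def hebb_update(w, x, y, alpha=0.1, phi=0.05):
    """One Hebbian weight update with a non-linear forgetting factor.

    w is the current weight w_ij, x the presynaptic input x_i, and y the
    postsynaptic output y_j. Without the phi term the weight can only grow;
    the term -phi * y * w limits that growth.
    """
    return w + alpha * y * x - phi * y * w

# Repeatedly reinforcing the same active connection: the weight saturates
# near alpha / phi = 2.0 instead of growing without bound.
w = 0.0
for _ in range(200):
    w = hebb_update(w, x=1.0, y=1.0)
print(round(w, 3))  # approaches 2.0
```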

Hebbian learning algorithm

Step 1: Initialisation. Set initial synaptic weights and thresholds to small random values, say in an interval [0, 1].

Step 2: Activation. Compute the neuron output at iteration p:

y_j(p) = sign[ Σ_{i=1..n} x_i(p) w_ij(p) − θ_j ]

where n is the number of neuron inputs, and θ_j is the threshold value of neuron j.

Step 3: Learning. Update the weights in the network:

w_ij(p + 1) = w_ij(p) + Δw_ij(p)

where Δw_ij(p) is the weight correction at iteration p. The weight correction is determined by the generalised activity product rule:

Δw_ij(p) = φ y_j(p) [ λ x_i(p) − w_ij(p) ],  where λ = α / φ.

Step 4: Iteration. Increase iteration p by one and go back to Step 2.
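A compact sketch of Steps 1 to 4 for a single layer of sign-activation neurons; the function name train_hebbian, the square weight matrix (as many output neurons as inputs), the fixed number of epochs, and the parameter values are illustrative assumptions rather than details given on the slides:

```python
import numpy as np

def train_hebbian(patterns, n_epochs=50, alpha=0.1, phi=0.05, rng=None):
    """Generalised Hebbian learning for one layer of sign-activation neurons.

    patterns: array of shape (n_patterns, n_inputs); the layer is assumed to
    have as many output neurons as inputs, so W is square with w_ij being the
    weight from input i to neuron j.
    """
    rng = np.random.default_rng(rng)
    n = patterns.shape[1]
    # Step 1: initialise weights and thresholds to small random values in [0, 1].
    W = rng.uniform(0.0, 1.0, size=(n, n))
    theta = rng.uniform(0.0, 1.0, size=n)
    lam = alpha / phi
    for _ in range(n_epochs):
        for x in patterns:
            # Step 2: activation with the sign function.
            y = np.sign(x @ W - theta)
            # Step 3: generalised activity product rule,
            #   dw_ij = phi * y_j * (lam * x_i - w_ij).
            W += phi * (lam * np.outer(x, y) - W * y)
            # Step 4 is the continuation of these loops.
    return W, theta
```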

Hebbian learning example

To illustrate Hebbian learning, consider a fully connected feedforward network with a single layer of five computation neurons. Each neuron is represented by a McCulloch and Pitts model with the sign activation function. The network is trained on the following set of input vectors:

Figure: Initial and final states of the network.

Figure: Initial and final weight matrices.

A test input vector, or probe, is then defined. When this probe is presented to the trained network, the corresponding output is obtained.

Competitive learning

In competitive learning, neurons compete among themselves to be activated. While in Hebbian learning several output neurons can be activated simultaneously, in competitive learning only a single output neuron is active at any time. The output neuron that wins the "competition" is called the winner-takes-all neuron.

The basic idea of competitive learning was introduced in the early 1970s. In the late 1980s, Teuvo Kohonen introduced a special class of artificial neural networks called self-organizing feature maps. These maps are based on competitive learning.

What is a self-organizing feature map?

Our brain is dominated by the cerebral cortex, a very complex structure of billions of neurons and hundreds of billions of synapses. The cortex includes areas that are responsible for different human activities (motor, visual, auditory, somatosensory, etc.) and are associated with different sensory inputs. We can say that each sensory input is mapped into a corresponding area of the cerebral cortex. The cortex is a self-organizing computational map in the human brain.

Figure: Feature-mapping Kohonen model.

The Kohonen network

The Kohonen model provides a topological mapping. It places a fixed number of input patterns from the input layer into a higher-dimensional output, or Kohonen, layer. Training in the Kohonen network begins with the winner's neighborhood of a fairly large size. Then, as training proceeds, the neighborhood size gradually decreases.
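The slides do not specify how the neighborhood shrinks; an exponentially decaying radius is one common choice, sketched here with purely illustrative constants sigma0 and tau:

```python
import numpy as np

def neighbourhood_radius(p, sigma0=5.0, tau=1000.0):
    """Neighborhood radius at iteration p: starts at sigma0 and shrinks
    gradually as training proceeds (sigma0 and tau are illustrative)."""
    return sigma0 * np.exp(-p / tau)

print(neighbourhood_radius(0), round(neighbourhood_radius(5000), 3))  # 5.0  0.034
```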

Figure: Architecture of the Kohonen network.

The lateral connections are used to create competition between neurons. The neuron with the largest activation level among all neurons in the output layer becomes the winner. This neuron is the only neuron that produces an output signal; the activity of all other neurons is suppressed in the competition.

The lateral feedback connections produce excitatory or inhibitory effects, depending on the distance from the winning neuron. This is achieved by the use of a Mexican hat function, which describes the synaptic weights between neurons in the Kohonen layer.

Figure: The Mexican hat function of lateral connection.
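The slide only names the Mexican hat shape; a difference of Gaussians is one common way to realise it, sketched below with illustrative widths (near neighbours receive excitatory weights, more distant ones inhibitory):

```python
import numpy as np

def mexican_hat(d, a=1.0, b=0.5, sigma_e=1.0, sigma_i=3.0):
    """Lateral connection strength as a function of distance d from the winner:
    a narrow excitatory Gaussian minus a broader inhibitory one."""
    return (a * np.exp(-(d ** 2) / (2 * sigma_e ** 2))
            - b * np.exp(-(d ** 2) / (2 * sigma_i ** 2)))

d = np.arange(8)
print(np.round(mexican_hat(d), 3))  # positive near the winner, negative further away
```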

In the Kohonen network, a neuron learns by shifting its weights from inactive connections to active ones. Only the winning neuron and its neighbourhood are allowed to learn. If a neuron does not respond to a given input pattern, then learning cannot occur in that particular neuron.

The competitive learning rule defines the change Δw_ij applied to synaptic weight w_ij as

Δw_ij = α (x_i − w_ij), if neuron j wins the competition
Δw_ij = 0, if neuron j loses the competition

where x_i is the input signal and α is the learning rate parameter.
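The same rule as a small Python helper; the function name competitive_update, the row-per-neuron layout of W, and the index j_win are illustrative conventions chosen here:

```python
import numpy as np

def competitive_update(W, x, j_win, alpha=0.1):
    """W holds one weight vector per output neuron (one row per neuron).
    Only the winning neuron j_win moves its weights towards the input x."""
    W = W.copy()
    W[j_win] += alpha * (x - W[j_win])  # dw_ij = alpha * (x_i - w_ij); losers stay unchanged
    return W
```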

The overall effect of the competitive learning rule is to move the synaptic weight vector W_j of the winning neuron j towards the input pattern X. The matching criterion is equivalent to the minimum Euclidean distance between the vectors.

The Euclidean distance between a pair of n-by-1 vectors X and W_j is defined by

d = ||X − W_j|| = [ Σ_{i=1..n} (x_i − w_ij)^2 ]^{1/2}

where x_i and w_ij are the ith elements of the vectors X and W_j, respectively.

To identify the winning neuron j_X that best matches the input vector X, we may apply the following condition:

||X − W_{j_X}|| = min_j ||X − W_j||,  j = 1, 2, ..., m

where m is the number of neurons in the Kohonen layer.
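The same condition as a small helper, assuming (as in the sketch above) that the weight vectors are stored as the rows of a matrix W:

```python
import numpy as np

def find_winner(W, x):
    """Return the index j_X of the weight vector (row of W) with the minimum
    Euclidean distance to the input vector x."""
    return int(np.argmin(np.linalg.norm(W - x, axis=1)))
```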

Suppose, for instance, that a 2-dimensional input vector X is presented to a three-neuron Kohonen network, and that the initial weight vectors W_j are given.

We find the winning (best-matching) neuron j_X using the minimum-distance Euclidean criterion, computing the distance from X to each of the three weight vectors. Neuron 3 turns out to be the winner, and its weight vector W_3 is updated according to the competitive learning rule.

The updated weight vector W_3 at iteration (p + 1) is determined as

W_3(p + 1) = W_3(p) + ΔW_3(p)

With each such iteration, the weight vector W_3 of the winning neuron 3 moves closer to the input vector X.

Competitive Learning Algorithm

Step 1: Initialization. Set initial synaptic weights to small random values, say in an interval [0, 1], and assign a small positive value to the learning rate parameter α.

Step 2: Activation and Similarity Matching. Activate the Kohonen network by applying the input vector X, and find the winner-takes-all (best-matching) neuron j_X at iteration p, using the minimum-distance Euclidean criterion:

j_X(p) = arg min_j ||X − W_j(p)|| = arg min_j { Σ_{i=1..n} [x_i − w_ij(p)]^2 }^{1/2},  j = 1, 2, ..., m

where n is the number of neurons in the input layer, and m is the number of neurons in the Kohonen layer.

Step 3: Learning. Update the synaptic weights:

w_ij(p + 1) = w_ij(p) + Δw_ij(p)

where Δw_ij(p) is the weight correction at iteration p. The weight correction is determined by the competitive learning rule:

Δw_ij(p) = α [x_i − w_ij(p)], if j ∈ Λ_j(p)
Δw_ij(p) = 0, otherwise

where α is the learning rate parameter, and Λ_j(p) is the neighborhood function centered around the winner-takes-all neuron j_X at iteration p.

Step 4: Iteration. Increase iteration p by one, go back to Step 2, and continue until the minimum-distance Euclidean criterion is satisfied or no noticeable changes occur in the feature map.
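Putting Steps 1 to 4 together as a sketch; the function name train_kohonen, the rectangular lattice layout, the Gaussian neighborhood with an exponentially shrinking radius, and the fixed iteration budget are assumptions made here rather than details given on the slides:

```python
import numpy as np

def train_kohonen(data, rows, cols, n_iter=10000, alpha=0.1, sigma0=None, tau=None, rng=None):
    """Competitive (Kohonen) learning on a rows x cols lattice of neurons.

    data: array of shape (n_samples, n_inputs). Returns the weight matrix W
    of shape (rows * cols, n_inputs), one weight vector per neuron.
    """
    rng = np.random.default_rng(rng)
    n_neurons = rows * cols
    sigma0 = sigma0 if sigma0 is not None else max(rows, cols) / 2.0
    tau = tau if tau is not None else n_iter / np.log(sigma0)
    # Step 1: initialization - small random weights in [0, 1].
    W = rng.uniform(0.0, 1.0, size=(n_neurons, data.shape[1]))
    # Lattice coordinates of each neuron, used by the neighborhood function.
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
    for p in range(n_iter):
        x = data[rng.integers(len(data))]
        # Step 2: activation and similarity matching (minimum Euclidean distance).
        j_x = np.argmin(np.linalg.norm(W - x, axis=1))
        # Step 3: learning - the winner and its neighbours move towards x.
        sigma = sigma0 * np.exp(-p / tau)
        lattice_dist = np.linalg.norm(grid - grid[j_x], axis=1)
        h = np.exp(-(lattice_dist ** 2) / (2 * sigma ** 2))
        W += alpha * h[:, None] * (x - W)
        # Step 4: next iteration.
    return W
```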

Competitive learning in the Kohonen network

To illustrate competitive learning, consider a Kohonen network with 100 neurons arranged in the form of a two-dimensional lattice with 10 rows and 10 columns. The network is required to classify two-dimensional input vectors: each neuron in the network should respond only to the input vectors occurring in its own region.

The network is trained with 1000 two-dimensional input vectors generated randomly in a square region in the interval between −1 and +1. The learning rate parameter α is equal to 0.1.
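Using the train_kohonen sketch above with the parameters described on this slide (a 10 x 10 lattice, 1000 random points in the square between −1 and +1, α = 0.1); the random seeds are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.uniform(-1.0, 1.0, size=(1000, 2))  # 1000 random 2-D input vectors in [-1, +1]
W = train_kohonen(data, rows=10, cols=10, n_iter=10000, alpha=0.1, rng=1)
print(W.shape)  # (100, 2): one 2-D weight vector per lattice neuron
```

Plotting the first weight component W(1,j) against the second W(2,j) after 100, 1000 and 10,000 iterations gives pictures of the kind shown in the figures below.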

Figure: Initial random weights (W(1,j) vs. W(2,j)).

Figure: Network after 100 iterations (W(1,j) vs. W(2,j)).

Figure: Network after 1000 iterations (W(1,j) vs. W(2,j)).

Figure: Network after 10,000 iterations (W(1,j) vs. W(2,j)).