Self-Organizing Network Model (SOM) Session 11


Course: T0293 – Neuro Computing
Year: 2013

Learning Outcomes
At the end of this session, the student will be able to explain the concept of the self-organizing network model (SOM) and create applications using the concept (LO4).

Lecture Outline
- What is SOM?
- Mapping Models

What is SOM?
A SOM is a neural network that learns without a teacher, i.e., it is unsupervised. A SOM performs learning by grouping the input data (clustering). Clustering essentially groups similar objects together and separates dissimilar ones. These networks are based on competitive learning: the output neurons of the network compete among themselves to be activated, or fired, with the result that only one output neuron, or one neuron per group, is on at any one time.

Why SOM?
A SOM is able to learn to classify data without supervision. You may already be aware of supervised training techniques such as backpropagation, where the training data consist of vector pairs: an input vector and a target vector.
http://www.ai-junkie.com/ann/som/som1.html


In a self-organizing map, the neurons are placed at the nodes of a lattice that is usually one- or two-dimensional. As a neural model, the SOM provides a bridge between two levels of adaptation:
- adaptation rules formulated at the microscopic level of a single neuron;
- formation of experientially better and physically accessible patterns of feature selectivity at the macroscopic level of neural layers.

Figure 11.2 shows the layout of the two models. In both cases, the output neurons are arranged in a two-dimensional lattice. The models differ from each other in the manner in which the input patterns are specified.

Figure 11.1: Two-dimensional lattice of neurons, illustrated for a three-dimensional input vector and a four-by-four dimensional output (all shown in red).
The principal goal of the self-organizing map (SOM) is to transform an incoming signal pattern of arbitrary dimension into a one- or two-dimensional discrete map, and to perform this transformation adaptively in a topologically ordered fashion.
Source: Haykin, S. (2009). Neural Networks and Learning Machines, 3rd ed. Pearson. ISBN: 978-0-13-129376-2

Once the network has been properly initialized, there are three essential processes involved in the formation of the self-organizing map:
1. Competition. For each input pattern, the neurons in the network compute their respective values of a discriminant function. This discriminant function provides the basis for competition among the neurons. The particular neuron with the largest value of the discriminant function is declared the winner of the competition.
2. Cooperation. The winning neuron determines the spatial location of a topological neighborhood of excited neurons, thereby providing the basis for cooperation among such neighboring neurons.
3. Synaptic Adaptation. This last mechanism enables the excited neurons to increase their individual values of the discriminant function in relation to the input pattern through suitable adjustments applied to their synaptic weights. The adjustments are made such that the response of the winning neuron to the subsequent application of a similar input pattern is enhanced.

The SOM Algorithm
The essence of the SOM algorithm is that it substitutes a simple geometric computation for the more detailed properties of the Hebb-like rule and lateral interactions. The essential ingredients and parameters of the algorithm are:
- a continuous input space of activation patterns that are generated in accordance with a certain probability distribution;
- a topology of the network in the form of a lattice of neurons, which defines a discrete output space;
- a time-varying neighborhood function h_{j,i(x)}(n) that is defined around a winning neuron i(x);
- a learning-rate parameter η(n) that starts at an initial value η₀ and then decreases gradually with time n, but never goes to zero.
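A common concrete choice for the last two ingredients (following Haykin; the time constants τ₁ and τ₂ below are design parameters to be tuned, not values prescribed by the course) is a Gaussian neighborhood with exponentially shrinking width and an exponentially decaying learning rate:

$$h_{j,i(\mathbf{x})}(n) = \exp\!\left(-\frac{d_{j,i}^{2}}{2\sigma^{2}(n)}\right), \qquad \sigma(n) = \sigma_{0}\exp\!\left(-\frac{n}{\tau_{1}}\right), \qquad \eta(n) = \eta_{0}\exp\!\left(-\frac{n}{\tau_{2}}\right)$$

where d_{j,i} is the lateral distance on the lattice between neuron j and the winning neuron i(x), and σ₀ and η₀ are the initial neighborhood width and learning rate.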

There are three basic steps involved in the application of the algorithm after initialization: sampling, similarity matching, and updating. The algorithm is summarized as follows:
1. Initialization. Choose random values for the initial weight vectors w_j(0). The only restriction is that the w_j(0) be different for j = 1, 2, ..., l, where l is the number of neurons in the lattice. It may be desirable to keep the magnitude of the weights small.
2. Sampling. Draw a sample x from the input space with a certain probability; the vector x represents the activation pattern that is applied to the lattice. The dimension of vector x is equal to m.
3. Similarity matching. Find the best-matching (winning) neuron i(x) at time-step n by using the minimum-distance criterion, formalized below.
Source: Haykin, S. (2009). Neural Networks and Learning Machines, 3rd ed. Pearson. ISBN: 978-0-13-129376-2
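With a Euclidean metric, the minimum-distance criterion of the similarity-matching step takes the standard form (as in Haykin):

$$i(\mathbf{x}) = \arg\min_{j} \left\lVert \mathbf{x}(n) - \mathbf{w}_{j} \right\rVert, \qquad j = 1, 2, \ldots, l$$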

4. Updating. Adjust the synaptic-weight vectors of all excited neurons by using the update formula

$$\mathbf{w}_{j}(n+1) = \mathbf{w}_{j}(n) + \eta(n)\, h_{j,i(\mathbf{x})}(n)\, \bigl(\mathbf{x}(n) - \mathbf{w}_{j}(n)\bigr)$$

where η(n) is the learning-rate parameter and h_{j,i(x)}(n) is the neighborhood function centered around the winning neuron i(x); both η(n) and h_{j,i(x)}(n) are varied dynamically during learning for best results.
5. Continuation. Continue with step 2 until no noticeable changes in the feature map are observed.
Source: Haykin, S. (2009). Neural Networks and Learning Machines, 3rd ed. Pearson. ISBN: 978-0-13-129376-2
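To make steps 1 to 5 concrete, here is a minimal NumPy sketch of the training loop; the lattice size, decay constants, and random data are illustrative assumptions, not values fixed by the course or by Haykin:

import numpy as np

def train_som(data, grid_h=4, grid_w=4, n_iter=1000,
              eta0=0.1, sigma0=2.0, tau=200.0, seed=0):
    # Initialization: small random weight vectors, one per lattice neuron.
    rng = np.random.default_rng(seed)
    m = data.shape[1]
    weights = rng.uniform(-0.1, 0.1, size=(grid_h * grid_w, m))
    # Lattice coordinates of each neuron, used for the neighborhood distance.
    coords = np.array([(r, c) for r in range(grid_h)
                       for c in range(grid_w)], dtype=float)

    for n in range(n_iter):
        # Sampling: draw one activation pattern x from the input space.
        x = data[rng.integers(len(data))]
        # Similarity matching: the winner is the neuron whose weight vector
        # has minimum Euclidean distance to x.
        winner = np.argmin(np.linalg.norm(weights - x, axis=1))
        # Cooperation: Gaussian neighborhood around the winner on the lattice;
        # both the learning rate and the neighborhood width decay with time n.
        eta = eta0 * np.exp(-n / tau)
        sigma = sigma0 * np.exp(-n / tau)
        d2 = np.sum((coords - coords[winner]) ** 2, axis=1)
        h = np.exp(-d2 / (2.0 * sigma ** 2))
        # Updating: move every excited neuron's weight vector toward x.
        weights += eta * h[:, None] * (x - weights)
    return weights.reshape(grid_h, grid_w, m)

# Usage: map 500 three-dimensional points onto a 4-by-4 lattice,
# matching the dimensions shown in Figure 11.1.
data = np.random.default_rng(1).random((500, 3))
som = train_som(data)
print(som.shape)  # (4, 4, 3)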

Summary
The essence of Kohonen's SOM algorithm is that it substitutes a simple geometric computation for the more detailed properties of the Hebb-like rule and lateral interactions. There are three basic steps involved in the application of the algorithm after initialization: sampling, similarity matching, and updating.

Example
http://www.codeproject.com/Articles/16273/Self-Organizing-Feature-Maps-Kohonen-maps
http://www.mql5.com/en/articles/283


Demo calculation, page 172.
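As an illustrative stand-in for the demo (the numbers below are chosen for clarity and are not the textbook's page-172 values), consider a single update step for a SOM with two neurons and a neighborhood that excites only the winner (h = 1 for the winner, 0 otherwise).

Let x = (0.5, 0.2), w₁ = (0.4, 0.3), w₂ = (0.9, 0.7), and η = 0.5.
Similarity matching: ||x - w₁||² = 0.1² + (-0.1)² = 0.02 and ||x - w₂||² = (-0.4)² + (-0.5)² = 0.41, so neuron 1 wins.
Updating: w₁(n+1) = w₁ + 0.5 (x - w₁) = (0.4, 0.3) + 0.5 (0.1, -0.1) = (0.45, 0.25); w₂ is unchanged.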


Quiz
1. There are two models of self-organizing maps (SOM): the Kohonen model and the Willshaw-von der Malsburg model. Draw and explain both of them on a two-dimensional lattice.
2. Explain the three essential processes involved in the formation of the self-organizing map.
3. Explain the three basic steps of the SOM algorithm involved in its application after initialization: sampling, similarity matching, and updating.
4. Explain BAM (Bidirectional Associative Memory).

Summary
A SOM is a neural network that learns without a teacher, i.e., it is unsupervised. A SOM performs learning by grouping the input data (clustering). There are two basic feature-mapping models: the Willshaw-von der Malsburg model and the Kohonen model.

References
Textbook:
- Haykin, S. (2009). Neural Networks and Learning Machines, 3rd ed. Pearson. ISBN: 978-0-13-129376-2
- Fausett, L.V. (1994). Fundamentals of Neural Networks: Architectures, Algorithms, and Applications, 1st ed. Prentice Hall, New York. ISBN: 0-13-334186-0
Web:
- http://en.wikipedia.org/wiki/Self-organizing_map

END