Lecture 12: Kohonen self-organizing maps and RBF networks


Soft Computing. Lecture 12: Kohonen self-organizing maps and RBF networks

Self Organizing Maps
- Based on competitive (unsupervised) learning: only one output neuron is activated at any one time, the winner-takes-all (winning) neuron.
- In a self-organizing map, the neurons are placed at the nodes of a one- or two-dimensional lattice.
- The neurons become selectively tuned to input patterns through a competitive learning process.
- The locations of the tuned neurons become ordered, forming a topographic map of the input patterns.
- The spatial locations of the neurons in the lattice thus reflect intrinsic statistical features contained in the input patterns.

Self Organizing Maps: a topology-preserving transformation (figure)

SOM as a Neural Model
- A distinct feature of the human brain: it is organized in such a way that different sensory inputs are represented by topologically ordered computational maps.
- A computational map is a basic building block of the information-processing infrastructure of the nervous system.
- It is an array of neurons representing slightly differently tuned processors that operate on the sensory information-bearing signals in parallel.

Basic Feature-Mapping Models
Willshaw-von der Malsburg model (1976)
- Proposed on biological grounds to explain the problem of retinotopic mapping from the retina to the visual cortex.
- Two 2D lattices: presynaptic and postsynaptic neurons.
- Geometric proximity of presynaptic neurons is coded in the form of correlations, which are exploited in the postsynaptic lattice.
- Specialized for mappings in which the input and output have the same dimension.

Basic Feature-Mapping Models
Kohonen model (1982)
- Captures the essential features of computational maps in the brain while remaining computationally tractable.
- More general than the Willshaw-von der Malsburg model and has received more attention.
- Capable of dimensionality reduction.
- Belongs to the class of vector-coding algorithms.

Structure of Kohonen's maps (figure)

Formation Process of SOM
After initialization of the synaptic weights, there are three essential processes:
- Competition: the neuron with the largest value of the discriminant function is selected as the winner of the competition.
- Cooperation: the spatial neighbors of the winning neuron are selected.
- Synaptic adaptation: the excited neurons adjust their synaptic weights.

Competitive Process
- Input vector and synaptic weight vectors:
  x = [x1, x2, ..., xm]^T
  wj = [wj1, wj2, ..., wjm]^T,  j = 1, 2, ..., l
- Best matching (winning) neuron:
  i(x) = arg min_j ||x - wj||,  j = 1, 2, ..., l
- The winner determines the location where the topological neighborhood of excited neurons is to be centered.

Cooperative Process
- For a winning neuron, the neurons in its immediate neighborhood are excited more than those farther away.
- The topological neighborhood decays smoothly with lateral distance:
  - symmetric about the maximum point defined by dj,i = 0;
  - monotonically decreasing to zero as dj,i -> ∞.
- Neighborhood function, Gaussian case: hj,i = exp(-dj,i^2 / (2σ^2)).
- The size of the neighborhood shrinks with time.

Examples of neighborhood functions (figure)

Adaptive Process
- The synaptic weight vector is changed in relation to the input vector:
  wj(n+1) = wj(n) + η(n) hj,i(x)(n) (x - wj(n))
- The update is applied to all neurons inside the neighborhood of the winning neuron i.
- Upon repeated presentation of the training data, the weights tend to follow the distribution of the input vectors.
- Learning rate η(n): decays with time (one common choice of decay schedule is sketched below).
- The adaptation may be decomposed into two phases:
  - self-organizing (ordering) phase: topological ordering of the weight vectors;
  - convergence phase: after ordering, fine-tuning for an accurate statistical quantification of the input space.
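The slides say that η(n) decays with time and that the neighborhood width shrinks with time, but give no concrete schedule. A common choice is exponential decay; the sketch below is only an illustration, and the constants sigma0, eta0, tau1, tau2 are hypothetical values, not taken from the lecture.

#include <math.h>

/* One common pair of decay schedules for the SOM (a sketch, not the
   lecture's own choice): both the neighborhood width sigma(n) and the
   learning rate eta(n) decay exponentially with the iteration number n. */
double sigma_at(int n)                /* neighborhood width sigma(n) */
{
    const double sigma0 = 3.0;        /* initial width (hypothetical)   */
    const double tau1   = 1000.0;     /* time constant (hypothetical)   */
    return sigma0 * exp(-(double)n / tau1);
}

double eta_at(int n)                  /* learning rate eta(n) */
{
    const double eta0 = 0.1;          /* initial rate (hypothetical)    */
    const double tau2 = 1000.0;       /* time constant (hypothetical)   */
    return eta0 * exp(-(double)n / tau2);
}

During the convergence phase η(n) is usually held at a small positive value rather than allowed to reach zero, matching the remark in the summary slide that the learning rate never goes to zero.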

Summary of SOM
- A continuous input space of activation patterns, generated in accordance with a certain probability distribution.
- A topology of the network in the form of a lattice of neurons, which defines a discrete output space.
- A time-varying neighborhood function defined around the winning neuron.
- A learning rate that decreases gradually with time, but never goes to zero.

Summary of SOM (2): Learning Algorithm
1. Initialize the weights wj.
2. Present an input vector x(n).
3. Find the nearest cell: i(x) = arg min_j ||x(n) - wj(n)||.
4. Update the weights of the neighbors: wj(n+1) = wj(n) + η(n) hj,i(x)(n) [x(n) - wj(n)].
5. Reduce the neighborhood size and η.
6. Go to step 2.
A condensed C sketch of one training step follows.
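The sketch below condenses steps 2-4 of the algorithm for a fixed-size rectangular lattice. It is only an illustration under assumed names (som_train_step, ROWS, COLS, DIM and UNITS are invented for the example); it is not the implementation shown on the following slides.

#include <math.h>
#include <float.h>

#define DIM   2             /* dimension of the input vectors */
#define ROWS  10            /* lattice rows                    */
#define COLS  10            /* lattice columns                 */
#define UNITS (ROWS * COLS) /* number of map units             */

/* One presentation of an input vector x (steps 2-4 of the algorithm above). */
void som_train_step(double w[UNITS][DIM], const double x[DIM],
                    double eta, double sigma)
{
    int    i, j, winner = 0;
    double d2, dr, dc, h, best = DBL_MAX;

    /* Step 3: nearest cell, i(x) = arg min_j ||x - w_j||. */
    for (j = 0; j < UNITS; j++) {
        d2 = 0.0;
        for (i = 0; i < DIM; i++)
            d2 += (x[i] - w[j][i]) * (x[i] - w[j][i]);
        if (d2 < best) { best = d2; winner = j; }
    }

    /* Step 4: update every unit, weighted by the Gaussian neighborhood
       of its lattice distance to the winner.                            */
    for (j = 0; j < UNITS; j++) {
        dr = (double)(j / COLS - winner / COLS);
        dc = (double)(j % COLS - winner % COLS);
        h  = exp(-(dr * dr + dc * dc) / (2.0 * sigma * sigma));
        for (i = 0; i < DIM; i++)
            w[j][i] += eta * h * (x[i] - w[j][i]);
    }
}

The caller is expected to shrink eta and sigma between presentations (step 5) and to loop over the training data (step 6).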

SOFM Example: 2-D lattice trained on a 2-D distribution (figure)

Example of implementation

typedef struct {                  /* A LAYER OF A NET:                     */
        INT    Units;             /* - number of units in this layer       */
        REAL*  Output;            /* - output of ith unit                  */
        REAL** Weight;            /* - connection weights to ith unit      */
        REAL*  StepSize;          /* - size of search steps of ith unit    */
        REAL*  dScoreMean;        /* - mean score delta of ith unit        */
} LAYER;

typedef struct {                  /* A NET:                                */
        LAYER* InputLayer;        /* - input layer                         */
        LAYER* KohonenLayer;      /* - Kohonen layer                       */
        LAYER* OutputLayer;       /* - output layer                        */
        INT    Winner;            /* - last winner in Kohonen layer        */
        REAL   Alpha;             /* - learning rate for Kohonen layer     */
        REAL   Alpha_;            /* - learning rate for output layer      */
        REAL   Alpha__;           /* - learning rate for step sizes        */
        REAL   Gamma;             /* - smoothing factor for score deltas   */
        REAL   Sigma;             /* - parameter for width of neighborhood */
} NET;

void PropagateToKohonen(NET* Net)
{
  INT  i, j;
  REAL Out, Weight, Sum, MinOut;

  /* distance of each Kohonen unit's weight vector to the input */
  for (i = 0; i < Net->KohonenLayer->Units; i++) {
    Sum = 0;
    for (j = 0; j < Net->InputLayer->Units; j++) {
      Out    = Net->InputLayer->Output[j];
      Weight = Net->KohonenLayer->Weight[i][j];
      Sum   += sqr(Out - Weight);
    }
    Net->KohonenLayer->Output[i] = sqrt(Sum);
  }

  /* the unit with the smallest distance becomes the winner */
  MinOut = MAX_REAL;
  for (i = 0; i < Net->KohonenLayer->Units; i++) {
    if (Net->KohonenLayer->Output[i] < MinOut)
      MinOut = Net->KohonenLayer->Output[Net->Winner = i];
  }
}

void PropagateToOutput(NET* Net)
{
  INT i;

  /* each output unit copies its weight to the winning Kohonen unit */
  for (i = 0; i < Net->OutputLayer->Units; i++) {
    Net->OutputLayer->Output[i] = Net->OutputLayer->Weight[i][Net->Winner];
  }
}

void PropagateNet(NET* Net)
{
  PropagateToKohonen(Net);
  PropagateToOutput(Net);
}

REAL Neighborhood(NET* Net, INT i)
{
  INT  iRow, iCol, jRow, jCol;
  REAL Distance;

  /* lattice distance between unit i and the current winner */
  iRow = i / COLS;            jRow = Net->Winner / COLS;
  iCol = i % COLS;            jCol = Net->Winner % COLS;
  Distance = sqrt(sqr(iRow - jRow) + sqr(iCol - jCol));

  /* Gaussian neighborhood centered on the winner */
  return exp(-sqr(Distance) / (2 * sqr(Net->Sigma)));
}

void TrainKohonen(NET* Net, REAL* Input)
{
  INT  i, j;
  REAL Out, Weight, Lambda, StepSize;

  for (i = 0; i < Net->KohonenLayer->Units; i++) {
    Lambda = Neighborhood(Net, i);
    /* pull the weights of unit i toward the input, scaled by the neighborhood */
    for (j = 0; j < Net->InputLayer->Units; j++) {
      Out    = Input[j];
      Weight = Net->KohonenLayer->Weight[i][j];
      Net->KohonenLayer->Weight[i][j] += Net->Alpha * Lambda * (Out - Weight);
    }
    /* shrink the search step size of unit i */
    StepSize = Net->KohonenLayer->StepSize[i];
    Net->KohonenLayer->StepSize[i] += Net->Alpha__ * Lambda * -StepSize;
  }
}

void TrainOutput(NET* Net, REAL* Output)
{
  INT  i, j;
  REAL Out, Weight, Lambda;

  /* move output weights toward the target, scaled by the neighborhood
     of the corresponding Kohonen unit                                  */
  for (i = 0; i < Net->OutputLayer->Units; i++) {
    for (j = 0; j < Net->KohonenLayer->Units; j++) {
      Out    = Output[i];
      Weight = Net->OutputLayer->Weight[i][j];
      Lambda = Neighborhood(Net, j);
      Net->OutputLayer->Weight[i][j] += Net->Alpha_ * Lambda * (Out - Weight);
    }
  }
}

void TrainUnits(NET* Net, REAL* Input, REAL* Output)
{
  TrainKohonen(Net, Input);
  TrainOutput(Net, Output);
}

Radial Basis Function networks (RBF networks)

Difference between a perceptron and an RBF network (figure)

Basis function (figure)


Comparison of RBF Networks with MLP
- Advantages:
  - much faster learning.
- Disadvantages:
  - the number of neurons and their basis functions must be set in advance;
  - secondary, more complex features cannot be formed during learning.
- The MLP is the more universal neural network, but it is slower. RBF networks are used in simpler applications. In recent years, RBF networks have been combined with other models (genetic algorithms, SOM, and so on). A minimal sketch of the RBF forward pass follows below.
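Since the RBF slides are mostly figures, a minimal sketch of the forward pass may help: each hidden unit computes a Gaussian of the distance between the input and its centre, and the output is a weighted sum of these responses. All names here (rbf_output, CENTERS, DIM) are invented for the illustration and are not part of the lecture's implementation.

#include <math.h>

#define DIM     2    /* dimension of the input vector          */
#define CENTERS 5    /* number of radial basis (hidden) units  */

/* Forward pass of a single-output RBF network:
   y(x) = sum_i w_i * exp(-||x - c_i||^2 / (2 * sigma_i^2))     */
double rbf_output(const double x[DIM],
                  const double c[CENTERS][DIM],  /* centres        */
                  const double sigma[CENTERS],   /* widths         */
                  const double w[CENTERS])       /* output weights */
{
    int    i, j;
    double d2, y = 0.0;

    for (i = 0; i < CENTERS; i++) {
        /* squared distance of the input to centre i */
        d2 = 0.0;
        for (j = 0; j < DIM; j++)
            d2 += (x[j] - c[i][j]) * (x[j] - c[i][j]);
        /* Gaussian response of basis unit i, weighted into the output */
        y += w[i] * exp(-d2 / (2.0 * sigma[i] * sigma[i]));
    }
    return y;
}

In practice the centres c_i are often placed in advance, e.g. by clustering or by a SOM (one way the two models of this lecture can be combined), and only the output weights w_i are then learned, which is a major reason why RBF training is faster than MLP backpropagation.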