Semiconductors, BP&A Planning, 2003-01-29. DREAM PLAN IDEA IMPLEMENTATION

Presentation transcript:


[Figure: a threshold unit with inputs x0 … xn, weights w0 … wn, and output o]


[Figure: Teuvo Kohonen; a SOM layer mapping inputs to neurons]

Self-Organizing Maps: Origins
 Ideas first introduced by C. von der Malsburg (1973), developed and refined by T. Kohonen (1982)
 Neural network algorithm using unsupervised competitive learning
 Primarily used for organization and visualization of complex data
 Biological basis: ‘brain maps’

Self-Organizing Maps: SOM Architecture
 Lattice of neurons (‘nodes’) accepts and responds to a set of input signals
 Responses are compared; a ‘winning’ neuron is selected from the lattice
 The selected neuron is activated together with its ‘neighbourhood’ neurons
 An adaptive process changes the weights to more closely resemble the inputs
[Figure: a 2-d array of neurons; a set of input signals x1 … xn connected to every neuron j in the lattice through weighted synapses wj1 … wjn]

Self-Organizing Maps: SOM – Algorithm Overview
1. Randomly initialise all weights
2. Select input vector x = [x1, x2, x3, …, xn]
3. Compare x with the weights wj of each neuron j to determine the winner
4. Update the winner so that it becomes more like x, together with the winner’s neighbours
5. Adjust parameters: learning rate & ‘neighbourhood function’
6. Repeat from (2) until the map has converged (i.e. shows no noticeable changes in the weights) or a pre-defined number of training cycles has passed
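The six steps above can be sketched end-to-end in Python. This is a minimal NumPy sketch, not the MATLAB toolbox code used later in these slides; the grid size and the exponential decay schedules for the learning rate and neighbourhood radius are illustrative assumptions:

```python
import numpy as np

def train_som(data, rows=10, cols=10, n_cycles=1000, lr0=0.5, radius0=5.0):
    """Steps 1-6: random init, winner search, neighbourhood update, decay."""
    rng = np.random.default_rng(0)
    weights = rng.random((rows, cols, data.shape[1]))        # 1. random init
    # grid coordinates of each neuron, for the neighbourhood function
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1).astype(float)
    for t in range(n_cycles):
        x = data[rng.integers(len(data))]                    # 2. select input vector
        d = np.linalg.norm(weights - x, axis=2)              # 3. compare x with all w_j
        winner = np.unravel_index(np.argmin(d), d.shape)
        lr = lr0 * np.exp(-t / n_cycles)                     # 5. shrink learning rate
        radius = radius0 * np.exp(-t / n_cycles)             #    and neighbourhood
        grid_dist = np.linalg.norm(grid - grid[winner], axis=2)
        h = np.exp(-grid_dist**2 / (2 * radius**2))          # neighbourhood function
        weights += lr * h[..., None] * (x - weights)         # 4. update winner + neighbours
    return weights                                           # 6. fixed cycle budget

# usage: organise 200 random 2-d points from the unit square on a 10x10 map
som = train_som(np.random.default_rng(1).random((200, 2)))
```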

Initialisation: randomly initialise the weights

Finding a Winner
Find the best-matching neuron w(x), usually the neuron whose weight vector has the smallest Euclidean distance from the input vector x.
The winning node is the one that is in some sense ‘closest’ to the input vector.
‘Euclidean distance’ is the straight-line distance between the data points, if they were plotted on a (multi-dimensional) graph.
The Euclidean distance between two vectors a and b, a = (a1, a2, …, an), b = (b1, b2, …, bn), is calculated as:
d(a, b) = √[(a1 − b1)² + (a2 − b2)² + … + (an − bn)²]
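For instance, winner selection by Euclidean distance can be shown with toy numbers (the three weight vectors and the input below are made up for illustration):

```python
import numpy as np

# weight vectors of three neurons (one per row) and one input -- toy numbers
weights = np.array([[0.9, 0.1],
                    [0.2, 0.8],
                    [0.5, 0.5]])
x = np.array([0.25, 0.75])

# straight-line (Euclidean) distance from x to each weight vector
distances = np.sqrt(((weights - x) ** 2).sum(axis=1))
winner = int(np.argmin(distances))   # index of the best-matching neuron
print(winner)                        # neuron 1: its weights lie closest to x
```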

Weight Update
SOM weight update equation:
wj(t + 1) = wj(t) + α(t) Θ(j, t) [x − wj(t)]
where α(t) is the current learning rate and Θ(j, t) is the degree of neighbourhood of node j with respect to the winner w(x).
“The weights of every node are updated at each cycle by adding
current learning rate × degree of neighbourhood with respect to the winner × difference between the current weights and the input vector
to the current weights.”
[Figure: example of α(t), a learning rate that decays over the number of cycles; example of Θ(j, t), where the x-axis shows distance from the winning node and the y-axis shows ‘degree of neighbourhood’ (max. 1)]
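A single application of this update rule, with made-up numbers for the learning rate and degree of neighbourhood:

```python
import numpy as np

w_j = np.array([0.2, 0.6])   # w_j(t): current weights of node j
x = np.array([1.0, 0.0])     # current input vector
alpha = 0.1                  # alpha(t): current learning rate (made up)
theta = 0.5                  # Theta(j, t): degree of neighbourhood (made up)

# w_j(t+1) = w_j(t) + alpha(t) * Theta(j, t) * [x - w_j(t)]
w_j_new = w_j + alpha * theta * (x - w_j)
print(w_j_new)               # [0.24, 0.57]: moved 5% of the way toward x
```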

Kohonen’s Algorithm

Neighborhoods
Square and hexagonal grids with neighborhoods based on box distance (grid lines are not shown)

[Figure: one-dimensional and two-dimensional neighborhoods of neuron i]


A neighborhood function φ(i, k) indicates how closely neurons i and k in the output layer are connected to each other. Usually, a Gaussian function on the distance between the positions of the two neurons in the layer is used:
φ(i, k) = exp(−‖pos(i) − pos(k)‖² / (2σ²))
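A direct transcription of this neighbourhood function; the width σ is a free parameter, set to 1 here for illustration:

```python
import numpy as np

def phi(pos_i, pos_k, sigma=1.0):
    """Gaussian neighbourhood on the distance between neuron positions."""
    d2 = np.sum((np.asarray(pos_i, float) - np.asarray(pos_k, float)) ** 2)
    return float(np.exp(-d2 / (2 * sigma**2)))

print(phi((0, 0), (0, 0)))   # 1.0: a neuron is maximally connected to itself
print(phi((0, 0), (1, 0)))   # ~0.61 for an immediate neighbour
print(phi((0, 0), (3, 4)))   # ~0 for a neuron 5 grid units away
```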


A simple toy example: clustering with the Self-Organising Map

However, instead of updating only the winning neuron i*, all neurons within a certain neighborhood Ni*(d) of the winning neuron are updated using the Kohonen rule. Specifically, we adjust all such neurons i ∈ Ni*(d) as follows:
wi(t + 1) = wi(t) + α(t) [x − wi(t)]
Here the neighborhood Ni*(d) contains the indices of all of the neurons that lie within a radius d of the winning neuron i*.

Topologically Correct Maps
The aim of unsupervised self-organizing learning is to construct a topologically correct map of the input space.

Self Organizing Map
Determine the winner (the neuron whose weight vector has the smallest distance to the input vector)
Move the weight vector w of the winning neuron towards the input i
[Figure: before learning, w lies far from input i; after learning, w has moved towards i]

Network Features
Input nodes are connected to every neuron
The “winner” neuron is the one whose weights are most “similar” to the input
Neurons participate in a “winner-take-all” behaviour:
– The winner’s output is set to 1 and all others to 0
– Only the weights of the winner and its neighbours are adapted

[Figure: input P and weight vectors wi]

[Figure: weight components wi1, wi2 plotted against input patterns P1, P2]

[Figure: n-dimensional input layer, output layer and winner neuron]


Example I: learning a one-dimensional representation of a two-dimensional (triangular) input space

Some nice illustrations


Self Organizing Map
Impose a topological order onto the competitive neurons (e.g., a rectangular map)
Let the neighbours of the winner share the “prize” (the “postcode lottery” principle)
After learning, neurons with similar weights tend to cluster on the map

Conclusion
Advantages
– The SOM is an algorithm that projects high-dimensional data onto a two-dimensional map.
– The projection preserves the topology of the data, so that similar data items are mapped to nearby locations on the map.
– SOMs still have many practical applications in pattern recognition, speech analysis, industrial and medical diagnostics, and data mining.
Disadvantages
– A large quantity of good-quality, representative training data is required.
– There is no generally accepted measure of the ‘quality’ of a SOM, e.g. average quantization error (how well the data is classified).
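The average quantization error mentioned above can be computed directly. A sketch, assuming the usual definition (mean distance from each input to its best-matching unit); the toy map and data are made up:

```python
import numpy as np

def avg_quantization_error(data, weights):
    """Mean Euclidean distance from each input to its best-matching unit."""
    w = weights.reshape(-1, weights.shape[-1])       # flatten the neuron map
    d = np.linalg.norm(data[:, None, :] - w[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

data = np.array([[0.0, 0.0], [1.0, 1.0]])            # two toy inputs
weights = np.array([[[0.0, 0.0]], [[1.0, 0.0]]])     # a 2x1 toy map
print(avg_quantization_error(data, weights))         # (0 + 1) / 2 = 0.5
```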

Topologies (gridtop, hextop, randtop)

pos = gridtop(2,3)
plotsom(pos)

pos = gridtop(3,2)
plotsom(pos)

pos = gridtop(8,10);
plotsom(pos)

pos = hextop(2,3)

pos = hextop(3,2)
plotsom(pos)

pos = hextop(8,10);
plotsom(pos)

pos = randtop(2,3)

pos = randtop(3,2)

pos = randtop(8,10);
plotsom(pos)

Distance Functions (dist, linkdist, mandist, boxdist)

pos2 = [0 1 2; 0 1 2]
D2 = dist(pos2)


pos = gridtop(2,3)
plotsom(pos)
d = boxdist(pos)
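The box distance computed above is the Chebyshev distance between grid positions. A Python sketch of the same idea; the column layout of the 2×3 grid is an assumption mirroring what gridtop(2,3) produces:

```python
import numpy as np

# assumed positions of a 2x3 grid, one column per neuron
pos = np.array([[0, 1, 0, 1, 0, 1],
                [0, 0, 1, 1, 2, 2]])

# box distance between neurons i and k: largest coordinate difference
diff = np.abs(pos[:, :, None] - pos[:, None, :])
d = diff.max(axis=0)

print(d[0])   # box distances from the first neuron: [0 1 1 1 2 2]
```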

pos = gridtop(2,3)
plotsom(pos)
d = linkdist(pos)

The Manhattan distance between two vectors x and y is calculated as

D = sum(abs(x-y))

Thus if we have

W1 = [1 2; 3 4; 5 6]
P1 = [1; 1]

then we get for the distances

Z1 = mandist(W1,P1)
Z1 =
     1
     5
     9
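The mandist example can be reproduced in Python to check the arithmetic:

```python
import numpy as np

W1 = np.array([[1, 2], [3, 4], [5, 6]])   # one weight vector per row
P1 = np.array([1, 1])                     # input vector

# Manhattan distance per row: D = sum(abs(x - y))
Z1 = np.abs(W1 - P1).sum(axis=1)
print(Z1)                                 # [1 5 9], matching mandist(W1,P1)
```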

A One-dimensional Self-organizing Map

angles = 0:2*pi/99:2*pi;
P = [sin(angles); cos(angles)];
plot(P(1,:),P(2,:),'+r')

net = newsom([-1 1; -1 1],[30]);
net.trainParam.epochs = 100;
net = train(net,P);
plotsom(net.iw{1,1},net.layers{1}.distances)

The map can now be used to classify inputs, like [1; 0]: either neuron 1 or neuron 30 should have an output of 1, as this input vector lies at one end of the presented input space. The first pair of numbers indicates the neuron, and the single number indicates its output.

p = [1; 0];
a = sim(net, p)
a =
  (1,1)  1

x = -4:0.01:4;
P = [x; x.^2];
plot(P(1,:),P(2,:),'+r')
net = newsom([-10 10; 0 20],[10 10]);
net.trainParam.epochs = 100;
net = train(net,P);
plotsom(net.iw{1,1},net.layers{1}.distances)

Questions? Suggestions?
