Competitive learning. Lecture for the course Connectionist Models (Connectionistische modellen), M. Meeter

2 Unsupervised learning
- To-be-learned patterns not wholly provided by the modeller
  - Hebbian unsupervised learning
  - Competitive learning

3 The basic idea © Rumelhart & Zipser, 1986

4 What's it good for?
- Discovering structure in the input
- Discovering categories in the input
  - Classification networks: ART (Grossberg & Carpenter), CALM (Murre & Phaf)
- Mapping inputs onto a topographic map
  - Kohonen maps (Kohonen)
  - CALM-Maps (Murre & Phaf)

5 Features of competitive learning
- Two or more layers (no auto-association)
- Competition between output nodes
- Two phases:
  - determining a winner
  - learning
- Weight normalisation

6 Two or more layers
- Input must come from outside the inhibitory clusters
© Rumelhart & Zipser, 1986

7 Competition between output nodes
- At every presentation of an input pattern, a winner is determined
- Only the winner is activated [activation at learning is discrete: 0 or 1]
  - Hard winner-take-all: find the node with maximum net input, max_i Σ_j w_ij · a_j
  - Inhibition between nodes
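A minimal NumPy sketch of the hard winner-take-all step (illustrative only, not code from the lecture); the layer sizes, the weight matrix W, and the example pattern a are arbitrary assumptions.

import numpy as np

# Minimal sketch of hard winner-take-all. W[i, j] is the weight from input j
# to output node i; sizes and pattern are arbitrary assumptions.
W = np.random.rand(4, 6)                 # 4 output nodes, 6 input lines
a = np.array([1, 0, 1, 0, 0, 1.0])       # one input pattern

net_input = W @ a                        # sum_j w_ij * a_j for every node i
winner = int(np.argmax(net_input))       # the node with maximum net input wins

activation = np.zeros(W.shape[0])
activation[winner] = 1.0                 # only the winner is active (0/1 coding)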

8 Inhibition between nodes
- Example: inhibition in CALM

9 Two phases
1. One node wins the competition
2. That node learns, the others do not
- Nodes start off with random weights
- No 'correct' output is connected with the inputs: unsupervised learning

10 Weight normalisation
- Weights of the winner node i are changed: Δw_ij = g · a_j
- Weights add up to a constant sum: Σ_j w_ij = 1
  - rule of Rumelhart & Zipser: Δw_ij = g · a_j / n_k − g · w_ij
- ... or to a constant distance: Σ_j (w_ij)² = 1
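A sketch of the Rumelhart & Zipser update under the sum-to-one normalisation; the function name rz_update, the binary input pattern, and the rate g = 0.1 are assumptions made for illustration.

import numpy as np

# Rumelhart & Zipser rule: only the winner's weights move, and row sums stay 1.
def rz_update(W, a, winner, g=0.1):
    # a is assumed binary with at least one active line; n_k counts them
    n_k = a.sum()
    W[winner] += g * a / n_k - g * W[winner]
    return W

# If the winner's weights summed to 1 before the update, they still do after:
# sum_j (w_ij + g*a_j/n_k - g*w_ij) = (1 - g)*1 + g*(n_k/n_k) = 1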

11 Geometric interpretation
- Both weights and input patterns can be seen as vectors in a high-dimensional space
- Euclidean normalisation [Σ_j (w_ij)² = 1]
  - all vectors lie on a sphere in a space of n dimensions (n = number of inputs)
  - the node whose weight vector is closest to the input vector is the winner
- Linear normalisation [Σ_j w_ij = 1]
  - all weight vectors lie on a plane
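A small check of the geometric claim, assuming Euclidean normalisation of both weights and input: with unit-length vectors, the largest dot product and the smallest Euclidean distance pick the same winner, because |a − w_i|² = 2 − 2·(w_i · a).

import numpy as np

# With unit-length rows of W and a unit-length input a, maximising the dot
# product is the same as minimising the Euclidean distance.
W = np.random.rand(5, 3)
W /= np.linalg.norm(W, axis=1, keepdims=True)   # rows on the unit sphere
a = np.random.rand(3)
a /= np.linalg.norm(a)

assert np.argmax(W @ a) == np.argmin(np.linalg.norm(a - W, axis=1))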

12 Geometric interpretation II
- Weight vectors move towards the input patterns in this space: Δw_ij = g · a_j / n_k − g · w_ij
- Output nodes move towards clusters in the inputs
© Rumelhart & Zipser, 1986

13 Stable / unstable
- Output nodes move towards clusters in the inputs
- If the input is not clustered...
- ...the output nodes will continue moving through the input space!
© Rumelhart & Zipser, 1986

14 Statistical equivalents
- Sarle (1994):
  - classification = k-means clustering
  - Kohonen maps = mapping continuous dimensions onto discrete ones
- Statistical techniques are usually more efficient...
- ...because statistical techniques use the whole data set
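For comparison, a hedged sketch of one batch k-means step (assuming every centre keeps at least one assigned pattern); unlike competitive learning, which moves only one winner per presented pattern, each step here uses the whole data set at once.

import numpy as np

# One batch k-means step: assign all patterns, then recompute all centres.
def kmeans_step(X, centers):
    # assign each pattern to its nearest centre (squared Euclidean distance)
    dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    labels = dists.argmin(axis=1)
    # recompute every centre as the mean of the patterns assigned to it
    return np.array([X[labels == k].mean(axis=0) for k in range(len(centers))])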

15 Importance of competitive learning
- Supervised vs. unsupervised learning
- The structure of the input set is not always given
- Natural categories

16 Competitive learning in the brain
- Lateral inhibition is a feature of most parts of the brain...
- ...does it implement winner-take-all?

17 Part II

18 Map formation in the brain
- Topographic maps are omnipresent in the sensory regions of the brain
  - retinotopic maps: neurons ordered by the location of their visual field on the retina
  - tonotopic maps: neurons ordered by the tone to which they are sensitive
  - maps in somatosensory cortex: neurons ordered by the body part to which they are sensitive
  - maps in motor cortex: neurons ordered by the muscles they control

19 Somatosensory maps © Kandel, Schwartz & Jessell, 1991

20 Somatosensory maps II © Kandel, Schwartz & Jessell, 1991

21 Speculations
- Map formation is ubiquitous (also semantic maps?)
- How do maps form?
  - gradients in neurotransmitters
  - pruning

22 Kohonen maps
- Teuvo Kohonen was the first to show how maps can develop
- Self-Organising Maps (SOM)
- Demonstration: the ordering of colours (colours are vectors in a 3-dimensional space of brightness, hue and saturation)

23 Kohonen algorithm
- Finding the activity bubble
- Updating the weights for the nodes in the active bubble

24 Finding the activity bubble
- Lateral inhibition

25 Finding the activity bubble II
- Find the winner
- Activate all nodes in the neighbourhood of the winner
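A sketch of this shortcut on an assumed 1-D map; the helper name activity_bubble and the rectangular neighbourhood are illustrative choices, not the lecture's code.

import numpy as np

# Instead of simulating lateral inhibition, take the best-matching node and
# switch on every node within the neighbourhood radius around it.
def activity_bubble(W, a, radius):
    winner = int(np.argmin(np.linalg.norm(a - W, axis=1)))   # best-matching node
    act = np.zeros(len(W))
    act[max(0, winner - radius):winner + radius + 1] = 1.0   # the active bubble
    return act, winner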

26 Updating the weights
- Move the weight vector of the winner towards the input vector
- Do the same for the active neighbourhood nodes
- As a result, the weight vectors of neighbouring nodes will start resembling each other

27 Simplest implementation
- Weight vectors and input patterns all have length 1 (i.e., Σ_j (w_ij)² = 1)
- Find the node whose weight vector has minimal distance to the input vector: min_i Σ_j (a_j − w_ij)²
- Activate all nodes within neighbourhood radius N_t
- Update the weights of the active nodes by moving them towards the input vector:
  Δw_ij = α_t · (a_j − w_ij), i.e. w_ij(t+1) = w_ij(t) + α_t · (a_j − w_ij(t))
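Putting these steps together, a minimal sketch of this recipe for a 1-D map trained on random unit-length inputs; the sizes and the shrinking schedules for the radius N_t and the learning rate α_t are assumptions, not the lecture's values.

import numpy as np

# Minimal 1-D Kohonen map following the recipe on this slide (a sketch).
rng = np.random.default_rng(0)
n_nodes, n_inputs, n_steps = 10, 3, 2000
W = rng.random((n_nodes, n_inputs))
W /= np.linalg.norm(W, axis=1, keepdims=True)            # unit-length weight vectors

for t in range(n_steps):
    a = rng.random(n_inputs)
    a /= np.linalg.norm(a)                                # unit-length input vector
    winner = int(np.argmin(np.linalg.norm(a - W, axis=1)))  # min sum_j (a_j - w_ij)^2
    radius = int(round(3 * (1 - t / n_steps)))            # shrinking neighbourhood N_t
    alpha = 0.5 * (1 - t / n_steps)                       # shrinking learning rate alpha_t
    lo, hi = max(0, winner - radius), winner + radius + 1
    W[lo:hi] += alpha * (a - W[lo:hi])                    # w_ij(t+1) = w_ij(t) + alpha_t (a_j - w_ij(t))

After enough presentations the weight vectors of neighbouring nodes come to resemble each other, which is the topographic ordering described on the previous slide.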

28 Results of Kohonen © Kohonen, 1982

29 Influence of neighbourhood radius
- A larger neighbourhood size leads to faster learning
© Kohonen, 1982

30 Results II: the phonological typewriter © Kohonen, 1988

31 Phonological typewriter II © Kohonen, 1988

32 Kohonen conclusions
- Darn elegant
- Pruning?
- Speech recognition uses Hidden Markov Models

33 Summary
- Prime example of unsupervised learning
- Two phases:
  - the winner node is determined
  - the weights of the winner only are updated
- Very good at discovering structure:
  - discovering categories
  - mapping the input onto a topographic map
- Competitive learning is an important paradigm in connectionism