Unsupervised Learning & Self-Organizing Maps

Unsupervised Competitive Learning
In Hebbian networks, all neurons can fire at the same time.
In competitive learning, only a single neuron (or one per group) fires at each time step.
Output units compete with one another → winner-takes-all (WTA) units, also called "grandmother cells".

Unsupervised Competitive Learning, Cntd
Such networks cluster the data points.
The clusters are not predefined; their number is limited by the number of output units.
Applications include vector quantization (VQ), medical diagnosis, document classification, and more.

Simple Competitive Learning
[Network diagram: input units x_1, x_2, …, x_N feed every output neuron y_1, …, y_P through weights w_11, w_12, …, w_PN]
N input units, P output neurons, P × N weights.

Simple Model, Cntd
All weights are positive and normalized.
Inputs and outputs are binary.
Only one unit fires in response to an input.

Network Activation
Each output unit computes a field h_i = Σ_j w_ij x_j.
The unit with the highest field fires: i* = argmax_i h_i is the winner unit.
Geometrically, the winner's weight vector w_i* is the one closest to the current input vector.
The winning unit's weight vector is updated to be even closer to the current input vector.
Possible variation: adding lateral inhibition.

Learning
Starting with small random weights, at each step:
1. A new input vector x is presented to the network.
2. All fields h_i are calculated to find the winner i*.
3. w_i* is updated to be closer to the input.

Learning Rule
Standard competitive learning: Δw_i*j = η (x_j − w_i*j), applied to the winner i* only.
This can be formulated as Hebbian: Δw_ij = η y_i (x_j − w_ij), with output y_i = 1 for the winner and y_i = 0 otherwise.
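As a concrete illustration, here is a minimal NumPy sketch of this update rule (the function name, initialization, and toy data are my own, not from the slides):

```python
# Minimal competitive-learning sketch: P winner-takes-all units cluster
# the rows of X. Illustrative only; details are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def train_competitive(X, P, eta=0.1, epochs=50):
    # Initialize weights on randomly chosen data points (a common trick to
    # avoid units that never win; the slides start from small random weights).
    W = X[rng.choice(len(X), size=P, replace=False)].copy()
    for _ in range(epochs):
        for x in rng.permutation(X):
            # Winner: closest weight vector. For normalized weights this is
            # equivalent to picking the highest field h_i = sum_j w_ij x_j.
            i_star = np.argmin(((x - W) ** 2).sum(axis=1))
            W[i_star] += eta * (x - W[i_star])  # pull the winner toward the input
    return W

# Toy usage: three Gaussian blobs; each weight vector ends up near the
# center of mass of one cluster.
X = np.vstack([rng.normal(m, 0.1, size=(50, 2)) for m in ([0, 0], [1, 1], [0, 1])])
print(train_competitive(X, P=3))
```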

Result
Each output unit moves to the center of mass of a cluster of input vectors → clustering.

Competitive Learning, Cntd
It is important to break the symmetry in the initial random weights.
The final configuration depends on the initialization:
–A winning unit has a higher chance of winning the next time a similar input is seen.
–Some output units may never fire.
–This can be compensated for by updating the non-winning units with a smaller step.

Model: Horizontal & Vertical Lines (Rumelhart & Zipser, 1985)
Problem: identify vertical or horizontal signals.
Inputs are 6 × 6 arrays.
Intermediate layer with 8 WTA units.
Output layer with 2 WTA units.
Cannot work with one layer.
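A minimal sketch of the kind of input patterns used (my own reconstruction; the original paper's encoding details may differ):

```python
# Hypothetical generator for 6x6 horizontal/vertical line inputs in the
# style of Rumelhart & Zipser (1985).
import numpy as np

rng = np.random.default_rng(0)

def line_pattern(horizontal: bool) -> np.ndarray:
    """Return a flattened 6x6 binary array with a single line lit."""
    grid = np.zeros((6, 6))
    idx = rng.integers(6)  # which row or column carries the line
    if horizontal:
        grid[idx, :] = 1.0
    else:
        grid[:, idx] = 1.0
    return grid.ravel()  # 36-dimensional input vector
```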

Rumelhart & Zipser, Cntd
[Figure: example horizontal (H) and vertical (V) input patterns]

Geometrical Interpretation
So far, the ordering of the output units themselves was not necessarily informative.
The location of the winning unit can give us information regarding similarities in the data.
We are looking for an input-output mapping that conserves the topological properties of the inputs → feature mapping.
Given any two spaces, it is not guaranteed that such a mapping exists!

Biological Motivation
In the brain, sensory inputs are represented by topologically ordered computational maps:
–Tactile inputs
–Visual inputs (center-surround, ocular dominance, orientation selectivity)
–Acoustic inputs

Biological Motivation, Cntd
Computational maps are a basic building block of sensory information processing.
A computational map is an array of neurons representing slightly differently tuned processors (filters) that operate in parallel on sensory signals.
These neurons transform input signals into a place-coded structure.

Self-Organizing (Kohonen) Maps
Competitive networks (WTA neurons).
Output neurons are placed on a lattice, usually 2-dimensional.
Neurons become selectively tuned to various input patterns (stimuli).
The locations of the tuned (winning) neurons become ordered in a way that creates a meaningful coordinate system for different input features → a topographic map of input patterns is formed.

SOMs, Cntd
Spatial locations of the neurons in the map are indicative of statistical features present in the inputs (stimuli) → self-organization.

Kohonen Maps
Simple case: 2-d input and 2-d output layer.
No lateral connections.
The weight update is applied to the winning neuron and its surrounding neighborhood.

Neighborhood Function
The update for unit i is Δw_i = η F(i, i*) (x − w_i), where F is maximal for i* and drops to zero far from i*, for example a Gaussian:
F(i, i*) = exp(−|r_i − r_i*|² / 2σ²), with r_i the position of unit i on the lattice.
The update "pulls" the winning unit's weight vector closer to the input, and also drags along the close neighbors of this unit.

The output layer is a sort of elastic net that wants to come as close as possible to the inputs.
The output map conserves the topological relationships of the inputs.
Both η and σ can be changed during learning.
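A minimal sketch of this training loop (my own illustration, assuming a Gaussian neighborhood and exponentially decaying η and σ; the names and schedules are not from the slides):

```python
# Minimal 2-D Kohonen SOM sketch fitting a lattice of units to data X.
import numpy as np

rng = np.random.default_rng(0)

def train_som(X, grid=(10, 10), epochs=100, eta0=0.5, sigma0=3.0):
    M = grid[0] * grid[1]
    W = rng.random((M, X.shape[1]))  # one weight vector per lattice unit
    # Fixed positions of the units on the 2-D lattice.
    pos = np.array([(r, c) for r in range(grid[0]) for c in range(grid[1])], float)
    for t in range(epochs):
        eta = eta0 * (0.01 / eta0) ** (t / epochs)       # decaying learning rate
        sigma = sigma0 * (0.5 / sigma0) ** (t / epochs)  # shrinking neighborhood
        for x in rng.permutation(X):
            i_star = np.argmin(((x - W) ** 2).sum(axis=1))  # winning unit
            d2 = ((pos - pos[i_star]) ** 2).sum(axis=1)     # lattice distance to winner
            F = np.exp(-d2 / (2 * sigma ** 2))              # Gaussian neighborhood
            W += eta * F[:, None] * (x - W)                 # pull winner and neighbors
    return W.reshape(grid[0], grid[1], -1)
```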

Feature Mapping

Topological Maps in the Brain
Examples of topology-conserving mappings between input and output spaces:
–Retinotopic mapping between the retina and the cortex
–Ocular dominance
–Somatosensory mapping (the homunculus)

Models
Goodhill (1993) proposed a model for the development of retinotopy and ocular dominance, based on Kohonen maps:
–Two retinas project to a single layer of cortical neurons.
–Retinal inputs were modeled by random-dot patterns.
–Between-eye correlations were added to the inputs.
–The result is an ocular dominance map, as well as a retinotopic map.

Models, Cntd
Farah (1998) proposed an explanation for the spatial ordering of the homunculus using a simple SOM:
–In the womb, the fetus lies with its hands close to its face and its feet close to its genitals.
–This would explain the order of the somatosensory areas in the homunculus.

Other Models
Semantic self-organizing maps to model language acquisition.
Kohonen feature mapping to model layered organization in the LGN.
Combinations of unsupervised and supervised learning to model complex computations in the visual cortex.

Examples of Applications
Kohonen (1984) – speech recognition: a map of phonemes in the Finnish language.
Optical character recognition – clustering of letters of different fonts.
Angéniol et al. (1988) – the travelling salesman problem (an optimization problem).
Kohonen (1990) – learning vector quantization (a pattern classification problem).
Ritter & Kohonen (1989) – semantic maps.

Summary
Unsupervised learning is very common.
Unsupervised learning requires redundancy in the stimuli.
Self-organization is a basic property of the brain's computational structure.
SOMs are based on:
–competition (WTA units)
–cooperation
–synaptic adaptation
SOMs conserve topological relationships between the stimuli.
Artificial SOMs have many applications in computational neuroscience.