
1 Unsupervised Learning & Self-Organizing Maps

2 Unsupervised Competitive Learning. In Hebbian networks, all neurons can fire at the same time. Competitive learning means that only a single neuron from each group fires at each time step. Output units compete with one another; these are winner-takes-all units ("grandmother cells").

3 Unsupervised Competitive Learning. In the Hebbian-like models, all the neurons can fire together. In competitive learning models, only one unit (or one per group) can fire at a time. Output units compete with one another → winner-takes-all units ("grandmother cells").

4 Unsupervised Competitive Learning, Cntd. Such networks cluster the data points. The number of clusters is not predefined, but it is limited by the number of output units. Applications include vector quantization (VQ), medical diagnosis, document classification, and more.

5 Simple Competitive Learning. [Network diagram: input units x_1 … x_N fully connected by weights W_11 … W_PN to output units Y_1 … Y_P.] N input units, P output neurons, P × N weights.

6 Simple Model, Cntd. All weights are positive and normalized. Inputs and outputs are binary. Only one unit fires in response to an input.
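A minimal sketch of this setup in NumPy (the language, names, and sizes are illustrative assumptions, not from the slides): P output units, N inputs, and a positive, row-normalized P × N weight matrix.

    import numpy as np

    N, P = 36, 8                       # e.g. 6 x 6 binary inputs, 8 output units
    rng = np.random.default_rng(0)

    W = rng.random((P, N))             # small positive random weights
    W /= W.sum(axis=1, keepdims=True)  # normalize each output unit's weight vector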

7 Network Activation. The unit with the highest field h_i fires; i* denotes the winner unit. Geometrically, the winner's weight vector w_i* is the one closest to the current input vector. The winning unit's weight vector is updated to be even closer to the current input vector. Possible variation: adding lateral inhibition.
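As an illustration of the winner-take-all activation, assuming the field is the dot product of each unit's weight vector with the input (standard in simple competitive learning; this continues the NumPy sketch above):

    def winner(W, x):
        h = W @ x                 # field of each output unit: h_i = w_i . x
        return int(np.argmax(h))  # i*, the unit with the highest field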

8 Learning. Starting with small random weights, at each step: 1. a new input vector is presented to the network; 2. all fields are calculated to find a winner; 3. w_i* is updated to be closer to the input.

9 Learning Rule. Standard competitive learning: Δw_{i*j} = η (x_j − w_{i*j}), applied only to the winning unit i*. It can be formulated as Hebbian: Δw_{ij} = η y_i (x_j − w_{ij}), where the output y_i is 1 for the winner and 0 otherwise.
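A sketch of this update rule, applied only to the winner (the Hebbian form multiplies the same term by the binary output y_i, which is 1 only for the winner); the learning-rate value is an arbitrary placeholder:

    def competitive_update(W, x, eta=0.1):
        i_star = winner(W, x)               # only the winning unit learns
        W[i_star] += eta * (x - W[i_star])  # move its weights toward the input
        return W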

10 Result. Each output unit moves to the center of mass of a cluster of input vectors → clustering.

11 Competitive Learning, Cntd. It is important to break the symmetry in the initial random weights. The final configuration depends on initialization: – A winning unit has a better chance of winning the next time a similar input is seen – Some outputs may never fire – This can be compensated for by updating the non-winning units with a smaller update.

12 Model: Horizontal & Vertical Lines (Rumelhart & Zipser, 1985). Problem – identify vertical or horizontal signals. Inputs are 6 x 6 arrays. Intermediate layer with 8 WTA units. Output layer with 2 WTA units. The task cannot be solved with a single layer.

13 Rumelhart & Zipser, Cntd. [Figure: network responses to horizontal (H) and vertical (V) line inputs.]

14 Geometrical Interpretation. So far, the ordering of the output units themselves was not necessarily informative. The location of the winning unit can give us information regarding similarities in the data. We are looking for an input-output mapping that conserves the topological properties of the inputs → feature mapping. Given any two spaces, it is not guaranteed that such a mapping exists!

15 Biological Motivation. In the brain, sensory inputs are represented by topologically ordered computational maps: – Tactile inputs – Visual inputs (center-surround, ocular dominance, orientation selectivity) – Acoustic inputs.

16 Biological Motivation, Cntd. Computational maps are a basic building block of sensory information processing. A computational map is an array of neurons representing slightly differently tuned processors (filters) that operate in parallel on sensory signals. These neurons transform input signals into a place-coded structure.

17 Self-Organizing (Kohonen) Maps. Competitive networks (WTA neurons). Output neurons are placed on a lattice, usually 2-dimensional. Neurons become selectively tuned to various input patterns (stimuli). The locations of the tuned (winning) neurons become ordered in such a way that a meaningful coordinate system for different input features is created → a topographic map of the input patterns is formed.

18 SOMs, Cntd. Spatial locations of the neurons in the map are indicative of statistical features present in the inputs (stimuli) → self-organization.

19 Kohonen Maps. Simple case: 2-d input and 2-d output layer. No lateral connections. The weight update is done for the winning neuron and its surrounding neighborhood.

20 Neighborhood Function. F is maximal for i* and drops to zero far from i*, for example a Gaussian in the distance between units on the output lattice. The update "pulls" the winning unit's weight vector to be closer to the input, and also drags the close neighbors of this unit along.
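The exact neighborhood function is not shown in the transcript; a common choice, assumed here, is a Gaussian in the lattice distance between unit i and the winner i*:

    def neighborhood(r_i, r_winner, sigma):
        # maximal (1.0) at the winner, decaying to zero with lattice distance
        d2 = np.sum((np.asarray(r_i) - np.asarray(r_winner)) ** 2)
        return np.exp(-d2 / (2.0 * sigma ** 2))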

21 The output layer is a sort of elastic net that wants to come as close as possible to the inputs. The output map conserves the topological relationships of the inputs. Both η and σ can be changed during learning.
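A minimal SOM training step tying these pieces together, with η and σ decayed over time; the grid layout and decay schedules are illustrative assumptions:

    def som_step(W, grid, x, t, eta0=0.5, sigma0=2.0, tau=1000.0):
        # W: (P, N) weights; grid: (P, 2) lattice coordinates of the output units
        i_star = int(np.argmax(W @ x))     # winning unit
        eta = eta0 * np.exp(-t / tau)      # learning rate shrinks over time
        sigma = sigma0 * np.exp(-t / tau)  # neighborhood radius shrinks over time
        for i in range(len(W)):
            F = neighborhood(grid[i], grid[i_star], sigma)
            W[i] += eta * F * (x - W[i])   # pull each unit toward the input, weighted by F
        return W

Repeating this step over many randomly drawn inputs is what gradually unfolds the lattice into a topographic map of the input distribution.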

22 Feature Mapping

23 Topologic Maps in the Brain. Examples of topology-conserving mappings between input and output spaces: – Retinotopic mapping between the retina and the cortex – Ocular dominance – Somatosensory mapping (the homunculus).

24 Models. Goodhill (1993) proposed a model for the development of retinotopy and ocular dominance, based on Kohonen maps: – Two retinas project to a single layer of cortical neurons – Retinal inputs were modeled by random dot patterns – Between-eye correlations were added to the inputs – The result is an ocular dominance map, as well as a retinotopic map.

25

26 Models, Cntd. Farah (1998) proposed an explanation for the spatial ordering of the homunculus using a simple SOM: – In the womb, the fetus lies with its hands close to its face and its feet close to its genitals – This would explain the order of the somatosensory areas in the homunculus.

27 Other Models. Semantic self-organizing maps to model language acquisition. Kohonen feature mapping to model the layered organization of the LGN. Combinations of unsupervised and supervised learning to model complex computations in the visual cortex.

28 Examples of Applications. Kohonen (1984) – speech recognition: a map of phonemes in the Finnish language. Optical character recognition – clustering of letters of different fonts. Angéniol et al. (1988) – the travelling salesman problem (an optimization problem). Kohonen (1990) – learning vector quantization (a pattern classification problem). Ritter & Kohonen (1989) – semantic maps.

29 Summary. Unsupervised learning is very common. Unsupervised learning requires redundancy in the stimuli. Self-organization is a basic property of the brain's computational structure. SOMs are based on: – competition (WTA units) – cooperation – synaptic adaptation. SOMs conserve topological relationships between the stimuli. Artificial SOMs have many applications in computational neuroscience.

