
Slide 1: Competitive learning
Lecture for the course Connectionist Models, M. Meeter

Slide 2: Unsupervised learning
To-be-learned patterns are not wholly provided by the modeller:
- Hebbian unsupervised learning
- Competitive learning

Slide 3: The basic idea
[Figure © Rumelhart & Zipser, 1986]

Slide 4: What's it good for?
- Discovering structure in the input
- Discovering categories in the input
  - classification networks: ART (Grossberg & Carpenter), CALM (Murre & Phaf)
- Mapping inputs onto a topographic map
  - Kohonen maps (Kohonen)
  - CALM-Maps (Murre & Phaf)

Slide 5: Features of competitive learning
- Two or more layers (no auto-association)
- Competition between output nodes
- Two phases:
  - determining a winner
  - learning
- Weight normalisation

Slide 6: Two or more layers
Input must come from outside the inhibitory clusters.
[Figure © Rumelhart & Zipser, 1986]

Slide 7: Competition between output nodes
- At every presentation of an input pattern, a winner is determined
- Only the winner is activated [activation during learning is discrete: 0 or 1]
  - Hard winner-take-all: find the node i with maximum net input, max_i Σ_j w_ij a_j (sketched in code below)
  - Inhibition between nodes
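A minimal sketch of hard winner-take-all, assuming NumPy and a weight matrix W of shape (n_output, n_input); the function name and shapes are illustrative, not from the course:

```python
import numpy as np

def winner_take_all(W: np.ndarray, a: np.ndarray) -> np.ndarray:
    """W: (n_output, n_input) weights; a: input pattern.
    Returns a discrete 0/1 activation vector with a 1 only at the
    node whose net input sum_j w_ij * a_j is maximal."""
    net = W @ a                    # net input of every output node
    out = np.zeros(len(net))
    out[np.argmax(net)] = 1.0      # only the winner is activated
    return out
```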

Slide 8: Inhibition between nodes
- Example: inhibition in CALM

Slide 9: Two phases
1. One node wins the competition
2. That node learns; the others do not
- Nodes start off with random weights
- No 'correct' output is connected with the inputs: unsupervised learning

Slide 10: Weight normalisation
- The weights of the winner node i are changed: Δw_ij = g · a_j
- Weights add up to a constant sum, Σ_j w_ij = 1
  - rule of Rumelhart & Zipser: Δw_ij = g · a_j / n_k - g · w_ij, where n_k is the number of active input lines (sketched in code below)
- ...or have a constant length: Σ_j (w_ij)² = 1
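A hedged reading of the Rumelhart & Zipser rule in code (the function name and g = 0.1 are my choices). With binary inputs the rule adds g·a_j/n_k (g in total) and subtracts g·w_ij (also g in total), so each row of W keeps summing to 1:

```python
import numpy as np

def rz_update(W, a, g=0.1):
    """Apply dw_ij = g*a_j/n_k - g*w_ij to the winner's weights only.
    If a row of W sums to 1, it still sums to 1 afterwards."""
    winner = np.argmax(W @ a)                  # phase 1: find the winner
    n_k = a.sum()                              # number of active inputs
    W[winner] += g * a / n_k - g * W[winner]   # phase 2: learning
    return W
```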

Slide 11: Geometric interpretation
- Both weights and input patterns can be seen as vectors in a hyperspace
- Euclidean normalisation [Σ_j (w_ij)² = 1]
  - all weight vectors lie on a sphere in a space of n dimensions (n = number of inputs)
  - the node whose weight vector is closest to the input vector is the winner (checked numerically below)
- Linear normalisation [Σ_j w_ij = 1]
  - all weight vectors lie on a plane
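The two winner criteria coincide on the unit sphere, since |a - w|² = |a|² + |w|² - 2·(w·a) and |w|² = 1 for every node: minimising the distance to the input is the same as maximising the net input. A quick numerical check (illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 3))
W /= np.linalg.norm(W, axis=1, keepdims=True)   # Euclidean normalisation
a = rng.normal(size=3)

# winner by net input == winner by distance to the input vector
assert np.argmax(W @ a) == np.argmin(np.linalg.norm(a - W, axis=1))
```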

Slide 12: Geometric interpretation II
- Weight vectors move towards the inputs in the hyperspace: Δw_ij = g · a_j / n_k - g · w_ij
- Output nodes move towards clusters in the inputs
[Figure © Rumelhart & Zipser, 1986]

Slide 13: Stable / unstable
- Output nodes move towards clusters in the inputs
- If the input is not clustered... the output nodes will continue moving through the input space!
[Figure © Rumelhart & Zipser, 1986]

Slide 14: Statistical equivalents
- Sarle (1994):
  - classification networks = k-means clustering
  - Kohonen maps = mapping continuous dimensions onto discrete ones
  - statistical techniques are usually more efficient...
  - ...because statistical techniques use the whole data set (see the sketch below)
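The k-means correspondence can be made concrete: a competitive-learning step is an online version of the k-means centroid update, nudging the winning 'centroid' towards one pattern instead of recomputing a batch mean over the whole data set. A sketch under my own naming, not Sarle's:

```python
import numpy as np

def online_kmeans_step(centroids, x, lr=0.05):
    """Move the nearest centroid a small step towards pattern x --
    the same move as a winner-take-all weight update."""
    winner = np.argmin(np.linalg.norm(centroids - x, axis=1))
    centroids[winner] += lr * (x - centroids[winner])
    return centroids
```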

Slide 15: Importance of competitive learning
- Supervised vs. unsupervised learning
- The structure of input sets is not always given
- Natural categories

Slide 16: Competitive learning in the brain
- Lateral inhibition is a feature of most parts of the brain... does it implement winner-take-all?

Slide 17: Part II

Slide 18: Map formation in the brain
- Topographic maps are omnipresent in the sensory regions of the brain
  - retinotopic maps: neurons ordered according to the location of their visual field on the retina
  - tonotopic maps: neurons ordered according to the tone to which they are sensitive
  - maps in somatosensory cortex: neurons ordered according to the body part to which they are sensitive
  - maps in motor cortex: neurons ordered according to the muscles they control

Slide 19: Somatosensory maps
[Figure © Kandel, Schwartz & Jessell, 1991]

Slide 20: Somatosensory maps II
[Figure © Kandel, Schwartz & Jessell, 1991]

Slide 21: Speculations
- Map formation is ubiquitous (also semantic maps?)
- How do maps form?
  - gradients in neurotransmitters
  - pruning

Slide 22: Kohonen maps
- Teuvo Kohonen was the first to show how maps can develop
- Self-Organising Maps (SOM)
- Demonstration: the ordering of colours (colours are vectors in a 3-dimensional space of brightness, hue, and saturation)

Slide 23: Kohonen algorithm
- Finding the activity bubble
- Updating the weights for the nodes in the active bubble

Slide 24: Finding the activity bubble
- Lateral inhibition

Slide 25: Finding the activity bubble II
- Find the winner
- Activate all nodes in the neighbourhood of the winner (sketched in code below)
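A minimal sketch of this shortcut, with the lateral-inhibition dynamics replaced by an explicit neighbourhood radius (names and shapes are illustrative):

```python
import numpy as np

def activity_bubble(W, a, grid, radius):
    """W: (n_nodes, n_inputs) weights; grid: (n_nodes, 2) map
    coordinates of each node. Returns a boolean mask of active nodes."""
    winner = np.argmin(np.linalg.norm(a - W, axis=1))   # closest weights
    dist = np.linalg.norm(grid - grid[winner], axis=1)  # distance on map
    return dist <= radius                               # the bubble
```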

Slide 26: Updating the weights
- Move the weight vector of the winner towards the input vector
- Do the same for the active neighbourhood nodes: the weight vectors of neighbouring nodes will start resembling each other

Slide 27: Simplest implementation
- Weight vectors and input patterns all have length 1 (i.e., Σ_j (w_ij)² = 1)
- Find the node whose weight vector has minimal distance to the input vector: min_i Σ_j (a_j - w_ij)²
- Activate all nodes within the neighbourhood radius N_t
- Update the weights of the active nodes by moving them towards the input vector:
  Δw_ij = α_t · (a_j - w_ij), i.e. w_ij(t+1) = w_ij(t) + α_t · (a_j - w_ij(t))
(A full sketch of these four steps follows below.)
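Putting the four steps together for a 1-D map; a hedged sketch in which the linear decay schedules for α_t and the radius N_t are my own choice (the slides only say both depend on t):

```python
import numpy as np

def train_som(data, n_nodes=20, epochs=50, alpha0=0.5, radius0=5.0):
    rng = np.random.default_rng(0)
    W = rng.normal(size=(n_nodes, data.shape[1]))
    W /= np.linalg.norm(W, axis=1, keepdims=True)  # unit-length weights
    grid = np.arange(n_nodes)                      # 1-D map positions
    for t in range(epochs):
        alpha = alpha0 * (1 - t / epochs)              # shrinking step alpha_t
        radius = max(1.0, radius0 * (1 - t / epochs))  # shrinking radius N_t
        for a in data:
            winner = np.argmin(np.linalg.norm(a - W, axis=1))
            bubble = np.abs(grid - winner) <= radius   # active neighbourhood
            W[bubble] += alpha * (a - W[bubble])       # move towards input
    return W
```

Run on colour vectors as in the demonstration of slide 22, the rows of the trained W, read off in grid order, should reproduce the smooth ordering of colours.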

Slide 28: Results of Kohonen
[Figure © Kohonen, 1982]

Slide 29: Influence of the neighbourhood radius
A larger neighbourhood size leads to faster learning.
[Figure © Kohonen, 1982]

Slide 30: Results II: the phonological typewriter
[Figure © Kohonen, 1988]

Slide 31: Phonological typewriter II
[Figure © Kohonen, 1988]

Slide 32: Kohonen conclusions
- Darn elegant
- Pruning?
- Speech recognition uses Hidden Markov Models

Slide 33: Summary
- Prime example of unsupervised learning
- Two phases:
  - the winner node is determined
  - the weights of the winner only are updated
- Very good at discovering structure:
  - discovering categories
  - mapping the input onto a topographic map
- Competitive learning is an important paradigm in connectionism
