
1 Semiconductors, BP&A Planning, 2003-01-29: DREAM PLAN IDEA IMPLEMENTATION

2 Threshold units [figure: inputs x_0 ... x_n with weights w_0 ... w_n feeding a threshold output o]


4 Teuvo Kohonen [figure: inputs feeding a layer of neurons]

5 Self-Organizing Maps: Origins
- Ideas first introduced by C. von der Malsburg (1973), developed and refined by T. Kohonen (1982)
- Neural network algorithm using unsupervised competitive learning
- Primarily used for organization and visualization of complex data
- Biological basis: 'brain maps'

6 Self-Organizing Maps: SOM Architecture
- Lattice of neurons ('nodes') accepts and responds to a set of input signals
- Responses compared; 'winning' neuron selected from lattice
- Selected neuron activated together with 'neighbourhood' neurons
- Adaptive process changes weights to more closely resemble inputs
[figure: 2-d array of neurons; set of input signals x_1 ... x_n connected to all neurons in the lattice through the weighted synapses w_j1 ... w_jn of neuron j]

7 Self-Organizing Maps: SOM Algorithm Overview
1. Randomly initialise all weights
2. Select input vector x = [x_1, x_2, x_3, ..., x_n]
3. Compare x with the weights w_j of each neuron j to determine the winner
4. Update the winner so that it becomes more like x, together with the winner's neighbours
5. Adjust parameters: learning rate and 'neighbourhood function'
6. Repeat from (2) until the map has converged (i.e. no noticeable changes in the weights) or a pre-defined number of training cycles has passed
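The six steps above can be sketched as a short, self-contained training loop. This is a pure-Python sketch (the deck's own listings are MATLAB); the function name train_som, the linear decay schedules, and the Gaussian neighbourhood over neuron indices are illustrative assumptions, not the deck's code:

```python
import math
import random

def train_som(data, n_neurons, n_cycles, seed=0):
    """Minimal 1-D SOM; returns the list of weight vectors (illustrative sketch)."""
    rng = random.Random(seed)
    dim = len(data[0])
    # Step 1: randomly initialise all weights.
    w = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_neurons)]
    for t in range(n_cycles):
        # Step 5: decay the learning rate and neighbourhood radius each cycle
        # (linear decay is an assumption; any decreasing schedule works).
        alpha = 0.5 * (1 - t / n_cycles)
        radius = max(1.0, n_neurons / 2 * (1 - t / n_cycles))
        # Step 2: select an input vector.
        x = rng.choice(data)
        # Step 3: find the winner (smallest Euclidean distance to x).
        win = min(range(n_neurons),
                  key=lambda j: sum((xi - wi) ** 2 for xi, wi in zip(x, w[j])))
        # Step 4: move the winner and its neighbours towards x.
        for j in range(n_neurons):
            h = math.exp(-((j - win) ** 2) / (2 * radius ** 2))
            w[j] = [wi + alpha * h * (xi - wi) for xi, wi in zip(x, w[j])]
    return w
```

Each update is a convex step from the current weight towards the input, so the weights stay inside the region spanned by the initial weights and the data.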

8 Initialisation: randomly initialise the weights

9 Finding a Winner
Find the best-matching neuron w(x), usually the neuron whose weight vector has the smallest Euclidean distance from the input vector x. The winning node is the one that is in some sense 'closest' to the input vector. 'Euclidean distance' is the straight-line distance between the data points, as if they were plotted on a (multi-dimensional) graph. The Euclidean distance between two vectors a = (a_1, a_2, ..., a_n) and b = (b_1, b_2, ..., b_n) is calculated as:
d(a, b) = sqrt( (a_1 - b_1)^2 + (a_2 - b_2)^2 + ... + (a_n - b_n)^2 )
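The distance and winner selection above fit in a few lines. A pure-Python sketch (function names euclidean and find_winner are illustrative):

```python
import math

def euclidean(a, b):
    """Straight-line distance between vectors a and b."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def find_winner(weights, x):
    """Index of the neuron whose weight vector is closest to x."""
    return min(range(len(weights)), key=lambda j: euclidean(weights[j], x))
```

For example, with weight vectors (0,0), (1,0) and (0,1), the input (0.9, 0.1) is matched by the second neuron.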

10 Weight Update
SOM weight update equation:
w_j(t+1) = w_j(t) + α(t) η(j,t) [x - w_j(t)]
"The weights of every node are updated at each cycle by adding (current learning rate) × (degree of neighbourhood with respect to winner) × (difference between current weights and input vector) to the current weights."
[figures: example of α(t), the learning rate, decaying over the number of training cycles; example of η(j,t), where the x-axis shows distance from the winning node and the y-axis shows 'degree of neighbourhood' (max. 1)]
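The update equation translates directly into code. A pure-Python sketch, assuming the neighbourhood function is passed in as a callable (the name update_weights is illustrative):

```python
def update_weights(weights, x, winner, alpha, neigh):
    """Apply w_j(t+1) = w_j(t) + alpha(t) * eta(j, t) * (x - w_j(t)) to every neuron j.

    alpha is the current learning rate; neigh(j, winner) is the degree of
    neighbourhood of neuron j with respect to the winner (1 at the winner).
    """
    return [[wi + alpha * neigh(j, winner) * (xi - wi)
             for xi, wi in zip(x, wj)]
            for j, wj in enumerate(weights)]
```

With alpha = 1 and a neighbourhood that is 1 at the winner and 0 elsewhere, the winner's weights are set exactly to the input and all other weights are unchanged.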

11 Kohonen's Algorithm [equations shown as figure]

12 Neighborhoods: square and hexagonal grids with neighborhoods based on box distance (grid lines are not shown)

13 [figure: neighborhood of neuron i in one-dimensional and two-dimensional lattices]


15 A neighborhood function φ(i, k) indicates how closely neurons i and k in the output layer are connected to each other. Usually, a Gaussian function of the distance between the positions of the two neurons in the layer is used:
φ(i, k) = exp( -||pos_i - pos_k||^2 / (2σ^2) )
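The Gaussian neighbourhood function above can be sketched directly in pure Python (the name gaussian_neigh is illustrative):

```python
import math

def gaussian_neigh(pos_i, pos_k, sigma):
    """phi(i, k) = exp(-||pos_i - pos_k||^2 / (2 * sigma^2))."""
    d2 = sum((a - b) ** 2 for a, b in zip(pos_i, pos_k))
    return math.exp(-d2 / (2 * sigma ** 2))
```

It equals 1 when the two neurons coincide and falls off smoothly with their distance in the layer.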


17 A simple toy example: clustering with the Self-Organising Map

18 However, instead of updating only the winning neuron i*, all neurons within a certain neighborhood N_i*(d) of the winning neuron are updated using the Kohonen rule. Specifically, we adjust all such neurons i ∈ N_i*(d) as follows:
w_i(t+1) = w_i(t) + α(t) [x - w_i(t)]
Here the neighborhood N_i*(d) contains the indices of all neurons that lie within a radius d of the winning neuron i*.

19 Topologically Correct Maps
The aim of unsupervised self-organizing learning is to construct a topologically correct map of the input space.

20 Self Organizing Map
- Determine the winner (the neuron whose weight vector has the smallest distance to the input vector)
- Move the weight vector w of the winning neuron towards the input i
[figures: the weight vector w relative to the input i, before and after learning]

21 Network Features
- Input nodes are connected to every neuron
- The 'winner' neuron is the one whose weights are most 'similar' to the input
- Neurons participate in a 'winner-take-all' behavior:
  - The winner's output is set to 1 and all others to 0
  - Only weights to the winner and its neighbors are adapted

22 [figure: input P and weight vectors w_i for neurons 1-9]

23 [figure: weight components w_i1, w_i2 of neurons 1-9 and input points P_1, P_2]

24 [figure: n-dimensional input layer, output layer, and winner neuron]


26 Example I: Learning a one-dimensional representation of a two-dimensional (triangular) input space
[figures: snapshots of the map after 0, 20, 100, 1000, 10000 and 25000 iterations]

27 Some nice illustrations [figure]

28 Some nice illustrations (continued) [figure]

29 Some nice illustrations (continued) [figure]

30 Self Organizing Map
- Impose a topological order onto the competitive neurons (e.g., a rectangular map)
- Let neighbors of the winner share the 'prize' (the 'postcode lottery' principle)
- After learning, neurons with similar weights tend to cluster on the map

31 Conclusion
Advantages:
- The SOM is an algorithm that projects high-dimensional data onto a two-dimensional map
- The projection preserves the topology of the data, so similar data items are mapped to nearby locations on the map
- SOMs still have many practical applications in pattern recognition, speech analysis, industrial and medical diagnostics, and data mining
Disadvantages:
- A large quantity of good-quality, representative training data is required
- There is no generally accepted measure of the 'quality' of a SOM, e.g. average quantization error (how well the data is classified)

32 Topologies (gridtop, hextop, randtop)
pos = gridtop(2,3)
pos =
     0     1     0     1     0     1
     0     0     1     1     2     2
plotsom(pos)
pos = gridtop(3,2)
pos =
     0     1     2     0     1     2
     0     0     0     1     1     1
plotsom(pos)

33 pos = gridtop(8,10); plotsom(pos)

34 pos = hextop(2,3)
pos =
         0    1.0000    0.5000    1.5000         0    1.0000
         0         0    0.8660    0.8660    1.7321    1.7321

35 pos = hextop(3,2)
pos =
         0    1.0000    2.0000    0.5000    1.5000    2.5000
         0         0         0    0.8660    0.8660    0.8660
plotsom(pos)

36 pos = hextop(8,10); plotsom(pos)

37 pos = randtop(2,3)
pos =
         0    0.7787    0.4390    1.0657    0.1470    0.9070
         0    0.1925    0.6476    0.9106    1.6490    1.4027

38 pos = randtop(3,2)
pos =
         0    0.7787    1.5640    0.3157    1.2720    2.0320
    0.0019    0.1944         0    0.9125    1.0014    0.7550

39 pos = randtop(8,10); plotsom(pos)
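The gridtop and hextop calls above generate neuron position coordinates: a rectangular grid, and a hexagonal layout where every other row is shifted by half a spacing and rows sit sqrt(3)/2 apart. A pure-Python sketch of the same coordinate patterns (the names gridtop_xy and hextop_xy are my own; randtop's random jittered layout is omitted):

```python
import math

def gridtop_xy(cols, rows):
    """Rectangular-grid positions, analogous to MATLAB's gridtop output columns."""
    return [(x, y) for y in range(rows) for x in range(cols)]

def hextop_xy(cols, rows):
    """Hexagonal positions: odd rows shifted by 0.5, rows sqrt(3)/2 apart."""
    return [(x + 0.5 * (y % 2), y * math.sqrt(3) / 2)
            for y in range(rows) for x in range(cols)]
```

gridtop_xy(2, 3) reproduces the six (x, y) columns shown for gridtop(2,3), and hextop_xy(2, 3) yields the 0.8660 = sqrt(3)/2 row offsets shown for hextop(2,3).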

40 Distance Functions (dist, linkdist, mandist, boxdist)
pos2 = [0 1 2; 0 1 2]
pos2 =
     0     1     2
     0     1     2
D2 = dist(pos2)
D2 =
         0    1.4142    2.8284
    1.4142         0    1.4142
    2.8284    1.4142         0
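dist returns the matrix of pairwise Euclidean distances between the position columns. A pure-Python sketch (the name dist_matrix is illustrative):

```python
import math

def dist_matrix(pos):
    """Pairwise Euclidean distances between a list of position tuples."""
    return [[math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q))) for q in pos]
            for p in pos]
```

For the points (0,0), (1,1), (2,2) this reproduces the 1.4142 (= sqrt(2)) and 2.8284 (= 2*sqrt(2)) entries shown above.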


42 pos = gridtop(2,3)
pos =
     0     1     0     1     0     1
     0     0     1     1     2     2
plotsom(pos)
d = boxdist(pos)
d =
     0     1     1     1     2     2
     1     0     1     1     2     2
     1     1     0     1     1     1
     1     1     1     0     1     1
     2     2     1     1     0     1
     2     2     1     1     1     0
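Box distance is the maximum coordinate difference (Chebyshev distance) between two neuron positions, which is why diagonal grid neighbours still have distance 1. A pure-Python sketch (the name boxdist_matrix is illustrative):

```python
def boxdist_matrix(pos):
    """Box (Chebyshev) distance: max coordinate difference between positions."""
    return [[max(abs(a - b) for a, b in zip(p, q)) for q in pos] for p in pos]
```

Applied to the six gridtop(2,3) positions, this reproduces the matrix shown above.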

43 pos = gridtop(2,3)
pos =
     0     1     0     1     0     1
     0     0     1     1     2     2
plotsom(pos)
d = linkdist(pos)
d =
     0     1     1     2     2     3
     1     0     2     1     3     2
     1     2     0     1     1     2
     2     1     1     0     2     1
     2     3     1     2     0     1
     3     2     2     1     1     0

44 The Manhattan distance between two vectors x and y is calculated as
D = sum(abs(x-y))
Thus if we have
W1 = [1 2; 3 4; 5 6]
W1 =
     1     2
     3     4
     5     6
and
P1 = [1; 1]
P1 =
     1
     1
then we get for the distances
Z1 = mandist(W1,P1)
Z1 =
     1
     5
     9
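The mandist computation above is easy to check by hand: for each row of W1, sum the absolute coordinate differences to P1. A pure-Python sketch (the name mandist_rows is illustrative):

```python
def mandist_rows(rows, p):
    """Manhattan distance from each weight row to vector p: sum(abs(x - y))."""
    return [sum(abs(w - v) for w, v in zip(row, p)) for row in rows]
```

For the rows (1,2), (3,4), (5,6) against (1,1) this gives |1-1|+|2-1| = 1, |3-1|+|4-1| = 5 and |5-1|+|6-1| = 9, matching Z1 above.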

45 A One-dimensional Self-organizing Map
angles = 0:2*pi/99:2*pi;
P = [sin(angles); cos(angles)];
plot(P(1,:),P(2,:),'+r')
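The two MATLAB lines above build 100 training points spaced evenly around the unit circle. The same data set in a pure-Python sketch (variable names angles and P mirror the MATLAB code):

```python
import math

# 100 angles from 0 to 2*pi inclusive, matching 0:2*pi/99:2*pi in MATLAB.
angles = [2 * math.pi * k / 99 for k in range(100)]
# Each point is (sin(a), cos(a)), so all points lie on the unit circle.
P = [(math.sin(a), math.cos(a)) for a in angles]
```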

46 net = newsom([-1 1; -1 1],[30]);
net.trainParam.epochs = 100;
net = train(net,P);
plotsom(net.iw{1,1},net.layers{1}.distances)
The map can now be used to classify inputs, like [1; 0]: either neuron 1 or neuron 30 should have an output of 1, as this input vector lies at one end of the presented input space. In the output below, the first pair of numbers indicates the neuron and the single number indicates its output.
p = [1; 0];
a = sim(net, p)
a =
   (1,1)        1

47 x = -4:0.01:4;
P = [x; x.^2];
plot(P(1,:),P(2,:),'+r')
net = newsom([-10 10; 0 20],[10 10]);
net.trainParam.epochs = 100;
net = train(net,P);
plotsom(net.iw{1,1},net.layers{1}.distances)

48 Questions? Suggestions?


