
1 Modular Neural Networks: SOM, ART, and CALM
Jaap Murre, University of Amsterdam / University of Maastricht
jaap@murre.com
http://www.neuromod.org

2 Modular neural networks
–Why modularity?
–Kohonen's Self-Organizing Map (SOM)
–Grossberg's Adaptive Resonance Theory (ART)
–Categorizing And Learning Module, CALM (Murre, Phaf, & Wolters, 1992)

3 Modularity: limitations on connectivity
[Figure: letter nodes L, C, A, P, B connected to the word patterns LAP, CAP, CAB]

4 Modularity
–Scalability
–Re-use in design and evolution
–Coarse steering of development; learning provides fine structure
–Improved generalization because of fewer connections
–Strong evidence from neurobiology

5 Self-Organizing Maps (SOMs)
Topological representations

6 Map formation in the brain
Topographic maps are omnipresent in the sensory regions of the brain:
–retinotopic maps: neurons ordered by the location of their visual field on the retina
–tonotopic maps: neurons ordered by the tone to which they are sensitive
–maps in somatosensory cortex: neurons ordered by the body part to which they are sensitive
–maps in motor cortex: neurons ordered by the muscles they control

7 Auditory cortex has a tonotopic map that is hidden in the transverse temporal gyrus

8 Somatosensory maps

9 Somatosensory maps II © Kandel, Schwartz & Jessell, 1991

10 Many maps show continued plasticity
Reorganization of sensory maps in primate cortex

11 Kohonen maps
–Teuvo Kohonen was the first to show how such maps may develop: Self-Organizing Maps (SOMs)
–Demonstration: the ordering of colors (colors are vectors in a 3-dimensional space of brightness, hue, and saturation)

12 Kohonen algorithm
–Finding the activity bubble
–Updating the weights for the nodes in the active bubble

13 Finding the activity bubble
Lateral inhibition

14 Finding the activity bubble II
–Find the winner
–Activate all nodes in the neighbourhood of the winner

15 Updating the weights
–Move the weight vector of the winner towards the input vector
–Do the same for the active neighbourhood nodes ⇒ the weight vectors of neighbouring nodes will start resembling each other

16 Simplest implementation
–Weight vectors and input patterns all have length 1 (i.e., Σ_j (w_ij)² = 1)
–Find the node i whose weight vector has minimal distance to the input vector: min_i Σ_j (a_j − w_ij)²
–Activate all nodes within neighbourhood radius N_t of the winner
–Update the weights of the active nodes by moving them towards the input vector:
 Δw_ij = η_t (a_j − w_ij), i.e., w_ij(t+1) = w_ij(t) + η_t (a_j − w_ij(t))
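A minimal NumPy sketch of this "simplest implementation" may help make the steps concrete. The grid size, the decay schedules for N_t and η_t, and the function name are illustrative choices, not from the slides; the unit-length normalization is skipped here because plain Euclidean distance is used for the winner search.

```python
import numpy as np

rng = np.random.default_rng(0)

GRID = 10                          # 10 x 10 map; size chosen only for the demo
DIM = 3                            # e.g. colors as 3-d vectors (slide 11)
W = rng.random((GRID, GRID, DIM))  # one weight vector per map node

def som_step(x, t, T):
    """One Kohonen update: find the winner, activate its neighbourhood,
    and move the active nodes' weight vectors towards the input x."""
    # Winner: node with minimal squared distance sum_j (a_j - w_ij)^2
    d2 = np.sum((W - x) ** 2, axis=-1)
    wi, wj = np.unravel_index(np.argmin(d2), d2.shape)

    # Neighbourhood radius N_t and learning rate eta_t shrink over time
    N_t = max(1, int((GRID // 2) * (1 - t / T)))
    eta_t = 0.5 * (1 - t / T)

    # Activate all nodes within the (square) neighbourhood of the winner
    i0, i1 = max(0, wi - N_t), min(GRID, wi + N_t + 1)
    j0, j1 = max(0, wj - N_t), min(GRID, wj + N_t + 1)

    # Move the active weights towards the input: dw_ij = eta_t * (a_j - w_ij)
    W[i0:i1, j0:j1] += eta_t * (x - W[i0:i1, j0:j1])

# Demo: self-organize on random colors, as in the color-ordering demonstration
T = 2000
for t in range(T):
    som_step(rng.random(DIM), t, T)
# Nearby nodes now hold similar color vectors: a topographic color map.
```

After training, plotting W as an image shows the topographic ordering that the color demonstration on slide 11 illustrates.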

17 Results of Kohonen © Kohonen, 1982

18 Influence of neighbourhood radius © Kohonen, 1982
Larger neighbourhood sizes lead to faster learning

19 Results II: the phonological typewriter © Kohonen, 1988
Example: humpplia (Finnish)

20 Conclusions for SOM
–Elegant
–Prime example of unsupervised learning
–Biologically relevant and plausible
–Very good at discovering structure: discovering categories and mapping the input onto a topographic map

21 Adaptive Resonance Theory (ART)
Stephen Grossberg (1976)

22 Grossberg's ART
–The Stability-Plasticity Dilemma
–How to disentangle overlapping patterns?

23 ART-1 Network

24 Phases of classification
(a) Initial pattern presented
(b) Little support from F2
(c) Reset: a second try starts
(d) A different category (F2) node gives sufficient support: resonance
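To make the search/reset cycle concrete, here is a compact sketch of an ART-1-style loop in the same spirit (binary inputs, fast learning). The class name, the vigilance value, and the simplified choice rule (ranking candidates by raw overlap rather than the full ART-1 choice function) are assumptions for illustration, not the slides' exact model.

```python
import numpy as np

class ART1Sketch:
    """Simplified ART-1-style categorizer: search, vigilance test, reset."""
    def __init__(self, rho=0.7):
        self.rho = rho            # vigilance: how strict the resonance test (d) is
        self.categories = []      # one binary prototype per committed F2 node

    def present(self, x):
        x = np.asarray(x, dtype=bool)
        # (a)-(c): try F2 nodes in order of bottom-up support; a failed
        # vigilance test plays the role of the reset that starts the next try
        order = sorted(range(len(self.categories)),
                       key=lambda j: -int(np.sum(self.categories[j] & x)))
        for j in order:
            match = np.sum(self.categories[j] & x) / max(1, np.sum(x))
            if match >= self.rho:             # (d): sufficient support, resonance
                self.categories[j] &= x       # fast learning: intersect prototype
                return j
        self.categories.append(x.copy())      # no resonance: commit a new node
        return len(self.categories) - 1

net = ART1Sketch(rho=0.8)
print(net.present([1, 1, 0, 1]))  # 0: first input commits category 0
print(net.present([1, 1, 0, 0]))  # 0: resonates; prototype shrinks to intersection
```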

25 Categorizing And Learning Module (CALM) Murre, Phaf, and Wolters (1992)

26 CALM: Categorizing And Learning Module
–The CALM module is the basic unit in multi-modular networks
–It categorizes arbitrary input activation patterns and retains this categorization over time
–CALM was developed for unsupervised learning but also works with supervision
–Motivated by psychological, biological, and practical considerations

27 Important design principles in CALM
–Modularity
–Novelty-dependent categorization and learning
–Wiring scheme inspired by the neocortical minicolumn

28 Elaboration versus activation
Novelty-dependent categorization and learning, derived from memory psychology (Graf and Mandler, 1984):
–Elaboration learning: active formation of new associations
–Activation learning: passive strengthening of pre-existing associations
In CALM, the relative novelty of a pattern determines which type of learning occurs

29 How elaboration learning is implemented in CALM
Novel pattern → much competition → high activation of the Arousal node → high activation of the External node → high learning parameter and high noise amplitude on the Representation nodes
Elaboration learning thus rests on two novelty-driven mechanisms (sketched in code below):
–Self-induced noise
–Self-induced learning
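The chain above can be sketched in a few lines. Only the parameter names d and wµE come from the presentation (see the table on slide 42); the linear coupling below, and reusing wµE for the noise amplitude, are assumptions for illustration.

```python
import random

def novelty_modulation(a_E, d=0.01, w_muE=0.05):
    """Map External-node activation a_E (high for novel patterns) onto
    a learning parameter and a noise amplitude (assumed linear forms)."""
    mu = d + w_muE * a_E     # self-induced learning: base rate plus E-node drive
    noise_amp = w_muE * a_E  # self-induced noise on the Representation nodes
    return mu, noise_amp

mu, amp = novelty_modulation(a_E=0.9)        # novel pattern: strong modulation
r_input = 0.3 + random.uniform(-amp, amp)    # noisy input to one R-node
```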

30 Self-induced noise (cf. the Boltzmann Machine)
–Non-specific activations from sub-cortical structures arrive in cortex
–An optimal level of arousal gives optimal learning performance (Yerkes-Dodson Law)
–Noise drives the search for new representations
–Noise breaks symmetry deadlocks
–Noise may lead to convergence in deeper attractors

31 Self-induced learning
–Possible role of the hippocampus and basal forebrain (cf. the modulatory system in TraceLink)
–Shift from implicit to explicit memory
–A remedy for the Stability-Plasticity Dilemma

32 Stability-Plasticity Dilemma, or the Problem of Real-Time Learning
"How can a learning system be designed to remain plastic, or adaptive, in response to significant events and yet remain stable in response to irrelevant events?" (Carpenter and Grossberg, 1988, p. 77)

33 Novelty-dependent categorization
–A novel pattern implies a search for a new representation
–The search process is driven by novelty-dependent noise

34 Novelty-dependent learning
–Novel pattern: increased learning rate
–Old pattern: base-rate learning

35 Learning rule derived from Grossberg's ART
–An extension of the Hebb rule
–Allows both increases and decreases in weight
–Only applied to excitatory connections (no sign changes allowed)
–Weights are bounded between 0 and 1
–Allows separation of complex patterns from their composing subpatterns
–In contrast to ART: the weight change is influenced by weighted neighbour activations

36 CALM Learning Rule
–w_ij is the weight from node j to node i
–Neighbour activations a_k dampen the weight change
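The equation on this slide was an image and did not survive extraction. A plausible reconstruction, assuming the standard Grossberg-style competitive form and the parameters µ, K, and L from the table on slide 42 (a sketch of the rule's shape, not a verified transcription of the published rule):

```latex
\Delta w_{ij} \;=\; \mu\, a_i \Bigl( (K - w_{ij})\, a_j \;-\; L\, w_{ij} \sum_{k \neq j} w_{ik}\, a_k \Bigr)
```

The factor (K − w_ij) bounds the weights from above, and the weighted sum over neighbour activations a_k is the dampening term mentioned above.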

37 Learning rule

38 Avoid neurobiologically implausible architectures, such as:
–Random organization of excitatory and inhibitory connections
–Learning that may change a connection's sign
–Single nodes that give off both excitatory and inhibitory connections

39 Neurons form a dichotomy (Dale's Law)
–Neurons involved in long-range connections in cortex give off excitatory connections
–Inhibitory neurons in cortex are inhibitory at all of their connections

40 CALM: Categorizing And Learning Module By Murre, Phaf, & Wolters (1992)

41 Activation rule
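The activation equation on this slide was likewise an image. As a placeholder, here is a generic Grossberg-style shunting update of the kind CALM builds on, with decay parameter k and a squashing function f keeping activations in [0, 1] (an assumed generic form, not the verbatim CALM rule):

```latex
a_i(t+1) \;=\; k\, a_i(t) \;+\; \bigl(1 - k\, a_i(t)\bigr)\, f\!\Bigl(\sum_j w_{ij}\, a_j(t)\Bigr)
```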

42 Parameters
Possible parameters for the CALM module; they do not need to be adjusted for each new architecture.

Parameter     CALM
Up weight     0.5
Down weight   -1.2
Cross weight  10.0
Flat weight   -1.0
High weight   -0.6
Low weight    0.4
AE weight     1.0
ER weight     0.25
wµE           0.05
k             0.05
K             1.0
L             1.0
d             0.01
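For convenience, the table can be carried as a plain configuration object, e.g. for the sketches earlier in the talk; the dictionary key names below are illustrative, not an official naming scheme.

```python
# Slide 42's parameter set as a Python dict (illustrative key names).
CALM_PARAMS = {
    "up": 0.5, "down": -1.2, "cross": 10.0, "flat": -1.0,
    "high": -0.6, "low": 0.4, "AE": 1.0, "ER": 0.25,
    "w_muE": 0.05, "k": 0.05, "K": 1.0, "L": 1.0, "d": 0.01,
}
```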

43 Main processes in the CALM module

44 Inhibition between nodes
Example: inhibition in CALM

