Bioinspired Computing Lecture 7 Alternative Neural Networks M. De Kamps, Netta Cohen

2 Attractor networks: Two examples
Jets and Sharks network
– Weights set by hand
– Demonstrates recall, generalisation, prototypes, graceful degradation, robustness
Kohonen networks
– Unsupervised learning: Self-Organizing Maps

5 Dynamics
o: output of a node
– act > 0: o = act
– act <= 0: o = 0
act: activity of a node
– i > 0: Δa_u = (max - a_u)*i - decay*(a_u - rest)
– i <= 0: Δa_u = (a_u - min)*i - decay*(a_u - rest)
i: input of a node
– i_u = 0.1 Σ_i w_ui * o_i + ext_u
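A minimal NumPy sketch of one step of these dynamics is given below. The activation bounds and resting level follow the next slide (max = 1.0, min = -0.2, rest = -0.1); the decay constant is an assumed value, since the slides do not state it.

```python
import numpy as np

def iac_step(a, W, ext, a_max=1.0, a_min=-0.2, rest=-0.1, decay=0.1):
    """One update of the interactive-activation dynamics above.

    a   : current activity of every unit
    W   : weight matrix, W[u, i] = weight from unit i to unit u
    ext : external input to every unit
    decay=0.1 is an assumption; the slides do not give its value.
    """
    o = np.maximum(a, 0.0)              # o = act if act > 0, else 0
    i_net = 0.1 * (W @ o) + ext         # i_u = 0.1 * sum_i w_ui * o_i + ext_u
    delta = np.where(i_net > 0,
                     (a_max - a) * i_net,   # excitation pushes a towards max
                     (a - a_min) * i_net)   # inhibition pushes a towards min
    delta -= decay * (a - rest)             # decay pulls a back towards rest
    return a + delta
```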

6 Jets and Sharks
– Units
– Weights (excitatory: +1; inhibitory: -1)
– Activation range: -0.2 to 1.0
  – Resting activation: -0.1
– Dynamics

9 Activate “ART”

10 Jets and Sharks
Name    Gang    Age  Education  Marital status  Occupation
Art     Jets    40s  J.H.       Sing.           Pusher
Al      Jets    30s  J.H.       Mar.            Burglar
Clyde   Jets    40s  J.H.       Sing.           Bookie
Mike    Jets    30s  J.H.       Sing.           Bookie
Phil    Sharks  30s  Col.       Mar.            Pusher
Don     Sharks  30s  Col.       Mar.            Burglar
Dave    Sharks  30s  H.S.       Div.            Pusher
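As an illustration, here is a rough sketch of how such a network could be wired up from the table, following the classic interactive-activation layout: one hidden "instance" unit per person, one visible unit per name and per property value, excitatory links of +1 between an instance and its name/properties, and inhibitory links of -1 between competing units in the same pool. The original diagram slides contain no text, so this layout is an assumption rather than a transcription of the lecture's figure.

```python
# Hypothetical wiring of the Jets and Sharks network from the table above.
people = {
    "Art":   ("Jets",   "40s", "J.H.", "Sing.", "Pusher"),
    "Al":    ("Jets",   "30s", "J.H.", "Mar.",  "Burglar"),
    "Clyde": ("Jets",   "40s", "J.H.", "Sing.", "Bookie"),
    "Mike":  ("Jets",   "30s", "J.H.", "Sing.", "Bookie"),
    "Phil":  ("Sharks", "30s", "Col.", "Mar.",  "Pusher"),
    "Don":   ("Sharks", "30s", "Col.", "Mar.",  "Burglar"),
    "Dave":  ("Sharks", "30s", "H.S.", "Div.",  "Pusher"),
}

weights = {}                     # (unit_a, unit_b) -> weight, symmetric

def connect(a, b, w):
    weights[(a, b)] = w
    weights[(b, a)] = w

# Excitatory links (+1): each hidden instance unit to its name and properties.
for name, props in people.items():
    instance = "instance_" + name
    connect(instance, name, +1)
    for prop in props:
        connect(instance, prop, +1)

# Inhibitory links (-1): competition within each pool of mutually exclusive units.
pools = [["instance_" + n for n in people]]          # instance units
pools.append(list(people))                           # name units
for col in range(5):                                 # gang, age, education, marital, occupation
    pools.append(sorted({props[col] for props in people.values()}))
for pool in pools:
    for a in pool:
        for b in pool:
            if a != b:
                connect(a, b, -1)
```

Driving the "Art" unit with a positive external input and iterating the update rule from slide 5 then corresponds to the "Activate ART" demonstration on slide 9.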

11 Properties
Retrieving a name from other properties
– Content Addressable Memory
Categorisation and prototype formation
– Activating "Sharks" will activate the person units of Shark members
– Phil is the quintessential Shark: 30s, Pusher (wins out in the end!)

12 Activate “Shark”

13 Properties
– Can activate "20s" and "Pusher" and find the persons who match best
– Robust
  – Graceful degradation
  – Noise
– Weights set by hand

14 Last time
– Biologically inspired associative memories
– Moving away from the bio-realistic model
– Unsupervised learning
– Working examples and applications
– Pros, cons & open questions
Today
– Attractor neural nets: SOM (competitive) nets
– Neuroscience applications
– GasNets: robotic control
– Other neural nets

15 Spatial Codes
Natural neural nets often code similar things close together. The auditory and visual cortex provide examples.
[Figures: frequency sensitivity across auditory neural material (low to high frequency); orientation sensitivity across visual neural material (0° to 359°)]
Another example: touch receptors in the human body. "Almost every region of the body is represented by a corresponding region in both the primary motor cortex and the somatic sensory cortex" (Geschwind 1979:106). "The finger tips of humans have the highest density of receptors: about 2500 per square cm!" (Kandel and Jessell 1991:374). This representation is often dubbed the homunculus (or little man in the brain).

16 Kohonen Nets
[Figure: input nodes, fully connected to a lattice of neurons, across which the output pattern forms]
In a Kohonen net, a number of input neurons feed a single lattice of neurons. The output pattern is produced across the lattice surface.
Large volumes of data are compressed using spatial/topological relationships within the training set. Thus the lattice becomes an efficient distributed representation of the input.

17 Kohonen Nets (also known as self-organising maps, SOMs)
Important features:
– Self-organisation of a distributed representation of inputs. This is a form of unsupervised learning.
– The underlying learning principle: competition among nodes, known as "winner takes all". Only winners get to "learn"; losers decay.
– The competition is enforced by the network architecture: each node has a self-excitatory connection and inhibits all its neighbours.
– Spatial patterns are formed by imposing the learning rule throughout the local neighbourhood of the winner.

18 Training Self-Organising Maps
A simple training algorithm might look like this:
1. Randomly initialise the network input weights
2. Normalise all inputs so they are size-independent
3. Define a local neighbourhood and a learning rate
4. For each item in the training set:
   – Find the lattice node most excited by the input
   – Alter the input weights for this node and those nearby so that they more closely resemble the input vector, i.e., at each node the input weight update rule is: Δw = r(x - w)
5. Reduce the learning rate & the neighbourhood size
6. Goto 2 (another pass through the training set)
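A minimal NumPy sketch of this training loop is shown below. The grid size, the Gaussian neighbourhood function and the exponential decay schedules for the learning rate and neighbourhood radius are illustrative choices, not values taken from the lecture.

```python
import numpy as np

def train_som(data, rows=10, cols=10, epochs=20, lr=0.5, radius=3.0):
    """Minimal self-organising map training loop (steps 1-6 above).

    data: array of shape (n_samples, n_features), assumed already normalised.
    """
    rng = np.random.default_rng(0)
    # 1. randomly initialise the input weights of every lattice node
    w = rng.random((rows, cols, data.shape[1]))
    # lattice coordinates, used to measure distance to the winner in the lattice
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    for _ in range(epochs):
        for x in data:                                    # 4. for each training item
            # find the lattice node most excited by the input (the winner)
            dists = np.linalg.norm(w - x, axis=-1)
            winner = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighbourhood: 1 at the winner, decaying with lattice distance
            d_grid = np.linalg.norm(grid - np.array(winner), axis=-1)
            h = np.exp(-(d_grid ** 2) / (2 * radius ** 2))
            # weight update rule: delta_w = r * (x - w), scaled by the neighbourhood
            w += lr * h[..., None] * (x - w)
        # 5. reduce the learning rate and the neighbourhood size after each pass
        lr *= 0.9
        radius *= 0.9
    return w
```

For example, `train_som(np.random.rand(500, 3))` organises a 10x10 lattice over random three-dimensional inputs.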

19 Training Self-Organising Maps (cont.)
Gradually the net self-organises into a map of the inputs, clustering the input data by recruiting areas of the net for related inputs or features in the inputs. The size of the neighbourhood roughly corresponds to the resolution of the mapped features.

20 How Does It Work?
Imagine a 2D training set with clusters of data points.
[Figure: 2D input space (horizontal/vertical axes) with blue and red clusters of points]
The nodes in the lattice are initially randomly sensitive. Gradually, they will "migrate" towards the input data. Nodes that are neighbours in the lattice will tend to become sensitive to similar inputs.
Effective resource allocation: dense parts of the input space recruit more nodes than sparse areas.
Another example: the travelling salesman problem.

21 The U-Matrix
High-dimensional clusters can only be visualised in 2D. The U-height facilitates the recognition of clusters:
– Define a distance in weight space
– For a given node n, take its weight w_n. For each neighbour m of n, take the weight w_m and calculate d(w_n, w_m)
– Add all of these distances for node n: this is the U-height of node n
Colour the node map according to its U-height.
Clusters are areas of low U-height, separated by boundaries of high U-height.
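A minimal sketch of this calculation could look like the following, operating on the weight array returned by the SOM sketch above. Euclidean distance in weight space and a four-neighbour lattice neighbourhood are illustrative choices.

```python
import numpy as np

def u_heights(w):
    """U-height of every lattice node: the summed weight-space distance
    from that node's weight vector to those of its lattice neighbours."""
    rows, cols, _ = w.shape
    u = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    u[r, c] += np.linalg.norm(w[r, c] - w[nr, nc])
    return u  # low U-height = inside a cluster, high U-height = cluster boundary
```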

22 The U-Matrix

23 How does the brain perform classification?
One area of the cortex (the inferior temporal cortex, or IT) has been linked with two important functions:
– object recognition
– object classification
These tasks seem to be shape/colour specific but independent of object size, position, relative motion or speed, brightness or texture. Indeed, category-specific impairments have been linked to IT injuries.

24 How does the brain perform classification (cont.)?
Questions:
– How do IT neurons encode objects/categories? e.g., local versus distributed representations/coding; temporal versus rate coding at the neuronal level
– Can we recruit ANNs to answer such questions?
– Can ANNs perform classification as well, given similar data?
Recently, Elizabeth Thomas and colleagues recorded the activity of IT neurons while monkeys performed an image classification task, and used a Kohonen net to analyse the data.

25 The experiment
Monkeys were trained to distinguish between a training set of pictures of trees and pictures of various other objects. The monkeys were considered trained when they reached a 95% success rate. Trained monkeys were then shown new images of trees and other objects. As they classified the objects, the activity of IT neurons in their brains was recorded. All in all, 226 neurons were recorded on various occasions and over many different images. The data collected was the mean firing rate of each neuron in response to each image.
25% of neurons responded only to one category, but 75% were not category-specific. All neurons were image-specific.
Problem: not all neurons were recorded for all images, and no image was tested across all neurons. In fact, when a table of neuronal responses for each image was created, it was more than 80% empty.
E. Thomas et al, J. Cog. Neurosci. (2001)

26 Experimental Results
Question: Given the partial data, is there sufficient information to classify images as trees or non-trees?
Answer: A 2-node Kohonen net trained on the table of neuronal responses was able to classify new images with an 84% success rate.
Question: Are categories encoded by category-specific neurons?
Answer: Delete the responses of the category-specific neurons from the table. The success rate of the Kohonen net was degraded, but only minimally. A control set with random data deletions yielded similar results.
Conclusion: Category-specific neurons are not important for categorisation!
E. Thomas et al, J. Cog. Neurosci. (2001)
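The sketch below illustrates the idea of a 2-node Kohonen net acting as a two-class classifier over a partially filled table of mean firing rates. Filling missing entries with each neuron's mean response is an assumption made here for illustration; the slide does not describe the paper's exact preprocessing.

```python
import numpy as np

def two_node_kohonen(responses, lr=0.1, epochs=50):
    """Illustrative 2-node Kohonen net for tree / non-tree classification.

    responses: (n_images, n_neurons) mean firing rates, with np.nan where a
    neuron was not recorded for that image (the table is mostly empty).
    """
    # assumption: fill missing entries with each neuron's mean recorded response
    filled = np.where(np.isnan(responses),
                      np.nanmean(responses, axis=0), responses)
    rng = np.random.default_rng(0)
    w = rng.random((2, filled.shape[1]))           # one weight vector per node
    for _ in range(epochs):
        for x in filled:
            winner = np.argmin(np.linalg.norm(w - x, axis=1))
            w[winner] += lr * (x - w[winner])      # only the winner learns
    # each image gets the label (0 or 1) of the node it excites most
    return np.argmin(np.linalg.norm(filled[:, None, :] - w, axis=2), axis=1)
```

Which of the two labels corresponds to "tree" has to be read off afterwards, e.g. from a few images whose category is known.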

27 Experimental Results (cont.)
Conclusions:
– The IT employs a distributed representation to encode categories of different images.
– The redundancy in this encoding allows for graceful degradation, so that even with 80% of the data missing and many neurons deleted, sufficient information is present for classification purposes.
– The fact that only rate information was used suggests that temporal information is less important here.
Question: Which neurons are important, if any?
Answer: An examination of the weights that contribute most to the output of the Kohonen net revealed that a small subset of neurons (<50), which are not category-specific yet respond with different intensities to different categories, is crucial for correct classification.
E. Thomas et al, J. Cog. Neurosci. (2001)

28 From Biology to ANNs & Back
– Neuroscience and studies of animal behaviour have led to new ideas for artificial learning, communication, cooperation & competition.
– Simplistic cartoon models of these mechanisms can lead to new paradigms and impressive technologies.
– Dynamic neural nets are helping us understand real-time adaptation and problem-solving under changing conditions.
– Hopfield nets shed new insight on mechanisms of association and the benefits of unsupervised learning.
– Thomas' work helps unravel coding structures in the cortex.

29 Next time… Hopfield networks
Reading:
– Elizabeth Thomas et al (2001) "Encoding of categories by noncategory-specific neurons in the inferior temporal cortex", J. Cog. Neurosci. 13:
– Phil Husbands, Tom Smith, Nick Jakobi & Michael O'Shea (1998) "Better living through chemistry: Evolving GasNets for robot control", Connection Science, 10:
– Ezequiel Di Paolo (2003) "Organismically-inspired robotics: Homeostatic adaptation and natural teleology beyond the closed sensorimotor loop", in: K. Murase & T. Asakura (Eds) Dynamical Systems Approach to Embodiment and Sociality, Advanced Knowledge International, Adelaide, pp.
– Ezequiel Di Paolo (2000) "Homeostatic adaptation to inversion of the visual field and other sensorimotor disruptions", SAB2000, MIT Press.