2 Distributed Representations

In the examples today, neural nets learn to represent the information in a training set by distributing it across a set of simple connected neuron-like units.

Some useful properties of distributed representations:
- they are robust to noise
- they degrade gracefully
- they are content addressable
- they allow automatic completion or repair of input patterns
- they allow automatic generalisation from input patterns
- they allow automatic clustering of input patterns

In many ways, this form of information processing resembles that carried out by real nervous systems.
3 Distributed Representations

However, distributed representations are quite hard for us to understand, visualise or build by hand. To aid our understanding we have developed ideas such as:
- the partitioning of the input space
- the clustering of the input data
- the formation of feature detectors
- the characterisation of hidden unit subspaces
- etc.

To build distributed representations automatically, we resorted to learning algorithms such as backprop.
4 Problems

ANNs often depart from biological reality:
- Supervision: Real brains cannot rely on a supervisor to teach them, but are free to self-organise.
- Training vs. Testing: This distinction is an artificial one.
- Temporality: Real brains are continuously engaged with their environment, not exposed to a series of disconnected “trials”.
- Architecture: Real neurons and the networks that they form are far more complicated than the artificial neurons and simple connectivity that we have discussed so far.

Does this matter? If ANNs are just biologically inspired tools, no; but if they are to model mind or life-like systems, the answer is maybe.
5 Some ways forward…
- Learning: eliminate supervision
- Architecture: eliminate layers & feed-forward directionality
- Temporality: introduce dynamics into neural networks
6 Auto-associative Memory

Auto-associative nets are trained to reproduce their input activation across their output nodes. Once trained, the net can automatically repair noisy or damaged images that are presented to it.

[Figure: a net with input, hidden and output layers; an input pattern of “bed + bath” features yields an output that also activates “mirror + wardrobe”.]

A net trained on bedrooms and bathrooms, presented with an input including a sink and a bed, might infer the presence of a mirror and a wardrobe – a bedsit.
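As a rough illustration of the idea (not from the slides), a tiny linear auto-associator can be trained with the delta rule to reproduce two hypothetical “room” patterns; the feature names and pattern values below are invented for the example. Presenting a damaged pattern then partially restores the missing feature:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two stored "room" patterns over hypothetical binary features
# [bed, sink, mirror, wardrobe]:
patterns = np.array([[1., 0., 0., 1.],   # bedroom
                     [0., 1., 1., 0.]])  # bathroom

# Tiny linear auto-associator: output = W @ input, trained so that
# each stored pattern reproduces itself (delta rule).
W = rng.normal(scale=0.1, size=(4, 4))
for _ in range(500):
    for p in patterns:
        err = p - W @ p            # how far the output is from the input
        W += 0.1 * np.outer(err, p)

# A damaged bedroom (wardrobe feature lost) is partially repaired:
noisy = np.array([1., 0., 0., 0.])
print(np.round(W @ noisy, 1))
```

With only two stored patterns the reconstruction is a projection onto what the net has seen, so the missing wardrobe feature is raised part-way back up rather than fully restored.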
7 Auto-association
- The networks still rely on a feed-forward architecture
- Training still (typically) relies on back-propagation
- But… this form of learning presents a move away from conventional supervision, since the “desired” output is none other than the input, which can be stored internally.
8 A Vision Application

Hubel & Wiesel’s work on the cat visual system has inspired a class of ANNs that are used for sophisticated image analysis. Neurons in the retina are arranged in large arrays, and each has its own associated receptive field. This arrangement, together with “lateral inhibition”, enables the eye to efficiently perform edge detection on our visual input streams. An ANN can do the same thing.
9 Lateral inhibition

A pattern of light falls across an array of neurons that each inhibit their right-hand neighbour. Only neurons along the left-hand dark-light boundary escape inhibition. Lateral inhibition such as this is characteristic of natural retinal networks.

Now let the receptive fields of different neurons be coarse grained, with large overlaps between adjacent neurons. What advantage is gained?
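The mechanism can be sketched in a few lines (a toy illustration, assuming simple subtractive inhibition with weight w, not any particular retinal model):

```python
import numpy as np

def lateral_inhibition(light, w=1.0):
    """Each neuron inhibits its right-hand neighbour with weight w.

    Neuron i therefore receives inhibition from neuron i-1; the
    leftmost neuron has no left neighbour and escapes inhibition,
    as does any neuron whose left neighbour sits in the dark.
    """
    inhibition = np.concatenate(([0.0], w * light[:-1]))
    return np.clip(light - inhibition, 0.0, None)

# A dark-to-light step: only the neuron on the left edge of the
# bright region keeps its full activation - crude edge detection.
pattern = np.array([0., 0., 0., 1., 1., 1.])
print(lateral_inhibition(pattern))
```

Only the first illuminated neuron survives, marking the dark-light boundary.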
10 Lateral inhibition
- The networks still rely on one-way connectivity.
- But there are no input, hidden and output layers: every neuron serves both for input and for output.
- Still, the learning is not very interesting: lateral inhibition is hard-wired, up to fine tuning. While some applications support hard-wired circuitry, others require more flexible functionality.
11 Unsupervised Learning
- Auto-associative learning: a first but modest move away from supervision.
- Reinforcement learning: in many real-life situations we have no idea what the “desired” output of the neurons should be, but we can recognise desired behaviour (e.g. riding a bike). By conditioning behaviour with rewards and penalties, desirable neural net activity is reinforced.
- Hebbian learning and self-organisation: in the complete absence of supervision or conditioning, the network can still self-organise and reach an appropriate and stable solution. In Hebbian learning, only effective connections between neurons are enhanced.
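The Hebbian idea — strengthen connections between co-active units — fits in one update rule. A minimal sketch (plain Hebb, no normalisation; in practice weights are usually bounded, e.g. with Oja’s rule):

```python
import numpy as np

def hebbian_step(W, x, eta=0.1):
    """One Hebbian update: connections between co-active units are
    strengthened (dW_ij = eta * y_i * x_j, with outputs y = W @ x).
    Note plain Hebbian growth is unbounded."""
    y = W @ x
    return W + eta * np.outer(y, x)

# Repeatedly presenting one pattern strengthens exactly the
# connections that the pattern drives - no supervisor involved.
W = np.full((3, 3), 0.1)
x = np.array([1., 0., 1.])
for _ in range(5):
    W = hebbian_step(W, x)
```

Connections from the silent unit (x[1] = 0) are never reinforced, while connections among the active units grow with every presentation.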
12 Supervised training

Artificial nets can be configured to perform set tasks. The training usually involves:
- external learning rules
- an external supervisor
- off-line training

The result is typically a general-purpose net (that can be trained to do almost any sort of classification, recognition, fitting, association, and much more).

Can we design and train artificial neural nets to exhibit a more natural learning capacity?
13 Natural learning

A natural learning experience typically implies:
- internalised learning rules
- an ability to learn by one’s self, in the real world, without conscious control over neuronal plasticity

The result is typically a restricted learning capacity: we are primed to study a mother tongue, but less so to study math.
14 The life of a squirrel

Animals’ tasks routinely require them to combine reactive behaviour (such as reflexes, pattern recognition, etc.), sequential behaviour (such as set routines) and learning. For example, a squirrel foraging for food must:
- move around her territory without injuring herself
- identify dangerous or dull areas, or those rich in food
- learn to travel to and from particular areas

All of these behaviours are carried out by one nervous system – the squirrel’s brain. To some degree, different parts of the brain may specialise in different kinds of task. However, there is no sharp boundary between learned behaviours, instinctual behaviours and reflex behaviours.
15 Recurrent Neural Networks

Activation flows around the network, rather than feeding forward. Randy Beer describes one scheme for recurrent nets:
- Each neuron is connected to each other neuron and to itself.
- Connections can be asymmetric.
- Some neurons receive external input.
- Some neurons produce outputs.

RNN neurons are virtually the same as in feed-forward nets, but the activity of the network is updated at each time step. Note: inputs from other nodes as well as from one’s self must be counted.

Adapted from Randy Beer (1995) “A dynamical systems perspective on agent-environment interaction”, Artificial Intelligence 72.
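A single synchronous time step of such a fully connected net can be sketched as follows (a discrete-time simplification, assuming a tanh squashing function; Beer’s own formulation is continuous-time):

```python
import numpy as np

def step(a, W, I):
    """One synchronous update of a fully recurrent net: every unit
    sums weighted activity from all units (itself included, via the
    diagonal of W) plus any external input I, then squashes the
    result with tanh."""
    return np.tanh(W @ a + I)

rng = np.random.default_rng(1)
W = rng.normal(scale=0.5, size=(5, 5))   # full, possibly asymmetric
a = rng.normal(size=5)                   # initial activity

# Even with zero external input, activity keeps circulating:
for _ in range(20):
    a = step(a, W, np.zeros(5))
```

Because every unit feeds every other unit, the same call serves input units, output units and everything in between; there are no layers.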
16 The Net’s Dynamic Character

Consider the servile life of a feed-forward net:
- it is dormant until an input is provided
- this input is mapped onto the output nodes via a hidden layer
- weights are changed by an auxiliary learning algorithm
- once again the net is dormant, awaiting input

Contrast the active life of a recurrent net:
- even without input, spontaneous activity may reverberate around the net in any manner
- this spontaneous activity is free to flow around the net
- external input may modify these intrinsic dynamics
- if embedded, the net’s activity may affect its environment, which may alter its ‘sensory’ input, which may perturb its dynamics, and so on…
17 What does it do?Initially, in the absence of input, or in the presence of a steady-state input, a recurrent network will usually approach a stable equilibrium.Other behaviours can be obtained with dynamic inputs and induced by training. For instance, a recurrent net can be trained to oscillate spontaneously (without any input), and in some cases even to generate chaotic behaviour.One of the big challenges in this area is finding the best algorithms and network architectures to induce such diverse forms of dynamics.
18 Tuning the dynamics

Once input is included, there is a fear that the abundance of internal stimulations and excitations will result in an explosion or saturation of activity. In fact, by including a balance of excitation and inhibition, and by correctly choosing the activation functions, the activity is usually self-contained within reasonable bounds.
19 Dynamical Neural Nets

Introducing time: in all neural net discussions so far, we have assumed all inputs to be presented simultaneously, and each trial to be separate. Time was somehow deemed irrelevant.

Recurrent nets can deal with inputs that are presented sequentially, as they would almost always be in real problems. The ability of the net to reverberate and sustain activity can serve as a working memory. Such nets are called Dynamical Neural Nets (DNN or DRNN).
20 Dynamical Neural Nets

Consider an XOR with only one input node.

Input: we provide the network with a time series consisting of a pair of high and low values.

[Figure: the input value and the target output plotted against time.]

Output: the output neuron is to become active when the input sequence is 01 or 10, but remain inactive when the input sequence is 00 or 11.
21 Supervised learning for RNNs

Backprop through Time:
- Calculate errors at output nodes at time t
- Backpropagate the error to all nodes at time t-1
- Repeat for some fixed number of time steps (usually < 10)
- Apply the usual weight-update formula

Real Time Recurrent Learning:
- Calculate errors at output nodes
- Numerically seek the steepest-descent solution to minimise the error at each time step

Both methods deteriorate with history: the longer the history, the harder the training. However, with slight variations, these learning algorithms have successfully been used for a variety of applications. Examples: grammar learning & distinguishing between spoken languages.
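The backprop-through-time steps above can be sketched for a tiny single-input recurrent net (a minimal sketch, assuming tanh hidden units, a sigmoid read-out at the final step, and a cross-entropy loss, which makes the output error simply y - target; only the gradient for the recurrent weights is shown):

```python
import numpy as np

def forward(W_h, w_x, w_o, xs):
    """Unroll a tiny recurrent net over the input sequence xs,
    returning every hidden state and the final sigmoid output."""
    h = np.zeros(W_h.shape[0])
    hs = [h]
    for x in xs:
        h = np.tanh(W_h @ h + w_x * x)
        hs.append(h)
    y = 1.0 / (1.0 + np.exp(-w_o @ h))
    return hs, y

def bptt_grad(W_h, w_x, w_o, xs, target):
    """Backprop through time: the output error is pushed back one
    time step at a time, accumulating the gradient w.r.t. W_h."""
    hs, y = forward(W_h, w_x, w_o, xs)
    delta = (y - target) * w_o              # error at the final hidden state
    g = np.zeros_like(W_h)
    for t in range(len(xs), 0, -1):
        delta = delta * (1.0 - hs[t] ** 2)  # back through tanh
        g += np.outer(delta, hs[t - 1])     # W_h is shared across steps
        delta = W_h.T @ delta               # error one step earlier
    return g
```

Truncating the backward loop after a fixed number of steps, as the slide suggests, is exactly why training deteriorates with history: error signals from the distant past never reach the weights.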
22 RNNs for Time Series Prediction

Predicting the future is one of the biggest quests of humankind. What will the weather bring tomorrow? Is there global warming? When will Wall Street crash, and how will oil prices fluctuate in the coming months? Can EEG recordings be used to predict the next epilepsy attack, or ECG the next heart attack? Such daunting questions have occupied scientists for centuries, and computer scientists since time immemorial.

Another example dates back to Edmund Halley’s observation in 1676 that Jupiter’s orbit was directed slowly towards the sun. If true, Jupiter would sweep the inner planets with it into the sun (that’s us). The hypothesis threw the entire mathematical elite into a frenzy. Euler, Lagrange and Lambert made heroic attacks on the problem without solving it. No wonder: the problem involved 75 simultaneous equations resulting in some 20 billion possible choices of parameters. It was finally solved (probabilistically) by Laplace. Jupiter, it turns out, was only oscillating; the first half cycle of the oscillation will be completed at about 2012.
23 How does it work?

A time series represents a process that is discrete or has been discretised in time.

[Figure: recent values of the series x_t feed into a recurrent net whose output is the predicted next value.]

The output represents a prediction based on information about the past. If the system can be described by a dynamical process, then a recurrent neural net should be able to model it. The question is how many data points are needed (i.e. how far into the past must we go) to predict the future.
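The standard way to pose this as a learning problem (a generic sketch, not specific to any net from the slides) is to slice the series into windows of the k most recent values, each paired with the value that follows; k is exactly the “how far into the past” question:

```python
import numpy as np

def make_windows(series, k):
    """Turn a time series into supervised pairs: the k most recent
    values are the input, the next value is the prediction target."""
    X = np.array([series[i:i + k] for i in range(len(series) - k)])
    y = np.array(series[k:])
    return X, y

series = [1, 2, 4, 3, 5, 7, 6]
X, y = make_windows(series, k=3)
# X[0] is [1, 2, 4] and its target y[0] is 3: predict the next
# point of the series from the three points before it.
```

Any predictor (a recurrent net included) can then be trained on the (X, y) pairs; a recurrent net can additionally carry state between windows, reducing how large k needs to be.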
24 Next time…
- More neural networks (self-organising nets, competitive nets, & more)
- Unsupervised learning algorithms
- Bio-inspired applications

Reading: in particular, much of today was based on Randy Beer (1995) “A dynamical systems perspective on agent-environment interaction”, Artificial Intelligence 72.