1 The Liquid Brain - Chrisantha Fernando & Sampsa Sojakka
2 Motivations
- Only ~30,000 genes, yet ≈10^11 neurons
- Problems with classical models (attractor neural networks, Turing machines):
  - Often depend on synchronisation by a central clock
  - A particular recurrent circuit must be constructed for each task
  - Recurrent circuits are often unstable and difficult to regulate
  - Lack parallelism
  - Real organisms cannot wait for convergence to an attractor
- Wolfgang Maass invented the Liquid State Machine (a model of the cortical microcircuit), in which the network is viewed as a liquid (or liquid-like dynamical system).
3 Liquid State Machine (LSM)
Maass' LSM is a spiking recurrent neural network which satisfies two properties:
- Separation property (liquid)
- Approximation property (readout)
LSM features:
- The only attractor is rest
- Temporal integration
- Memoryless linear readout map
- Universal computational power: can approximate any time-invariant filter with fading memory
- Requires no a-priori decision about the "neural code" by which information is represented within the circuit
Real-time computation using the liquid metaphor: although there is only one attractor state (rest), the liquid represents past inputs as an unbiased analog fading memory. The liquid must be sensitive to saliently different inputs, yet non-chaotic (separation property).
The computational capabilities of real liquids are limited by their time constants, strictly local interactions, and the homogeneity of their elements. Maass therefore develops "liquids" consisting of spiking neurons, with a large variety of mechanisms and time constants and recurrent connections on multiple spatial scales.
He demonstrates real-time universal computational power: whereas Turing machines have universal computational power for off-line computation on (static) discrete inputs, LSMs have it for real-time computing with fading memory on analog functions in continuous time.
The state-transition function of the LSM is task-independent ("found circuitry") and the readout is memoryless. The LSM only has to satisfy the separation property for a linear readout element to be able to make any discrimination, or to map any input function.
4 Maass' Definition of the Separation Property
- Separation property: the current state x(t) of the microcircuit at time t has to hold all information about preceding inputs.
- Approximation property: the readout can approximate any continuous function f that maps current liquid states x(t) to outputs v(t).
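These two properties can be illustrated with a minimal sketch. The code below is our own rate-based toy, not Maass' spiking circuit: a fixed random recurrent "liquid" provides the state x(t), and a memoryless linear readout, trained with a simple delta rule, maps x(t) to an output v(t) on a task that requires memory (recalling the input one step back). All sizes, constants, and names are illustrative choices.

```python
# Toy rate-based illustration of the LSM idea (our sketch, not Maass'
# spiking model): a fixed random recurrent "liquid" turns an input
# stream u(t) into states x(t); a memoryless linear readout maps x(t)
# to v(t). All sizes and constants are arbitrary toy values.
import math, random

random.seed(0)
N = 20                                      # liquid units
W = [[random.uniform(-0.4, 0.4) for _ in range(N)] for _ in range(N)]
w_in = [random.uniform(-1, 1) for _ in range(N)]

def liquid_states(inputs):
    """Run the fixed recurrent liquid; its only attractor is rest (x = 0)."""
    x = [0.0] * N
    states = []
    for u in inputs:
        x = [math.tanh(0.5 * sum(W[i][j] * x[j] for j in range(N)) + w_in[i] * u)
             for i in range(N)]
        states.append(list(x))
    return states

# Task needing fading memory: output the input from one step back.
inputs = [random.choice([0.0, 1.0]) for _ in range(300)]
targets = [0.0] + inputs[:-1]
states = liquid_states(inputs)

w_out = [0.0] * N                           # memoryless linear readout
for _ in range(200):                        # delta-rule training
    for x, y in zip(states, targets):
        v = sum(wi * xi for wi, xi in zip(w_out, x))
        err = y - v
        w_out = [wi + 0.05 * err * xi for wi, xi in zip(w_out, x)]

preds = [sum(wi * xi for wi, xi in zip(w_out, x)) for x in states]
acc = sum((p > 0.5) == (y > 0.5) for p, y in zip(preds, targets)) / len(targets)
```

The readout itself has no memory at all: everything it knows about past inputs must already be present in the current liquid state, which is exactly the separation property at work.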
5 We took the metaphor seriously and made the real liquid brain shown below. WHY?
6 BECAUSE: real water is computationally efficient.
- Maass et al. used a small recurrent network of leaky integrate-and-fire neurons, but it was computationally expensive to model, and I had to do quite a bit of parameter tweaking.
- Exploits real physical properties of water.
- Simple local rules, complex dynamics.
- Potential for parallel-computation applications.
- Educational aid: a demonstration of a physical system that performs computation.
- Contributes to current work on computation in non-linear media, e.g. Adamatzky's database search.
7 Pattern Recognition in a Bucket
- 8 motors, glass tray, overhead projector
- Web cam recording footage at 320x240, 5 fps
- Frames Sobel-filtered to find edges and averaged to produce 700 outputs
- 50 perceptrons in parallel, trained using the p-delta rule
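The Sobel-and-average step above can be sketched as follows. This is an illustrative toy, not the authors' code: frame size, block layout, and function names are our own choices (a 2x2 block grid standing in for the real 700-output averaging).

```python
# Illustrative sketch of the frame preprocessing: Sobel edge filtering
# followed by block-averaging into a small feature vector. Toy sizes;
# the real pipeline went from 320x240 frames to 700 outputs.

def sobel_magnitude(img):
    """Approximate gradient magnitude |Gx| + |Gy| with 3x3 Sobel kernels."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            out[y][x] = abs(gx) + abs(gy)
    return out

def block_average(img, by, bx):
    """Average the image over a by-x-bx grid of blocks -> flat feature vector."""
    h, w = len(img), len(img[0])
    sy, sx = h // by, w // bx
    feats = []
    for i in range(by):
        for j in range(bx):
            block = [img[y][x]
                     for y in range(i*sy, (i+1)*sy)
                     for x in range(j*sx, (j+1)*sx)]
            feats.append(sum(block) / len(block))
    return feats

# Toy 8x8 "frame" with a vertical edge down the middle.
frame = [[0.0]*4 + [1.0]*4 for _ in range(8)]
edges = sobel_magnitude(frame)
features = block_average(edges, 2, 2)   # 4 features instead of 700
```

The averaging step matters: it makes each feature respond to overall wave activity in a region of the tray rather than to single pixels, which tolerates camera noise.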
21 Objective: robust spatiotemporal pattern recognition in a noisy environment
- 20+20 samples of 12 kHz pulse-code-modulated wave files ("zero" and "one"), seconds in length
- Short-time Fourier transform on the active frequency range (1-3000 Hz) to create an 8x8 matrix of inputs from each sample (8 motors, 8 time slices)
- Each sample drove the motors for 4 seconds, one sample after the other
- Hopfield and Brody's experiments showed that transient synchrony of the action potentials of a group of spiking neurons can be used to signal recognition of a space-time pattern across the inputs of those neurons; we show that water can produce this
- Sound files had to be pre-processed because of the small relaxation time constant of the liquid (a time window of 3-4 s), the limited number of input motors, and the resolution of the camera
- Limited motor frequency response (settings could be changed only every 0.5 s), hence 8 time slices (sliding-window size set to the power of 2 closest to the sample length)
- A lot of noise in the inputs: variation in amplitude, intonation
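The reduction of a sound sample to an 8x8 motor-drive matrix can be sketched like this. Again an illustrative toy, not the authors' code: the naive per-slice DFT, the uniform band edges, and all names are our assumptions for clarity.

```python
# Illustrative sketch of the preprocessing: a short-time Fourier
# transform reducing a sound sample to an 8x8 matrix of band energies
# (8 motors x 8 time slices). Naive DFT and uniform bands are toy
# choices, not the authors' exact procedure.
import math

def stft_8x8(signal, sample_rate, n_bands=8, n_slices=8, f_lo=1.0, f_hi=3000.0):
    """Energy in n_bands frequency bands for each of n_slices time slices."""
    slice_len = len(signal) // n_slices
    band_width = (f_hi - f_lo) / n_bands
    matrix = []
    for s in range(n_slices):
        chunk = signal[s * slice_len:(s + 1) * slice_len]
        n = len(chunk)
        bands = [0.0] * n_bands
        for k in range(1, n // 2):          # naive DFT, bin by bin
            freq = k * sample_rate / n
            if not (f_lo <= freq < f_hi):
                continue                    # keep only the active range
            re = sum(chunk[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(chunk[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            b = min(int((freq - f_lo) / band_width), n_bands - 1)
            bands[b] += re * re + im * im
        matrix.append(bands)
    return matrix                           # matrix[time_slice][band]

# Toy input: 0.2 s of a 440 Hz tone sampled at 12 kHz.
rate = 12000
tone = [math.sin(2 * math.pi * 440 * t / rate) for t in range(2400)]
m = stft_8x8(tone, rate)
```

Each row of the matrix then drives the 8 motors for one time slice, turning the sound's time-frequency structure into a spatiotemporal wave pattern on the water.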
26 Generalisation poor (~35% error): overtraining?
- The training set was very large.
- For a linear readout, any local minimum is also the global minimum (no hidden nodes!), a result familiar from support-vector-machine research.
- We nevertheless had many sources of error:
  - All sound samples were effectively different (intonation, amplitude, timing, intensity)
  - Sounds were input into the water sequentially, so each sequence left a residue in the liquid state
  - Motor frequencies fluctuated widely (the drive shafts deteriorated with use)
  - Movement of motors / camera / tank
  - The camera frame rate varied
  - No attempt was made to remove any of this noise
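The linear readout in question was a bank of parallel perceptrons trained with the p-delta rule (slide 7). The sketch below is a simplified toy of that kind of readout: it keeps the p-delta idea of nudging only the perceptrons that voted the wrong way on a misclassified sample, but omits the margin-stabilisation term of the full rule; the 5 perceptrons, toy data, and all names are our illustrative choices.

```python
# Simplified parallel-perceptron readout with a p-delta-style update
# (margin term of the full p-delta rule omitted). Toy sizes: 5
# perceptrons instead of 50, 3-dimensional inputs instead of 700.
import random

random.seed(1)
N_PERC, DIM, ETA = 5, 3, 0.05

def predict(ws, x):
    """Majority vote of the individual perceptrons: +1 or -1."""
    votes = sum(1 if sum(w[i] * x[i] for i in range(DIM)) >= 0 else -1
                for w in ws)
    return 1 if votes >= 0 else -1

def train(ws, data, epochs=50):
    for _ in range(epochs):
        for x, y in data:
            if predict(ws, x) == y:
                continue                     # only learn from mistakes
            for w in ws:                     # correct wrongly-voting units
                s = sum(w[i] * x[i] for i in range(DIM))
                if y > 0 and s < 0:          # should have voted +1
                    for i in range(DIM):
                        w[i] += ETA * x[i]
                elif y < 0 and s >= 0:       # should have voted -1
                    for i in range(DIM):
                        w[i] -= ETA * x[i]
    return ws

# Toy linearly separable data: label = sign of the first coordinate.
data = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(200)]
data = [(x, 1 if x[0] >= 0 else -1) for x in data]
ws = [[random.uniform(-0.1, 0.1) for _ in range(DIM)] for _ in range(N_PERC)]
train(ws, data)
acc = sum(predict(ws, x) == y for x, y in data) / len(data)
```

Because each perceptron is linear in its weights, the readout's error surface has no spurious local minima, which is why poor generalisation here points to noise and overtraining rather than to a failed optimisation.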
33 Conclusion
- Properties of a natural dynamical system (water) can be harnessed to solve non-linear pattern recognition problems.
- A set of simple linear readouts suffices.
- No tweaking of parameters required.
- Further work will explore neural networks that exploit the epigenetic, self-organising physical properties of materials.
34 Acknowledgements
Inman Harvey, Phil Husbands, Ezequiel Di Paolo, Emmet Spier, Bill Bigge, Aisha Thorn, Hanneke De Jaegher, Mike Beaton, Sally Milwidsky