The Liquid Brain
Chrisantha Fernando & Sampsa Sojakka

Motivations
Only ~30,000 genes, yet ≈10¹¹ neurons.
Problems with classical models (attractor neural networks, Turing machines):
They often depend on synchronization by a central clock.
A particular recurrent circuit must be constructed for each task.
Recurrent circuits are often unstable and difficult to regulate.
They lack parallelism.
Real organisms cannot wait for convergence to an attractor.
Wolfgang Maass invented the Liquid State Machine (a model of the cortical microcircuit), in which the network is viewed as a liquid (or liquid-like dynamical system).

Liquid State Machine (LSM)
Maass’ LSM is a spiking recurrent neural network that satisfies two properties:
Separation property (liquid)
Approximation property (readout)
LSM features:
The only attractor is rest.
Temporal integration.
Memoryless linear readout map.
Universal computational power: it can approximate any time-invariant filter with fading memory.
It does not require any a priori decision about the "neural code" by which information is represented within the circuit.
Real-time computation using the liquid metaphor: although there is only one attractor state (rest), the liquid represents past inputs with an unbiased analog fading memory. The liquid must be sensitive to saliently different inputs but non-chaotic (separation property).
The computational capabilities of real liquids are limited by their time constants, strictly local interactions, and the homogeneity of the elements of the liquid. Maass therefore develops "liquids" consisting of spiking neurons, with a large variety of mechanisms and time constants and recurrent connections on multiple spatial scales, and demonstrates real-time universal computational power.
Whereas Turing machines have universal computational power for off-line computation on (static) discrete inputs, LSMs have universal computational power for real-time computing with fading memory on analog functions in continuous time. The state-transition function of the LSM is task-independent ("found circuitry") and the readout is memoryless. The LSM only has to satisfy a separation property for a linear readout element to be able to make any discrimination, or to map any input function.

Maass’ Definitions
Separation property: the current state x(t) of the microcircuit at time t has to hold all information about preceding inputs.
Approximation property: the readout can approximate any continuous function f that maps current liquid states x(t) to outputs v(t).
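To make the readout idea concrete, here is a minimal sketch of a memoryless linear readout fitted to liquid states by ridge regression. Everything in it is a stand-in: the liquid states and targets are random placeholders, the dimensions (3400 frames, 700 features) are merely borrowed from the bucket experiments, and the actual paper trains 50 parallel perceptrons with the p-delta rule rather than a regression readout.

```python
import numpy as np

# Hypothetical data: x(t) is the liquid state read off the camera at time t,
# v(t) the desired output of the filter the readout should approximate.
rng = np.random.default_rng(0)
T, state_dim = 3400, 700                      # placeholder sizes
X = rng.standard_normal((T, state_dim))       # placeholder liquid states x(t)
y = rng.integers(0, 2, size=T).astype(float)  # placeholder targets v(t)

# Memoryless linear readout: v(t) ≈ w · x(t) + b, fitted by ridge regression.
Xb = np.hstack([X, np.ones((T, 1))])          # append a bias column
lam = 1e-2                                    # ridge regulariser (assumed value)
w = np.linalg.solve(Xb.T @ Xb + lam * np.eye(state_dim + 1), Xb.T @ y)

predictions = (Xb @ w > 0.5).astype(int)      # threshold for a binary task
print("training accuracy:", (predictions == y).mean())
```

The point of the sketch is that all temporal memory lives in the liquid state x(t); the readout itself sees only the current frame.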

We took the metaphor seriously and made the real liquid brain shown below. WHY?

BECAUSE real water is computationally efficient.
Maass et al. used a small recurrent network of leaky integrate-and-fire neurons, but it was computationally expensive to model, and I had to do quite a bit of parameter tweaking.
The bucket exploits the real physical properties of water: simple local rules, complex dynamics.
Potential for parallel computation applications.
Educational aid: a demonstration of a physical representation that does computation.
Contributes to current work on computation in non-linear media, e.g. Adamatzky’s work on database search.

Pattern Recognition in a Bucket
8 motors, glass tray, overhead projector.
Web cam to record footage at 320x240, 5 fps.
Frames Sobel-filtered to find edges and averaged to produce 700 outputs.
50 perceptrons in parallel, trained using the p-delta rule.
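A rough sketch of the per-frame feature extraction, assuming SciPy and a grayscale frame. The slides only say "Sobel filtered and averaged to produce 700 outputs"; the 10x10 block averaging below (which yields 24x32 = 768 features from a 240x320 frame) is an assumed stand-in for whatever averaging scheme the authors actually used.

```python
import numpy as np
from scipy import ndimage

def frame_features(frame: np.ndarray, block: int = 10) -> np.ndarray:
    """Sobel-filter a grayscale frame and block-average the edge map.

    The block size is a guess: 10x10 blocks give 24x32 = 768 features,
    roughly the ~700 readout inputs mentioned on the slide.
    """
    gx = ndimage.sobel(frame.astype(float), axis=0)
    gy = ndimage.sobel(frame.astype(float), axis=1)
    edges = np.hypot(gx, gy)                  # edge magnitude
    h, w = edges.shape
    h, w = h - h % block, w - w % block       # crop to a multiple of the block size
    blocks = edges[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.mean(axis=(1, 3)).ravel()   # one averaged value per block

# Example on a fake 240x320 frame (the webcam footage was 320x240 at 5 fps).
fake_frame = np.random.default_rng(1).integers(0, 256, size=(240, 320))
print(frame_features(fake_frame).shape)       # (768,)
```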

Experiment 1: The XOR Problem.

2 motors; 1 minute of footage of each case, 3400 frames.
Readouts could utilize wave interference patterns.
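A small illustration of why the liquid in between helps: XOR over the two motor inputs is not linearly separable, but a linear readout succeeds once it is given nonlinear mixtures of the inputs, here faked with a single product term standing in for the wave-interference features the camera actually sees. This is an illustration of the principle, not the authors' pipeline.

```python
import numpy as np

# XOR truth table over the two motor drive signals.
motors = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
target = np.array([0, 1, 1, 0], dtype=float)

def fit_linear(features: np.ndarray) -> np.ndarray:
    """Least-squares linear readout with a bias term; returns its predictions."""
    A = np.hstack([features, np.ones((len(features), 1))])
    w, *_ = np.linalg.lstsq(A, target, rcond=None)
    return A @ w

# Raw inputs only: the best linear readout predicts 0.5 for every case.
print(np.round(fit_linear(motors), 2))

# Add a product term as a stand-in for interference features supplied by the
# water: now a linear readout reproduces XOR exactly.
interference = np.hstack([motors, motors[:, :1] * motors[:, 1:2]])
print(np.round(fit_linear(interference), 2))
```

The water's interference patterns play the role of that product term: they hand the linear readouts features in which the two classes are already separable.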

Can Anyone Guess How it Works?

Experiment 2: Speech Recognition

Objective: robust spatiotemporal pattern recognition in a noisy environment.
20+20 samples of 12 kHz pulse-code modulated wave files ("zero" and "one"), 1.5-2 seconds in length.
Short-Time Fourier Transform on the active frequency range (1-3000 Hz) to create an 8x8 matrix of inputs from each sample (8 motors, 8 time slices).
Each sample drives the motors for 4 seconds, one sample after the other.
Hopfield and Brody's experiments showed that transient synchrony of the action potentials of a group of spiking neurons can be used to signal recognition of a space-time pattern across the inputs of those neurons; we show that water can produce this.
Sound files had to be pre-processed because of the small time constant of relaxation of the liquid (time window of 3-4 seconds), the limited number of input motors, and the resolution of the camera.
Limitations in motor frequency response (the drive could be changed only every 0.5 seconds) dictated the 8 time slices (sliding window size set to the closest power of 2 of the sample length).
Lots of noise in the inputs: variation in amplitude, intonation.
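A hedged sketch of how one utterance might be reduced to the 8x8 motor-drive matrix described above: short-time Fourier transform, keep the 1-3000 Hz range, then average into 8 frequency bands by 8 time slices. The SciPy calls are standard, but the window length, the band splitting, the normalisation and the file name are assumptions; the slides specify only the 12 kHz files, the 1-3000 Hz range and the 8x8 output.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft

def motor_matrix(path: str, n_motors: int = 8, n_slices: int = 8) -> np.ndarray:
    """Reduce a 12 kHz mono speech sample to an 8x8 matrix of motor drive levels."""
    fs, samples = wavfile.read(path)                    # 12 kHz PCM wave file
    f, t, Z = stft(samples.astype(float), fs=fs, nperseg=256)  # window length assumed
    power = np.abs(Z)

    band = (f >= 1) & (f <= 3000)                       # active range from the slide
    power = power[band]

    # Average into n_motors frequency bands and n_slices time slices.
    freq_bins = np.array_split(np.arange(power.shape[0]), n_motors)
    time_bins = np.array_split(np.arange(power.shape[1]), n_slices)
    out = np.array([[power[np.ix_(fb, tb)].mean() for tb in time_bins]
                    for fb in freq_bins])
    return out / out.max()                              # normalise to [0, 1] (assumption)

# Hypothetical usage: one matrix per utterance; rows drive the 8 motors,
# columns are played out as 8 successive time slices.
# drive = motor_matrix("zero_01.wav")
```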

[Figure panels: "Zero" and "One"]

Generalisation was poor (~35% error). Overtraining? The training set was very large, and for a linear readout a local minimum is also the global minimum (no hidden nodes!) – a point familiar from support vector machine research.
We have many sources of error, though:
All sound samples are effectively different (intonation, amplitude, timing, intensity).
Sounds were input into the water sequentially, so each sequence leaves residue in the liquid state.
Motor frequencies fluctuated widely (the drive shafts deteriorated with use).
Movement of motors / camera / tank.
The camera frame rate varied.
No attempts were made to remove any of this noise.

Analysis

Conclusion
The properties of a natural dynamical system (water) can be harnessed to solve non-linear pattern recognition problems.
A set of simple linear readouts suffices.
No tweaking of parameters is required.
Further work will explore neural networks which exploit the epigenetic, self-organising physical properties of materials.

Acknowledgements
Inman Harvey, Phil Husbands, Ezequiel Di Paolo, Emmet Spier, Bill Bigge, Aisha Thorn, Hanneke De Jaegher, Mike Beaton, Sally Milwidsky.