Artificial Spiking Neural Networks


Sander M. Bohte, CWI Amsterdam, The Netherlands

Overview
- From neurones to neurons
- Artificial Spiking Neural Networks (ASNN)
- Dynamic feature binding
- Computing with spike times
- From neurons to neurones
- Computing graphical models in ASNN
- Conclusion

Of neurones and neurons
- Artificial Neural Networks: (neuro)biology -> Artificial Intelligence (AI)
- a model of how we think the brain processes information
- new data on how the brain works!
- -> Artificial Spiking Neural Networks

Real Neurons
Real cortical neurons communicate with spikes, or action potentials.

Real Neurons
The artificial sigmoidal neuron models the rate at which spikes are generated: the artificial neuron computes a function of its weighted input.
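A standard form of this computation (the slide shows it as an image):

```latex
y = \sigma\!\left(\sum_i w_i x_i\right), \qquad \sigma(a) = \frac{1}{1 + e^{-a}}
```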

Artificial Neural Networks
Artificial Neural Networks can:
- approximate any function (Multi-Layer Perceptrons)
- act as associative memory (Hopfield networks, Sparse Distributed Memory)
- learn temporal sequences (Recurrent Neural Networks)

ANNs, BUT...
For AI, neural networks are:
- not competitive (classification/clustering), or
- not suitable (structured learning/representation; the "binding" problem, e.g. grammar), and
- poorly scaling (networks of networks of networks...)
For understanding the brain:
- the neuron model is wrong
- individual spikes are important, not just the rate

Dynamic Feature Binding
"Bind" local features into coherent percepts.

Binding
How do we represent multiple objects? It is like language without grammar (i.e., no predicates)!

Binding
Conjunction coding.
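Conjunction coding dedicates one unit to every combination of features, which scales multiplicatively with the number of feature dimensions; a toy sketch with hypothetical feature sets:

```python
from itertools import product

shapes = ["circle", "square", "triangle"]
colors = ["red", "green", "blue"]

# Conjunction coding: one dedicated unit per (shape, color) combination.
conjunction_units = {pair: 0.0 for pair in product(shapes, colors)}

# Activate the unit that codes a red circle:
conjunction_units[("circle", "red")] = 1.0

# 3 shapes x 3 colors already needs 9 units; each extra feature
# dimension multiplies the number of units required.
print(len(conjunction_units))  # -> 9
```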

Binding
Synchronizing spikes?

New Data!
- Neurons belonging to the same percept tend to synchronize (Gray & Singer, Nature 1989).
- The timing of (single) spikes can be remarkably reproducible: in the fly, the same stimulus (movie) evokes the same spike to within ±1 ms.
- Spikes are rare: average brain activity is below 1 Hz, so "rates" are not energy efficient.

Computing with Spikes
Computing with precisely timed spikes is more powerful than computing with "rates", as shown by the VC dimension of spiking neuron models [Maass & Schmitt, 1999]. So: Artificial Spiking Neural Networks? [Maass, Neural Networks 10, 1997]

Artificial Spiking Neuron
The "state" (the membrane potential) is a weighted sum of impinging spikes; a spike is generated when the potential crosses a threshold, after which the potential is reset.

Artificial Spiking Neuron
Spike-Response Model: the membrane potential of neuron $j$ is $u_j(t) = \sum_i w_{ij} \sum_f \varepsilon(t - t_i^{(f)})$, where $\varepsilon(t)$ is the kernel describing how a single spike (fired by presynaptic neuron $i$ at time $t_i^{(f)}$) changes the potential. A common choice of kernel is $\varepsilon(t) = (t/\tau)\, e^{1 - t/\tau}$ for $t > 0$, and $0$ otherwise.
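A minimal Python sketch of this neuron, assuming the kernel above; the time constant, threshold, and simulation settings are illustrative values, not taken from the slides:

```python
import numpy as np

def srm_kernel(s, tau=7.0):
    """Spike-response kernel: eps(s) = (s/tau) * exp(1 - s/tau) for s > 0, else 0."""
    s = np.asarray(s, dtype=float)
    return np.where(s > 0, (s / tau) * np.exp(1 - s / tau), 0.0)

def first_spike_time(weights, input_spike_times, threshold=1.0,
                     t_max=50.0, dt=0.1):
    """Time at which the membrane potential first crosses the threshold.

    weights           : synaptic weight per input
    input_spike_times : one presynaptic spike time per input
    """
    weights = np.asarray(weights, dtype=float)
    input_spike_times = np.asarray(input_spike_times, dtype=float)
    for t in np.arange(0.0, t_max, dt):
        u = np.sum(weights * srm_kernel(t - input_spike_times))
        if u >= threshold:
            return t  # the neuron fires here; the potential is then reset
    return None  # no output spike within the simulation window

# Two inputs spiking at t = 0 with weight 0.6 each cross the threshold:
print(first_spike_time([0.6, 0.6], [0.0, 0.0]))
```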

Artificial Spiking Neural Network
A network of spiking neurons.

Error-backpropagation in ASNN
Encode "XOR" in (relative) spike times. For instance (illustrative times in the spirit of SpikeProp, not from the slide): an input bit is coded by an early spike (0 ms) versus a late spike (6 ms), and the output bit by an early versus a late output spike.

XOR in ASNN
Change the weights by gradient descent, using error-backpropagation on the spike times (Bohte et al., Neurocomputing 2002). The approach is also effective for unsupervised learning (Bohte et al., IEEE Trans. Neural Networks 2002).
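As a sketch of the method: SpikeProp minimizes the squared difference between the actual and desired output spike times by gradient descent on the weights:

```latex
E = \frac{1}{2} \sum_j \left( t_j^{\mathrm{actual}} - t_j^{\mathrm{desired}} \right)^2,
\qquad
\Delta w_{ij} = -\eta \, \frac{\partial E}{\partial w_{ij}}
```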

Computing Graphical Models
What kind of intelligent computing can we do? Recent work: computing Hidden Markov Models in noisy recurrent ASNNs (Rao, NIPS 2004; Zemel et al., NIPS 2004).

From Neurons to Neurones
- Artificial spiking neurons are a fairly accurate model of real neurons.
- Learning rules -> predictions for real neuronal behavior.
- Example: reducing the response variance of a stochastic spiking neuron yields a learning rule like the one found in biology (Bohte & Mozer, NIPS 2004).

STDP from variance reduction
Neurons fire stochastically as a function of the membrane potential, so it is a good idea to minimize response variability, by gradient descent on the response entropy.
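In generic form (the slide's exact expressions are specific to the stochastic neuron model), the response entropy over the distribution of spike responses $r$, and the weight update that descends its gradient, are:

```latex
H(R) = -\sum_r P(r) \log P(r),
\qquad
\Delta w \propto -\frac{\partial H(R)}{\partial w}
```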

STDP?
Spike-timing dependent plasticity: a synapse is strengthened when the presynaptic spike shortly precedes the postsynaptic spike, and weakened when it follows.

Variance Reduction
Simulating the STDP experiment (Bohte & Mozer, 2005) predicts how the shape of the STDP curve depends on the neuron's parameters.

STDP -> ASNN
Variance reduction replicates the experimental results. This suggests:
- learning in ASNNs based on (mutual) information maximization, or on minimum description length (MDL), which rests on similar entropy considerations
- new biological experiments

Hidden Markov Model
Bayesian inference in a simple, single-level model (Rao, NIPS 2004): let $\theta_t$ denote the hidden state of the model at time $t$.

Let $I(t)$ be the observable output at time $t$. The forward component of belief propagation computes the posterior probability of each hidden state given all inputs so far:

$$P\big(\theta_t = i \mid I(1..t)\big) \;\propto\; P\big(I(t) \mid \theta_t = i\big) \sum_j P\big(\theta_t = i \mid \theta_{t-1} = j\big)\, P\big(\theta_{t-1} = j \mid I(1..t-1)\big)$$
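A minimal Python sketch of this forward recursion (array names and shapes are my own; nothing here is neural yet, it is just the belief update):

```python
import numpy as np

def hmm_forward_step(posterior_prev, transition, likelihood):
    """One step of the HMM forward recursion (belief propagation).

    posterior_prev : P(state_{t-1} = j | inputs up to t-1), shape (N,)
    transition     : transition[i, j] = P(state_t = i | state_{t-1} = j), shape (N, N)
    likelihood     : P(input_t | state_t = i), shape (N,)
    """
    prediction = transition @ posterior_prev   # sum over previous states
    unnormalized = likelihood * prediction     # weight the prediction by the evidence
    return unnormalized / unnormalized.sum()   # normalize to a posterior

T = np.array([[0.9, 0.2],
              [0.1, 0.8]])          # columns sum to 1
p = np.array([0.5, 0.5])            # prior belief over 2 hidden states
lik = np.array([0.7, 0.1])          # P(current observation | state)
print(hmm_forward_step(p, T, lik))  # updated posterior over the states
```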

Bayesian SNN
A recurrent spiking neural network.

Bayesian SNN
Equivalence: the dynamics of the SNN match the forward recursion of the HMM.

Bayesian SNN
Current spike rate: the probability of spiking is directly proportional to the posterior probability of the neuron's preferred state, given the current input and all past inputs. This generalizes to hierarchical inference.
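In symbols, following the forward recursion above:

```latex
P\big(\text{neuron } i \text{ spikes at time } t\big) \;\propto\; P\big(\theta_t = i \mid I(1..t)\big)
```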

Conclusion
- New neural networks: Artificial Spiking Neural Networks can do what traditional ANNs can.
- We are researching how to use these networks in more interesting ways.
- Many open directions: Bayesian inference / graphical models in ASNN; MDL / information-theory-based learning; distributed coding for the binding problem in ASNN; applying agent-based reward-distribution ideas to scale learning in large neural nets.