Presentation transcript:

Data and Power Efficient Intelligence with Neuromorphic Learning Machines. Emre O. Neftci. iScience, Volume 5, Pages 52-68 (July 2018). DOI: 10.1016/j.isci.2018.06.010. Copyright © 2018 The Author. Terms and Conditions

iScience 2018 5, 52-68. DOI: 10.1016/j.isci.2018.06.010. Copyright © 2018 The Author. Terms and Conditions

Figure 1. Supervised Deep Learning in Spiking Neurons. Event-driven Random Back-Propagation (eRBP) is an event-based synaptic plasticity learning rule for approximate BP in spiking neural networks. (Left) The network performing eRBP consists of feedforward layers (H1,…,HN) for prediction and feedback layers for supervised training with labels (targets) L. Full arrows indicate synaptic connections, thick full arrows indicate plastic synapses, and dashed arrows indicate synaptic plasticity modulation. In this example, the digits 7, 2, 1, 0, 4 were presented in sequence to the network after transformation into spike trains (layer D). Neurons in the network, indicated by black circles, were implemented as two-compartment spiking neurons. The first compartment follows standard Integrate-and-Fire (I&F) dynamics, whereas the second integrates top-down errors and is used to multiplicatively modulate the learning. The error is the difference between labels (L) and predictions (P) and is implemented using a pair of neurons coding for positive error (blue) and negative error (red). Each hidden neuron receives inputs from a random combination of the pair of error neurons to implement random BP. Output neurons receive inputs from the pair of error neurons in a one-to-one fashion. (Middle) MNIST classification error on the test set using a fully connected 784-100-10 network, performed using limited-precision states (8-bit fixed-point weights, 16-bit state components) and on a GPU (TensorFlow, 32-bit floating point). (Right) Learning efficiency, expressed as the number of operations necessary to reach a given accuracy, is lower or equal for the spiking neural network (SynOps) compared with the artificial neural network (MACs). On this small MNIST task, the spiking neural network required about three times more neurons than the artificial neural network to achieve the same accuracy with the same number of respective operations. Figures adapted from Neftci et al. (2017) and Detorakis et al. (2017). iScience 2018 5, 52-68. DOI: 10.1016/j.isci.2018.06.010. Copyright © 2018 The Author. Terms and Conditions
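To make the rule described above concrete, here is a minimal NumPy sketch of an eRBP-style hidden-layer update; it is not the authors' reference implementation. The fixed random matrix G_rand stands in for the random error projections, and the boxcar gate is a common simplification of the second (error-integrating) compartment; all names, shapes, and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 784, 100, 10            # sizes as in the 784-100-10 example

W_hid = rng.normal(0, 0.1, (n_hid, n_in))    # plastic feedforward weights
G_rand = rng.normal(0, 1.0, (n_hid, n_out))  # fixed random feedback (random BP)

def boxcar(u, u_min=-1.0, u_max=1.0):
    """Gate learning only when the membrane potential u lies in a window,
    a common surrogate for the neuron's derivative in eRBP-style rules."""
    return ((u > u_min) & (u < u_max)).astype(float)

def erbp_update(pre_spikes, u_mem, prediction, label, lr=1e-3):
    """One event-driven update for the hidden layer (sketch).

    pre_spikes        : (n_in,)  0/1 spike vector from the input layer
    u_mem             : (n_hid,) membrane potentials of the hidden neurons
    prediction, label : (n_out,) rates of the prediction / label populations
    """
    error = prediction - label               # signed error (pos./neg. error neurons)
    dendritic_error = G_rand @ error         # second compartment: random error projection
    dW = -lr * np.outer(dendritic_error * boxcar(u_mem), pre_spikes)
    return dW
```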

Figure 2. Spike-Timing-Dependent Plasticity, Modulation, and Three-Factor Rule. The classical spike-timing-dependent plasticity (STDP) rule modifies the synaptic strength between connected pre- and post-synaptic neurons based on their spike history in the following way: if the post-synaptic neuron generates action potentials within a time interval after the pre-synaptic neuron has fired multiple spikes, then the synaptic strength between these two neurons becomes stronger (causal updates, long-term potentiation [LTP]). Conversely, if the post-synaptic neuron fires multiple spikes before the pre-synaptic neuron generates action potentials within that time interval, then the synaptic strength becomes weaker (acausal updates, long-term depression [LTD]) (Bi and Poo, 1998; Sjöström et al., 2008). The learning window in this context refers to how the weight changes as a function of the spike-time difference (insets in the panels). Generalizations of STDP often involve custom learning windows and state-dependent modulation of the updates. The left plot shows an example of an online implementation of the classic STDP rule with nearest-neighbor interactions: ΔW(t) = s_post(t)·X(t) + s_pre(t)·Y(t), where X(t) and Y(t) are traces representing the presynaptic and postsynaptic spike history, respectively. Nearest-neighbor interactions refer to the fact that the weight update depends only on the most recent pre- or post-synaptic spike, respectively. The right plot shows a modulated STDP rule corresponding to a type of three-factor rule: ΔW_M(t) = ΔW(t)·Modulation(t), where the three factors are pre-synaptic activity, post-synaptic activity, and modulation. For illustration purposes, the modulation (green) here is a random signal that multiplies the weight updates. In more practical examples, the modulation can represent reward (Florian, 2007) or classification error (Neftci et al., 2017). iScience 2018 5, 52-68. DOI: 10.1016/j.isci.2018.06.010. Copyright © 2018 The Author. Terms and Conditions
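Both rules in this figure can be sketched in a few lines of NumPy. In the sketch below the trace time constant, the LTP/LTD amplitudes, and the explicit minus sign on the depression term are illustrative choices rather than values taken from the paper; passing a modulation signal turns the plain STDP rule into the three-factor rule.

```python
import numpy as np

def run_stdp(pre_spikes, post_spikes, modulation=None,
             a_plus=0.01, a_minus=0.012, tau=20.0, dt=1.0):
    """Online nearest-neighbor STDP with an optional third factor (sketch).

    pre_spikes, post_spikes : (T,) arrays of 0/1 spikes
    modulation              : (T,) array (e.g., reward or error); None -> plain STDP
    Returns the weight trajectory W(t).
    """
    T = len(pre_spikes)
    x = y = 0.0                       # presynaptic (x) and postsynaptic (y) traces
    w = np.zeros(T)
    for t in range(1, T):
        # Exponentially decaying traces, reset (not accumulated) on each spike
        # to implement nearest-neighbor interactions.
        x *= np.exp(-dt / tau)
        y *= np.exp(-dt / tau)
        if pre_spikes[t]:
            x = 1.0
        if post_spikes[t]:
            y = 1.0
        # Causal LTP at post-spikes, acausal LTD at pre-spikes.
        dw = a_plus * post_spikes[t] * x - a_minus * pre_spikes[t] * y
        if modulation is not None:    # three-factor rule: modulate the update
            dw *= modulation[t]
        w[t] = w[t - 1] + dw
    return w
```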

Figure 3. Learning to Recognize Spike Trains with Local Synaptic Plasticity Rules. A spiking neuron trained with Equation 6 learns to fire at specific times (middle panel, vertical bars) of a Poisson input spike-train stimulus (left panel) presented over 500 epochs. The middle panel shows the membrane potential of the neuron at the end of learning (epoch 500). Weight updates were performed online, and the learning rate for each synapse was fixed. The right panel shows the van Rossum distance (VRD) during training. iScience 2018 5, 52-68. DOI: 10.1016/j.isci.2018.06.010. Copyright © 2018 The Author. Terms and Conditions
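For reference, the van Rossum distance plotted in the right panel can be computed as in the minimal sketch below: each spike train is filtered with a causal exponential kernel and the L2 distance between the filtered traces is taken. The time constant tau and step dt are illustrative, not the values used in the article.

```python
import numpy as np

def van_rossum_distance(spikes_a, spikes_b, tau=10.0, dt=1.0):
    """Van Rossum distance between two binary spike trains of equal length."""
    decay = np.exp(-dt / tau)
    trace_a = trace_b = 0.0
    sq_diff = 0.0
    for sa, sb in zip(spikes_a, spikes_b):
        trace_a = trace_a * decay + sa    # exponentially filtered train A
        trace_b = trace_b * decay + sb    # exponentially filtered train B
        sq_diff += (trace_a - trace_b) ** 2
    return np.sqrt(sq_diff * dt / tau)
```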

Figure 4. Learning with Neuron Models Matched to Memristor Update Rules. MNIST classification task in a 784-100-10 network, using three different models: a neuron model matched to the memristor update rule (blue, Equation 8), a sigmoidal neuron model with the memristor update rule (red, Equation 7), and a standard artificial neural network with exact gradient BP as the baseline (black). The sigmoidal neuron model, which results from ignoring the non-linearity of the memristor, performs poorly compared with the baseline. iScience 2018 5, 52-68. DOI: 10.1016/j.isci.2018.06.010. Copyright © 2018 The Author. Terms and Conditions
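The general idea can be illustrated with the sketch below, which contrasts the additive update assumed by standard gradient BP with a weight-dependent device update. The soft-bound window used here is a generic placeholder for a memristive non-linearity; the actual update rules are given by Equations 7 and 8 in the article and are not reproduced here.

```python
import numpy as np

def ideal_update(w, grad, lr=0.1):
    """Additive update assumed by standard gradient BP (the black baseline)."""
    return w - lr * grad

def device_update(w, grad, lr=0.1, w_max=1.0):
    """Hypothetical weight-dependent device update: the change shrinks as the
    conductance w approaches its bound w_max, a typical memristive
    non-linearity. This generic window is NOT the specific form of
    Equations 7-8."""
    window = np.clip(1.0 - w / w_max, 0.0, 1.0)
    return w - lr * grad * window
```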

Figure 5. Example Computational Graph of a Two-Layer Network Implementing Direct Feedback Alignment. Square nodes represent operations and circular nodes represent variables. Here, N and δN are nodes referring to the neuron and its derivative, respectively. Nodes VRD and X correspond to the van Rossum distance and multiplication, respectively. Dashed edges communicate spikes, whereas solid edges carry real values. iScience 2018 5, 52-68. DOI: 10.1016/j.isci.2018.06.010. Copyright © 2018 The Author. Terms and Conditions
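The graph corresponds to a direct feedback alignment (DFA) update, sketched below for a two-layer network. A smooth tanh nonlinearity stands in for the spiking neuron node N, a mean-squared output error stands in for the van Rossum distance node, and all sizes and parameters are illustrative assumptions; the defining feature is that the output error reaches the hidden layer through a fixed random matrix B rather than through the transpose of the forward weights.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_out = 784, 100, 10

W1 = rng.normal(0, 0.1, (n_hid, n_in))   # input -> hidden (plastic)
W2 = rng.normal(0, 0.1, (n_out, n_hid))  # hidden -> output (plastic)
B  = rng.normal(0, 0.1, (n_hid, n_out))  # fixed random feedback to the hidden layer

def sigma(x):                             # smooth stand-in for the neuron node N
    return np.tanh(x)

def dsigma(x):                            # its derivative (the dN node)
    return 1.0 - np.tanh(x) ** 2

def dfa_step(x, target):
    """Gradients for one DFA update: the output error is projected to the
    hidden layer through B instead of being back-propagated through W2.T."""
    a1 = W1 @ x
    h = sigma(a1)
    y = W2 @ h
    err = y - target                      # plays the role of the cost gradient
    dW2 = np.outer(err, h)
    dW1 = np.outer((B @ err) * dsigma(a1), x)
    return dW1, dW2

# Usage: one training step with an illustrative learning rate.
lr = 1e-2
x = rng.random(n_in)
target = np.eye(n_out)[3]                 # one-hot label, e.g. class 3
dW1, dW2 = dfa_step(x, target)
W1 -= lr * dW1
W2 -= lr * dW2
```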
