The Brain as an Efficient and Robust Adaptive Learner


The Brain as an Efficient and Robust Adaptive Learner. Sophie Denève, Alireza Alemi, Ralph Bourdoukan. Neuron, Volume 94, Issue 5, Pages 969-977 (June 2017). DOI: 10.1016/j.neuron.2017.05.016. Copyright © 2017 Elsevier Inc.

Figure 1 Autoencoder Network and Learning of a Tight E-I Balance (A) Structure of the recurrent network. Interconnected leaky integrate-and-fire neurons receive a time-varying input s(t) through feedforward weights F. Output spikes (vertical bars) are multiplied by the decoding weights D, providing an instantaneous estimate of the input, sˆ(t). (B) In this unfolded version of the autoencoder network, the decoded estimate is subtracted from the input. (C) Recurrent network equivalent to the unfolded network in (B). (D) Equivalent network of pyramidal cells and interneurons. Interneurons inhibit the pyramidal cells via a fast inhibitory connection with weight Wf. (E) Learning rule for training the fast connections: the synaptic weight is adjusted to exactly reset the postsynaptic membrane potential (red) at the time of the presynaptic spike (black). This is a Hebbian inhibitory learning rule in which the weight changes according to the product of presynaptic spikes and the postsynaptic membrane potential (Brendel et al., 2017; Bourdoukan et al., 2012). (F) The reconstruction sˆ (blue) of the input signal s (red) is much more precise after learning. Raster plots show the spiking activity in the network before (left panel) and after (right panel) learning. Top and middle panels are adapted from Bourdoukan and Denève (2015). The bottom panels (adapted from Brendel et al., 2017) schematize how excitation dominates in the network before learning, producing regular spike trains, whereas after learning the inhibitory current closely tracks the excitatory current (with a short time lag; see inset). Excitatory (red), inhibitory (blue), and total currents are shown above; membrane potentials are shown below.
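The rule in (E) can be stated compactly: whenever a presynaptic neuron spikes, each of its fast outgoing weights is nudged in proportion to the postsynaptic membrane potential, so that at convergence the spike exactly resets that potential. The following is a minimal, illustrative Python sketch of this idea (not the authors' code; the one-dimensional input, network size, and parameter values are assumptions made here for illustration).

```python
import numpy as np

# Minimal sketch of the autoencoder in Figure 1 (assumed parameters, 1-D input).
# N leaky integrate-and-fire neurons receive s(t) through feedforward weights F;
# fast recurrent weights W are learned with the Hebbian-inhibitory rule of (E):
# on each presynaptic spike, the weight moves toward minus the postsynaptic
# membrane potential, so that after learning the spike resets that potential.
np.random.seed(0)
N, dt, T = 20, 1e-3, 20000
lam = 10.0                                  # leak / decoder rate (1/s)
D = 0.1 * np.sign(np.random.randn(N))       # decoding weights (fixed)
F = D.copy()                                # feedforward weights, here F = D
W = np.zeros((N, N))                        # fast recurrent weights (learned)
thresh = 0.5 * D ** 2                       # firing thresholds
eta = 0.05                                  # learning rate

V = np.zeros(N)                             # membrane potentials
r = np.zeros(N)                             # filtered spike trains
recon = []                                  # decoded estimate over time
for t in range(T):
    s = np.sin(2 * np.pi * 0.5 * t * dt)    # example input signal s(t)
    V += dt * (-lam * V + F * s)            # leaky integration of the input
    for j in np.where(V > thresh)[0]:       # neurons that spike this step
        W[:, j] -= eta * (V + W[:, j])      # Hebbian-inhibitory weight update
        V += W[:, j]                        # fast recurrent (self-)reset
        r[j] += 1.0                         # record the spike
    r += dt * (-lam * r)                    # leaky filtering of spike trains
    recon.append(D @ r)                     # instantaneous estimate of s(t)
```

As in (F), before learning the untempered excitation produces regular firing and a poor reconstruction; as W converges, the decoded estimate tracks the input much more closely.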

Figure 2 Dynamic Autoencoder Implementing a Dynamical System Efficiently (A) Alternative interpretation of the unfolded autoencoder in Figure 1B. (B) Unfolded network with the addition of a predictive loop approximating the state dynamics (magenta). Black bars represent the spikes in the fast corrective loop; black continuous lines represent the filtered spike trains in the slower predictive loop. (C) Folded network in which the fast corrective loops are replaced by fast connections (blue), producing fast postsynaptic potentials (PSPs) with an exponential kernel. The slow predictive loop is replaced by another set of connections (magenta) with slower PSPs (an exponential kernel convolved with itself). (D) Schematic representation of an unfolded network in the Neural Engineering Framework (NEF; Eliasmith and Anderson, 2004).
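For a linear dynamical system dx/dt = Ax + c(t), the two sets of connections in (C) have a known closed form in this framework (Boerlin et al., 2013): the fast weights are -DᵀD and the slow weights are Dᵀ(A + λI)D, where D is the decoding matrix and λ the decoder leak. Below is a brief sketch of constructing these two weight matrices; the network size, dynamics matrix, and decoder scale are illustrative assumptions.

```python
import numpy as np

# Sketch of the folded network in Figure 2C for a linear dynamical system
# dx/dt = A x + c(t): fast corrective weights -D^T D and slow predictive
# weights D^T (A + lam*I) D. All parameter values are illustrative.
np.random.seed(1)
N, K = 50, 2                              # neurons, state dimensions
lam = 10.0                                # decoder leak rate (1/s)
A = np.array([[0.0, 1.0],
              [-4.0, -0.5]])              # e.g., a damped harmonic oscillator
D = 0.1 * np.random.randn(K, N)           # decoding weights
F = D.T                                   # feedforward weights

W_fast = -D.T @ D                         # fast loop: corrects the estimate
W_slow = D.T @ (A + lam * np.eye(K)) @ D  # slow loop: predicts the dynamics
thresh = 0.5 * np.sum(D ** 2, axis=0)     # firing thresholds

# Schematically, each neuron's voltage then evolves as
#   dV/dt = -lam * V + F c(t) + W_slow r(t) + W_fast o(t),
# where o(t) are the spike trains and r(t) their leaky-filtered version.
```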

Figure 3 Learning the Slow Connections (A) Unfolded network with the addition of the error feedback (green). The desired trajectory is provided as a teaching signal and subtracted from the network estimate before being fed back into the input. (B) Equivalent folded network. (C) Schema representing the learning of the slow connections. The change in connection weights is proportional to the product of the presynaptic firing rate and the postsynaptic error feedback. Thus, when a presynaptic input is correlated with the error feedback, its weight (w) increases; when it is anti-correlated with the error feedback, its weight (w) decreases. Eventually the error feedback is cancelled and no longer contributes to learning or to neural activity. (D) The output of the network (red) and the desired output (blue), before (left) and after (right) learning a damped harmonic oscillator. The black solid line at the top shows the impulse command that drives the network. At the bottom, the spike trains of all network neurons are shown as a raster plot. (E) Comparison between the learned weights and the weights predicted by the optimal closed-form solution for the damped harmonic oscillator. (D) and (E) are taken from Bourdoukan and Denève (2015).
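The rule in (C) is local: each slow weight changes in proportion to the presynaptic filtered rate and the error feedback arriving at the postsynaptic neuron. A minimal sketch of a single update follows; the function name, arguments, and learning rate are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def update_slow_weights(W_slow, r, x_target, x_hat, F, eta=1e-3):
    """One step of the slow-connection rule sketched in Figure 3C.

    W_slow   : (N, N) slow recurrent weights
    r        : (N,) presynaptic filtered spike trains (rates)
    x_target : (K,) desired trajectory (teaching signal)
    x_hat    : (K,) network estimate decoded from the spikes
    F        : (N, K) weights feeding the error back into the neurons
    eta      : learning rate (illustrative value)
    """
    error_feedback = F @ (x_target - x_hat)      # per-neuron error signal
    # Weight change = postsynaptic error feedback x presynaptic rate: weights
    # grow when input and error are correlated, shrink when anti-correlated,
    # and stop changing once the error feedback is cancelled.
    W_slow += eta * np.outer(error_feedback, r)
    return W_slow
```

In a simulation, this update would be applied at every time step while the teaching signal is present; once the slow connections reproduce the desired dynamics, the feedback term vanishes and learning stops on its own.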

Figure 4 Robustness of the Predictive Autoencoder (A) A toy example of two neurons reconstructing a continuous input signal (black, filtered input signal; blue, filtered spike train). Bottom panels show the membrane potentials and spike trains of the two neurons (adapted from Barrett et al., 2015). (B) Robustness of a network of 200 neurons implementing a perfect integrator (adapted from Boerlin et al., 2013). The top panel shows the input signal; dots mark the spikes; blue and red lines show the desired and estimated state; colored boxes mark time periods during which a quarter of all neurons were inactivated. (C) Network performance (100 neurons) before (left panel) and after learning (two middle panels), compared to a population of neurons with independent Poisson noise (right panel). The all-to-all connected network with instantaneous synapses ("after" learning) is compared to a "realistic" network with 50% of the connections missing, AMPA-like synaptic dynamics, and a large amount of noise (for details, see Brendel et al., 2017). (D) Schema illustrating the interplay of fast and slow synapses that ensures the network's robustness and efficiency. Left panel: during a time interval Δ, the represented state moves from xˆ(t) to xˆ(t+Δ) thanks to the slow connections. Neural activity moves between two lines defined by Dr(t)=xˆ(t) and Dr(t+Δ)=xˆ(t+Δ). For illustration purposes, the network has only two neurons with D=[1,1], although robustness typically requires more neurons. The elongated blob represents the objective function minimized by the network (reconstruction error + cost of neural activity; lighter colors correspond to a more efficient code). Middle panel: result of a sudden perturbation of neural activity (orange arrow). Without fast connections, the perturbation propagates to the future state estimates. With fast connections (right panel), the activity is brought back toward the optimum so quickly that the slow dynamics (and thus the state estimate) remain unperturbed.
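The inactivation protocol in (B) is straightforward to reproduce in simulation: silence a fixed fraction of neurons for a window of time and keep decoding the remaining spikes with the same weights D. Below is a schematic helper for such a test; simulate_step is a hypothetical, user-supplied single-step network update, and all names and values are assumptions, not part of the original study.

```python
import numpy as np

def run_with_knockout(simulate_step, N, T, window=(0.4, 0.6), frac=0.25):
    """Sketch of a robustness test in the spirit of Figure 4B (assumed protocol).

    simulate_step(t, active) -> x_hat is a hypothetical single-step network
    update that ignores spikes from neurons where active is False; this helper
    only manages the knockout mask and collects the decoded estimates.
    """
    silenced = np.zeros(N, dtype=bool)
    silenced[: int(frac * N)] = True               # e.g., a quarter of the neurons
    estimates = []
    for t in range(T):
        in_window = window[0] * T <= t < window[1] * T
        active = ~silenced if in_window else np.ones(N, dtype=bool)
        estimates.append(simulate_step(t, active))  # decoded state this step
    return np.asarray(estimates)
```

If the fast connections are intact, the decoded state should barely deviate during the knockout window, because the surviving neurons immediately compensate for the silenced ones.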