Neural Network of the Cerebellum: Temporal Discrimination and the Timing of Responses
Michael D. Mauk and Dean V. Buonomano

The cerebellum is important for initiating smooth, directed movements. Damage to the cerebellum causes severe movement deficits, including a poor ability to time movements, whether in response to external stimuli or in self-directed action. This suggests that a distinct biological mechanism within the cerebellum encodes time differences between sensory inputs.

Other models of the biological timing mechanism depend on delays between units, varying time constants, or other imposed design choices. The benefits of Mauk’s neural network model:
- Temporal information about the stimulus is encoded in the subset of units activating the output unit.
- Emulating a conditioned stimulus/unconditioned stimulus response takes only one training phase to learn the temporal information before the testing phase.
- Information for multiple different stimuli can be encoded.
- Multiple unconditioned stimuli over time can be coded for a single conditioned stimulus.
- Conditioned stimulus patterns can span multiple time steps and temporal information is still retained.

Some basic cerebellum anatomy. Mauk uses the theories of structure proposed by Marr and Albus, in which:
- Climbing fibers: outside input that carries error signals; modify Purkinje cell synapses.
- Mossy fibers: provide sensory stimulus information to the granule and Golgi cells.
- Granule cells: encode the “context” in which movements take place.
- Golgi cells: provide negative feedback to granule cells to stabilize cell activity.
- Purkinje cell: provides the appropriate output, in this case a timed motor movement in response to a stimulus.

[Diagram: basic cerebellar anatomy, with the Golgi cell labeled.]

The model hypothesis: the structure of interactions between the granule cell layer and the Golgi cell layer allows population subsets of the granule cell layer to represent both physical and temporal information about the stimulus. In other words, a subset of granule cells encodes not only a pattern of activation that identifies the unique input pattern (the stimulus), but also how much time has elapsed since the onset of that pattern. This is achieved by the mossy fiber “input” layer seeding the feedback loop between the granule cell layer and the Golgi cell layer.

How does this work?
1. Input comes through the mossy fibers and activates a subset of granule cells.
2. These granule cells activate a subset of cells in the Golgi cell layer.
3. The activated Golgi cells inhibit the granule cell layer in a negative feedback loop, but inhibit a different, overlapping subset of granule cells than was activated by the initial mossy fiber input.
4. This negative feedback loop between layers creates a “dynamic, nonperiodic population vector” of granule cell activity representing the stimulus pattern, even if the mossy fiber input is periodic.
5. Changing the weights of the particular granule cell subset that represents the correct time interval allows that interval to be represented in Purkinje cell activation levels.
A minimal sketch of steps 1-4 follows.
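The sketch below assumes binary threshold units and uniform random sparse wiring; all sizes, probabilities, and thresholds are illustrative choices, not the paper's values (the actual model uses the integrate-and-fire units described later):

```python
import numpy as np

rng = np.random.default_rng(0)
n_gr, n_go, n_mf = 200, 40, 25      # toy sizes; the real counts appear below
gr_thresh, go_thresh = 1.5, 3.0     # arbitrary illustrative thresholds

# Random sparse wiring. The Golgi feedback lands on a different (overlapping)
# granule subset than the mossy fibers excite, which is what moves the
# population vector from step to step.
mf_to_gr = (rng.random((n_gr, n_mf)) < 0.12).astype(float)
gr_to_go = (rng.random((n_go, n_gr)) < 0.10).astype(float)
go_to_gr = (rng.random((n_gr, n_go)) < 0.10).astype(float)

def step(mf_active, gr_prev):
    """One time step of the mossy-fiber-seeded granule<->Golgi loop."""
    go_active = (gr_to_go @ gr_prev > go_thresh).astype(float)  # step 2
    drive = mf_to_gr @ mf_active - go_to_gr @ go_active         # steps 1 and 3
    return (drive > gr_thresh).astype(float)                    # step 4

mf = (rng.random(n_mf) < 0.4).astype(float)  # a constant mossy fiber pattern
gr = np.zeros(n_gr)
for t in range(30):
    gr = step(mf, gr)  # same stimulus, yet a different active subset each step
```

Because the inhibition always reflects the previous step's granule activity, a fixed input still produces a changing population vector.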

Pyramidal regions depict the subset of granule or Golgi cells that a cell on the other layer is able to contact. Within this subset of cells, the connections are uniformly distributed. White cells in the diagram depict the postsynaptic cells that end up receiving input from the presynaptic cell in the other layer.

Specifics of the neural network:
- 10,000 granule cells
- 900 Golgi cells
- 500 mossy fiber inputs
- 1 Purkinje cell output (graded activation)
- A single granule cell receives excitatory input from 3 mossy fibers and inhibitory input from 3 Golgi cells.
- A single Golgi cell receives excitatory input from 100 granule cells and 20 mossy fibers.
- The Purkinje cell receives input from all of the granule cells.
These counts are collected in the connectivity sketch below.
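The counts translate directly into connection matrices. A sketch assuming uniform random sampling within each cell's fan-in (variable names hypothetical; the paper additionally restricts contacts to the local pyramidal regions shown earlier):

```python
import numpy as np

rng = np.random.default_rng(1)
N_GR, N_GO, N_MF = 10_000, 900, 500

def fan_in(n_post, n_pre, k, rng):
    """0/1 connection matrix in which every postsynaptic cell samples
    exactly k presynaptic cells, chosen uniformly at random."""
    w = np.zeros((n_post, n_pre))
    for i in range(n_post):
        w[i, rng.choice(n_pre, size=k, replace=False)] = 1.0
    return w

mf_to_gr = fan_in(N_GR, N_MF, 3, rng)    # 3 mossy fiber inputs per granule cell
go_to_gr = fan_in(N_GR, N_GO, 3, rng)    # 3 Golgi inputs per granule cell
gr_to_go = fan_in(N_GO, N_GR, 100, rng)  # 100 granule inputs per Golgi cell
mf_to_go = fan_in(N_GO, N_MF, 20, rng)   # 20 mossy fiber inputs per Golgi cell
w_pc = np.ones(N_GR)                     # the Purkinje cell reads every granule cell
```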

[Diagram: network layout showing the mossy fiber input layer, granule cell layer, Golgi cell layer, and the Purkinje cell (PC) output.]

Activation update mechanism: the units are “integrate and fire” cell types. For Golgi cell $i$:
- $V_i^{Go}$ = the voltage of Golgi cell $i$
- $Thr_i^{Go}$ = the threshold voltage for Golgi cell $i$ to fire
- $G_i^{Go:leak}$ = the leak current of Golgi cell $i$
- $G_i^{Go:MF}$ = the mossy fiber to Golgi cell synaptic current
- $G_i^{Go:Gr}$ = the granule cell to Golgi cell synaptic current
All inputs to the cell are summed together into a single current, which saturates at 1.0 and decays according to a set decay constant.
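The slide names the terms but not the full equation, so the update below is only one plausible form consistent with them (parameter values are illustrative):

```python
import numpy as np

def golgi_step(V, g_mf, g_gr, g_leak=0.1, thresh=0.5, dt=1.0):
    """One Euler step for the Golgi population.

    V    : voltages V_i^Go
    g_mf : mossy fiber synaptic currents G_i^{Go:MF}
    g_gr : granule cell synaptic currents G_i^{Go:Gr}
    """
    dV = -g_leak * V + g_mf + g_gr   # leak pulls V to rest; synaptic currents depolarize
    V = V + dt * dV
    spiked = V > thresh              # fire once V_i^Go exceeds Thr_i^Go
    V = np.where(spiked, 0.0, V)     # reset to rest after a spike
    return V, spiked
```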

Activation update mechanism cont.: the equation gives the mossy fiber to Golgi cell $i$ synaptic current as a function of time, where:
- $S_n^{MF}$ = a spike in mossy fiber $n$
- $W^{Go:MF}$ = the synaptic weight of the mossy fiber synapse onto the Golgi cell
Synaptic currents rise instantaneously when the presynaptic cell spikes and decay exponentially thereafter. Granule cells are governed by similar equations, with additional inhibitory versions for their inputs from the Golgi cells.
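A sketch of such a synapse, assuming a simple exponential kernel (tau and dt are illustrative; the saturation at 1.0 is from the previous slide):

```python
import numpy as np

def synaptic_current(g, spikes, w, tau=5.0, dt=1.0, g_max=1.0):
    """Instantaneous rise on a presynaptic spike, exponential decay after.

    g      : current G_i (e.g., G_i^{Go:MF}) for each postsynaptic cell
    spikes : 0/1 vector of presynaptic spikes S_n this time step
    w      : weight matrix W^{Go:MF} (postsynaptic x presynaptic)
    """
    g = g * np.exp(-dt / tau)    # exponential decay between spikes
    g = g + w @ spikes           # instantaneous jump on each presynaptic spike
    return np.minimum(g, g_max)  # the summed current saturates at 1.0
```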

Specifics of the neural network cont.:
- Initially, all granule cells are connected to the single output Purkinje cell with the same weights.
- When the first stimulus (the conditioned stimulus emulation) is presented, the weights from the granule cells active within that window onto the Purkinje cell are decreased. This simulates the LTD produced by co-activation of the climbing fibers and parallel fibers at the Purkinje cell.
- The “voltage” of the Purkinje cell is the weighted, summed activity of all granule cells in the network, with a time constant of 2.5 msec.
- Mossy fiber activation patterns “seed” the activation of different subsets of granule and Golgi cells at each time step.
A sketch of the Purkinje read-out and the LTD rule follows.
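The exact discretization and learning rate are not given on the slide, so both are assumptions here:

```python
import numpy as np

TAU_PC = 2.5  # msec, from the slide

def purkinje_voltage(v_pc, gr_active, w_pc, dt=1.0):
    """Weighted, summed granule activity filtered with a 2.5 msec time
    constant (one plausible discretization)."""
    decay = np.exp(-dt / TAU_PC)
    return decay * v_pc + (1.0 - decay) * (w_pc @ gr_active)

def ltd_update(w_pc, gr_active, lr=0.1):
    """Climbing-fiber-gated LTD: depress the weights of whichever granule
    cells are active in the current window (lr is illustrative)."""
    return w_pc - lr * gr_active * w_pc
```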

Specifics of the neural network cont.:
- In training trials: at 200 ms, the unconditioned stimulus is simulated by decreasing the strength of all weights projecting from the currently active granule cells to the Purkinje cell, just as for the conditioned stimulus.
- In testing trials: the unconditioned stimulus is not simulated, but because the pattern of granule cell activation is the same as in the training period (the same initial mossy fiber activation pattern seeds the network), Purkinje activity decreases at the same time interval.
- The model is capable of learning timing for multiple different stimulus patterns, and for stimulus patterns that extend over a series of time steps.
A training/testing sketch follows.
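In the compact sketch below, the granule<->Golgi dynamics are stubbed out with a reproducible random trajectory, which stands in for the property the slide relies on: the same mossy fiber seed yields the same sequence of active subsets on every trial (all names and values hypothetical):

```python
import numpy as np

N_GR, T_US, LR = 1_000, 200, 0.5  # granule count, US time (ms), LTD rate (illustrative)

def granule_trajectory(seed, n_steps=300):
    """Stub for the network dynamics: a nonrepeating sequence of active
    granule subsets, identical across trials with the same seed."""
    r = np.random.default_rng(seed)
    return (r.random((n_steps, N_GR)) < 0.05).astype(float)

w = np.ones(N_GR)

# Training trial: the US at t = 200 ms depresses the weights of the
# granule subset active at that moment.
traj = granule_trajectory(seed=42)
w -= LR * traj[T_US] * w

# Testing trial: same CS, same trajectory, no US; the Purkinje read-out
# still dips selectively at 200 ms.
pc = traj @ w
assert pc[T_US] < pc[T_US - 50]
```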

The top trace represents the percentage of mossy fibers active in each time bin; the initial increase in total mossy fiber activation marks the conditioned stimulus. The bottom trace represents Purkinje cell activity in the testing phase. After training with an unconditioned stimulus at 200 ms, the granule cell subset active during that time step has lower weights, dropping the Purkinje cell voltage at that point.

The model is very sensitive to noise. Variance in the mossy fibers used to signal the conditioned stimulus, variance in the pre-conditioned-stimulus state of the model, and variance in unit constants such as thresholds all impair the network’s ability to learn the timing of the unconditioned stimulus. The network shown here was trained to respond to the US at 125, 200, and 225 ms; once enough noise is injected into the model, its ability to predict the US is eliminated.

Additional weaknesses of the model:
- The noise sensitivity can be reduced by decreasing the influence of the mossy fibers on the Golgi cells, but this also decreases the network’s ability to retain temporal information.
- The model can discriminate temporal information for conditioned and unconditioned stimulus simulations, where the timing is absolute, but it has no mechanism for learning relative timing between sensory information patterns. The rhythm of a song, for example, is learned regardless of the tempo at which the song is played, which requires relative rather than absolute timing.