Network of Neurons Computational Neuroscience 03 Lecture 6.



Connecting neurons in networks
Last week we showed how to model synapses in Hodgkin-Huxley and integrate-and-fire models; we can connect such model neurons together to form networks of neurons.

We could use cable theory, R_L = r_L Δx/(π a²), together with multicompartmental modelling, to model the propagation of signals between neurons.

However, this soon leads to very complex and very computationally intensive models:
Massive amounts of numerical integration are needed (which can lead to accumulation of truncation errors)
Neuronal dynamics must be modelled on the millisecond scale, while network dynamics can be several orders of magnitude slower
So we need to make a simplification ...

Firing Rate Models
Since the rate of spiking reflects synaptic activity, use the firing rate as the information carried in the network. However, APs are all-or-nothing and spike timing is stochastic: with identical input to the identical neuron, spike patterns are similar but not identical.

A single spike time is therefore essentially meaningless. To extract useful information, we average the activity of a group of neurons in a local circuit that code the same information, over a time window, to obtain a firing rate r in Hz (slide example: time window = 1 sec).

So we can build a network of these local groups: a unit receives the firing rates r_1, ..., r_n of its presynaptic groups through synaptic strengths w_1, ..., w_n, and its output is itself the firing rate of a group of neurons.

Advantages
Much simpler modelling, e.g. no need for millisecond time scales.
Analytic calculations of some aspects of network dynamics become possible.
Spiking models have many free parameters, which can be difficult to set (cf Steve Dunn).
Since an AP model responds deterministically to injected current, spike sequences can only be predicted accurately if all inputs are known, which is unlikely.
Although cortical neurons have many connections, the probability of 2 randomly chosen neurons being connected is low: either we need many neurons to replicate the network connectivity, or we need to average over a more densely connected group.
How would we average spikes? Typically an 'average' spike implies that all neurons in a unit spike synchronously, i.e. large-scale synchronisation unseen in the (healthy) brain.

Disadvantages
Cannot deal with issues of spike timing or spike correlations.
Restricted to cases where neuronal firing is uncorrelated, with little synchronous firing (they break down when, e.g., the presynaptic inputs to a large fraction of neurons are correlated), and where precise patterns of spike timing are unimportant.
When these conditions hold, spiking and rate models produce similar results; however, both styles of model are clearly needed.

The model
1. Work out how the total synaptic input depends on the firing rates of the presynaptic afferents.
2. Model how the firing rate of the postsynaptic neuron depends on this input.
Step 1 is generally determined by injecting current into the soma of neurons and measuring the response. Therefore, define the total synaptic input to be the total current in the soma due to presynaptic APs, denoted I_s. Then work out the postsynaptic rate v from I_s using:
v = F(I_s)
F is the activation function. Sometimes a sigmoid is used (useful if derivatives are needed in the analysis). Often a threshold-linear function is used, F(I_s) = [I_s − θ]_+ , which is zero for I_s < θ and linear above threshold. For θ = 0 this is known as half-wave rectification.
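As a minimal sketch (Python/NumPy; the parameter values are illustrative and not taken from the lecture), the two activation functions mentioned above can be written as:

```python
import numpy as np

def threshold_linear(I_s, theta=0.0):
    """Threshold-linear activation F(I_s) = [I_s - theta]_+ ;
    theta = 0 gives half-wave rectification."""
    return np.maximum(I_s - theta, 0.0)

def sigmoid(I_s, r_max=100.0, I_half=5.0, beta=1.0):
    """A saturating sigmoidal activation (illustrative parameterisation);
    useful when derivatives are needed in the analysis."""
    return r_max / (1.0 + np.exp(-beta * (I_s - I_half)))

# Postsynaptic rate v = F(I_s) for a few somatic current values (arbitrary units).
I_s = np.linspace(-5.0, 20.0, 6)
print(threshold_linear(I_s, theta=2.0))
print(sigmoid(I_s))
```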

Firing rate models with current dynamics
Although I_s is determined by injection of constant current, we can assume that the same response holds when I_s is time dependent, i.e. v = F(I_s(t)). The dynamics then come from the synaptic input: the presynaptic input is effectively filtered by the dynamics of current propagation from synapse to soma. Therefore use:
τ_s dI_s/dt = −I_s + w·r
where τ_s is a time constant and w·r is the total presynaptic drive. If the neuron is electrotonically compact, τ_s is roughly the same as the decay time of the synaptic conductance, but it is typically small (milliseconds).

Effect of τ_s
Visualise the effect of τ_s as follows. Imagine I_s starts at some value I_0 and we slice time into discrete steps Δt. At the n-th time step we have:
I(nΔt) = I_n = I_{n−1} + Δt dI/dt
Taking w·r = 0, this gives:
I_n = (1 − Δt/τ_s) I_{n−1}
i.e. exponential decay.

Alternatively, if w·r ≠ 0:
I_n = (1 − Δt/τ_s) I_{n−1} + (Δt/τ_s) w·r
i.e. the current retains some memory of its activity at the previous time step (which itself retained some memory of the time step before, and so on): a sort of running time average of the input. How much is retained, and for how long we average, depends on τ_s, since it governs how quickly things change: if τ_s is very small, almost nothing is retained; if it is large, a lot is retained.
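A minimal sketch of this discrete update (Python/NumPy; the function name, input signal and time constants are illustrative, not from the lecture):

```python
import numpy as np

def filter_current(drive, tau_s, dt=0.001, I0=0.0):
    """Euler-step tau_s dI/dt = -I + w.r, i.e.
    I_n = (1 - dt/tau_s) * I_{n-1} + (dt/tau_s) * drive_n,
    where 'drive' holds the total presynaptic input w.r at each step.
    (Requires dt < tau_s for a stable update.)"""
    I = np.empty(len(drive))
    I_prev = I0
    for n, d in enumerate(drive):
        I_prev = (1.0 - dt / tau_s) * I_prev + (dt / tau_s) * d
        I[n] = I_prev
    return I

# A noisy step input filtered with a small and a larger time constant:
rng = np.random.default_rng(0)
t = np.arange(0.0, 2.0, 0.001)
drive = (t > 0.5).astype(float) + 0.3 * rng.standard_normal(len(t))

I_fast = filter_current(drive, tau_s=0.005)   # retains little memory of the past
I_slow = filter_current(drive, tau_s=0.1)     # long running average: smoother but delayed
```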

[Figure slides for τ_s = 0.1, 1 and 4: the time constant delays the response to the input (the effect also depends on the starting value) and filters the input according to the size of the time constant.]

Alternatively, since the postsynaptic rate is driven by changes in membrane potential, we can add in the effects of membrane capacitance and resistance. This also acts effectively as a low-pass filter, giving:
τ_r dv/dt = −v + F(I_s(t))
If τ_r << τ_s then v = F(I_s(t)) is reached pretty quickly, so this second model reduces to the first. Alternatively, if τ_s << τ_r (the more usual case), we get:
τ_r dv/dt = −v + F(w·r)
Cf. the leaky integrator and continuous-time recurrent nets.
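A sketch of how one might integrate the two-level model numerically (Python/NumPy; the time constants, weights and rectifying F are illustrative choices, not values from the lecture):

```python
import numpy as np

def rate_dynamics(u, w, tau_s=0.002, tau_r=0.010, dt=0.0005,
                  F=lambda x: np.maximum(x, 0.0)):
    """Euler integration of the two-level model
        tau_s dI_s/dt = -I_s + w.u(t)
        tau_r dv/dt   = -v   + F(I_s)
    u: (T, n) array of presynaptic rates over time, w: (n,) weight vector."""
    T = u.shape[0]
    I_s, v = 0.0, 0.0
    v_trace = np.empty(T)
    for step in range(T):
        I_s += dt / tau_s * (-I_s + w @ u[step])
        v += dt / tau_r * (-v + F(I_s))
        v_trace[step] = v
    return v_trace

# Two presynaptic groups switching on at different times (illustrative input).
t = np.arange(0.0, 0.2, 0.0005)
u = np.stack([(t > 0.05) * 20.0, (t > 0.10) * 10.0], axis=1)
v = rate_dynamics(u, w=np.array([0.6, -0.2]))
```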

Models with only one set of dynamics work well for above-threshold inputs, where the order of low-pass filtering and thresholding is irrelevant; but when the signal stays below threshold for a while, the interplay of the two dynamics becomes important and both levels are needed.

Feedforward and Recurrent networks
For a network, replace the weight vector by a matrix, and often also replace the feedforward input with a vector. Dale's law states that a neuron cannot both inhibit and excite its targets, so all the weights made by a given presynaptic neuron must have the same sign: M_{aa'} (the weight from a' to a) must be positive for all a, or negative for all a.
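As an illustrative sketch (Python/NumPy; the function and the 3-neuron matrix are made up for this example), checking that a recurrent weight matrix respects this sign constraint:

```python
import numpy as np

def obeys_dales_law(M):
    """With the convention M[a, a'] = weight from presynaptic a' to postsynaptic a,
    Dale's law requires every outgoing weight of a given neuron a' (a column of M)
    to have a single sign (zeros allowed for absent connections)."""
    cols_nonneg = np.all(M >= 0.0, axis=0)
    cols_nonpos = np.all(M <= 0.0, axis=0)
    return bool(np.all(cols_nonneg | cols_nonpos))

# A hypothetical 3-neuron circuit: neurons 0 and 1 excitatory, neuron 2 inhibitory.
M = np.array([[0.0, 0.5, -0.3],
              [0.4, 0.0, -0.2],
              [0.1, 0.3,  0.0]])
print(obeys_dales_law(M))       # True: each column has a single sign
M_bad = M.copy()
M_bad[0, 2] = 0.3               # neuron 2 would now both excite and inhibit
print(obeys_dales_law(M_bad))   # False
```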

This means that, except for special cases, M cannot be symmetric: if a' inhibits a then, unless a also inhibits a', M_{aa'} has a different sign to M_{a'a}. However, analysis of these systems is much easier when M is symmetric; this corresponds to making the inhibitory dynamics instantaneous. Such symmetric systems are studied for their analytical properties, but networks with separate excitatory and inhibitory populations have much richer dynamics, exhibiting e.g. oscillatory behaviour.

Continuous model
Often we identify each neuron in a network by a parameter describing an aspect of its selectivity; e.g. for neurons in primary visual cortex we can use their preferred orientation (the angle of line to which they respond most strongly). Then look at firing rates as a function of this parameter: v(θ). In large networks there will be a large range of parameter values. Assume that their density is uniform and equal to ρ and that coverage is dense. Replacing the weight matrices by functions W(θ, θ') and M(θ, θ'), which describe the weights from a presynaptic neuron with preferred angle θ' to a postsynaptic neuron with preferred angle θ, we get:
τ_r dv(θ)/dt = −v(θ) + F( h(θ) + ρ ∫ dθ' M(θ, θ') v(θ') ),  where h(θ) = ρ ∫ dθ' W(θ, θ') u(θ') is the feedforward input.

Pure feedforward nets can do many things; e.g. they can be shown to perform coordinate transformations (such as hand-to-body coordinates for reaching). To do this they must exhibit gaze-dependent gain modulation: the peak of the tuning curve is not shifted by a change in gaze location, but its amplitude is increased.

Recurrent networks can also do this, but have much more complex dynamics than feedforward nets and are also more difficult to analyse. Much of the analysis focuses on the eigenvectors of the matrix M. One can show, for instance, that networks can exhibit selective amplification if there is one dominant eigenvector (cf. PCA).
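A sketch of this eigenvector picture for a linear symmetric recurrent network (Python/NumPy; the matrix and eigenvalues are made up for illustration): at the fixed point of τ_r dv/dt = −v + h + Mv, input components along an eigenvector with eigenvalue λ are amplified by 1/(1 − λ), so one dominant eigenvalue close to 1 gives selective amplification.

```python
import numpy as np

def steady_state_response(M, h):
    """Fixed point of the linear recurrent rate model tau_r dv/dt = -v + h + M v:
    v_ss = (I - M)^(-1) h. Input components along an eigenvector of M with
    eigenvalue lam are amplified by 1 / (1 - lam)."""
    return np.linalg.solve(np.eye(M.shape[0]) - M, h)

# Symmetric M with one dominant eigenvalue close to 1 (values made up for illustration).
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))   # orthonormal eigenvectors
lams = np.array([0.9, 0.2, 0.1, 0.05, 0.0])
M = Q @ np.diag(lams) @ Q.T

h = rng.standard_normal(5)
v = steady_state_response(M, h)
# Per-mode amplification: approximately 1 / (1 - lams), so the lam = 0.9 mode dominates (10x).
print((Q.T @ v) / (Q.T @ h))
```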

Or, if one eigenvalue is exactly equal to 1 and the others are < 1, we can get integration of inputs and therefore persistent activity, since activity does not stop when the input stops. While synaptic modification rules can be used to establish such precise tuning, it is not clear how this is done in biological systems.

We can also see that recurrent networks exhibit stereotypical patterns of activity, largely determined by the recurrent interactions and potentially independent of the feedforward input; thus we can get sustained activity. Therefore recurrent connections can act as a form of memory.

Such memory is called working or short-term memory (seconds to hours). To establish long-term memories, the idea is that the memory is encoded in the synaptic weights: the weights are set when the memory is stored, and when a feedforward input similar to (or an incomplete version of) the one that created the memory arrives, persistent activity signals memory recall.

Associative memory: the recurrent weights are set so that the network has several fixed points which are identical to the patterns of activity representing the stored memories. Each fixed point has a basin of attraction, the set of inputs which will result in the net ending up at that fixed point. When presented with an input, the network effectively pattern-matches the input to the stored patterns. We can thus examine the capacity of networks to remember patterns by analysing the stability properties of the matrix encoded by the synaptic weights.
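As a sketch of the associative-memory idea (a classic Hopfield-style outer-product construction in Python/NumPy, offered as an example rather than the specific rule used in the lecture): Hebbian recurrent weights make the stored ±1 patterns fixed points, and iterating the dynamics from a corrupted cue falls into the nearest basin of attraction.

```python
import numpy as np

def store(patterns):
    """Hebbian recurrent weights whose fixed points approximate the stored
    +/-1 patterns (standard Hopfield-style outer-product rule)."""
    P = np.array(patterns, dtype=float)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, cue, steps=20):
    """Iterate the binary dynamics from a noisy or incomplete cue; the state
    falls into the basin of attraction of the closest stored pattern."""
    v = np.array(cue, dtype=float)
    for _ in range(steps):
        v = np.sign(W @ v)
        v[v == 0] = 1.0
    return v

patterns = [[1, -1, 1, -1, 1, -1, 1, -1],
            [1, 1, 1, 1, -1, -1, -1, -1]]
W = store(patterns)
cue = [1, -1, 1, -1, 1, -1, -1, -1]   # pattern 0 with one flipped element
print(recall(W, cue))                  # recovers pattern 0
```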

The interplay of excitatory and inhibitory connections can be shown to give rise to oscillations in networks. Network analysis now becomes problematic, so we use homogeneous excitatory and inhibitory populations of neurons (effectively 2 neuron groups) and carry out a phase-plane analysis. One can show that the non-linearity of the activation function allows for stable limit cycles.

We can also look at stochastic networks, where the input current is interpreted as a probability of firing: Boltzmann machines. These require a statistical analysis of network properties.
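A minimal sketch of the two-population (excitatory/inhibitory) rate model that such a phase-plane analysis starts from (Python/NumPy; the coupling strengths, time constants and sigmoid are illustrative placeholders, not values from the lecture, and whether the trajectory settles to a fixed point or a limit cycle depends on how they are tuned):

```python
import numpy as np

def simulate_ei(T=1.0, dt=0.0005):
    """Two homogeneous populations (E and I) with a saturating activation:
        tau_e dv_e/dt = -v_e + F(w_ee*v_e - w_ei*v_i + h_e)
        tau_i dv_i/dt = -v_i + F(w_ie*v_e - w_ii*v_i + h_i)
    All parameter values below are placeholders; the phase-plane behaviour
    (fixed point vs limit cycle) depends on how they are tuned."""
    F = lambda x: 1.0 / (1.0 + np.exp(-x))            # saturating non-linearity
    tau_e, tau_i = 0.010, 0.020                        # seconds
    w_ee, w_ei, w_ie, w_ii = 12.0, 10.0, 12.0, 2.0     # coupling strengths
    h_e, h_i = 1.5, -3.0                               # external drives
    v_e, v_i = 0.1, 0.1
    trace = np.empty((int(T / dt), 2))
    for n in range(trace.shape[0]):
        dv_e = (-v_e + F(w_ee * v_e - w_ei * v_i + h_e)) / tau_e
        dv_i = (-v_i + F(w_ie * v_e - w_ii * v_i + h_i)) / tau_i
        v_e += dt * dv_e
        v_i += dt * dv_i
        trace[n] = (v_e, v_i)
    return trace   # plot column 0 against column 1 to inspect the phase-plane trajectory

activity = simulate_ei()
```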