Presentation on theme: "Learning and Synaptic Plasticity"— Presentation transcript:

1 Learning and Synaptic Plasticity

2 Levels of Information Processing in the Nervous System: Molecules 0.01 μm, Synapses 1 μm, Neurons 100 μm, Local Networks 1 mm, Areas / „Maps“ 1 cm, Sub-Systems 10 cm, CNS 1 m.

3 Structure of a Neuron: At the dendrite the incoming signals (currents) arrive. At the soma the currents are finally integrated. At the axon hillock action potentials are generated if the potential crosses the membrane threshold. The axon transmits (transports) the action potential to distant sites. At the synapses the outgoing signals are transmitted onto the dendrites of the target neurons.

4 Schematic Diagram of a Synapse: Transmitter, Receptors (≈ Channels), Vesicles, Axon, Dendrite, etc.; the synaptic weight is denoted ω.

5 Overview of different methods

6 Different Types/Classes of Learning: (1) Unsupervised Learning (non-evaluative feedback): trial-and-error learning; no error signal, no influence from a teacher, correlation evaluation only. (2) Reinforcement Learning (evaluative feedback): classical and instrumental conditioning, reward-based learning; "good-bad" error signals; the teacher defines what is good and what is bad. (3) Supervised Learning (evaluative error-signal feedback): teaching, coaching, imitation learning, learning from examples, and more; rigorous error signals; direct influence from a teacher/teaching signal.

7 Basic Hebb-Rule: dω_i/dt = μ u_i v, with μ << 1. For learning: one input, one output. This is an unsupervised learning rule. A supervised learning rule (Delta Rule) uses no direct input or output terms but one error-function derivative, where the error function compares input with output examples. A reinforcement learning rule (TD-learning) uses one input, one output, and one reward.
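
The rule on this slide can be written down in a few lines; the following is a minimal sketch in Python for the one-input, one-output case, assuming simple Euler integration and illustrative values for the learning rate, time step, and input statistics (none of these come from the slides):

```python
import numpy as np

# Basic Hebb rule for one input u and one output v:
#   d(omega)/dt = mu * u * v,  with mu << 1
mu = 0.01          # small learning rate (mu << 1)
dt = 1.0           # illustrative time step
omega = 0.1        # initial synaptic weight

rng = np.random.default_rng(0)
for _ in range(100):
    u = rng.random()           # presynaptic activity (rate, >= 0)
    v = omega * u              # postsynaptic activity for a single input
    omega += dt * mu * u * v   # Hebbian change: always >= 0 here, so the weight only grows

print(omega)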

8 The influence of the type of learning on speed and autonomy of the learner: correlation-based learning (no teacher), reinforcement learning (indirect influence), reinforcement learning (direct influence), supervised learning (teacher), programming. Along this ordering the learning speed increases while the autonomy of the learner decreases.

9 Hebbian learning: "When an axon of cell A excites cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells so that A's efficiency ... is increased." Donald Hebb (1949)

10 Overview of different methods. You are here!

11 Hebbian Learning correlates inputs with outputs by the basic Hebb rule: dω_1/dt = μ v u_1, with μ << 1. Vector notation, cell activity: v = w · u. This is a dot product, where w is the weight vector and u the input vector. Strictly we need to assume that weight changes are slow, otherwise this turns into a differential equation.

12 Single input: dω_1/dt = μ v u_1, μ << 1. Many inputs: dw/dt = μ v u (as v is a single output, it is a scalar). Averaging over inputs: dw/dt = μ ⟨v u⟩; we can just average over all input patterns and approximate the weight change by this. Remember, this assumes that weight changes are slow. If we replace v with w · u we can write dw/dt = μ Q · w, where Q = ⟨u u⟩ is the input correlation matrix. Note: Hebb yields an unstable (always growing) weight vector!
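
As a complement, here is a small sketch of the averaged, vector-valued form dw/dt = μ Q · w, assuming random two-dimensional input patterns and illustrative constants; it mainly illustrates the instability noted on the slide (the weight norm keeps growing):

```python
import numpy as np

# Averaged Hebb rule in vector notation:
#   dw/dt = mu * Q . w,  where Q = <u u^T> is the input correlation matrix.
# The weights grow without bound along the principal eigenvector of Q.
rng = np.random.default_rng(0)
U = rng.random((1000, 2))          # 1000 two-dimensional input patterns (assumed data)
Q = U.T @ U / len(U)               # input correlation matrix <u u^T>

mu, dt = 0.01, 1.0
w = np.array([0.1, 0.1])
for step in range(500):
    w = w + dt * mu * (Q @ w)      # averaged Hebbian update

print(np.linalg.norm(w))           # keeps growing -> unstable rule
```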

13 Synaptic plasticity evoked artificially: examples of long-term potentiation (LTP) and long-term depression (LTD). LTP was first demonstrated by Bliss and Lømo in 1973; since then it has been induced in many different ways, usually in slice. LTD was robustly shown by Dudek and Bear in 1992, in hippocampal slice.

14

15

16 Why is this interesting?

17 LTP and Learning, e.g. the Morris water maze (Morris et al., 1986): a rat has to learn the position of a hidden platform. Time per quadrant (sec) is compared before and after learning, for control animals and for animals with blocked LTP.

18 Schematic Diagram of a Synapse: Transmitter, Receptors (≈ Channels), Vesicles, Axon, Dendrite, etc.; the synaptic weight is denoted ω.

19 LTP will lead to new synaptic contacts.

20

21 Synaptic Plasticity (Dudek and Bear, 1993): stimulation below roughly 10 Hz yields LTD (long-term depression), stimulation above roughly 10 Hz yields LTP (long-term potentiation).

22 Conventional LTP = Hebbian Learning: a symmetrical weight-change curve. Whether the presynaptic spike comes before or after the postsynaptic spike, the synaptic change (%) is the same: the temporal order of input and output does not play any role.

23 Spike timing dependent plasticity (STDP): Markram et al., 1997; pre-post pairings at +10 ms and -10 ms.

24 Synaptic Plasticity: STDP (Markram et al., 1997; Bi and Poo, 2001). A synapse with weight ω connects neuron A (presynaptic activity u) to neuron B (postsynaptic activity v); depending on the spike timing the pairing produces LTP or LTD.

25 Spike Timing Dependent Plasticity: Temporal Hebbian Learning. Pre precedes Post: long-term potentiation (causal). Pre follows Post: long-term depression (acausal, possibly). The synaptic change (%) is plotted against the time difference T [ms].

26 Back to the math. We had: single input dω_1/dt = μ v u_1, μ << 1; many inputs dw/dt = μ v u (as v is a single output, it is a scalar); averaging over inputs dw/dt = μ ⟨v u⟩, i.e. we can just average over all input patterns and approximate the weight change by this (remember, this assumes that weight changes are slow). If we replace v with w · u we can write dw/dt = μ Q · w, where Q = ⟨u u⟩ is the input correlation matrix. Note: Hebb yields an unstable (always growing) weight vector!

27 Covariance Rule(s): normally firing rates are only positive and plain Hebb would yield only LTP; hence we introduce a threshold to also get LTD. Output threshold: dw/dt = μ (v - θ) u, μ << 1. Input vector threshold: dw/dt = μ v (u - θ), μ << 1. Many times one sets the threshold as the average activity over some reference time period (training period): θ = ⟨v⟩ or θ = ⟨u⟩. Together with v = w · u we get dw/dt = μ C · w, where C is the covariance matrix of the input: C = ⟨(u - ⟨u⟩)(u - ⟨u⟩)⟩ = ⟨u u⟩ - ⟨u⟩⟨u⟩ = ⟨(u - ⟨u⟩) u⟩. With the output threshold, v < θ gives homosynaptic depression; with the input threshold, u < θ gives heterosynaptic depression.
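
A hedged sketch of the covariance rule with an output threshold; the input patterns, learning rate, and the way the threshold is estimated from a reference period are illustrative assumptions:

```python
import numpy as np

# Covariance rule with an output threshold (theta = average output over a
# reference/training period):  dw/dt = mu * (v - theta_v) * u
rng = np.random.default_rng(1)
U = rng.random((1000, 3))              # assumed input patterns
mu, dt = 0.005, 1.0
w = np.full(3, 0.1)

theta_v = np.mean(U @ w)               # threshold = average activity in a reference period
for u in U:
    v = w @ u                          # output as dot product v = w . u
    w += dt * mu * (v - theta_v) * u   # LTP when v > theta, homosynaptic depression when v < theta

# Averaged over the inputs this approximates dw/dt = mu * C . w,
# with C = <(u - <u>)(u - <u>)^T> the input covariance matrix.
C = np.cov(U.T, bias=True)
print(C.shape, w)
```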

28 The covariance rule can produce LTD without (!) postsynaptic activity. This is biologically unrealistic, and the BCM rule (Bienenstock, Cooper, Munro) takes care of this. BCM rule: dw/dt = μ v u (v - θ), μ << 1. The weight-change curve of the BCM rule as a function of the postsynaptic activity v can be compared with the experiment of Dudek and Bear, 1992.

29 BCM rule: dw/dt = μ v u (v - θ), μ << 1. As such this rule is again unstable, but BCM introduces a sliding threshold: dθ/dt = ν (v² - θ), with ν < 1. Note that the rate of the threshold change should be faster than the weight changes (ν >> μ), but slower than the presentation of the individual input patterns. This way the weight growth will be over-dampened relative to the (weight-induced) activity increase.
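
A small sketch of the BCM rule with its sliding threshold; the inputs and the concrete values of μ and ν are illustrative, chosen only to respect the ordering ν >> μ discussed above:

```python
import numpy as np

# BCM rule with sliding threshold:
#   dw/dt     = mu * v * u * (v - theta)
#   dtheta/dt = nu * (v**2 - theta)
# The threshold must adapt faster than the weights (nu >> mu) but slower
# than individual pattern presentations.
rng = np.random.default_rng(2)
U = rng.random((2000, 2))          # assumed input patterns
mu, nu, dt = 0.001, 0.05, 1.0      # nu >> mu (illustrative values)
w = np.full(2, 0.3)
theta = 0.0

for u in U:
    v = w @ u
    w += dt * mu * v * u * (v - theta)   # LTD for v < theta, LTP for v > theta
    theta += dt * nu * (v**2 - theta)    # sliding threshold tracks <v^2>

print(w, theta)
```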

30 Open symbols: control condition; filled symbols: light-deprived animals. Less input leads to a shift of the threshold, which enables more LTP. BCM is just one type of (implicit) weight normalization. (Kirkwood et al., 1996)

31 Problem: Hebbian learning can lead to unlimited weight growth. Solution: weight normalization, either (a) subtractive (subtract the mean change of all weights from each individual weight change) or (b) multiplicative (multiply each weight by a gradually decreasing factor). Evidence for weight normalization: reduced weight increase as soon as weights are already big (Bi and Poo, 1998, J. Neurosci.).
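
The two normalization schemes can be sketched as follows; the inputs, learning rate, and target norm are illustrative assumptions, not values from the lecture:

```python
import numpy as np

# Sketch of the two normalization schemes from the slide.
def subtractive_step(w, dw):
    """Subtract the mean change of all weights from each individual change."""
    return w + (dw - np.mean(dw))

def multiplicative_step(w, dw, target_norm=1.0):
    """Apply the Hebbian change, then scale the whole weight vector back to a
    fixed norm (i.e. multiply each weight by a gradually decreasing factor)."""
    w_new = w + dw
    return target_norm * w_new / np.linalg.norm(w_new)

rng = np.random.default_rng(3)
w_sub = np.array([0.6, 0.8])
w_mult = np.array([0.6, 0.8])
for _ in range(200):
    u = rng.random(2)
    dw_sub = 0.01 * (w_sub @ u) * u        # plain Hebbian change (would grow forever)
    dw_mult = 0.01 * (w_mult @ u) * u
    w_sub = subtractive_step(w_sub, dw_sub)
    w_mult = multiplicative_step(w_mult, dw_mult)

print(w_sub, np.sum(w_sub))                # subtractive: the sum of the weights is conserved
print(w_mult, np.linalg.norm(w_mult))      # multiplicative: the weight norm stays at 1
```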

32 Examples of Applications. Kohonen (1984): speech recognition, a map of phonemes in the Finnish language. Goodhill (1993): a model for the development of retinotopy and ocular dominance, based on Kohonen maps (SOM). Angeliol et al. (1988): travelling salesman problem (an optimization problem). Kohonen (1990): learning vector quantization (a pattern classification problem). Ritter and Kohonen (1989): semantic maps.

33 Differential Hebbian Learning of Sequences: learning to act in response to sequences of sensor events.

34 Overview of different methods. You are here!

35 History of the Concept of Temporally Asymmetrical Learning: Classical Conditioning (I. Pavlov).

36

37 History of the Concept of Temporally Asymmetrical Learning: Classical Conditioning (I. Pavlov). Correlating two stimuli which are shifted with respect to each other in time. Pavlov's dog: "Bell comes earlier than Food." This requires the system to remember the stimuli. Eligibility trace: a synapse remains "eligible" for modification for some time after it was active (Hull 1938, then a still abstract concept).

38 Classical Conditioning: Eligibility Traces. The unconditioned stimulus (Food) enters with fixed weight ρ_0 = 1; the conditioned stimulus (Bell) enters with plastic weight ρ_1 via a stimulus trace E; both are summed to produce the response. The first stimulus needs to be "remembered" in the system.

39 History of the Concept of Temporally Asymmetrical Learning: Classical Conditioning (I. Pavlov), Eligibility Traces. Note: there are vastly different time scales for (Pavlov's) behavioural experiments, typically up to 4 seconds, as compared to STDP at neurons, typically 40-60 milliseconds (max.).

40 Defining the Trace. In general there are many ways to do this, but usually one chooses a trace that looks biologically realistic and allows for some analytical calculations, too. EPSP-like functions: the α-function, the double exponential (this one is easiest to handle analytically and thus often used), and the dampened sine wave (shows an oscillation).
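
The three trace shapes named on this slide can be written down directly; only the functional forms are standard, while the parameterization (time constants, frequency) is an illustrative assumption:

```python
import numpy as np

# Common choices for the eligibility/stimulus trace h(t): alpha-function,
# double exponential, damped sine. Parameters are illustrative.
def alpha_function(t, tau=20.0):
    return (t / tau) * np.exp(1 - t / tau) * (t >= 0)

def double_exponential(t, tau_rise=5.0, tau_decay=40.0):
    return (np.exp(-t / tau_decay) - np.exp(-t / tau_rise)) * (t >= 0)

def damped_sine(t, tau=40.0, freq=0.02):
    return np.exp(-t / tau) * np.sin(2 * np.pi * freq * t) * (t >= 0)

t = np.arange(0.0, 200.0, 1.0)   # time in ms (illustrative)
for name, h in (("alpha", alpha_function(t)),
                ("double_exp", double_exponential(t)),
                ("damped_sine", damped_sine(t))):
    print(name, round(h.max(), 3))
```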

41 Overview of different methods: the mathematical formulation of the learning rules is similar, but the time scales are very different.

42 Differential Hebb Learning Rule. Early: "Bell" (input x_i), late: "Food" (input x_0). Simpler notation: x = input, u = traced input, v = output, v'(t) = derivative of the output. The weight of input i changes in proportion to the traced input and the derivative of the output: dω_i/dt = μ u_i v'(t).

43 A convolution is used to define the traced input u; a correlation is used to calculate the weight growth.

44 Differential Hebbian Learning: the filtered input is correlated with the derivative of the output. This produces an asymmetric weight-change curve as a function of the time difference T (if the filters h produce unimodal "humps").
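
A compact numerical sketch of this rule for one traced input and one output: both signals are obtained by filtering a single event with a unimodal α-kernel (an assumed choice), and the weight change integrates u(t)·v'(t); its sign then depends only on the temporal order of the two events:

```python
import numpy as np

def kernel(t, tau=10.0):
    """Unimodal 'hump' (alpha-function) used to filter the raw events."""
    return (t / tau) * np.exp(1 - t / tau) * (t >= 0)

def weight_change(T, mu=0.01, t_max=300.0, dt=0.1):
    """Total weight change d(omega) = mu * integral u(t) * v'(t) dt
    for an input event at time 0 and an output event at time T."""
    t = np.arange(0.0, t_max, dt)
    u = kernel(t)                 # traced (filtered) input
    v = kernel(t - T)             # filtered output, shifted by T
    v_prime = np.gradient(v, dt)  # derivative of the output
    return mu * np.sum(u * v_prime) * dt

# Output after input (T > 0) -> positive change (LTP-like);
# output before input (T < 0) -> negative change (LTD-like).
for T in (-20.0, -5.0, 5.0, 20.0):
    print(T, round(weight_change(T), 4))
```

Because the kernel is unimodal, the resulting curve over T is asymmetric around zero, which is the point the slide makes.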

45 Conventional LTP: a symmetrical weight-change curve. Whether the presynaptic spike comes before or after the postsynaptic spike, the synaptic change (%) is the same: the temporal order of input and output does not play any role.

46 Differential Hebbian Learning: the filtered input is correlated with the derivative of the output, producing an asymmetric weight-change curve as a function of the time difference T (if the filters h produce unimodal "humps").

47 Spike-timing-dependent plasticity (STDP): the weight-change curve (Bi and Poo, 2001), synaptic change (%) against T = t_Post - t_Pre in ms. Pre precedes Post: long-term potentiation; Pre follows Post: long-term depression. There is some vague shape similarity to the differential-Hebbian curve.

48 Overview of different methods. You are here!

49 The biophysical equivalent of Hebb's postulate: a plastic NMDA/AMPA synapse receives the presynaptic signal (glutamate), and the postsynaptic side provides a source of depolarization. Pre-post correlation, but why is this needed?

50 Plasticity is mainly mediated by so-called N-methyl-D-aspartate (NMDA) channels. These channels respond to glutamate as their transmitter and they are voltage dependent:

51 Biophysical model: structure. A plastic NMDA synapse receives the presynaptic input x, while the postsynaptic side provides the depolarization; possible sources of depolarization are (1) any other drive (AMPA or NMDA) and (2) a back-propagating spike. Hence NMDA synapses (channels) do require a (Hebbian) correlation between pre- and postsynaptic activity!

52 Local events at the synapse. Local current sources "under" the synapse: the synaptic current I_synaptic. Global currents: I_BP, the influence of a back-propagating spike, and I_Dendritic, currents from all parts of the dendritic tree.

53

54 On "Eligibility Traces": the presynaptic spike gives rise to the synaptic input (via the NMDA conductance g_NMDA and the synaptic weight), a BP- or D-spike acts as the depolarization source, and both contribute to the membrane potential.

55 Model structure: a dendritic compartment with a plastic synapse containing NMDA channels (NMDA/AMPA, conductance g), which act as the source of Ca²⁺ influx and as coincidence detector. Sources of depolarization: (1) back-propagating spike, (2) local dendritic spike.

56 Plasticity rule (differential Hebb) for the plastic NMDA/AMPA synapse: the instantaneous weight change is the product of a presynaptic influence (the glutamate effect on the NMDA channels) and a postsynaptic influence (derived from the source of depolarization).

57 The presynaptic influence is the normalized NMDA conductance. NMDA channels are instrumental for LTP and LTD induction (Malenka and Nicoll, 1999; Dudek and Bear, 1992).

58 Depolarizing potentials in the dendritic tree: dendritic spikes (Larkum et al., 2001; Golding et al., 2002; Häusser and Mel, 2003) and back-propagating spikes (Stuart et al., 1997).

59 The postsynaptic influence F: for F we use a low-pass filtered ("slow") version of a back-propagating or a dendritic spike.
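
The pieces from the last few slides can be put together in a rough simulation sketch: the presynaptic influence is a normalized NMDA-like conductance, the postsynaptic influence F is a low-pass filtered back-propagating spike, and the weight change integrates their product in differential-Hebb fashion. All kernels, time constants, and the Gaussian stand-in for the BP spike are illustrative assumptions, not the published model:

```python
import numpy as np

dt = 0.1                                   # ms, illustrative resolution
t = np.arange(0.0, 200.0, dt)

def double_exp(t, tau_rise, tau_decay):
    """Normalized double-exponential kernel, zero for t < 0."""
    k = (np.exp(-t / tau_decay) - np.exp(-t / tau_rise)) * (t >= 0)
    return k / k.max()

def weight_change(t_pre, t_post, mu=0.01):
    g_nmda = double_exp(t - t_pre, tau_rise=2.0, tau_decay=40.0)   # slow NMDA conductance (presynaptic influence)
    bp_spike = np.exp(-((t - t_post) / 1.0) ** 2)                  # narrow BP spike (Gaussian stand-in)
    # Postsynaptic influence F: low-pass filtered ("slow") version of the BP spike
    F = np.zeros_like(t)
    tau_f = 10.0
    for i in range(1, len(t)):
        F[i] = F[i - 1] + dt * (bp_spike[i - 1] - F[i - 1]) / tau_f
    dF = np.gradient(F, dt)
    return mu * np.sum(g_nmda * dF) * dt                           # differential-Hebb-like weight change

# Pre before post -> potentiation, post before pre -> depression (asymmetric curve)
for delta in (-20.0, -5.0, 5.0, 20.0):
    print(delta, round(weight_change(t_pre=100.0, t_post=100.0 + delta), 5))
```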

60 BP and D-spikes.

61 Weight change curves. Source of depolarization: back-propagating spikes. The weight-change curve plots the change against T = t_Post - t_Pre, arising from the interaction of NMDAr activation and the back-propagating spike.

62 Things to remember: the biophysical equivalent of Hebb's pre-post correlation postulate. Presynaptic signal (Glu): a slow-acting NMDA signal as the presynaptic influence. Postsynaptic: a source of depolarization at the plastic NMDA/AMPA synapse; possible sources are a BP-spike, a dendritic spike, or local depolarization.

63 One word about Supervised Learning

64 Overview of different methods: Supervised Learning (and many more). You are here!

65 Supervised learning methods are mostly non-neuronal and will therefore not be discussed here.

66 So far: open-loop learning (all slides so far!).

67 Closed-loop learning: learning to act (to produce appropriate behavior); instrumental (operant) conditioning (all slides to come now!).

68 This is an open-loop system (Pavlov, 1927): a temporal sequence in which the Bell (conditioned input, sensor 2) precedes the Food, leading to Salivation. It is open-loop because the behavior does not feed back onto the sensory input.

69 Closed loop: an adaptable neuron is coupled to the environment, sensing from it and behaving on it.

