TNI: Computational Neuroscience
Instructors: Peter Latham, Maneesh Sahani, Peter Dayan
TA: Mandana Ahmadi, mandana@gatsby.ucl.ac.uk
Website: http://www.gatsby.ucl.ac.uk/~mandana/TNI/TNI.htm (slides will be on website)
Lectures: Tuesday/Friday, 11:00-1:00.
Review: Friday, 1:00-3:00.
Homework: Assigned Friday, due Friday (1 week later). First homework: assigned Oct. 3, due Oct. 10.
What is computational neuroscience? Our goal: figure out how the brain works.
[Figure: a cube of neural tissue, 10 microns on a side.] There are about 10 billion cubes of this size in your brain!
How do we go about making sense of this mess? David Marr (1945-1980) proposed three levels of analysis:
1. the problem (computational level)
2. the strategy (algorithmic level)
3. how it's actually done by networks of neurons (implementational level)
Example #1: memory.
the problem: recall events, typically based on partial information (associative or content-addressable memory).
an algorithm: dynamical systems with fixed points. [Figure: trajectories converging to fixed points in activity space; axes r1, r2, r3.]
neural implementation: Hopfield networks:
$x_i = \mathrm{sign}\big(\sum_j J_{ij} x_j\big)$
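A minimal sketch of the idea (not from the slides; the network size, number of patterns, Hebbian normalization, and noise level are all illustrative assumptions): store a few binary patterns, then recover one from a corrupted cue by iterating the sign update above.

```python
# Minimal Hopfield-network sketch (illustrative, not the course's code).
# Store binary patterns with a Hebbian rule, then recall from partial
# information by iterating x_i = sign(sum_j J_ij x_j).
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 3                          # neurons, stored patterns (assumed sizes)
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian weights: J_ij = (1/N) sum_mu xi_i^mu xi_j^mu, no self-connections.
J = patterns.T @ patterns / N
np.fill_diagonal(J, 0.0)

# Corrupt one pattern: flip 20% of its bits (the "partial information").
x = patterns[0].copy()
flip = rng.choice(N, size=N // 5, replace=False)
x[flip] *= -1

# Recall: asynchronous sign updates; the dynamics settle into a fixed point.
for _ in range(10 * N):
    i = rng.integers(N)
    x[i] = 1 if J[i] @ x >= 0 else -1

print("overlap with stored pattern:", (x @ patterns[0]) / N)  # ~1.0 on success
```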
Example #2: vision. the problem (Marr): 2-D image on retina → 3-D reconstruction of a visual scene.
Example #2: vision.
the problem (modern version): 2-D image on retina → recover the latent variables.
[Figure: cartoon scene; latent variables: house, sun, tree, bad artist.]
Example #2: vision.
the problem (modern version): 2-D image on retina → reconstruction of latent variables.
an algorithm: graphical models. [Figure: latent variables x1, x2, x3 connected to a low-level representation r1, r2, r3, r4; inference maps r back to x.]
implementation in networks of neurons: no clue.
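To make "inference in a graphical model" concrete, here is a toy sketch (everything in it is assumed for illustration, not taken from the slides): one binary latent variable x generates noisy low-level responses r, and inference is just computing the posterior p(x | r) by Bayes' rule.

```python
# Toy graphical-model inference sketch (illustrative, not the course's code):
# a binary latent variable x generates noisy binary "low-level" responses r;
# inference = computing the posterior p(x | r) with Bayes' rule.
import numpy as np

rng = np.random.default_rng(1)
prior = np.array([0.5, 0.5])                # p(x=0), p(x=1)  (assumed)
# p(r_k = 1 | x): each of 4 low-level units fires more when x = 1 (assumed).
p_fire = np.array([[0.2, 0.3, 0.1, 0.25],   # given x = 0
                   [0.8, 0.7, 0.9, 0.75]])  # given x = 1

x_true = 1
r = (rng.random(4) < p_fire[x_true]).astype(int)   # sample the responses

# Likelihood of r under each latent state, then normalize (Bayes' rule).
lik = np.prod(np.where(r == 1, p_fire, 1 - p_fire), axis=1)
posterior = prior * lik
posterior /= posterior.sum()
print("r =", r, " p(x=1 | r) =", posterior[1])
```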
Comment #1:
the problem: easier
the algorithm: harder
neural implementation: harder (often ignored!!!)
A common approach: experimental observation → model. Usually very underconstrained!!!!
Example i: CPGs (central pattern generators). [Figure: oscillating firing rate.] Too easy!!!
Example ii: single cell modeling.
$C\,dV/dt = -g_L (V - V_L) - g_K n^4 (V - V_K) - \dots$
$dn/dt = \dots$
Lots and lots of parameters... which ones should you use?
Example iii: network modeling: lots and lots of parameters × thousands of neurons.
Comment #2: You need to know a lot of math!!!!! [Figures: fixed-point dynamics in activity space; graphical model with latents x1, x2, x3 and responses r1, r2, r3, r4.]
Comment #3: This is a good goal, but it's hard to do in practice. Our actual bread and butter:
1. Explaining observations (mathematically).
2. Using sophisticated analysis to design simple experiments that test hypotheses.
A classic example: Hodgkin and Huxley. [Figure: neuron schematic (dendrites, soma, axon) and a 100 ms voltage trace running from -50 mV to +40 mV; each spike is ~1 ms wide.]
$C\,dV/dt = -g_L (V - V_L) - g_{Na} m^3 h (V - V_{Na}) - \dots$
$dm/dt = \dots$
$\dots$
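As a rough illustration of how equations like these are integrated numerically, here is a forward-Euler sketch using the standard squid-axon parameter values (a sketch under those assumptions, not the course's code):

```python
# Sketch: forward-Euler integration of the Hodgkin-Huxley equations
# (standard squid-axon parameters; illustrative, not the course's code).
import numpy as np

# Rate functions for the gating variables m, h, n (V in mV).
def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)

C, gL, gNa, gK = 1.0, 0.3, 120.0, 36.0     # uF/cm^2 and mS/cm^2
VL, VNa, VK = -54.4, 50.0, -77.0           # reversal potentials (mV)
dt, T, I_ext = 0.01, 50.0, 10.0            # ms, ms, uA/cm^2 (assumed drive)

V, m, h, n = -65.0, 0.05, 0.6, 0.32        # resting initial conditions
for step in range(int(T / dt)):
    I_ion = gL*(V - VL) + gNa*m**3*h*(V - VNa) + gK*n**4*(V - VK)
    V += dt * (I_ext - I_ion) / C
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
print("final V (mV):", round(V, 2))  # with this drive the model spikes repeatedly
```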
Comment #4: some algorithms are easy to implement on a computer but hard in a brain, and vice-versa. We should be looking for the vice-versa ones. It can be hard to tell which is which: the algorithmic and implementational levels are linked!!!
Basic facts about the brain
Your brain
Your cortex unfolded: a sheet roughly 30 cm across and ~0.5 cm thick, with 6 layers. Neocortex (cognition) sits above the subcortical structures (emotions, reward, homeostasis, much much more).
Your cortex unfolded: zoom in on 1 cubic millimeter (~3×10⁻⁵ oz).
1 mm³ of cortex:
  50,000 neurons
  10,000 connections/neuron (⇒ 500 million connections)
  4 km of axons

1 mm² of a CPU:
  1 million transistors
  2 connections/transistor (⇒ 2 million connections)
  0.002 km of wire

whole brain (2 kg):
  10¹¹ neurons
  10¹⁵ connections
  8 million km of axons

whole CPU:
  10⁹ transistors
  2×10⁹ connections
  2 km of wire
[Figure: dendrites (input), soma (spike generation), axon (output); 100 ms voltage trace from -50 mV to +40 mV, spikes ~1 ms wide.]
[Figure: current flow at a synapse.]
[Figure: membrane voltage trace, 100 ms, from -50 mV to +40 mV.]
neuron j → neuron i. When neuron j emits a spike, the voltage V on neuron i deflects for ~10 ms: an EPSP if the synapse is excitatory, an IPSP if it is inhibitory.
amplitude = w_ij, which changes with learning.
[Figure: spike in neuron j; EPSP and IPSP traces on neuron i, ~10 ms wide.]
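A minimal sketch of this picture (the alpha-function kernel, the spike times, and the weights below are all made up for illustration): the postsynaptic voltage is a weighted sum of ~10 ms PSPs, one per presynaptic spike, with w_ij setting the amplitude and sign.

```python
# Sketch: a neuron's voltage as a weighted sum of PSPs (illustrative).
# Each presynaptic spike from neuron j adds w_ij * K(t - t_spike), where
# K is a ~10 ms kernel; positive w -> EPSP, negative w -> IPSP.
import numpy as np

dt, T = 0.1, 200.0                     # ms
t = np.arange(0, T, dt)
tau = 10.0                             # PSP time constant (ms), assumed
K = (t / tau) * np.exp(1 - t / tau)    # alpha-function kernel, peak = 1

spike_times = {0: [20.0, 90.0], 1: [50.0, 95.0]}   # presynaptic spikes (ms)
w = {0: +0.5, 1: -0.3}                 # weights: one EPSP, one IPSP (mV)

V = np.full_like(t, -65.0)             # resting potential (mV)
for j, times in spike_times.items():
    for ts in times:
        idx = t >= ts
        V[idx] += w[j] * K[: idx.sum()]
print("peak deflection from rest (mV):", round(np.max(np.abs(V + 65.0)), 3))
```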
A bigger picture view of the brain
[Diagram: latent variables x → peripheral spikes r → sensory processing → r̂, a "direct" code for latent variables → cognition, memory, action selection → r̂′, a "direct" code for motor actions → motor processing → peripheral spikes r′ → motor actions x′. Everything between r and r′ is the brain.]
[Figure build: a stick figure evokes peripheral spikes r; caption: "you are the cutest stick figure ever!"]
In some sense, action selection is the most important problem: if we don’t choose the right actions, we don’t reproduce, and all the neural coding and computation in the world isn’t going to help us.
Do I call him/her and risk rejection and humiliation, or do I play it safe, and stay home on Saturday night and eat Oreos?
Problems:
1. How does the brain extract latent variables?
2. How does it manipulate latent variables?
3. How does it learn to do both?
Ask at two levels:
1. What are the algorithms?
2. How are they implemented in neural hardware?
What do we know about the brain? (A highly biased survey.)
a. Anatomy. We know a lot about what is where. But be careful about labels: neurons in motor cortex sometimes respond to color.
Connectivity. We know (more or less) which area is connected to which. We don't know the wiring diagram at the microscopic level.
b. Single neurons. We know very well how point neurons work (think Hodgkin-Huxley).
Dendrites: lots of potential for incredibly complex processing. My guess: they make neurons bigger and reduce wiring length (see the work of Mitya Chklovskii). How much I would bet: 20p.
c. The neural code. My guess: once you get away from the periphery, it's mainly firing rate; an inhomogeneous Poisson process with a refractory period is a good model of spike trains. How much I would bet: £100.
The role of correlations: still unknown. My guess: don't have one.
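A minimal sketch of that proposed spike-train model (the sinusoidal rate and the 2 ms refractory period are assumptions for illustration):

```python
# Sketch: spikes from an inhomogeneous Poisson process with a refractory
# period -- the slide's proposed model of cortical spike trains (illustrative).
import numpy as np

rng = np.random.default_rng(2)
dt, T = 0.001, 2.0                                  # s
t = np.arange(0, T, dt)
rate = 20.0 + 15.0 * np.sin(2 * np.pi * 1.0 * t)    # time-varying rate (Hz), assumed
t_ref = 0.002                                       # absolute refractory period (s), assumed

spikes, last = [], -np.inf
for i, ti in enumerate(t):
    # Spike with probability rate*dt, but only outside the refractory period.
    if ti - last >= t_ref and rng.random() < rate[i] * dt:
        spikes.append(ti)
        last = ti
print(f"{len(spikes)} spikes in {T} s (mean rate ~{len(spikes)/T:.1f} Hz)")
```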
d. Recurrent networks of spiking neurons. This is a field that is advancing rapidly! There were two absolutely seminal papers about a decade ago:
van Vreeswijk and Sompolinsky (Science, 1996)
van Vreeswijk and Sompolinsky (Neural Comp., 1998)
We now understand very well randomly connected networks (harder than you might think), and (I believe) we are on the verge of:
i) understanding networks that have interesting computational properties.
ii) computing the correlational structure in those networks.
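For flavor, here is a sketch of a randomly connected network of excitatory and inhibitory leaky integrate-and-fire neurons, loosely in the spirit of those papers (every number below is illustrative, not taken from van Vreeswijk and Sompolinsky):

```python
# Sketch: a randomly connected excitatory/inhibitory LIF network
# (illustrative parameters only; not the published balanced-network model).
import numpy as np

rng = np.random.default_rng(3)
NE, NI = 400, 100
N = NE + NI
p = 0.1                                        # connection probability (assumed)
J = np.zeros((N, N))
J[:, :NE] = (rng.random((N, NE)) < p) * 0.3    # excitatory synapses
J[:, NE:] = (rng.random((N, NI)) < p) * -1.3   # stronger inhibitory synapses

dt, T, tau = 0.1, 500.0, 20.0                  # ms
V_th, V_reset, I_ext = 1.0, 0.0, 1.05          # drive just above threshold (assumed)
V = rng.random(N)
spike_count = np.zeros(N)
for step in range(int(T / dt)):
    spiked = V >= V_th
    spike_count += spiked
    V[spiked] = V_reset
    # Leaky integration plus recurrent input from this step's spikes.
    V += dt / tau * (-V + I_ext) + J @ spiked
print("mean rate (Hz):", round(1000 * spike_count.mean() / T, 2))
```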
e. Learning. We know a lot of facts (LTP, LTD, STDP). It's not clear which, if any, are relevant. The relationship between learning rules and computation is essentially unknown.
Theorists are starting to develop unsupervised learning algorithms, mainly ones that maximize mutual information. These are promising, but the link to the brain has not been fully established.
What is unsupervised learning? Learning structure from data without any help from anybody.
Example: most visual scenes are very unlikely to occur. 1000 × 1000 pixels ⇒ a million-dimensional space. The space of possible pictures is much smaller, and forms a very complicated manifold.
[Figure: axes pixel 1, pixel 2; a curved manifold of possible visual scenes.]
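A linear caricature of this point (real scene manifolds are nonlinear; the toy generative model below is an assumption for illustration): when data concentrate near a low-dimensional structure, almost all of pixel space is empty, and even plain PCA reveals that only a few directions carry variance.

```python
# Sketch: "most images are unlikely" in miniature (illustrative).
# Data generated from 2 latent variables live near a 2-D subspace of a
# 100-dimensional "pixel" space; PCA finds the few directions that matter.
import numpy as np

rng = np.random.default_rng(4)
latents = rng.standard_normal((1000, 2))          # 2 latent variables (assumed)
mixing = rng.standard_normal((2, 100))            # toy generative model
data = latents @ mixing + 0.05 * rng.standard_normal((1000, 100))

# Eigen-spectrum of the covariance: variance concentrates in ~2 directions.
cov = np.cov(data, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
print("top 4 eigenvalues:", np.round(eigvals[:4], 3))
print("fraction of variance in top 2:", round(eigvals[:2].sum() / eigvals.sum(), 3))
```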
What is unsupervised learning? Learning from spikes:
[Figure: joint activity of neuron 1 and neuron 2; the responses fall into two clusters, "dog" and "cat".]
A word about learning (remember these numbers!!!): You have about 10¹⁵ synapses. If it takes 1 bit of information to set a synapse, you need 10¹⁵ bits to set all of them. 30 years ≈ 10⁹ seconds. To set 1/10 of your synapses in 30 years, you must absorb 100,000 bits/second. Learning in the brain is almost completely unsupervised!!!
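The arithmetic, spelled out with the slide's own numbers (nothing new added):

```python
# Sanity check of the slide's arithmetic (same numbers as the slide).
synapses = 1e15                       # total synapses
seconds = 1e9                         # ~30 years in seconds
bits_needed = synapses / 10           # set 1/10 of synapses, 1 bit each
print(bits_needed / seconds, "bits/second")   # -> 100000.0
```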
f. Where we know the algorithm, we know the neural implementation (sort of): vestibular system, sound localization, echolocation, addition. This is not a coincidence!!!! Remember David Marr:
1. the problem (computational level)
2. the strategy (algorithmic level)
3. how it's actually done by networks of neurons (implementational level)
What we know: my score (1-10).
a. Anatomy: 5
b. Single neurons: 6
c. The neural code: 6
d. Recurrent networks of spiking neurons: 3
e. Learning: 2
The hard problems:
1. How does the brain extract latent variables? 1.001
2. How does it manipulate latent variables? 1.002
3. How does it learn to do both? 1.001
Outline:
1. Basics: single neurons/axons/dendrites/synapses. (Latham)
2. Language of neurons: neural coding. (Sahani)
3. Learning at network and behavioral level. (Dayan)
4. What we know about networks (very little). (Latham)
Outline for this part of the course (biophysics):
1. What makes a neuron spike.
2. How current propagates in dendrites.
3. How current propagates in axons.
4. How synapses work.
5. Lots and lots of math!!!