
1 Explain the difference between P(t,s), P(s,t), P(t|s) and P(s|t). Are any of these terms equal to each other? When does P(s,t) = P(s) × P(t) hold? Explain the following formula intuitively: what we see: P(t,s) = P(t|s) P(s); what our brain “sees”: P(t,s) = P(s|t) P(t). Explain the difference between the “we” view and the “brain” view here. Both see P(t,s) [left side], but each sees it in a different way [right side]. Why? How does one measure P(t,s) directly? How does one measure P(s,t) directly? Complete Bayes’ formula for: P(s|t) = ???? Why is “reverse correlation” called that?
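A numerical sketch of these relations, using a small made-up joint distribution over two stimuli and two spike trains (all probabilities are invented for illustration):

```python
# A made-up joint distribution P(s,t) over two stimuli and two spike trains.
P_st = {("s1", "t1"): 0.3, ("s1", "t2"): 0.1,
        ("s2", "t1"): 0.2, ("s2", "t2"): 0.4}

# Marginals: P(s) by summing over t, P(t) by summing over s.
P_s = {"s1": 0.4, "s2": 0.6}
P_t = {"t1": 0.5, "t2": 0.5}

# P(t,s) = P(s,t): a joint probability is symmetric in its arguments.
# The conditionals P(t|s) and P(s|t), however, generally differ:
P_t_given_s = {(s, t): p / P_s[s] for (s, t), p in P_st.items()}
P_s_given_t = {(s, t): p / P_t[t] for (s, t), p in P_st.items()}

# Bayes' rule connects them: P(s|t) = P(t|s) * P(s) / P(t).
for (s, t), p in P_st.items():
    assert abs(P_s_given_t[(s, t)] - P_t_given_s[(s, t)] * P_s[s] / P_t[t]) < 1e-12

# P(s,t) = P(s) * P(t) holds only if s and t are independent -- not here:
print(P_st[("s1", "t1")], P_s["s1"] * P_t["t1"])   # 0.3 vs 0.2
```

The independence check at the end answers the last part of the question: equality of the joint and the product of marginals is exactly the definition of statistical independence.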

2 Neural Codes

3 Neuronal Codes – Action potentials as the elementary units. Voltage trace recorded from a brain cell of a fly.

4 Neuronal Codes – Action potentials as the elementary units. Voltage trace recorded from a brain cell of a fly, after band-pass filtering.

5 Neuronal Codes – Action potentials as the elementary units. Voltage trace recorded from a brain cell of a fly, after band-pass filtering; spike events generated electronically by a threshold discriminator circuit.

6 Neuronal Codes – Probabilistic response and Bayes’ rule. Stimulus → spike trains. Conditional probability: given a stimulus, how likely is it to observe a certain spike train?

7 Neuronal Codes – Probabilistic response and Bayes’ rule. Conditional probability; ensembles of signals. Natural situation: joint probability. Experimental situation: we choose s(t), so we know the prior distribution and the joint probability. Asking (left side): how frequently do we observe stimulus and spike train TOGETHER? Answer (right side): as often as this spike train is likely to follow the stimulus presentation (1st term) times the probability of having presented this stimulus out of the many stimuli we use (2nd term).

8 Conditional probability is the probability of some event A, given the occurrence of some other event B. Conditional probability is written P(A|B), and is read "the probability of A, given B". Joint probability is the probability of two events in conjunction, that is, the probability of both events together. The joint probability of A and B is written P(A,B). Consider the simple scenario of rolling two fair six-sided dice, labelled die 1 and die 2. Define the following three events: A: die 1 lands on 3. B: die 2 lands on 1. C: the dice sum to 8. The prior probability of each event describes how likely the outcome is before the dice are rolled, without any knowledge of the roll's outcome. For example, die 1 is equally likely to fall on each of its 6 sides, so P(A) = 1/6. Similarly, P(B) = 1/6. Likewise, of the 6 × 6 = 36 possible ways that two dice can land, only 5 result in a sum of 8 (namely 2 and 6, 3 and 5, 4 and 4, 5 and 3, and 6 and 2), so P(C) = 5/36. Prior
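These priors can be checked by brute-force enumeration of all 36 equally likely outcomes:

```python
from itertools import product

# All 6 x 6 = 36 equally likely ways the two dice can land.
outcomes = list(product(range(1, 7), repeat=2))

P_A = sum(1 for d1, d2 in outcomes if d1 == 3) / len(outcomes)       # die 1 lands on 3
P_B = sum(1 for d1, d2 in outcomes if d2 == 1) / len(outcomes)       # die 2 lands on 1
P_C = sum(1 for d1, d2 in outcomes if d1 + d2 == 8) / len(outcomes)  # dice sum to 8

print(P_A, P_B, P_C)   # 1/6, 1/6, and 5/36
```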

9 A: die 1 lands on 3. B: die 2 lands on 1. C: the dice sum to 8. Some of these events can occur at the same time; for example, events A and C can happen together, in the case where die 1 lands on 3 and die 2 lands on 5. This is the only one of the 36 outcomes where both A and C occur, so its probability is 1/36. The probability of both A and C occurring is called the joint probability of A and C, and is written P(A,C) = 1/36. On the other hand, if die 2 lands on 1, the dice cannot sum to 8, so P(B,C) = 0. Now suppose we roll the dice and cover up die 2, so we can only see die 1, and observe that die 1 landed on 3. Given this partial information, the probability that the dice sum to 8 is no longer 5/36; instead it is 1/6, since die 2 must land on 5 to achieve this result. This is called the conditional probability, because it is the probability of C under the condition that A is observed, and is written P(C|A), which is read "the probability of C given A". On the other hand, if we roll the dice, cover up die 2 and observe die 1, this has no impact on the probability of event B, which only depends on die 2. We say events A and B are statistically independent, or just independent, and in this case P(B|A) = P(B). Joint Conditional
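The joint and conditional values, and the independence of A and B, can likewise be verified by enumeration (exact arithmetic via `fractions`):

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))   # (die 1, die 2)

def P(event):
    """Probability of an event under the uniform distribution on outcomes."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

A = lambda o: o[0] == 3          # die 1 lands on 3
B = lambda o: o[1] == 1          # die 2 lands on 1
C = lambda o: o[0] + o[1] == 8   # dice sum to 8

P_AC = P(lambda o: A(o) and C(o))                  # joint probability of A and C
P_BC = P(lambda o: B(o) and C(o))                  # impossible combination: 0
P_C_given_A = P_AC / P(A)                          # conditional probability
P_B_given_A = P(lambda o: A(o) and B(o)) / P(A)    # equals P(B): independence

print(P_AC, P_BC, P_C_given_A)   # 1/36 0 1/6
```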

10 Neuronal Codes – Probabilistic response and Bayes’ rule. But: the brain “sees” only {t_i} and must “say” something about s(t). But: there is no unique stimulus in correspondence with a particular spike train; thus, some stimuli are more likely than others given a particular spike train. Experimental situation:

11 Neuronal Codes – Probabilistic response and Bayes’ rule. Bayes’ rule: what we see: P(t,s) = P(t|s) P(s); what our brain “sees”: P(t,s) = P(s|t) P(t). What is the difference, fundamentally? We know the prior P(s), because we choose the stimuli. The brain knows the prior P(t), because the brain generates the spike trains. What would the animal (the percept) like to know? It would like to know: given a spike train, what is the most likely stimulus behind it? This is P(s|t).

12 Neuronal Codes – Probabilistic response and Bayes’ rule. Determine the probability of a spike train from a given stimulus. How to measure such distributions? Here: the standard way! Stimulus → response: raster plot.
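The raster-plot procedure can be sketched as follows: present the same stimulus over many trials and histogram the responses. The fixed firing rate, bin size, and Bernoulli-per-bin spiking model are simplifying assumptions for illustration, not the experiment itself:

```python
import random
from collections import Counter

random.seed(0)  # reproducible toy data

def spike_train(rate, n_bins=100, dt=0.001):
    """Bernoulli approximation of spiking: each bin spikes with prob rate*dt."""
    return [random.random() < rate * dt for _ in range(n_bins)]

# "Raster plot": present the same stimulus (here reduced to a fixed firing
# rate) over many trials, then histogram the spike counts across trials.
trials = [sum(spike_train(rate=50.0)) for _ in range(2000)]
counts = Counter(trials)
P_n_given_s = {n: c / len(trials) for n, c in sorted(counts.items())}

# The estimated conditional distribution is properly normalized:
assert abs(sum(P_n_given_s.values()) - 1.0) < 1e-9
```

Repeating this for every stimulus in the ensemble yields the full conditional distribution of responses given stimuli.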

13 Neuronal Codes – Probabilistic response and Bayes’ rule. Determine the probability of a spike train from a given stimulus.

14 Neuronal Codes – Probabilistic response and Bayes’ rule. Nice probabilistic stuff, but SO WHAT?

15 Neuronal Codes – Probabilistic response and Bayes’ rule. SO WHAT? We can characterize the neuronal code in two ways: translating stimuli into spikes (the traditional approach), or translating spikes into stimuli (how the brain “sees” it). Bayes’ rule: if we can give a complete listing of either set of rules, then we can solve any translation problem; thus, we can switch between these two points of view.

16 Neuronal Codes – Probabilistic response and Bayes’ rule. We can switch between these two points of view. And why is that important? These two points of view may differ in their complexity! Traditionally you would record this: average spike count n as a function of the stimulus (velocity v). Stimulus → response (as before). This is a difficult “curved” function and requires a complex model to explain, doesn’t it?

17 Neuronal Codes – Probabilistic response and Bayes’ rule. Let’s measure this in a better (more complete) way: you choose P(v), and for some reason you like some stimuli better than others, which makes this distribution peaked. Do this for all stimuli, and don’t forget to normalize everything to 1 before plotting P(n,v). Then you record the responses (spike count n) for these stimuli; for example, the red stimulus gives you, after many repetitions, the red response curve. The data come from the motion-sensitive neuron H1 in the fly’s brain.

18 Neuronal Codes – Probabilistic response and Bayes’ rule. Summing all values along the red arrow yields P(n), the prior: how often a certain number of spikes is observed in general. With Bayes and the knowledge of P(n) and P(v) we can obtain the two conditional probability curves, too.
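The marginalization and Bayes step described above can be sketched numerically. The joint table P(n,v) below is entirely made up (two velocity labels stand in for a continuum), but the mechanics are the same:

```python
# Hypothetical joint distribution P(n, v): spike count n and stimulus
# velocity v (reduced to two labels); all numbers are invented.
P_nv = {(0, "slow"): 0.20, (0, "fast"): 0.05,
        (1, "slow"): 0.15, (1, "fast"): 0.15,
        (2, "slow"): 0.05, (2, "fast"): 0.40}

# "Summing along the arrow": marginalize out v for the prior P(n),
# and marginalize out n for the stimulus prior P(v).
P_n, P_v = {}, {}
for (n, v), p in P_nv.items():
    P_n[n] = P_n.get(n, 0.0) + p
    P_v[v] = P_v.get(v, 0.0) + p

# Bayes then yields both conditional curves from the same table:
P_n_given_v = {(n, v): p / P_v[v] for (n, v), p in P_nv.items()}
P_v_given_n = {(n, v): p / P_n[n] for (n, v), p in P_nv.items()}
```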

19 Neuronal Codes – Probabilistic response and Bayes’ rule. Easy to measure (raster plot). Easy to measure (but takes long). Often difficult to measure directly (use Bayes instead). Note this: correlated, not independent!

20 Neuronal Codes – Probabilistic response and Bayes’ rule. Average number of spikes depending on stimulus velocity; average stimulus depending on spike count.

21 Neuronal Codes – Probabilistic response and Bayes’ rule. Average number of spikes depending on stimulus velocity: non-linear relation. Average stimulus depending on spike count: almost perfectly linear relation. The left relation is MUCH easier to understand than the right one (which is the one you would have measured naively)! This is how Bayes can help: you can deduce (guess) the stimulus velocity (here linearly) from just the spike count.
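Computing these two conditional averages from the same set of (velocity, spike count) pairs might look like this. The data are invented and serve only to show that E[n|v] and E[v|n] come from grouping the same observations along different axes:

```python
from collections import defaultdict

# Paired observations (v, n): stimulus velocity and resulting spike count.
# Hypothetical data with a saturating (non-linear) rate function n ~ g(v).
data = [(0, 0), (0, 1), (1, 2), (1, 3), (2, 4), (2, 4),
        (4, 5), (4, 5), (8, 6), (8, 6)]

def conditional_mean(pairs):
    """Average of the second coordinate for each value of the first."""
    groups = defaultdict(list)
    for x, y in pairs:
        groups[x].append(y)
    return {x: sum(ys) / len(ys) for x, ys in sorted(groups.items())}

E_n_given_v = conditional_mean(data)                       # encoding view
E_v_given_n = conditional_mean([(n, v) for v, n in data])  # decoding view

print(E_n_given_v)   # spike count saturates as velocity grows
print(E_v_given_n)   # velocity read back from the spike count
```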

22 Neuronal Codes – Probabilistic response and Bayes’ rule. Determine the probability of a spike train from a given stimulus. Remember? Measuring the spike train given one stimulus is easy; so is measuring all spike trains using all kinds of stimuli (though it takes long).

23 So far we had P(s,t) and P(t), and then used Bayes to determine P(t,s). Can we also measure this directly? Measuring P(s,t) was easy (see the raster plot before), but how is measuring P(t,s) done?

24 Neuronal Codes – Probabilistic response and Bayes’ rule. Spikes → stimuli: determine the probability of a stimulus from a given spike train.

25 Neuronal Codes – Probabilistic response and Bayes’ rule. Determine the probability of a stimulus from a given spike train. Called “reverse correlation” because we always look back in time from a spike, asking which stimulus chunk came (just) before the spike (and could thus have driven it).
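A minimal sketch of reverse correlation as a spike-triggered average; the stimulus, spike train, and window length below are all made up:

```python
# Reverse correlation / spike-triggered average (STA): for every spike,
# look back at the stimulus chunk that preceded it, then average the chunks.
stimulus = [0.0, 1.0, 2.0, 1.0, 0.0, 1.0, 2.0, 1.0, 0.0, 1.0, 2.0, 1.0]
spikes   = [0,   0,   0,   1,   0,   0,   0,   1,   0,   0,   0,   1  ]
window = 3  # how far back (in bins) we look before each spike

chunks = [stimulus[i - window:i] for i, sp in enumerate(spikes)
          if sp and i >= window]
sta = [sum(vals) / len(chunks) for vals in zip(*chunks)]
print(sta)   # the average stimulus preceding a spike -> [0.0, 1.0, 2.0]
```

In this toy case every spike is preceded by the same rising chunk, so the STA recovers it exactly; with noisy data the average converges to the typical stimulus feature that drives the cell.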

26 Neuronal Codes – Probabilistic response and Bayes’ rule. For a deeper discussion read, for instance, this nice (but difficult) book: Rieke, F. et al. (1996). Spikes: Exploring the Neural Code. MIT Press.

