PMAI 2003/04 Targil 1: Basic Probability (see further details in handouts)

1 PMAI 2003/04 Targil 1: Basic Probability (see further details in handouts)

2 Random Variables and Probability
A random variable X describes a value that is the result of a process whose outcome cannot be determined in advance. The suit of a card we draw from a deck is a random variable. The sample space S of a random variable is the set of all possible values of the variable. An event is a subset of S. A random variable is not necessarily random in the true sense of the word. Usually we deal with random variables that obey some law of nature, or a background probability. Still, we cannot determine its value, only the chance of encountering each value. The probability function p(x) defines this chance.
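A small Python sketch (not part of the original slides) of the card-suit example: we cannot predict a single draw, but the empirical frequency of each value approaches its probability.

```python
import random

# A random variable: the suit of a uniformly drawn card.
# One draw is unpredictable, but each suit has probability 1/4.
random.seed(0)
suits = ["clubs", "diamonds", "hearts", "spades"]
draws = [random.choice(suits) for _ in range(100_000)]
freq = {s: draws.count(s) / len(draws) for s in suits}
# Each empirical frequency should be close to the true probability 0.25.
print(freq)
```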

3 Random Variables and Probability (2)
Probability is our way of formally describing our world and evaluating uncertainties. We define our world Ω as a set of random variables and divide it into elementary events, or states. The states are mutually exclusive. A probability space satisfies the following properties:

p(x) ≥ 0 for every x, and Σ_x p(x) = 1

For a continuous random variable, p(x) is the distribution density function:

P(a ≤ X ≤ b) = ∫_a^b p(x) dx

Note: p(x) can be greater than 1 because it is not a probability value.
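A minimal sketch (added here, not from the slides) checking these properties, including the note that a density value can exceed 1 while its integral stays 1.

```python
# A discrete distribution: values are non-negative and sum to 1.
p = {v: 1 / 6 for v in range(1, 7)}  # a fair die
assert all(prob >= 0 for prob in p.values())
assert abs(sum(p.values()) - 1.0) < 1e-9

# A density can exceed 1: the uniform density on [0, 0.5] is p(x) = 2,
# yet its integral over the interval is still 2 * 0.5 = 1.
width, density = 0.5, 2.0
assert density > 1
assert abs(width * density - 1.0) < 1e-9
print("properties hold")
```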

4 Expectation, Variance and Deviation
The moments of a random variable define important characteristics of the variable. The first moment is the expectation μ:

μ = E[x] = Σ_x x·p(x)

Note: The expectation has a misleading name and is not always the value we expect to see most. For the number shown on a die the expectation is 3.5, which is not a value we will ever see! The expectation is a weighted average.

The variance is defined by Var[x] = E[x²] − E[x]² = M₂ − M₁². The standard deviation σ = Var[x]^(1/2) evaluates the "spread" of x in relation to the mean.
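The die example can be worked out exactly (a sketch added for illustration, using exact fractions):

```python
from fractions import Fraction

values = range(1, 7)
p = Fraction(1, 6)                    # fair die: each face has probability 1/6
m1 = sum(p * v for v in values)       # first moment: the expectation
m2 = sum(p * v * v for v in values)   # second moment
var = m2 - m1 ** 2                    # Var[x] = M2 - M1^2
print(m1, var)  # 7/2 and 35/12
```

As the slide notes, the expectation 7/2 = 3.5 is not a value the die can ever show.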

5 Conditional probability and Bayes' Rule
The knowledge that a certain event occurred can change the probability that another event will occur. P(x|y) denotes the conditional probability that x will happen given that y has happened. Bayes' rule states that:

P(x|y) = P(y|x)P(x) / P(y)

The complete probability formula states that P(A) = P(A|B)P(B) + P(A|¬B)P(¬B), or in the more general case, for a partition {B_i}:

P(A) = Σ_i P(A|B_i)P(B_i)

Note: writing P(A) = αP(A|B) + (1−α)P(A|¬B) with α = P(B) mirrors our intuition that the unconditional probability of an event lies somewhere between its conditional probabilities under two opposing assumptions.
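A numeric sketch of both formulas (the test numbers below are assumptions for illustration, not from the slides): a diagnostic test with an assumed hit rate of 0.99, false-positive rate of 0.05, and prior 0.01.

```python
# Assumed illustrative numbers: P(pos|D) = 0.99, P(pos|~D) = 0.05, P(D) = 0.01.
p_d = 0.01
p_pos_given_d = 0.99
p_pos_given_not_d = 0.05

# Complete probability: P(pos) = P(pos|D)P(D) + P(pos|~D)P(~D)
p_pos = p_pos_given_d * p_d + p_pos_given_not_d * (1 - p_d)

# Bayes' rule: P(D|pos) = P(pos|D)P(D) / P(pos)
p_d_given_pos = p_pos_given_d * p_d / p_pos
print(p_pos, p_d_given_pos)
```

Note that P(pos) = 0.0594 indeed falls between the two conditional probabilities 0.05 and 0.99, as the slide's intuition predicts.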

6 Independence between random variables
Two random variables X and Y are independent if knowledge about X does not change the uncertainty about Y and vice versa. Formally, for a probability distribution P:

Ind(X;Y) ⇔ P(X|Y) = P(X) (and symmetrically)

- If Ind(X;Y) holds then P(X,Y) = P(X)P(Y)
- If Ind(X;Y,Z) holds then P(X,Y,Z) = P(X)P(Y,Z)
- Ind(X;Y,Z) ⇒ Ind(X;Y) and Ind(X;Z), but not the other way around!
- If Ind(X;Y|Z) holds then P(X,Y|Z) = P(X|Z)P(Y|Z), or equivalently P(X|Y,Z) = P(X|Z)
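The factorization criterion P(X,Y) = P(X)P(Y) can be checked directly on a joint table (a sketch added for illustration, using two fair coin flips, which are independent by construction):

```python
import itertools

# Joint distribution of two fair-coin flips: each of the four outcomes has
# probability 1/4, so the flips are independent by construction.
joint = {(x, y): 0.25 for x, y in itertools.product([0, 1], repeat=2)}

# Marginals: sum the joint over the other variable.
px = {x: sum(joint[(x, y)] for y in (0, 1)) for x in (0, 1)}
py = {y: sum(joint[(x, y)] for x in (0, 1)) for y in (0, 1)}

# Ind(X;Y) holds iff P(X,Y) = P(X)P(Y) for every pair of values.
independent = all(abs(joint[(x, y)] - px[x] * py[y]) < 1e-12
                  for x, y in joint)
print(independent)  # True
```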

7 The importance of conditional probability: Simpson's Paradox
The following table (not reproduced in this transcript) describes the effectiveness of a certain drug on a population. The ratio of recovery for the whole population increases from 40/50 to 105/90. Surprisingly, the ratio of recovery decreased for males as well as for females. How can this be? The paradox lies in ignoring the context, or the condition under which the results were given. If we had written the facts in terms of correct conditional probabilities (in a 50% male population) instead of absolute numbers we would get:

P(Recovery|Drug) = 1/2 · 15/55 + 1/2 · 90/140 ≈ 0.4578
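The slide's correction can be reproduced directly (a sketch of the weighted average, using the group rates 15/55 and 90/140 quoted above):

```python
# Weight each group's conditional recovery rate by that group's share
# of the population (50% male, per the slide).
p_rec_male = 15 / 55
p_rec_female = 90 / 140
p_recovery_given_drug = 0.5 * p_rec_male + 0.5 * p_rec_female
print(round(p_recovery_given_drug, 4))  # 0.4578
```

The point of the paradox: pooling raw counts implicitly weights the groups by how many of each received the drug, not by their share of the population, which can reverse the apparent effect.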

8 Stating your facts carefully: The Three Prisoners Paradox
Three prisoners A, B and C have been tried for murder and are expecting their sentence in the morning. They are told that only one of them will be convicted and that the others will go free. They also know that the guard knows which one of them is to be convicted. At night, prisoner A calls the guard and asks him to pass a letter to one of his innocent friends (there is at least one, so the guard's action should not change anything). After a while, A asks the guard to tell him the identity of the prisoner who received the letter. Again, the guard agrees and tells him it is B, as this does not appear to change anything. A, in his bunk, thinks: "Before I talked to the guard my chances of being convicted were 1/3. Now that I know that B is innocent my chances have gone from 33% to 50%. I made sure I did not ask anything about my own fate. What did I do wrong?"

9 Stating your facts carefully: The Three Prisoners Paradox (2)
Let us try to work out the problem in probabilistic terms. Let G_A denote the event of A being guilty and I_B the event of B being innocent. P(I_B|G_A) = 1 and P(I_B) = 2/3, so

P(G_A|I_B) = P(I_B|G_A)P(G_A) / P(I_B) = 1 · (1/3) / (2/3) = 1/2

Worse than that, we would have gotten the same answer if the guard had said that C is innocent. How can this be? The paradox comes from not defining the context correctly. I_B was concluded from I_B' = "The guard will declare B innocent". P(I_B') = 1/2, not 2/3, and P(I_B'|G_A) = 1/2, since when A is guilty the guard picks between B and C at random. Rewriting the equations we get:

P(G_A|I_B') = P(I_B'|G_A)P(G_A) / P(I_B') = (1/2)(1/3) / (1/2) = 1/3
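A Monte Carlo sketch of the guard's protocol (added for illustration; it assumes, as in the analysis, that the guard picks at random between B and C when A is guilty):

```python
import random

random.seed(1)
trials = 100_000
b_named = 0
a_guilty_when_b_named = 0
for _ in range(trials):
    guilty = random.choice("ABC")
    # The guard names an innocent prisoner other than A; if both B and C
    # are innocent (A is guilty) he picks between them at random.
    if guilty == "A":
        named = random.choice("BC")
    else:
        named = "B" if guilty == "C" else "C"
    if named == "B":
        b_named += 1
        a_guilty_when_b_named += (guilty == "A")
print(a_guilty_when_b_named / b_named)  # close to 1/3, not 1/2
```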

10 Bayes' rule and inference
Bayes' rule is fundamental in learning when viewed in terms of evidence e and hypothesis h:

P(h|e) = P(e|h)P(h) / P(e)

where P(h) is the prior probability, P(h|e) the posterior probability, and P(e|h) the likelihood. This allows us to update our belief in a hypothesis based on prior belief and in response to evidence.
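A small sketch of Bayesian updating (the two hypotheses and their parameters are assumptions for illustration): two hypotheses about a coin, updated after each observed flip.

```python
# Assumed hypotheses: the coin is fair (P(heads)=0.5) or biased (P(heads)=0.8),
# with equal prior belief in each.
hypotheses = {"fair": 0.5, "biased": 0.8}
posterior = {"fair": 0.5, "biased": 0.5}  # starts as the prior

for flip in [1, 1, 1, 0, 1]:  # 1 = heads, 0 = tails
    # Likelihood of this observation under each hypothesis.
    like = {h: p if flip == 1 else 1 - p for h, p in hypotheses.items()}
    unnorm = {h: like[h] * posterior[h] for h in hypotheses}
    z = sum(unnorm.values())  # P(evidence): the normalizer
    posterior = {h: v / z for h, v in unnorm.items()}

print(posterior)  # four heads in five flips shifts belief toward "biased"
```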

11 Binomial and Poisson distributions
Assume we perform n independent experiments (Bernoulli trials) with probability p for "success" and 1−p for "failure". The distribution of the number of "successes" k, called the binomial distribution, is given by:

p_n(k) = C(n,k) p^k (1−p)^(n−k)

In the limit where n → ∞ and np = λ (constant), we approximate p_n(k) by the Poisson distribution:

p(k) = e^(−λ) λ^k / k!

The Poisson distribution is especially important in physical processes in time (or space), such as the firing rate of a neuron, etc.
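A sketch comparing the two distributions for large n and small p (the concrete values n = 1000, p = 0.003 are chosen here for illustration):

```python
from math import comb, exp, factorial

def binom_pmf(n, p, k):
    # C(n,k) p^k (1-p)^(n-k)
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(lam, k):
    # e^(-lam) lam^k / k!
    return exp(-lam) * lam**k / factorial(k)

# Large n, small p, lam = n*p held constant: Poisson approximates binomial.
n, p = 1000, 0.003
lam = n * p  # 3.0
for k in range(6):
    print(k, binom_pmf(n, p, k), poisson_pmf(lam, k))
```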

12 The Normal distribution
The one-dimensional normal distribution, characterized by the expectation μ and standard deviation σ, is:

p(x) = (1 / (σ√(2π))) e^(−(x−μ)²/(2σ²))

This distribution is a good approximation for the binomial distribution when n is large (and np ≫ 0), in the "vicinity" of μ. The normal distribution has desirable computational properties which we will meet later in the course. For example, the distribution of a sum of Gaussians (variables that are normally distributed) is itself normal, with easily calculated expectation and variance.
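The approximation claim can be seen numerically (a sketch with assumed parameters n = 1000, p = 0.5, comparing the binomial pmf near its mean with the normal density at μ = np, σ = √(np(1−p))):

```python
from math import comb, exp, pi, sqrt

def binom_pmf(n, p, k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def normal_pdf(x, mu, sigma):
    return exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * sqrt(2 * pi))

# Compare Binomial(n, p) near its mean with the matching normal curve.
n, p = 1000, 0.5
mu, sigma = n * p, sqrt(n * p * (1 - p))
for k in range(495, 506):
    print(k, binom_pmf(n, p, k), normal_pdf(k, mu, sigma))
```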

13 Important inequalities
Markov's inequality: let x be a positive random variable with expectation μ = E(x). Then for every a > 0:

P(x ≥ a) ≤ μ/a

The intuition is that as a moves further away from the expectation, the probability of a sample with x ≥ a decreases significantly. Substituting y = (x−μ)² we get Chebyshev's inequality: for every ε > 0,

P(|x−μ| ≥ ε) ≤ Var[x]/ε²
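Both inequalities can be verified empirically (a sketch with an assumed example variable, uniform on [0, 2], so μ = 1 and Var = 1/3):

```python
import random

random.seed(2)
# A positive random variable: uniform on [0, 2], so mu = 1, Var = 1/3.
samples = [random.uniform(0, 2) for _ in range(100_000)]
mu = sum(samples) / len(samples)
var = sum((x - mu)**2 for x in samples) / len(samples)

a = 1.5
markov_lhs = sum(x >= a for x in samples) / len(samples)  # P(x >= a)
assert markov_lhs <= mu / a                               # Markov's bound

eps = 0.5
cheb_lhs = sum(abs(x - mu) >= eps for x in samples) / len(samples)
assert cheb_lhs <= var / eps**2                           # Chebyshev's bound

print(markov_lhs, mu / a, cheb_lhs, var / eps**2)
```

Note how loose the bounds are here: they hold for every distribution, so for any particular one they usually overestimate the tail by a wide margin.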

14 Important inequalities (2)
Substituting ε = kσ in Chebyshev's inequality we get:

P(|x−μ| ≥ kσ) ≤ 1/k²

Does this remind you of anything? Chernoff's inequality is used to bound deviations from the expectation for a sum of Bernoulli variables S = X_1 + … + X_m, where p(X_i = 1) = p. The additive bound (Hoeffding) is:

P(S/m − p ≥ ε) ≤ e^(−2ε²m)

and the multiplicative bounds (in their standard form, for 0 < δ ≤ 1):

P(S ≥ (1+δ)pm) ≤ e^(−pmδ²/3),   P(S ≤ (1−δ)pm) ≤ e^(−pmδ²/2)
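A simulation sketch of the additive (Hoeffding) bound, with assumed parameters m = 200 coin flips, p = 0.5, ε = 0.1:

```python
import random
from math import exp

random.seed(3)
m, p, eps = 200, 0.5, 0.1
trials = 20_000

# Empirical probability that the mean of m Bernoulli(p) variables
# exceeds p by more than eps.
count = 0
for _ in range(trials):
    s = sum(random.random() < p for _ in range(m))
    if s / m - p > eps:
        count += 1
empirical = count / trials

hoeffding = exp(-2 * eps**2 * m)  # the additive (Hoeffding) bound
print(empirical, hoeffding)       # the empirical tail sits below the bound
```

Unlike Chebyshev's 1/k² rate, the bound decays exponentially in m, which is what makes it useful for sums of many independent variables.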

