CS

A priori and Experimental Probabilities: Ranges and Odds
In both cases the minimum probability is 0:
- a priori – the event cannot occur (a blue ball is drawn; throwing 7 on an ordinary dice)
- experimental – the event has not occurred
In both cases the maximum probability is 1:
- a priori – the event must occur (the sun will rise tomorrow, from knowledge of physics)
- experimental – the event has always occurred (the sun has risen every day in my experience)
For two events, probability can be expressed as odds: the odds on a red ball are 3/10 to 7/10, i.e. 3 to 7.
Question
"I think there's a better than evens chance that English Jim will win the 2:50 at Exeter today."
"How likely is it to snow tomorrow?"
What kind of probability are we talking about here? A priori? Experimental? Neither:
- it is a future event, so there is no experimental evidence (and it only happens once)
- reasoning from first principles – what principles?
Subjective probability: a best guess, most valuable when the guesser is an expert.
Probability of two independent events
Experiment:
- trial 1: pick a ball – event E1 = the ball is red
- replace the ball in the bag
- trial 2: pick a ball – event E2 = the ball is red
What is the probability of the event E = picking two red balls on successive trials (with replacement)?
P(E) = P(E1 and E2), or P(E1 ∩ E2)
If the two trials are independent (the outcome of the first trial does not affect the outcome of the second) then:
P(E1 and E2) = P(E1) × P(E2) – the multiplication law
Probability of two red balls
[Grid: 10 × 10 table of (first draw, second draw) outcomes for a bag of 3 red and 7 white balls, with the 9 (red, red) cells marked favourable]
9 favourable outcomes out of 100 possibilities, all equally likely.
Probability of two reds = 9/100 (= 3/10 × 3/10)
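The multiplication law for independent trials can be checked by brute-force enumeration. A minimal Python sketch, using the bag from the slides (3 red, 7 white) and drawing with replacement:

```python
from itertools import product
from fractions import Fraction

# Bag of 10 balls: 3 red (R), 7 white (W)
bag = ["R"] * 3 + ["W"] * 7

# With replacement: every ordered pair of draws is equally likely
outcomes = list(product(bag, repeat=2))           # 100 possibilities
favourable = [o for o in outcomes if o == ("R", "R")]

p = Fraction(len(favourable), len(outcomes))
print(p)                                          # 9/100
assert p == Fraction(3, 10) * Fraction(3, 10)     # multiplication law
```

Enumerating the individual balls (rather than just the colours) is what makes all 100 outcomes equally likely.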
Probability of two non-independent events
Experiment:
- trial 1: pick a ball – event E1 = the ball is red
- do not replace the ball in the bag
- trial 2: pick a ball – event E2 = the ball is red
What is the probability of the event E = picking two red balls on successive trials (without replacement)?
P(E1 and E2) = P(E1) × P(E2 | E1) – the general form of the multiplication law
Probability of two red balls
[Grid: 10 × 9 table of (first draw, second draw) outcomes without replacement, assuming a red was chosen the first time, with the 6 (red, red) cells marked favourable]
6 favourable outcomes out of 90 possibilities, all equally likely.
Probability of two reds = 6/90 (= 3/10 × 2/9)
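The without-replacement case can be enumerated the same way; a sketch using ordered pairs of distinct balls:

```python
from itertools import permutations
from fractions import Fraction

# Bag of 10 balls: 3 red (R), 7 white (W)
bag = ["R"] * 3 + ["W"] * 7

# Without replacement: ordered pairs of *distinct* balls, all equally likely
outcomes = list(permutations(bag, 2))             # 90 possibilities
favourable = [o for o in outcomes if o == ("R", "R")]

p = Fraction(len(favourable), len(outcomes))
print(p)                                          # 6/90 = 1/15
assert p == Fraction(3, 10) * Fraction(2, 9)      # P(E1) * P(E2 | E1)
```

The second factor is 2/9, not 3/10, because once a red ball is gone only 2 reds remain among 9 balls.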
Conditional probabilities
P(E2 | E1) = the probability that E2 will happen given that E1 has already happened.
If E2 and E1 are (conditionally, statistically) independent then P(E2 | E1) = P(E2), because the occurrence of E1 has no effect on the occurrence of E2, so:
P(E1 and E2) = P(E1) × P(E2 | E1) = P(E1) × P(E2)
Be careful about assuming independence.
(Revd. Thomas) Bayes' Theorem
P(E1 and E2) = P(E1) × P(E2 | E1)
The order of E1 and E2 is not important (probability of two red balls without replacement), so also:
P(E2 and E1) = P(E2) × P(E1 | E2)
Equating the two:
P(E2) × P(E1 | E2) = P(E1) × P(E2 | E1)
P(E1 | E2) = P(E1) × P(E2 | E1) / P(E2)
Bayes' Theorem – why bother?
Suppose E1 is a particular disease (say measles).
Suppose E2 is a particular symptom (say spots).
- P(E1) is the probability of a given patient having measles (before you see them)
- P(E2) is the probability of a given patient having spots (for whatever reason)
- P(E2 | E1) is the probability that someone who has measles has spots
Bayes' theorem allows us to calculate the probability:
P(E1 | E2) = P(patient has measles given that the patient has spots)
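A sketch of the measles/spots calculation. The three input probabilities here are invented purely for illustration (the slides give none), not real medical figures:

```python
from fractions import Fraction

# Illustrative figures only (assumptions, not real epidemiology):
p_measles = Fraction(1, 100)             # P(E1): prior probability of measles
p_spots = Fraction(8, 100)               # P(E2): probability of spots, any cause
p_spots_given_measles = Fraction(9, 10)  # P(E2 | E1)

# Bayes' theorem: P(E1 | E2) = P(E1) * P(E2 | E1) / P(E2)
p_measles_given_spots = p_measles * p_spots_given_measles / p_spots
print(p_measles_given_spots)             # 9/80
```

Note how a rare disease stays fairly unlikely even given the symptom: the small prior P(E1) pulls the answer down.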
Probability of two mutually exclusive events
Experiment:
- bag contains 3 red balls, 7 white balls and 5 blue balls
- trial: pick a ball
- event E1 = the ball is red; event E2 = the ball is blue
What is the probability of the event E = picking either a red ball or a blue ball?
P(E) = P(E1 or E2), or P(E1 ∪ E2)
If the two events are mutually exclusive (they can't both happen in the same trial) then:
P(E1 or E2) = P(E1) + P(E2) – the addition law
Additive probabilities
Probability of throwing 8 (E1) or 9 (E2) with two dice:
9 of the 36 possible outcomes correspond to the event E = E1 or E2.
Probability = 9/36 (= 1/4) = 5/36 + 4/36 = P(8) + P(9)
Probability of two non-mutually exclusive events
E = probability of throwing at least one 5 with two dice
E1 = 5 on first dice; E2 = 5 on second dice
[Grid: 6 × 6 table of outcomes (1,1) to (6,6); the row and column containing a 5 are marked]
11 of the 36 possible outcomes correspond to the event E = E1 or E2 or (implicitly) (E1 and E2).
Probability = 11/36 = 6/36 + 6/36 − 1/36
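Both dice examples can be verified by listing all 36 rolls; a minimal sketch:

```python
from itertools import product
from fractions import Fraction

outcomes = list(product(range(1, 7), repeat=2))   # 36 equally likely rolls

def p(event):
    """Probability of an event: favourable outcomes / total outcomes."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

e1 = lambda o: o[0] == 5                          # 5 on first dice
e2 = lambda o: o[1] == 5                          # 5 on second dice

# Mutually exclusive events: addition law, P(total is 8 or 9)
assert p(lambda o: sum(o) in (8, 9)) == Fraction(9, 36)

# Not mutually exclusive: P(E1 or E2) = P(E1) + P(E2) - P(E1 and E2)
p_either = p(lambda o: e1(o) or e2(o))
assert p_either == p(e1) + p(e2) - p(lambda o: e1(o) and e2(o))
print(p_either)                                   # 11/36
```

Subtracting P(E1 and E2) corrects for the double-counted outcome (5, 5), which lies in both events.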
Discrete random variables
- Variable: an attribute or property of an element which has a value
- Discrete variable: a variable where the possible values are countable
- Discrete random variable: a discrete variable where the values taken are associated with a probability
- Discrete probability distribution: formed by the possible values and their probabilities, P(X = i) = p_i
Example – fair coin
Let the variable X be the number of heads in one trial (toss). Values are 0 and 1.
Probability distribution:
Value of X (i):  0    1
P(X = i):        1/2  1/2
Example – total of two dice
Let the variable X be the total when two dice are tossed. Values are 2, 3, 4, ... 11, 12.
Probability distribution:
P(X = 2) = 1/36
P(X = 3) = 2/36
...
Bernoulli probability distribution
A Bernoulli trial is one in which there are only two possible and exclusive outcomes; call the outcomes success and failure. Consider the variable X which is the number of successes in one trial.
Probability distribution:
Value of X (i):  0      1
P(X = i):        1 − p  p
Binomial distribution
Carry out n Bernoulli trials; in each trial the probability of success is p.
X is the number of successes in n trials. Possible values are: 0, 1, 2, ... i, ... n−1, n.
We state (without proving) that the distribution of X is:
P(X = r) = nCr p^r (1 − p)^(n−r)
where nCr = n! / (r! (n − r)!) and n! = n (n−1) (n−2) ... 1
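The binomial formula is easy to evaluate directly; a sketch using exact rational arithmetic:

```python
from math import comb
from fractions import Fraction

def binomial_pmf(n, r, p):
    """P(X = r) for X ~ Binomial(n, p): nCr * p^r * (1-p)^(n-r)."""
    return comb(n, r) * p**r * (1 - p)**(n - r)

p = Fraction(1, 2)                    # fair coin
# Probability of exactly 10 heads in 20 tosses
print(binomial_pmf(20, 10, p))        # 46189/262144, about 0.176

# Sanity check: the probabilities over all values 0..n sum to 1
assert sum(binomial_pmf(20, r, p) for r in range(21)) == 1
```

`math.comb(n, r)` computes nCr; using `Fraction` for p keeps every probability exact rather than a rounded float.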
Example of binomial distribution
[Chart: P(X = r), the probability of r heads in n trials, for n = 20]
Example of binomial distribution
[Chart: P(X = r), the probability of r heads in n trials, for n = 100]
Probability of being in a given range
Probability of r heads in n trials (n = 100):
P(0 ≤ X < 5), P(5 ≤ X < 10), ...
Moving towards the notion of a continuous distribution.
Probability of being in a given range
P(55 ≤ X < 65) = P(55 ≤ X < 60) + P(60 ≤ X < 65) (mutually exclusive events)
Add the heights of the columns.
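Adding the column heights is just summing the binomial probabilities over the range; a sketch:

```python
from math import comb
from fractions import Fraction

def binomial_pmf(n, r, p):
    """P(X = r) for X ~ Binomial(n, p)."""
    return comb(n, r) * p**r * (1 - p)**(n - r)

def p_range(n, p, lo, hi):
    """P(lo <= X < hi): add the column heights (mutually exclusive events)."""
    return sum(binomial_pmf(n, r, p) for r in range(lo, hi))

n, p = 100, Fraction(1, 2)
# Adjacent ranges add, because the events are mutually exclusive
assert p_range(n, p, 55, 65) == p_range(n, p, 55, 60) + p_range(n, p, 60, 65)
print(float(p_range(n, p, 55, 65)))
```

Each value of r is a separate, mutually exclusive outcome, so the addition law justifies summing over the range, which is exactly the discrete analogue of integrating a continuous density.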