1 Sets, Combinatorics, Probability, and Number Theory. Mathematical Structures for Computer Science, Chapter 3. Copyright © 2006 W.H. Freeman & Co. MSCS Slides: Probability

2 Introduction to Finite Probability
If some action can produce Y different outcomes and X of those Y outcomes are of special interest, we may want to know how likely it is that one of the X outcomes will occur. For example, what are the chances of:
Getting "heads" when a coin is tossed? A coin toss has two possible results, so the probability of getting heads is one out of two, or 1/2.
Getting a 3 with a roll of a die? A six-sided die has six possible results, and the 3 is exactly one of them, so the probability of rolling a 3 is 1/6.
Drawing either the ace of clubs or the queen of diamonds from a standard deck of cards? A standard deck contains 52 cards, two of which are successful results, so the probability is 2/52, or 1/26.
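These three computations are simple enough to check by direct counting. Below is a minimal sketch (an illustration, not from the slides) that expresses each one as a ratio of favorable outcomes to total outcomes, using exact fractions:

```python
# Each probability is (favorable outcomes) / (total equally likely outcomes).
from fractions import Fraction

def probability(favorable, total):
    """Probability of an event with `favorable` outcomes out of `total` equally likely ones."""
    return Fraction(favorable, total)

print(probability(1, 2))    # heads on one coin toss            -> 1/2
print(probability(1, 6))    # rolling a 3 on one die            -> 1/6
print(probability(2, 52))   # ace of clubs or queen of diamonds -> 1/26
```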

3 Introduction to Finite Probability
The set of all possible outcomes of an action is called the sample space S of the action. Any subset of the sample space is called an event. If S is a finite set of equally likely outcomes, then the probability P(E) of an event E is defined to be:
P(E) = |E| / |S|
For example, the probability of flipping a coin twice and having both flips come up heads is |{HH}| / |{HH, HT, TH, TT}| = 1/4.
Probability therefore involves finding the size of sets, either of the sample space or of the event of interest. We may need to use the addition or multiplication principles, the principle of inclusion and exclusion, or the formula for the number of combinations of r objects chosen from n.
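As a sketch of the definition (not the book's code), we can enumerate the two-flip sample space explicitly and count the event:

```python
# Build the sample space for two coin flips and count the event "both heads".
from itertools import product
from fractions import Fraction

S = list(product("HT", repeat=2))   # [('H','H'), ('H','T'), ('T','H'), ('T','T')]
E = [outcome for outcome in S if outcome == ("H", "H")]
print(Fraction(len(E), len(S)))     # P(E) = |E| / |S| = 1/4
```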

4 Introduction to Finite Probability
Using the definition of P(E) from the previous slide, we can make some observations for any events E1 and E2 from a sample space S of equally likely outcomes:
P(∅) = 0 and P(S) = 1
0 ≤ P(E1) ≤ 1
P(E1′) = 1 − P(E1), where E1′ is the complement of E1
P(E1 ∪ E2) = P(E1) + P(E2) − P(E1 ∩ E2), by the principle of inclusion and exclusion
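These identities are easy to check numerically. The sketch below uses two hypothetical events for a single die roll, E1 = "even" and E2 = "at least 4", to verify the union formula and the boundary cases:

```python
# Check the observations above on one roll of a fair die.
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}
E1 = {2, 4, 6}   # even
E2 = {4, 5, 6}   # at least 4

def P(E):
    return Fraction(len(E), len(S))

assert P(set()) == 0 and P(S) == 1
assert P(E1 | E2) == P(E1) + P(E2) - P(E1 & E2)   # 2/3 on both sides
print(P(E1 | E2))   # 2/3
```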

5 Probability Distributions
A way to handle problems where not all outcomes are equally likely is to assign a probability distribution to the sample space. Consider each distinct outcome in the original sample space as an event and assign it a probability. If there are k different outcomes in the sample space and each outcome xi is assigned a probability p(xi), the following rules apply:
1. 0 ≤ p(xi) ≤ 1, because any probability value must fall within this range.
2. p(x1) + p(x2) + ... + p(xk) = 1, because the union of all k disjoint outcomes is the sample space S, and the probability of S is 1.
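A quick way to internalize the two rules is a validity check. The helper below is a hypothetical illustration, not from the book:

```python
# Check the two rules for a finite probability distribution.
from fractions import Fraction

def is_valid_distribution(p):
    """p maps each outcome x_i to p(x_i); both rules must hold."""
    in_range = all(0 <= pi <= 1 for pi in p.values())   # rule 1
    sums_to_one = sum(p.values()) == 1                  # rule 2
    return in_range and sums_to_one

fair_die = {x: Fraction(1, 6) for x in range(1, 7)}
print(is_valid_distribution(fair_die))   # True
```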

6 Example
For a fair die, the probability of rolling a 3 is 1/6: sample space S = {1, 2, 3, 4, 5, 6}, event E = {3}.
Suppose the die is loaded so that 4 comes up three times more often than any of the other numbers. Now the sample space is in effect S = {1, 2, 3, 4₁, 4₂, 4₃, 5, 6}, so the probability of rolling a 3 is only 1/8.
The probability distribution for the loaded die is:

xi      1    2    3    4    5    6
p(xi)  1/8  1/8  1/8  3/8  1/8  1/8

The sum of p(xi) over all xi above is 1, as we know it should be.
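The same distribution can be written down directly and checked against the two rules; this is just the table above transcribed into code:

```python
# The loaded-die distribution from the slide, with the sum checked.
from fractions import Fraction

loaded = {1: Fraction(1, 8), 2: Fraction(1, 8), 3: Fraction(1, 8),
          4: Fraction(3, 8), 5: Fraction(1, 8), 6: Fraction(1, 8)}

print(loaded[3])               # 1/8, the probability of rolling a 3
print(sum(loaded.values()))    # 1, as the rules require
```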

7 Conditional Probability
Flip a coin twice; the sample space is S = {HH, HT, TH, TT}. The chance that we get two tails is 1/4.
Let E1 be the event that the first flip is a tail, so E1 = {TH, TT} and P(E1) = |E1| / |S| = 1/2.
Let E2 be the event that the second flip is a tail; E2 = {HT, TT}. The event of interest, tossing two tails in a row, is E1 ∩ E2 = {TT}. So:
P(E1 ∩ E2) = |E1 ∩ E2| / |S| = 1/4
Now suppose we know that the first toss was a tail; does this change the probability? It does, because the sample space is now limited to S = {TH, TT}, and within it only TT belongs to E2:
P(E2 | E1) (the probability of E2 given E1) = |E2 ∩ E1| / |E1| = 1/2
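The sketch below mirrors the slide's reasoning: compute P(E1 ∩ E2) over the full sample space, then recompute over the restricted space once E1 is known:

```python
# Conditional probability by restricting the sample space.
from fractions import Fraction

S  = [("H","H"), ("H","T"), ("T","H"), ("T","T")]
E1 = [o for o in S if o[0] == "T"]             # first flip is a tail
E2 = [o for o in S if o[1] == "T"]             # second flip is a tail

both = [o for o in S if o in E1 and o in E2]   # E1 ∩ E2 = {TT}
print(Fraction(len(both), len(S)))             # P(E1 ∩ E2) = 1/4

restricted = E1                                # we know the first flip was a tail
given = [o for o in restricted if o in E2]
print(Fraction(len(given), len(restricted)))   # P(E2 | E1) = 1/2
```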

8 Conditional Probability - Definition
Given events E1 and E2, the conditional probability of E2 given E1, written P(E2 | E1), is:
P(E2 | E1) = P(E1 ∩ E2) / P(E1)
For example, in a drug study of a group of patients, 17% responded positively to compound A, 34% responded positively to compound B, and 8% responded positively to both. The probability that a patient responded positively to compound B, given that he or she responded positively to A, is:
P(B | A) = P(A ∩ B) / P(A) = 0.08 / 0.17 ≈ 0.47
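Applying the definition to the drug-study numbers is one line of arithmetic; a sketch:

```python
# The drug-study computation, using the definition directly.
P_A = 0.17          # responded positively to compound A
P_A_and_B = 0.08    # responded positively to both A and B

P_B_given_A = P_A_and_B / P_A
print(round(P_B_given_A, 2))   # ~0.47
```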

9 Conditional Probability
If P(E2 | E1) = P(E2), then E2 is just as likely to happen whether or not E1 happens. In this case, E1 and E2 are said to be independent events, and then:
P(E1 ∩ E2) = P(E1) * P(E2)
This equation extends to any finite number of independent events and can also be used to test whether events are independent.
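The product rule gives a mechanical independence test. In the sketch below, the event pairs are hypothetical examples for a single die roll:

```python
# Test independence with the product rule: P(E1 ∩ E2) == P(E1) * P(E2)?
from fractions import Fraction

S = set(range(1, 7))   # one roll of a fair die

def P(E):
    return Fraction(len(E), len(S))

def independent(E1, E2):
    return P(E1 & E2) == P(E1) * P(E2)

print(independent({2, 4, 6}, {1, 2}))     # True:  1/6 == 1/2 * 1/3
print(independent({2, 4, 6}, {4, 5, 6}))  # False: 1/3 != 1/2 * 1/2
```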

10 Expected Value
If the values in the sample space are not numerical, we may find a function X: S → R that associates a numerical value with each element of the sample space. Such a function is called a random variable.
Given a sample space S = {x1, ..., xn} to which a random variable X and a probability distribution p have been assigned, the expected value, or weighted average, of the random variable is:
E(X) = X(x1)p(x1) + X(x2)p(x2) + ... + X(xn)p(xn)
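A sketch of the expected-value formula, reusing the loaded-die distribution from the earlier example (here X is simply the face value rolled):

```python
# E(X) = sum of X(s) * p(s) over every outcome s in the sample space.
from fractions import Fraction

def expected_value(X, p):
    return sum(X(s) * p[s] for s in p)

loaded = {1: Fraction(1, 8), 2: Fraction(1, 8), 3: Fraction(1, 8),
          4: Fraction(3, 8), 5: Fraction(1, 8), 6: Fraction(1, 8)}
print(expected_value(lambda s: s, loaded))   # 29/8 = 3.625
```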

11 Average Case Analysis of Algorithms
Expected value can give an average-case analysis of an algorithm, i.e., tell the expected "average" amount of work the algorithm performs.
Let the sample space S be the set of all possible inputs to the algorithm; we assume S is finite. Let the random variable X assign to each member of S the number of work units required to execute the algorithm on that input, and let p be a probability distribution on S. The expected number of work units is then:
E(X) = Σ X(s)p(s), summed over all inputs s in S
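As a classic illustration (not taken from the slides), consider sequential search for a target known to be somewhere in a list of n items, with each position equally likely; the algorithm does i comparisons when the target is in position i:

```python
# Average-case comparisons for sequential search under a uniform distribution.
from fractions import Fraction

def average_comparisons(n):
    p = Fraction(1, n)                       # each input position has probability 1/n
    return sum(i * p for i in range(1, n + 1))

print(average_comparisons(5))   # (n + 1)/2 = 3
```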

