Lecture 9: Discrete Probability, Section 5.3
5.3 Bayes’ Theorem We have seen that the definition of conditional probability gives p(E|F) = p(E ∩ F) / p(F). We can therefore write one conditional probability in terms of the other:
Bayes’ Theorem: p(F|E) = p(E|F) p(F) / p(E)

5.3 Example: What is the probability that a family with 2 kids has two boys, given that they have at least one boy? (All possibilities are equally likely.)
S: all possibilities: {BB, BG, GB, GG}.
E: family has two boys: {BB}.
F: family has at least one boy: {BB, BG, GB}.
E ∩ F = {BB}, so p(E|F) = p(E ∩ F) / p(F) = (1/4) / (3/4) = 1/3.
Now we compute P(F|E): what is the probability that a family with two boys has at least one boy?
P(F|E) = P(E|F) P(F) / P(E) = (1/3)(3/4) / (1/4) = 1, as it must be, since E is a subset of F.
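The calculation above can be checked mechanically by enumerating the sample space. A minimal sketch in Python (the helper `p` and the variable names are mine, not from the lecture); exact rational arithmetic avoids float round-off:

```python
from fractions import Fraction
from itertools import product

# Sample space of equally likely two-child families: {BB, BG, GB, GG}.
S = [''.join(kids) for kids in product('BG', repeat=2)]

E = {s for s in S if s == 'BB'}     # event: two boys
F = {s for s in S if 'B' in s}      # event: at least one boy

def p(event):
    """Probability of an event under the uniform distribution on S."""
    return Fraction(len(event), len(S))

p_E_given_F = p(E & F) / p(F)               # (1/4) / (3/4) = 1/3
p_F_given_E = p_E_given_F * p(F) / p(E)     # Bayes: (1/3)(3/4)/(1/4) = 1
print(p_E_given_F, p_F_given_E)             # 1/3 1
```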

5.3 Expected Values The definition of the expected value of a random variable X on a sample space S is:
E(X) = Σ_{s in S} p(s) X(s).
This is equivalent to summing over the values X can take:
E(X) = Σ_r r P(X=r).
Example: What is the expected number of heads if we toss a fair coin n times? We know that the distribution for this experiment is the binomial distribution:
P(X=k) = C(n,k) p^k (1-p)^(n-k), here with p = 1/2.

5.3 Therefore we need to compute:
E(X) = Σ_{k=0}^{n} k C(n,k) p^k (1-p)^(n-k) = np,
which for a fair coin (p = 1/2) gives n/2.
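This sum can be sanity-checked by evaluating it term by term and comparing with the closed form np. A small sketch (the function name is mine):

```python
from math import comb

def binomial_expectation(n, p):
    """E(X) = sum over k of k * C(n,k) p^k (1-p)^(n-k), computed term by term."""
    return sum(k * comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1))

print(binomial_expectation(10, 0.5))   # matches n*p = 5
print(binomial_expectation(20, 0.3))   # matches n*p = 6
```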

5.3 Expectation is linear.
Theorem: E(X1+X2) = E(X1) + E(X2), and E(aX + b) = aE(X) + b.
Examples:
1) Expected value of the sum of the values when a pair of dice is rolled. Let X1 = value of the first die, X2 = value of the second die:
E(X1+X2) = E(X1) + E(X2) = 2 * (1+2+3+4+5+6)/6 = 7.
2) Expected number of heads when a fair coin is tossed n times (see the example on the previous slide). Let Xi be the outcome of coin toss i; each comes up heads with probability p. By linearity: E(X1+...+Xn) = E(X1) + ... + E(Xn) = np.
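The dice example can be verified by enumerating all 36 outcomes, again with exact rationals (variable names are mine):

```python
from fractions import Fraction
from itertools import product

rolls = list(product(range(1, 7), repeat=2))            # 36 equally likely outcomes
E_sum = sum(Fraction(d1 + d2, len(rolls)) for d1, d2 in rolls)
E_single = sum(Fraction(d, 6) for d in range(1, 7))     # E(X1) = E(X2) = 7/2
print(E_sum, 2 * E_single)                              # 7 7: E(X1+X2) = E(X1) + E(X2)
```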

5.3 More examples: A person checking coats mixed the labels up randomly. When someone collects his coat, he receives a coat chosen at random from the remaining coats. What is the expected number of correctly returned coats, if n coats are checked in?
Let Xi = 1 if coat i is correctly returned, and 0 if it is wrongly returned. Since the labels are randomly permuted, E(Xi) = 1/n, so by linearity
E(X1+...+Xn) = n * 1/n = 1,
independent of the number of coats checked in.
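Handing out a random remaining coat at each step is equivalent to drawing one uniform random permutation, so the answer can be checked by simulation; the expected count is the expected number of fixed points of the permutation. A sketch (function name is mine):

```python
import random

def average_correct_coats(n, trials=100_000):
    """Estimate the expected number of fixed points of a random permutation of n coats."""
    total = 0
    for _ in range(trials):
        perm = random.sample(range(n), n)   # random assignment of coats to owners
        total += sum(1 for owner, coat in enumerate(perm) if owner == coat)
    return total / trials

random.seed(0)
for n in (3, 10, 50):
    print(n, average_correct_coats(n))      # each estimate is close to 1
```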

5.3 Geometric distribution
Q: What is the distribution of waiting times until a tail comes up, when we repeatedly toss a coin that lands tails with probability p?
A: Possible outcomes: T, HT, HHT, HHHT, HHHHT, ... (infinitely many possibilities), with
P(T) = p, P(HT) = (1-p) p, P(HHT) = (1-p)^2 p, ...
This is the geometric distribution. Normalization: Σ_{k>=1} (1-p)^(k-1) p = 1. (matlab demo)
Here X(s) = number of tosses up to and including the first success.

5.3 Geometric Distribution
Here is how you can compute the expected value of the waiting time:
E(X) = Σ_{k>=1} k (1-p)^(k-1) p = 1/p,
using the identity Σ_{k>=1} k x^(k-1) = 1/(1-x)^2 with x = 1-p.
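The series Σ k(1-p)^(k-1) p converges quickly, so truncating it numerically confirms the value 1/p. A small sketch (function name is mine):

```python
def geometric_expectation(p, terms=10_000):
    """Truncated sum of k * (1-p)^(k-1) * p over k >= 1; converges to 1/p."""
    return sum(k * (1 - p)**(k - 1) * p for k in range(1, terms + 1))

for p in (0.5, 0.25, 0.1):
    print(p, geometric_expectation(p), 1 / p)   # truncated sum agrees with 1/p
```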

5.3 Independence
Definition: Two random variables X(s) and Y(s) on a sample space S are independent if for all values r1 and r2:
P(X=r1 and Y=r2) = P(X=r1) P(Y=r2).
Examples:
1) A pair of dice is rolled; X1 is the value of the first die, X2 the value of the second. Are these independent?
P(X1=r1) = 1/6, P(X2=r2) = 1/6, and P(X1=r1 AND X2=r2) = 1/36 = P(X1=r1) P(X2=r2): YES, independent.
2) Are X1 and X = X1+X2 independent?
P(X=12) = 1/36 and P(X1=1) = 1/6, but P(X=12 AND X1=1) = 0, which is not the product P(X=12) P(X1=1): NO, not independent.

5.3 Independence
Theorem: If two random variables X and Y are independent on a sample space S, then E(XY) = E(X) E(Y). (For the proof, read the book.)
Note 1: The converse is not true: two random variables need not be independent for E(XY) = E(X)E(Y) to hold.
Note 2: If two random variables are not independent, E(XY) need not equal E(X)E(Y), although it still might.
Example: X counts the number of heads when a coin is tossed twice: P(X=0) = 1/4 (TT), P(X=1) = 1/2 (HT, TH), P(X=2) = 1/4 (HH), so E(X) = 1 x 1/2 + 2 x 1/4 = 1. Y counts the number of tails: E(Y) = 1 as well (by symmetry, switching the roles of H and T). However, P(XY=0) = 1/2 (HH, TT) and P(XY=1) = 1/2 (HT, TH), so E(XY) = 0 x 1/2 + 1 x 1/2 = 1/2, not E(X)E(Y) = 1.
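The heads/tails counterexample can be checked by enumerating the four outcomes of two tosses (the helper `expect` and the lambda names are mine):

```python
from fractions import Fraction
from itertools import product

tosses = list(product('HT', repeat=2))   # 4 equally likely outcomes of two tosses

def expect(f):
    """Expectation of f over the uniform distribution on the four outcomes."""
    return sum(Fraction(f(t), len(tosses)) for t in tosses)

X = lambda t: t.count('H')               # number of heads
Y = lambda t: t.count('T')               # number of tails

print(expect(X), expect(Y))                  # 1 1
print(expect(lambda t: X(t) * Y(t)))         # 1/2, not E(X)E(Y) = 1
```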

5.3 Variance
The mean of a random variable tells us nothing about the spread of a probability distribution. (matlab demo) Thus we introduce the variance of a probability distribution.
Definition: The variance of a random variable X on a sample space S is given by:
V(X) = Σ_{s in S} (X(s) - E(X))^2 p(s), with the alternate formula V(X) = E(X^2) - [E(X)]^2.
The standard deviation is STD(X) = sqrt(V(X)); this is the width of the distribution.

5.3 Variance
Theorem: For independent random variables the variances add: V(X+Y) = V(X) + V(Y). (Proof in the book.)
Example: 1) We toss 2 coins, with Xi(H) = 1, Xi(T) = 0. What is the STD of X = X1+X2?
X1 and X2 are independent, so V(X1+X2) = V(X1) + V(X2) = 2V(X1).
E(X1) = 1/2, so V(X1) = (0-1/2)^2 x 1/2 + (1-1/2)^2 x 1/2 = 1/4.
Hence V(X) = 1/2 and STD(X) = sqrt(1/2).

5.3 Variance
What is the variance of the number of successes when n independent Bernoulli trials are performed, each with success probability p?
V(X) = V(X1+...+Xn) = nV(X1).
V(X1) = (0-p)^2 x (1-p) + (1-p)^2 x p = p^2(1-p) + p(1-p)^2 = p(1-p).
Therefore V(X) = np(1-p). (matlab demo)
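Both np and np(1-p) can be verified directly from the binomial pmf; a sketch (function name is mine):

```python
from math import comb

def binomial_moments(n, p):
    """Mean and variance of Bin(n, p), computed directly from the pmf."""
    pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
    mean = sum(k * q for k, q in enumerate(pmf))
    var = sum((k - mean)**2 * q for k, q in enumerate(pmf))
    return mean, var

mean, var = binomial_moments(100, 0.5)
print(mean, var)   # matches n*p = 50 and n*p*(1-p) = 25
```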

5.3 Chebyshev’s Inequality
Theorem: Let X be a random variable on a sample space S with probability distribution P(X=r). Then for any positive real number r:
P(|X - E(X)| >= r) <= V(X) / r^2.
(Proof in the book.) In words: the probability of finding a value of X farther away from the mean than r is at most the variance divided by r^2.

5.3 Example: What is the probability that with 100 Bernoulli trials we find more than 89 or fewer than 11 successes, when the probability of success is 1/2?
Let X count the number of successes. E(X) = 100 x 1/2 = 50 and V(X) = 100 x 1/2 x 1/2 = 25.
By Chebyshev: P(|X - 50| >= 40) <= 25/40^2 = 1/64.
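Chebyshev only gives an upper bound; comparing it with the exact binomial tail shows how loose it can be here. A sketch with the same numbers (variable names are mine):

```python
from math import comb

n, p = 100, 0.5
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
mean, var = n * p, n * p * (1 - p)          # 50 and 25

r = 40
exact = sum(q for k, q in enumerate(pmf) if abs(k - mean) >= r)
bound = var / r**2
print(bound)    # 0.015625 = 1/64
print(exact)    # the exact tail probability is far smaller than the bound
```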