Chapter 4: Joint and Conditional Distributions

Chapter 4: Joint and Conditional Distributions Yang Zhenlin zlyang@smu.edu.sg http://www.mysmu.edu/faculty/zlyang/

Chapter Contents
Joint Distribution
Special Joint Distributions: Multinomial and Bivariate Normal
Covariance and Correlation Coefficient
Conditional Distribution
Conditional Expectation
Conditional Variance

Introduction In many applications, more than one variable is needed to describe a quantity or phenomenon of interest, e.g., to describe the size of a man, one needs at least his height (X) and weight (Y); to describe a point in a rectangle, one needs both the X coordinate and the Y coordinate. In general, a set of k r.v.s corresponds to the same “unit”: the r.v.s are defined on the same sample space and take values in a k-dimensional Euclidean space. In this chapter, we focus mainly on the case of two r.v.s and deal separately with two cases: (i) both X and Y are discrete, and (ii) both X and Y are continuous.

Joint Distributions Definition 4.1. (Joint CDF) The joint cumulative distribution function of r.v.s X and Y is the function defined by F(x, y) = P(X ≤ x, Y ≤ y). Definition 4.1 extends naturally to cases of more than two r.v.s. It applies to both discrete and continuous r.v.s. Definition 4.2. Let X and Y be two discrete random variables defined on the same sample space. The joint probability mass function of X and Y is defined to be p(x, y) = P(X = x, Y = y) for all possible values of X and Y. Definition 4.2 extends directly to cases of more than two r.v.s.

Joint Distributions Example 4.1. Xavier and Yvette are two real estate agents. Let X and Y denote the number of houses that Xavier and Yvette will sell next week, respectively. Suppose that there are only four houses for sale next week. The joint probability mass function and its graph are presented below. Find P(X ≥ 1, Y ≥ 1) and P(Y ≥ 1).

p(x, y)    X = 0    X = 1    X = 2
Y = 0       .12      .42      .06
Y = 1       .21      .06      .03
Y = 2       .07      .02      .01

Answer: P(X ≥ 1, Y ≥ 1) = .06 + .03 + .02 + .01 = 0.12, P(Y ≥ 1) = .21 + .06 + .03 + .07 + .02 + .01 = 0.40
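
A minimal sketch (Python, not part of the original slides) that stores the joint pmf table of Example 4.1 as a dictionary keyed by (x, y) and recovers both requested probabilities by summing over the relevant cells.

```python
# Joint pmf of Example 4.1, keyed by (x, y).
pmf = {
    (0, 0): .12, (1, 0): .42, (2, 0): .06,
    (0, 1): .21, (1, 1): .06, (2, 1): .03,
    (0, 2): .07, (1, 2): .02, (2, 2): .01,
}

# P(X >= 1, Y >= 1): sum p(x, y) over all support points with x >= 1 and y >= 1.
p_both = sum(p for (x, y), p in pmf.items() if x >= 1 and y >= 1)

# P(Y >= 1): sum p(x, y) over all support points with y >= 1.
p_y = sum(p for (x, y), p in pmf.items() if y >= 1)

print(round(p_both, 2), round(p_y, 2))   # 0.12 0.4
```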

Joint Distributions Example 4.2. A bin contains 1000 flower seeds, of which 400 are red, 400 are white and 200 are pink. Ten seeds are selected at random without replacement. Let X be the number of red flower seeds and Y be the number of white flower seeds selected. (a) Find the joint pmf of X and Y. (b) Calculate P(X = 2, Y = 3) and P(X = Y). Solution: (a) From the counting techniques in Chapter 1, we obtain p(x, y) = C(400, x) C(400, y) C(200, 10 − x − y) / C(1000, 10), for nonnegative integers x and y with x + y ≤ 10. (b) P(X = 2, Y = 3) = C(400, 2) C(400, 3) C(200, 5) / C(1000, 10), and P(X = Y) = Σ_{k=0}^{5} p(k, k).
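
A sketch (Python, not from the slides) of part (b), evaluating the joint pmf obtained in part (a) with exact binomial coefficients from the standard library.

```python
from math import comb

def p(x, y):
    """Joint pmf of (X, Y): red and white seed counts in a sample of 10 from the 1000 seeds."""
    if x < 0 or y < 0 or x + y > 10:
        return 0.0
    return comb(400, x) * comb(400, y) * comb(200, 10 - x - y) / comb(1000, 10)

p_23 = p(2, 3)                              # P(X = 2, Y = 3)
p_eq = sum(p(k, k) for k in range(6))       # P(X = Y): x = y = k is only possible for 2k <= 10
print(p_23, p_eq)
```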

Joint Distributions A function p(x, y) is said to be the joint pmf of discrete r.v.s X and Y if and only if for all possible values (x, y), (i) p(x, y) ≥ 0 and (ii) Σ_x Σ_y p(x, y) = 1. Example 4.3. Let the joint pmf of X and Y be given by p(x, y) = k(x² + y²) for (x, y) = (1, 1), (1, 2), (2, 3), (3, 3), and p(x, y) = 0 otherwise. (a) Find the value of the constant k. (b) Calculate P(X > Y), P(X + Y ≤ 4), and P(Y ≥ X). Solution: (a) Σ_x Σ_y p(x, y) = k[(1² + 1²) + (1² + 2²) + (2² + 3²) + (3² + 3²)] = 38k, so k = 1/38. (b) P(X > Y) = 0, P(X + Y ≤ 4) = 7/38, and P(Y ≥ X) = 1.
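
A quick check of Example 4.3 (Python, not from the slides), using exact fractions so the answers come out as 1/38, 0, 7/38 and 1 rather than floats.

```python
from fractions import Fraction

# Support and unnormalized weights x^2 + y^2 of the pmf p(x, y) = k (x^2 + y^2).
support = [(1, 1), (1, 2), (2, 3), (3, 3)]
weights = {(x, y): x**2 + y**2 for (x, y) in support}

k = Fraction(1, sum(weights.values()))                                  # probabilities sum to 1
p_x_gt_y   = k * sum(w for (x, y), w in weights.items() if x > y)       # P(X > Y)
p_sum_le_4 = k * sum(w for (x, y), w in weights.items() if x + y <= 4)  # P(X + Y <= 4)
p_y_ge_x   = k * sum(w for (x, y), w in weights.items() if y >= x)      # P(Y >= X)

print(k, p_x_gt_y, p_sum_le_4, p_y_ge_x)   # 1/38, 0, 7/38, 1
```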

Joint Distributions Definition 4.3. A function f(x, y) is said to be the joint probability density function of the continuous r.v.s X and Y if the joint CDF of X and Y can be written as F(x, y) = ∫_{−∞}^{y} ∫_{−∞}^{x} f(u, v) du dv. A function f(x, y) is said to be the joint pdf of continuous r.v.s X and Y if and only if for all possible values (x, y), (i) f(x, y) ≥ 0 and (ii) ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) dx dy = 1. Marginal pmf: pX(x) = Σ_y p(x, y) and pY(y) = Σ_x p(x, y). Marginal pdf: fX(x) = ∫_{−∞}^{∞} f(x, y) dy and fY(y) = ∫_{−∞}^{∞} f(x, y) dx.

Joint Distributions Example 4.4. Let the joint pdf of X and Y be given by (a) Find the value of the constant k. (b) Find the marginal pdfs of X and Y. (c) Calculate P(X + Y < 1), P(2X < Y), and P(X = Y). Solution: Some points to note: finding the constant k and the required probabilities is a matter of double integration; it is important to sketch the regions over which the integrations are taken, so that the integration limits can be determined.

Joint Distributions (a) (b) (c)

Joint Distributions Definition 4.4. Two random variables X and Y are said to be independent if and only if P(X ≤ x, Y ≤ y) = P(X ≤ x) P(Y ≤ y) for all possible values (x, y) of (X, Y).

Joint Distributions Note: This definition states that X and Y are independent if and only if their joint CDF can be written as the product of their marginal CDFs, i.e., F(x, y) = FX(x) FY(y). When X and Y are both discrete, the independence condition can be written as P(X = x, Y = y) = P(X = x) P(Y = y) for all x and y, i.e., the joint pmf is the product of the marginal pmfs. When X and Y are both continuous, the independence condition can be written as f(x, y) = fX(x) fY(y), i.e., the joint pdf is the product of the marginal pdfs. Definition 4.4 extends naturally to the case of many random variables.
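
As an illustration (a Python sketch, not from the slides, reusing the Example 4.1 table), the factorization condition fails there, so X and Y in that example are not independent.

```python
# Joint pmf of Example 4.1.
pmf = {
    (0, 0): .12, (1, 0): .42, (2, 0): .06,
    (0, 1): .21, (1, 1): .06, (2, 1): .03,
    (0, 2): .07, (1, 2): .02, (2, 2): .01,
}
pX = {x: sum(p for (a, b), p in pmf.items() if a == x) for x in range(3)}   # marginal pmf of X
pY = {y: sum(p for (a, b), p in pmf.items() if b == y) for y in range(3)}   # marginal pmf of Y

# X and Y are independent iff every joint cell equals the product of its marginals.
independent = all(abs(pmf[(x, y)] - pX[x] * pY[y]) < 1e-9 for (x, y) in pmf)
print(independent)   # False: e.g. p(0, 0) = 0.12 but pX(0) * pY(0) = 0.40 * 0.60 = 0.24
```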

Joint Distributions Example 4.5. Stores A and B, which belong to the same owner, are located in two different towns. If the weekly profit of each store, in thousands of dollars, has the same probability density function (given on the slide), and the profit of one store is independent of the other, what is the probability that next week one store makes at least $500 more than the other store? Solution: Let X and Y denote, respectively, next week’s profits of stores A and B. The desired probability is P(X ≥ Y + 1/2) + P(Y ≥ X + 1/2). Since X and Y are independent and identically distributed, by symmetry, P(X ≥ Y + 1/2) + P(Y ≥ X + 1/2) = 2 P(X ≥ Y + 1/2). To calculate P(X ≥ Y + 1/2), we need the joint pdf of X and Y. Since X and Y are independent, the joint pdf is the product of the two marginal densities, f(x, y) = fX(x) fY(y).

Joint Distributions To find P(X > Y + 1/2), one needs to integrate f(x, y) over the region defined by the conditions 1 ≤ X ≤ 3, 1 ≤ Y ≤ 3, and X ≥ Y + 1/2. Example 4.6. Prove that the two random variables X and Y with the following joint probability density function are not independent.
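
The marginal density for Example 4.5 appears on the slide only as an image and is not reproduced in this transcript, so the sketch below (Python with scipy, both assumed available) uses a stand-in density f(x) = x/4 on (1, 3) purely to illustrate the mechanics of integrating the joint pdf over the region above; substitute the actual density from the slide before using the number.

```python
from scipy.integrate import dblquad

def fX(t):
    # Hypothetical marginal pdf, for illustration only (not taken from the slides).
    return t / 4.0 if 1.0 < t < 3.0 else 0.0

def joint(y, x):
    # Independence: joint pdf is the product of the marginals.  dblquad expects f(y, x).
    return fX(x) * fX(y)

# Region {x >= y + 1/2} inside the support: x runs over [1.5, 3]; for each x, y in [1, x - 0.5].
p_half, _ = dblquad(joint, 1.5, 3.0, lambda x: 1.0, lambda x: x - 0.5)
print(2 * p_half)   # desired probability = 2 * P(X >= Y + 1/2), by symmetry
```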

Special Joint Distributions Certain special joint distributions, such as the multinomial and the bivariate normal, deserve some detailed attention. The multinomial is a direct generalization of the binomial. An experiment has k possible outcomes with probabilities θ1, θ2, …, θk. Let Xi be the number of times that the ith outcome occurs among a total of n independent trials of such an experiment, i = 1, 2, …, k. Then the joint distribution of X1, X2, …, Xk is called the Multinomial Distribution, with joint pmf of the following form: p(x1, x2, …, xk) = [n! / (x1! x2! ⋯ xk!)] θ1^x1 θ2^x2 ⋯ θk^xk, where θ1 + θ2 + … + θk = 1 and x1 + x2 + … + xk = n.
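
A sketch (Python, not from the slides) that evaluates the multinomial pmf directly from the formula above; the numbers in the usage line are made up for illustration.

```python
from math import factorial, prod

def multinomial_pmf(xs, thetas):
    """p(x1, ..., xk) = n!/(x1! ... xk!) * theta1^x1 * ... * thetak^xk, with n = sum(xs)."""
    n = sum(xs)
    coeff = factorial(n) / prod(factorial(x) for x in xs)
    return coeff * prod(t**x for t, x in zip(thetas, xs))

# Hypothetical example: 10 independent trials with outcome probabilities 0.5, 0.3, 0.2;
# probability of observing the three outcomes 5, 3 and 2 times respectively.
print(multinomial_pmf([5, 3, 2], [0.5, 0.3, 0.2]))
```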

Special Joint Distributions A Bivariate Normal distribution has the following joint pdf: f(x1, x2) = 1 / (2π σ1 σ2 √(1 − ρ²)) × exp{ −[z1² − 2ρ z1 z2 + z2²] / [2(1 − ρ²)] }, where z1 = (x1 − µ1)/σ1 and z2 = (x2 − µ2)/σ2, for −∞ < x1, x2 < ∞. Plots of the Bivariate Normal pdf (shown on the slide): µ1 = µ2 = 0, σ1 = σ2 = 1, ρ = 0.9; and µ1 = µ2 = 0, σ1 = σ2 = 1, ρ = 0.1.
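
A sketch (Python, scipy assumed available, not from the slides) that evaluates the bivariate normal pdf from the formula above and cross-checks it against scipy's multivariate normal for the ρ = 0.9 case plotted on the slide.

```python
import math
from scipy.stats import multivariate_normal

def bvn_pdf(x1, x2, mu1=0.0, mu2=0.0, s1=1.0, s2=1.0, rho=0.9):
    # Direct transcription of the bivariate normal density given above.
    z1, z2 = (x1 - mu1) / s1, (x2 - mu2) / s2
    norm = 1.0 / (2 * math.pi * s1 * s2 * math.sqrt(1 - rho**2))
    return norm * math.exp(-(z1**2 - 2 * rho * z1 * z2 + z2**2) / (2 * (1 - rho**2)))

cov = [[1.0, 0.9], [0.9, 1.0]]   # sigma1 = sigma2 = 1, rho = 0.9
print(bvn_pdf(0.5, 0.3))                                           # from the formula
print(multivariate_normal(mean=[0, 0], cov=cov).pdf([0.5, 0.3]))   # same value from scipy
```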

Special Joint Distributions It can be shown that ρ is the correlation coefficient between X1 and X2. When ρ = 0, the joint pdf factors as f(x1, x2) = f1(x1) f2(x2), the product of the two univariate normal pdfs, so in this case X1 and X2 are independent. Thus, for two jointly normal random variables, if they are uncorrelated (i.e., their covariance is zero), then they are independent. This conclusion does not extend to arbitrary random variables: in general, zero correlation does not imply independence.

Covariance and Correlation Coefficient Definition 4.5. The covariance between any two jointly distributed r.v.s X and Y, denoted by Cov(X, Y), is defined by Cov(X, Y) = E[(X − µX)(Y − µY)] = E[XY] − µX µY, where µX = E[X] and µY = E[Y]. Properties of Covariance: For any two r.v.s X and Y, and constants a, b, c and d, Cov(X, X) = Var(X); Cov(X, Y) = Cov(Y, X); Cov(aX + b, cY + d) = ac Cov(X, Y); if X and Y are independent then Cov(X, Y) = 0.
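
A sketch (Python, not from the slides) computing Cov(X, Y) = E[XY] − µX µY for the joint pmf of Example 4.1, together with the correlation coefficient defined next.

```python
import math

# Joint pmf of Example 4.1.
pmf = {
    (0, 0): .12, (1, 0): .42, (2, 0): .06,
    (0, 1): .21, (1, 1): .06, (2, 1): .03,
    (0, 2): .07, (1, 2): .02, (2, 2): .01,
}
EX  = sum(x * p for (x, y), p in pmf.items())          # E[X] = 0.70
EY  = sum(y * p for (x, y), p in pmf.items())          # E[Y] = 0.50
EXY = sum(x * y * p for (x, y), p in pmf.items())      # E[XY] = 0.20
cov = EXY - EX * EY                                    # Cov(X, Y) = 0.20 - 0.35 = -0.15

# Correlation coefficient (Definition 4.6): Cov(X, Y) / (sd(X) * sd(Y)).
VarX = sum(x**2 * p for (x, y), p in pmf.items()) - EX**2
VarY = sum(y**2 * p for (x, y), p in pmf.items()) - EY**2
rho  = cov / math.sqrt(VarX * VarY)
print(cov, rho)   # approximately -0.15 and -0.35
```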

Covariance and Correlation Coefficient Definition 4.6. The correlation coefficient between any two jointly distributed r.v.s X and Y, denoted by ρ(X, Y), is defined by ρ(X, Y) = Cov(X, Y) / (σX σY), where σX and σY are the standard deviations of X and Y. It measures the degree of (linear) association between X and Y, and takes values in [−1, 1]. Properties of Correlation Coefficient: For any two r.v.s X and Y, and constants a, b, c and d, ρ(aX + b, cY + d) = ρ(X, Y) if ac > 0, and ρ(aX + b, cY + d) = −ρ(X, Y) if ac < 0.

Conditional Distributions One of the most useful concepts in probability theory is that of conditional probability and conditional expectation, because: (i) in practice, some partial information is often available, and hence calculations of probabilities and expectations should be conditional upon the given information; (ii) in calculating a desired probability or expectation, it is often extremely useful to first “condition” on some appropriate random variables. The concept of conditional probability, P(A|B) = P(A ∩ B)/P(B), can be extended directly to give a definition of the conditional distribution of X given Y = y, where X and Y are two r.v.s, discrete or continuous.

Conditional Distributions Definition 4.7. For two discrete r.v.s X and Y, the conditional pmf of X given Y = y is pX|Y(x | y) = p(x, y) / pY(y), where pY(y) > 0. The conditional expectation of X given Y = y is defined as E[X | Y = y] = Σ_x x pX|Y(x | y). Clearly, when X is independent of Y, pX|Y(x | y) = pX(x).
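
A sketch (Python, not from the slides) applying Definition 4.7 to the Example 4.1 table: the conditional pmf of X given Y = 1 and the corresponding conditional expectation.

```python
# Joint pmf of Example 4.1.
pmf = {
    (0, 0): .12, (1, 0): .42, (2, 0): .06,
    (0, 1): .21, (1, 1): .06, (2, 1): .03,
    (0, 2): .07, (1, 2): .02, (2, 2): .01,
}
y = 1
pY = sum(p for (a, b), p in pmf.items() if b == y)       # marginal pY(1) = 0.30
cond = {x: pmf[(x, y)] / pY for x in range(3)}           # pX|Y(x | 1) = p(x, 1) / pY(1)
EX_given_Y = sum(x * p for x, p in cond.items())         # E[X | Y = 1]
print(cond, EX_given_Y)   # approximately {0: 0.7, 1: 0.2, 2: 0.1} and 0.4
```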

Conditional Distributions Definition 4.8. For two continuous r.v.s X and Y, the conditional pdf of X given Y = y is fX|Y(x | y) = f(x, y) / fY(y), where fY(y) > 0. The conditional expectation of X given Y = y is defined as E[X | Y = y] = ∫_{−∞}^{∞} x fX|Y(x | y) dx. Example 4.7. Roll a fair die successively. Let X be the number of rolls until the first 4 and Y be the number of rolls until the first 5. (a) Find the conditional pmf of X given Y = 4. (b) Calculate P(X > 2 | Y = 4). (c) Calculate E[X | Y = 4].

Conditional Distributions Solution:

Conditional Distributions
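
The worked solution to Example 4.7 appears on the slides only as images, so the following Monte Carlo sketch (Python, not from the slides) is offered as an independent numerical check of parts (b) and (c): simulate die rolls, keep only the runs in which the first 5 occurs on roll 4, and estimate P(X > 2 | Y = 4) and E[X | Y = 4] from those runs.

```python
import random

def first_occurrence(target, rolls):
    """Return the 1-based index of the first roll equal to target."""
    for i, r in enumerate(rolls, start=1):
        if r == target:
            return i

random.seed(0)
n_kept = x_gt_2 = x_sum = 0
for _ in range(500_000):
    # Roll until both a 4 and a 5 have appeared, so that X and Y are both defined.
    rolls, seen = [], set()
    while not {4, 5} <= seen:
        r = random.randint(1, 6)
        rolls.append(r)
        seen.add(r)
    x, y = first_occurrence(4, rolls), first_occurrence(5, rolls)
    if y == 4:                       # condition on the event {Y = 4}
        n_kept += 1
        x_gt_2 += (x > 2)
        x_sum += x

print(x_gt_2 / n_kept, x_sum / n_kept)   # estimates of P(X > 2 | Y = 4) and E[X | Y = 4]
```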

Conditional Distributions Example 4.8. The joint pdf of X and Y is given by (a) Compute the conditional pdf of X given Y = y, where 0 ≤ y ≤ 1. (b) Calculate P(X > 0.5 | Y = 0.5). (c) Calculate E(X | Y = 0.5).

Conditional Distributions Definition 4.9. Let X and Y be jointly distributed r.v.s. The conditional variance of X given Y = y is given by Var(X | Y = y) = E[(X − µX|Y)² | Y = y], where µX|Y = E(X | Y = y). Equivalently, Var(X | Y = y) = E[X² | Y = y] − (µX|Y)².