Presentation on theme: "Copyright © Cengage Learning. All rights reserved. 5 Joint Probability Distributions and Random Samples."— Presentation transcript:

1 Copyright © Cengage Learning. All rights reserved. 5 Joint Probability Distributions and Random Samples

2 Copyright © Cengage Learning. All rights reserved. 5.5 The Distribution of a Linear Combination

3 The sample mean $\bar{X}$ and sample total $T_o$ are special cases of a type of random variable that arises very frequently in statistical applications.

Definition
Given a collection of $n$ random variables $X_1, \ldots, X_n$ and $n$ numerical constants $a_1, \ldots, a_n$, the rv

$Y = a_1X_1 + a_2X_2 + \cdots + a_nX_n = \sum_{i=1}^{n} a_iX_i$   (5.7)

is called a linear combination of the $X_i$'s.

4 The Distribution of a Linear Combination
For example, $4X_1 - 5X_2 + 8X_3$ is a linear combination of $X_1$, $X_2$, and $X_3$ with $a_1 = 4$, $a_2 = -5$, and $a_3 = 8$. Taking $a_1 = a_2 = \cdots = a_n = 1$ gives $Y = X_1 + \cdots + X_n = T_o$, and $a_1 = a_2 = \cdots = a_n = 1/n$ yields $Y = \frac{1}{n}(X_1 + \cdots + X_n) = \bar{X}$.

Notice that we are not requiring the $X_i$'s to be independent or identically distributed. All the $X_i$'s could have different distributions and therefore different mean values and variances. We first consider the expected value and variance of a linear combination.
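The slides stop at the symbolic definition; as a supplement (not part of the original deck), here is a minimal NumPy sketch showing that the sample total and sample mean are just (5.7) with particular coefficient choices. The distribution and coefficient values are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5
x = rng.normal(loc=10.0, scale=2.0, size=n)  # one realization of X_1, ..., X_n

# A general linear combination Y = a_1 X_1 + ... + a_n X_n, as in (5.7)
a = np.array([4.0, -5.0, 8.0, 1.0, 2.0])
y = a @ x

# a_i = 1 for every i gives the sample total T_o
t_o = np.ones(n) @ x             # identical to x.sum()

# a_i = 1/n for every i gives the sample mean X-bar
x_bar = np.full(n, 1.0 / n) @ x  # identical to x.mean()

print(y, t_o, x_bar)
```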

5 The Distribution of a Linear Combination
Proposition
Let $X_1, X_2, \ldots, X_n$ have mean values $\mu_1, \ldots, \mu_n$, respectively, and variances $\sigma_1^2, \ldots, \sigma_n^2$, respectively.

1. Whether or not the $X_i$'s are independent,
$E(a_1X_1 + a_2X_2 + \cdots + a_nX_n) = a_1E(X_1) + a_2E(X_2) + \cdots + a_nE(X_n) = a_1\mu_1 + \cdots + a_n\mu_n$   (5.8)

2. If $X_1, \ldots, X_n$ are independent,
$V(a_1X_1 + a_2X_2 + \cdots + a_nX_n) = a_1^2V(X_1) + \cdots + a_n^2V(X_n) = a_1^2\sigma_1^2 + \cdots + a_n^2\sigma_n^2$   (5.9)

6 The Distribution of a Linear Combination
and
$\sigma_Y = \sqrt{a_1^2\sigma_1^2 + \cdots + a_n^2\sigma_n^2}$   (5.10)

3. For any $X_1, \ldots, X_n$,
$V(a_1X_1 + \cdots + a_nX_n) = \sum_{i=1}^{n} \sum_{j=1}^{n} a_i a_j \operatorname{Cov}(X_i, X_j)$   (5.11)

7 The Distribution of a Linear Combination
Proofs are sketched out at the end of the section. A paraphrase of (5.8) is that the expected value of a linear combination is the same as the linear combination of the expected values—for example, $E(2X_1 + 5X_2) = 2\mu_1 + 5\mu_2$. The result (5.9) in Statement 2 is a special case of (5.11) in Statement 3; when the $X_i$'s are independent, $\operatorname{Cov}(X_i, X_j) = 0$ for $i \neq j$ and $\operatorname{Cov}(X_i, X_j) = V(X_i)$ for $i = j$ (this simplification actually occurs when the $X_i$'s are uncorrelated, a weaker condition than independence). Specializing to the case of a random sample ($X_i$'s iid) with $a_i = 1/n$ for every $i$ gives $E(\bar{X}) = \mu$ and $V(\bar{X}) = \sigma^2/n$. A similar comment applies to the rules for $T_o$.
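As an illustrative check (an addition, not part of the transcript), a Monte Carlo simulation in NumPy agrees with rules (5.8) and (5.9); the coefficients, means, and standard deviations below are made-up values:

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.array([2.0, 5.0])        # coefficients, as in E(2X_1 + 5X_2)
mu = np.array([3.0, -1.0])      # mu_1, mu_2 (illustrative values)
sigma = np.array([1.0, 2.0])    # sigma_1, sigma_2

# One million independent draws of (X_1, X_2)
x = rng.normal(mu, sigma, size=(1_000_000, 2))
y = x @ a                       # Y = 2*X_1 + 5*X_2 for each draw

print(y.mean(), a @ mu)            # (5.8): both close to 2*3 + 5*(-1) = 1.0
print(y.var(), a**2 @ sigma**2)    # (5.9): both close to 4*1 + 25*4 = 104.0
```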

8 Example 29
A gas station sells three grades of gasoline: regular, extra, and super. These are priced at $3.00, $3.20, and $3.40 per gallon, respectively. Let $X_1$, $X_2$, and $X_3$ denote the amounts of these grades purchased (gallons) on a particular day. Suppose the $X_i$'s are independent with $\mu_1 = 1000$, $\mu_2 = 500$, $\mu_3 = 300$, $\sigma_1 = 100$, $\sigma_2 = 80$, and $\sigma_3 = 50$.

9 Example 29 (cont'd)
The revenue from sales is $Y = 3.0X_1 + 3.2X_2 + 3.4X_3$, and
$E(Y) = 3.0\mu_1 + 3.2\mu_2 + 3.4\mu_3 = \$5620$
By (5.9), since the $X_i$'s are independent,
$V(Y) = (3.0)^2\sigma_1^2 + (3.2)^2\sigma_2^2 + (3.4)^2\sigma_3^2 = 184{,}436$, so $\sigma_Y = \sqrt{184{,}436} = 429.46$.
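A short numeric check of Example 29 (this code is a supplement to the slides, not part of them):

```python
import numpy as np

price = np.array([3.0, 3.2, 3.4])       # $/gallon: regular, extra, super
mu = np.array([1000.0, 500.0, 300.0])   # mean gallons purchased per day
sigma = np.array([100.0, 80.0, 50.0])   # standard deviations

e_y = price @ mu               # (5.8): expected revenue
v_y = price**2 @ sigma**2      # (5.9): variance, X_i's independent
print(e_y, v_y, np.sqrt(v_y))  # 5620.0 184436.0 429.46...
```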

10 The Difference Between Two Random Variables

11 The Difference Between Two Random Variables
An important special case of a linear combination results from taking $n = 2$, $a_1 = 1$, and $a_2 = -1$:
$Y = a_1X_1 + a_2X_2 = X_1 - X_2$
We then have the following corollary to the proposition.

Corollary
$E(X_1 - X_2) = E(X_1) - E(X_2)$ for any two rv's $X_1$ and $X_2$.
$V(X_1 - X_2) = V(X_1) + V(X_2)$ if $X_1$ and $X_2$ are independent rv's.

12 The Difference Between Two Random Variables
The expected value of a difference is the difference of the two expected values, but the variance of a difference between two independent variables is the sum, not the difference, of the two variances. There is just as much variability in $X_1 - X_2$ as in $X_1 + X_2$ [writing $X_1 - X_2 = X_1 + (-1)X_2$, $(-1)X_2$ has the same amount of variability as $X_2$ itself].
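A simulation makes the "variances add" point concrete (an added sketch with arbitrary standard deviations, not from the deck):

```python
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.normal(0.0, 3.0, size=1_000_000)  # V(X_1) = 9
x2 = rng.normal(0.0, 4.0, size=1_000_000)  # V(X_2) = 16, independent of X_1

print(np.var(x1 - x2))  # ~25: the variances add for a difference
print(np.var(x1 + x2))  # ~25: exactly as much variability as the sum
```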

13 Example 30
A certain automobile manufacturer equips a particular model with either a six-cylinder engine or a four-cylinder engine. Let $X_1$ and $X_2$ be fuel efficiencies for independently and randomly selected six-cylinder and four-cylinder cars, respectively. With $\mu_1 = 22$, $\mu_2 = 26$, $\sigma_1 = 1.2$, and $\sigma_2 = 1.5$,
$E(X_1 - X_2) = \mu_1 - \mu_2 = 22 - 26 = -4$
and
$V(X_1 - X_2) = \sigma_1^2 + \sigma_2^2 = (1.2)^2 + (1.5)^2 = 3.69$

14 Example 30 (cont'd)
If we relabel so that $X_1$ refers to the four-cylinder car, then $E(X_1 - X_2) = 4$, but the variance of the difference is still 3.69.
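The arithmetic of Example 30 in a few lines of Python (a supplement; the mpg figures come from the slide):

```python
import math

mu1, mu2 = 22.0, 26.0  # mean mpg: six-cylinder, four-cylinder
s1, s2 = 1.2, 1.5      # standard deviations

e_diff = mu1 - mu2     # -4.0; relabeling the cars just flips the sign
v_diff = s1**2 + s2**2  # 3.69, unchanged by relabeling
print(e_diff, v_diff, math.sqrt(v_diff))  # -4.0 3.69 1.921...
```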

15 The Case of Normal Random Variables

16 The Case of Normal Random Variables
When the $X_i$'s form a random sample from a normal distribution, $\bar{X}$ and $T_o$ are both normally distributed. Here is a more general result concerning linear combinations.

Proposition
If $X_1, X_2, \ldots, X_n$ are independent, normally distributed rv's (with possibly different means and/or variances), then any linear combination of the $X_i$'s also has a normal distribution. In particular, the difference $X_1 - X_2$ between two independent, normally distributed variables is itself normally distributed.
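An illustrative simulation (added here, not in the original slides; the parameters are made up) showing that the difference of two independent normals matches the exact normal distribution the proposition predicts:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
x1 = rng.normal(5.0, 2.0, size=500_000)
x2 = rng.normal(1.0, 1.5, size=500_000)  # independent of x1
d = x1 - x2                              # theory: exactly N(4, 2.5^2)

print(d.mean(), d.var())                                     # ~4.0, ~6.25
print(np.mean(d <= 6.0), norm.cdf(6.0, loc=4.0, scale=2.5))  # both ~0.788
```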

17 Example 31
The total revenue from the sale of the three grades of gasoline on a particular day was $Y = 3.0X_1 + 3.2X_2 + 3.4X_3$, and we calculated $\mu_Y = 5620$ and (assuming independence) $\sigma_Y = 429.46$. If the $X_i$'s are normally distributed, the probability that revenue exceeds 4500 is
$P(Y > 4500) = P\left(Z > \dfrac{4500 - 5620}{429.46}\right) = P(Z > -2.61) = 1 - \Phi(-2.61) = .9955$
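As a supplement to the slide, the same probability can be checked with SciPy's normal distribution:

```python
from scipy.stats import norm

mu_y, sigma_y = 5620.0, 429.46
z = (4500.0 - mu_y) / sigma_y  # standardize: z is about -2.61
p = 1.0 - norm.cdf(z)          # P(Y > 4500) = P(Z > z)
print(z, p)                    # about -2.608 and 0.9955
```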

18 The Case of Normal Random Variables
The CLT can also be generalized so that it applies to certain linear combinations. Roughly speaking, if $n$ is large and no individual term is likely to contribute too much to the overall value, then $Y$ has approximately a normal distribution.
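To see this generalized CLT in action, here is an added sketch (not from the deck): a linear combination of many independent, clearly non-normal (exponential) terms, with no coefficient dominating, reproduces a standard normal tail probability after standardization:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
a = rng.uniform(0.5, 1.5, size=n)  # no single term dominates

# Each X_i ~ Exponential(mean 1): strongly skewed individually
draws = rng.exponential(1.0, size=(100_000, n))
y = draws @ a                      # Y = a_1 X_1 + ... + a_n X_n per row

z = (y - y.mean()) / y.std()       # standardize Y
print(np.mean(z > 1.96))           # about 0.025, the N(0,1) tail probability
```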

