1. The random variables $X_1$ and $X_2$ are independent, and each has p.m.f. $f(x) = (x + 2)/6$ for $x = -1, 0, 1$.

(a) Find $E(X_1 + X_2)$.

$E(X_1) = E(X_2) = (-1)\dfrac{1}{6} + (0)\dfrac{2}{6} + (1)\dfrac{3}{6} = \dfrac{1}{3}$, so $E(X_1 + X_2) = E(X_1) + E(X_2) = \dfrac{1}{3} + \dfrac{1}{3} = \dfrac{2}{3}$.

(b) Find the p.m.f. of $Y = X_1 + X_2$, and use this p.m.f. to find $E(Y)$.

The space of $Y$ is $\{-2, -1, 0, 1, 2\}$.

$P(Y = -2) = P(X_1 = -1 \cap X_2 = -1) = P(X_1 = -1)\,P(X_2 = -1) = (1/6)(1/6) = 1/36$

$P(Y = -1) = P(\{X_1 = -1 \cap X_2 = 0\} \cup \{X_1 = 0 \cap X_2 = -1\})$
$= P(X_1 = -1)\,P(X_2 = 0) + P(X_1 = 0)\,P(X_2 = -1) = (1/6)(2/6) + (2/6)(1/6) = 4/36 = 1/9$

$P(Y = 0) = P(\{X_1 = -1 \cap X_2 = 1\} \cup \{X_1 = 1 \cap X_2 = -1\} \cup \{X_1 = 0 \cap X_2 = 0\})$
$= P(X_1 = -1)\,P(X_2 = 1) + P(X_1 = 1)\,P(X_2 = -1) + P(X_1 = 0)\,P(X_2 = 0) = (1/6)(3/6) + (3/6)(1/6) + (2/6)(2/6) = 10/36 = 5/18$

Similarly, $P(Y = 1) = 12/36 = 1/3$ and $P(Y = 2) = 9/36 = 1/4$.

The p.m.f. of $Y$ is

$g(y) = \begin{cases} 1/36 & \text{if } y = -2 \\ 4/36 = 1/9 & \text{if } y = -1 \\ 10/36 = 5/18 & \text{if } y = 0 \\ 12/36 = 1/3 & \text{if } y = 1 \\ 9/36 = 1/4 & \text{if } y = 2 \end{cases}$

$E(Y) = 2/3$ (as expected from part (a)).

(c) What type of distribution does $W = X_1^2$ have?

The space of $W$ is $\{0, 1\}$, with $P(W = 0) = P(X_1 = 0) = 1/3$ and $P(W = 1) = P(X_1 = -1 \cup X_1 = 1) = 2/3$.

$W$ has a Bernoulli distribution with $p = 2/3$.
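None of the code below appears on the slides; it is a minimal Python sketch that re-derives the p.m.f. of $Y = X_1 + X_2$ and $E(Y)$ by enumerating the nine joint outcomes with exact rational arithmetic:

```python
# Sketch: enumerate the joint support of (X1, X2) and tally Y = X1 + X2.
from fractions import Fraction
from itertools import product

f = {x: Fraction(x + 2, 6) for x in (-1, 0, 1)}    # p.m.f. of each X_i

g = {}                                             # p.m.f. of Y = X1 + X2
for x1, x2 in product(f, repeat=2):
    g[x1 + x2] = g.get(x1 + x2, Fraction(0)) + f[x1] * f[x2]

print({y: str(p) for y, p in sorted(g.items())})
# {-2: '1/36', -1: '1/9', 0: '5/18', 1: '1/3', 2: '1/4'}
print(sum(y * p for y, p in g.items()))            # 2/3
```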

2. Suppose that the random variable $X$ has a $b(n_1, p)$ distribution, that the random variable $Y$ has a $b(n_2, p)$ distribution, and that the random variables $X$ and $Y$ are independent. What type of distribution does the random variable $V = X + Y$ have?

$V$ has a $b(n_1 + n_2, p)$ distribution.

3. (a) The random variables $X_1$ and $X_2$ are independent and respectively have p.d.f.s $f_1(x_1) = 4x_1^3$ if $0 < x_1 < 1$ and $f_2(x_2) = 2x_2$ if $0 < x_2 < 1$. Find the joint p.d.f. of $(X_1, X_2)$.

Since $X_1$ and $X_2$ are independent, their joint p.d.f. is $f(x_1, x_2) = 8x_1^3 x_2$ if $0 < x_1 < 1$, $0 < x_2 < 1$.
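The slide states the answer to Exercise 2 without proof. One standard justification, sketched here with moment generating functions (this uses the product property of m.g.f.s for independent random variables, which is not shown on the slide):

$$M_V(t) = M_X(t)\,M_Y(t) = \left[(1-p) + p e^t\right]^{n_1}\left[(1-p) + p e^t\right]^{n_2} = \left[(1-p) + p e^t\right]^{n_1 + n_2},$$

which is the m.g.f. of a $b(n_1 + n_2, p)$ random variable; since an m.g.f. determines the distribution uniquely, $V$ has a $b(n_1 + n_2, p)$ distribution.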

Section 5.2

Suppose $X_1$ and $X_2$ are continuous-type random variables with joint p.d.f. $f(x_1, x_2)$ and with space $S \subseteq \mathbb{R}^2$, and suppose $Y = u(X_1, X_2)$ where $u$ is a function. A method for finding the p.d.f. of the random variable $Y$ is the distribution function method. To use this method, one first attempts to find a formula for the distribution function of $Y$, that is, a formula for $G(y) = P(Y \le y)$. Then the p.d.f. of $Y$ is obtained by taking the derivative of the distribution function: $g(y) = G'(y)$. This method will generally involve working with a double integral.

Suppose $Y_1 = u_1(X_1, X_2)$ and $Y_2 = u_2(X_1, X_2)$ where $u_1$ and $u_2$ are functions. A method for finding the joint p.d.f. of the random variables $(Y_1, Y_2)$ is the change-of-variables method. This method can be used when the two functions define a one-to-one mapping between $S \subseteq \mathbb{R}^2$ and $T \subseteq \mathbb{R}^2$ as follows:

$y_1 = u_1(x_1, x_2),\ y_2 = u_2(x_1, x_2) \quad\Longleftrightarrow\quad x_1 = v_1(y_1, y_2),\ x_2 = v_2(y_1, y_2)$

Then the space of $(Y_1, Y_2)$ is $T \subseteq \mathbb{R}^2$, and the joint p.d.f. of $(Y_1, Y_2)$ is

$g(y_1, y_2) = f[\,v_1(y_1, y_2),\ v_2(y_1, y_2)\,]\,|J|$

where $J$, called the Jacobian determinant, is the determinant of the Jacobian matrix:

$J = \det \begin{bmatrix} \partial v_1/\partial y_1 & \partial v_1/\partial y_2 \\ \partial v_2/\partial y_1 & \partial v_2/\partial y_2 \end{bmatrix}$

Each of the distribution function method and the change-of-variables method can be extended in a natural way to a situation where $(X_1, X_2)$ is replaced with $(X_1, X_2, \ldots, X_n)$ for $n > 2$.

Return to Class Exercise #3.

(b) Define the random variables $Y_1 = X_1^2$ and $Y_2 = X_1 X_2$. Use the change-of-variables method to find the joint p.d.f. of $(Y_1, Y_2)$.

First, we find the space of $(Y_1, Y_2)$ by looking at where the boundaries are mapped. [Figure: the unit square with vertices (0,0), (1,0), (1,1), (0,1) in the $(x_1, x_2)$-plane maps to the region with vertices (0,0), (1,0), (1,1) in the $(y_1, y_2)$-plane, bounded above by $y_2 = \sqrt{y_1}$.] The space is

$0 < y_1 < 1,\ 0 < y_2 < \sqrt{y_1} \qquad\text{or equivalently}\qquad 0 < y_2 < 1,\ y_2^2 < y_1 < 1$

Then, we find the inverse transformation as follows:

$y_1 = x_1^2,\quad y_2 = x_1 x_2 \qquad\Longrightarrow\qquad x_1 = \sqrt{y_1},\quad x_2 = \dfrac{y_2}{\sqrt{y_1}}$

3. - continued

Next, we find the Jacobian determinant as follows:

$J = \det \begin{bmatrix} \partial x_1/\partial y_1 & \partial x_1/\partial y_2 \\ \partial x_2/\partial y_1 & \partial x_2/\partial y_2 \end{bmatrix} = \det \begin{bmatrix} \dfrac{1}{2\sqrt{y_1}} & 0 \\ -\dfrac{y_2}{2y_1^{3/2}} & \dfrac{1}{\sqrt{y_1}} \end{bmatrix} = \dfrac{1}{2y_1}$

The joint p.d.f. of $Y_1$ and $Y_2$ is

$g(y_1, y_2) = 8\,(\sqrt{y_1})^3 \left(\dfrac{y_2}{\sqrt{y_1}}\right)\dfrac{1}{2y_1} = 4y_2 \qquad\text{if } 0 < y_1 < 1,\ 0 < y_2 < \sqrt{y_1}$

(or $0 < y_2 < 1,\ y_2^2 < y_1 < 1$).
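The Jacobian computation above can be double-checked symbolically. The following is a minimal sketch (not part of the slides) using SymPy:

```python
# Sketch: reproduce the Jacobian and joint p.d.f. for Exercise 3(b).
import sympy as sp

y1, y2 = sp.symbols('y1 y2', positive=True)

# Inverse transformation: x1 = v1(y1, y2), x2 = v2(y1, y2)
x1 = sp.sqrt(y1)
x2 = y2 / sp.sqrt(y1)

# Jacobian determinant of (x1, x2) with respect to (y1, y2)
J = sp.Matrix([x1, x2]).jacobian([y1, y2]).det()
print(sp.simplify(J))                           # 1/(2*y1)

# g(y1, y2) = f(v1, v2) * |J| with f(x1, x2) = 8*x1**3*x2
print(sp.simplify(8 * x1**3 * x2 * sp.Abs(J)))  # 4*y2
```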

(c) Find the marginal p.d.f. for $Y_1$ and the marginal p.d.f. for $Y_2$.

$g_1(y_1) = \displaystyle\int_0^{\sqrt{y_1}} 4y_2\, dy_2 = \Big[2y_2^2\Big]_{y_2=0}^{\sqrt{y_1}} = 2y_1 \qquad\text{if } 0 < y_1 < 1$

To find $g_2(y_2)$, we first observe that the space $0 < y_1 < 1,\ 0 < y_2 < \sqrt{y_1}$ can be described as $0 < y_2 < 1,\ y_2^2 < y_1 < 1$. Then

$g_2(y_2) = \displaystyle\int_{y_2^2}^{1} 4y_2\, dy_1 = \Big[4y_2\,y_1\Big]_{y_1=y_2^2}^{1} = 4y_2 - 4y_2^3 \qquad\text{if } 0 < y_2 < 1$
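A simulation gives an independent sanity check on these marginals. This minimal sketch (not from the slides) samples $X_1$ and $X_2$ by inverse-c.d.f. and compares the empirical means of $Y_1$ and $Y_2$ against the values implied by $g_1$ and $g_2$, namely $E(Y_1) = 2/3$ and $E(Y_2) = 8/15$:

```python
# Sketch: Monte Carlo check of the marginals from Exercise 3(c).
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x1 = rng.random(n) ** 0.25     # inverse c.d.f. of f1(x) = 4*x**3 on (0, 1)
x2 = rng.random(n) ** 0.5      # inverse c.d.f. of f2(x) = 2*x   on (0, 1)
y1, y2 = x1**2, x1 * x2

# E(Y1) = integral of y*(2y) dy = 2/3;  E(Y2) = integral of y*(4y - 4y**3) dy = 8/15
print(y1.mean(), 2 / 3)        # both approximately 0.6667
print(y2.mean(), 8 / 15)       # both approximately 0.5333
```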

4. The random variables $X_1$ and $X_2$ have joint p.d.f. $f(x_1, x_2) = 8x_1 x_2$ if $0 < x_1 < x_2 < 1$.

(a) Are $X_1$ and $X_2$ independent random variables? Why or why not?

Since the space of $(X_1, X_2)$ is not rectangular, $X_1$ and $X_2$ cannot possibly be independent.

(b) Define the random variables $Y_1 = X_1 / X_2$ and $Y_2 = X_2 - X_1$. Use the change-of-variables method to find the joint p.d.f. of $(Y_1, Y_2)$.

First, we find the space of $(Y_1, Y_2)$ by looking at where the boundaries are mapped. [Figure: the triangle with vertices (0,0), (0,1), (1,1) in the $(x_1, x_2)$-plane maps to the triangle with vertices (0,0), (0,1), (1,0) in the $(y_1, y_2)$-plane.] The space is

$0 < y_1 < 1,\ 0 < y_2 < 1 - y_1 \qquad\text{or equivalently}\qquad 0 < y_2 < 1,\ 0 < y_1 < 1 - y_2$

Then, we find the inverse transformation as follows:

$y_1 = \dfrac{x_1}{x_2},\quad y_2 = x_2 - x_1 \qquad\Longrightarrow\qquad x_1 = \dfrac{y_1 y_2}{1 - y_1},\quad x_2 = \dfrac{y_2}{1 - y_1}$

Next, we find the Jacobian determinant as follows:

$J = \det \begin{bmatrix} \partial x_1/\partial y_1 & \partial x_1/\partial y_2 \\ \partial x_2/\partial y_1 & \partial x_2/\partial y_2 \end{bmatrix} = \det \begin{bmatrix} \dfrac{y_2}{(1-y_1)^2} & \dfrac{y_1}{1-y_1} \\ \dfrac{y_2}{(1-y_1)^2} & \dfrac{1}{1-y_1} \end{bmatrix} = \dfrac{y_2}{(1-y_1)^2}$

4. - continued

The joint p.d.f. of $Y_1$ and $Y_2$ is

$g(y_1, y_2) = 8\left(\dfrac{y_1 y_2}{1-y_1}\right)\left(\dfrac{y_2}{1-y_1}\right)\dfrac{y_2}{(1-y_1)^2} = \dfrac{8 y_1 y_2^3}{(1-y_1)^4} \qquad\text{if } 0 < y_1 < 1,\ 0 < y_2 < 1 - y_1$

(or $0 < y_2 < 1,\ 0 < y_1 < 1 - y_2$).

(c) Find the marginal p.d.f. for $Y_1$ and the marginal p.d.f. for $Y_2$.

$g_1(y_1) = \displaystyle\int_0^{1-y_1} \frac{8 y_1 y_2^3}{(1-y_1)^4}\, dy_2 = \left[\frac{2 y_1 y_2^4}{(1-y_1)^4}\right]_{y_2=0}^{1-y_1} = 2y_1 \qquad\text{if } 0 < y_1 < 1$

To find $g_2(y_2)$, we first observe that the space $0 < y_1 < 1,\ 0 < y_2 < 1 - y_1$ can be described as $0 < y_2 < 1,\ 0 < y_1 < 1 - y_2$. Writing $y_1 = 1 - (1 - y_1)$ in the numerator,

$g_2(y_2) = \displaystyle\int_0^{1-y_2} \frac{8 y_1 y_2^3}{(1-y_1)^4}\, dy_1 = \int_0^{1-y_2} 8 y_2^3 \left[\frac{1}{(1-y_1)^4} - \frac{1}{(1-y_1)^3}\right] dy_1$

$= \left[\dfrac{8 y_2^3\,(3y_1 - 1)}{6(1-y_1)^3}\right]_{y_1=0}^{1-y_2} = \dfrac{8 - 12y_2 + 4y_2^3}{3} \qquad\text{if } 0 < y_2 < 1$

Study the corresponding example in the textbook, and compare this with the distribution function method for this same situation, suggested in the corresponding text exercise.
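Both marginals can also be verified symbolically; the sketch below (not from the slides) may print forms that differ cosmetically from those above:

```python
# Sketch: integrate the Exercise 4 joint p.d.f. to recover both marginals.
import sympy as sp

y1, y2 = sp.symbols('y1 y2', positive=True)
g = 8 * y1 * y2**3 / (1 - y1)**4

g1 = sp.simplify(sp.integrate(g, (y2, 0, 1 - y1)))
g2 = sp.simplify(sp.integrate(g, (y1, 0, 1 - y2)))
print(g1)                            # 2*y1
print(sp.expand(g2))                 # 4*y2**3/3 - 4*y2 + 8/3
print(sp.integrate(g2, (y2, 0, 1)))  # 1  (total probability)
```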

6. The random variables $X_1$ and $X_2$ are independent and each has a gamma(1, 2) (or $\chi^2(2)$, or exponential(2)) distribution.

(a) Find the joint p.d.f. of $(X_1, X_2)$.

Since $X_1$ and $X_2$ are independent, their joint p.d.f. is

$f(x_1, x_2) = \dfrac{1}{4}\exp\left(-\dfrac{x_1 + x_2}{2}\right) \qquad\text{if } 0 < x_1,\ 0 < x_2$

(b) Define the random variable $Y = X_1 + X_2$. Use the distribution function method to find the p.d.f. of $Y$.

The space of $Y = X_1 + X_2$ is $\{y \mid 0 < y\}$. The distribution function of $Y = X_1 + X_2$ is

$G(y) = P(Y \le y) = P(X_1 + X_2 \le y) = P(X_1 \le y - X_2) = P(0 < X_1 \le y - X_2,\ 0 < X_2 < y) =$

$\displaystyle\int_0^y \int_0^{y-x_2} \frac{1}{4}\exp\left(-\frac{x_1+x_2}{2}\right) dx_1\, dx_2 = \int_0^y \frac{1}{2}\exp\left(-\frac{x_2}{2}\right)\left[-\exp\left(-\frac{x_1}{2}\right)\right]_{x_1=0}^{y-x_2} dx_2$

$\displaystyle= \int_0^y \frac{1}{2}\exp\left(-\frac{x_2}{2}\right)\left[1 - \exp\left(-\frac{y-x_2}{2}\right)\right] dx_2 = \int_0^y \frac{1}{2}\exp\left(-\frac{x_2}{2}\right) dx_2 - \int_0^y \frac{1}{2}\exp\left(-\frac{y}{2}\right) dx_2$

$\displaystyle= \left[-\exp\left(-\frac{x_2}{2}\right)\right]_{x_2=0}^{y} - \frac{y}{2}\exp\left(-\frac{y}{2}\right) = 1 - \exp\left(-\frac{y}{2}\right) - \frac{y}{2}\exp\left(-\frac{y}{2}\right)$

The p.d.f. of $Y$ is

$g(y) = G'(y) = \dfrac{y}{4}\exp\left(-\dfrac{y}{2}\right) \qquad\text{if } 0 < y$

We recognize that $Y$ has a gamma(2, 2) (or $\chi^2(4)$) distribution.
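A quick Monte Carlo check (not from the slides) supports this identification: the simulated sum of two independent exponential(mean 2) variables is compared against the $\chi^2(4)$ distribution with a Kolmogorov–Smirnov test:

```python
# Sketch: X1 + X2 for independent exponential(mean 2) vs. chi-square(4).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000
y = rng.exponential(scale=2, size=n) + rng.exponential(scale=2, size=n)

print(stats.kstest(y, stats.chi2(df=4).cdf))   # large p-value: consistent
```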

7. The random variables $U$ and $V$ are independent and have respectively a $\chi^2(r_1)$ distribution and a $\chi^2(r_2)$ distribution.

(a) Find the joint p.d.f. of $(U, V)$.

Since $U$ and $V$ are independent, their joint p.d.f. is

$f(u, v) = \dfrac{u^{r_1/2 - 1}\, v^{r_2/2 - 1}}{\Gamma(r_1/2)\,\Gamma(r_2/2)\, 2^{(r_1+r_2)/2}}\, \exp\left(-\dfrac{u + v}{2}\right) \qquad\text{if } 0 < u,\ 0 < v$

(b) Define the random variable $W = \dfrac{U/r_1}{V/r_2}$. Use the distribution function method to find the p.d.f. of $W$ by completing the steps outlined, after first reviewing the following two facts from calculus:

(1) $\dfrac{d}{dy}\displaystyle\int_a^b g(x, y)\, dx = \int_a^b \dfrac{\partial}{\partial y}\, g(x, y)\, dx$

Example:

$\dfrac{d}{dy}\displaystyle\int_{-1}^{3} (x^3 y + x y^2)\, dx = \dfrac{d}{dy}\left[\dfrac{x^4 y}{4} + \dfrac{x^2 y^2}{2}\right]_{x=-1}^{3} = \dfrac{d}{dy}\,(20y + 4y^2) = 20 + 8y$

and, differentiating under the integral sign,

$\dfrac{d}{dy}\displaystyle\int_{-1}^{3} (x^3 y + x y^2)\, dx = \int_{-1}^{3} (x^3 + 2xy)\, dx = \left[\dfrac{x^4}{4} + x^2 y\right]_{x=-1}^{3} = 20 + 8y$

7. - continued

(2) From the chain rule and the Fundamental Theorem of Calculus, we have that

$\dfrac{d}{dt}\displaystyle\int_a^{h(t)} g(x)\, dx = g(h(t))\, h'(t)$

Example:

$\dfrac{d}{dt}\displaystyle\int_{-1}^{t + 3\sqrt{t}} x^3\, dx = \dfrac{d}{dt}\left[\dfrac{x^4}{4}\right]_{x=-1}^{t+3\sqrt{t}} = \dfrac{d}{dt}\left[\dfrac{(t + 3\sqrt{t})^4}{4} - \dfrac{1}{4}\right] = (t + 3\sqrt{t})^3\left(1 + \dfrac{3}{2\sqrt{t}}\right)$
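Both calculus facts are easy to confirm symbolically on the slide's own examples. A minimal SymPy sketch (not from the slides):

```python
# Sketch: verify facts (1) and (2) on the examples above.
import sympy as sp

x, y = sp.symbols('x y')
t = sp.symbols('t', positive=True)

# Fact (1): differentiation under the integral sign
integrand = x**3 * y + x * y**2
lhs1 = sp.diff(sp.integrate(integrand, (x, -1, 3)), y)
rhs1 = sp.integrate(sp.diff(integrand, y), (x, -1, 3))
print(lhs1, '==', rhs1)                    # 8*y + 20 == 8*y + 20

# Fact (2): variable upper limit (chain rule + Fundamental Theorem)
h = t + 3 * sp.sqrt(t)
lhs2 = sp.diff(sp.integrate(x**3, (x, -1, h)), t)
rhs2 = h**3 * sp.diff(h, t)
print(sp.simplify(lhs2 - rhs2))            # 0
```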

The space of $W = \dfrac{U/r_1}{V/r_2}$ is $\{w \mid 0 < w\}$. The distribution function of $W$ is

$G(w) = P(W \le w) = P\left(\dfrac{U/r_1}{V/r_2} \le w\right) = P\left(U \le \dfrac{r_1 w V}{r_2}\right)$

$\displaystyle= \int_0^{\infty} \int_0^{r_1 w v / r_2} \frac{1}{\Gamma(r_1/2)\,\Gamma(r_2/2)\, 2^{(r_1+r_2)/2}}\; u^{r_1/2 - 1} e^{-u/2}\; v^{r_2/2 - 1} e^{-v/2}\, du\, dv$

7. - continued

By facts (1) and (2) above, the p.d.f. of $W$ is

$g(w) = G'(w) = \displaystyle\int_0^{\infty} \frac{1}{\Gamma(r_1/2)\,\Gamma(r_2/2)\, 2^{(r_1+r_2)/2}} \left(\frac{r_1 w v}{r_2}\right)^{r_1/2 - 1} \exp\left(-\frac{r_1 w v}{2 r_2}\right) \frac{r_1 v}{r_2}\; v^{r_2/2 - 1} e^{-v/2}\, dv$

(To simplify this p.d.f., we could either (1) make an appropriate change of variables in the integral, as is done in the corresponding example in the textbook, or (2) do some algebra to make the formula under the integral a p.d.f. which we know must integrate to one (1), as we shall do here.)

$\displaystyle= \int_0^{\infty} \frac{(r_1/r_2)^{r_1/2}\, w^{r_1/2 - 1}}{\Gamma(r_1/2)\,\Gamma(r_2/2)\, 2^{(r_1+r_2)/2}}\; v^{(r_1+r_2)/2 - 1} \exp\left(-\frac{r_1 w v + r_2 v}{2 r_2}\right) dv$

$\displaystyle= \frac{(r_1/r_2)^{r_1/2}\, w^{r_1/2 - 1}\, \Gamma[(r_1+r_2)/2]}{\Gamma(r_1/2)\,\Gamma(r_2/2)\,(1 + r_1 w/r_2)^{(r_1+r_2)/2}} \int_0^{\infty} \frac{v^{(r_1+r_2)/2 - 1}}{\Gamma[(r_1+r_2)/2]\,[2/(1 + r_1 w/r_2)]^{(r_1+r_2)/2}} \exp\left(-\frac{v}{2/(1 + r_1 w/r_2)}\right) dv$

The integrand of the last integral is the p.d.f. for a random variable having a gamma$\left(\dfrac{r_1 + r_2}{2},\ \dfrac{2}{1 + r_1 w/r_2}\right)$ distribution.

Since the integral must be equal to one (1), we now have that the p.d.f. of $W$ is

$g(w) = \dfrac{\Gamma[(r_1+r_2)/2]\,(r_1/r_2)^{r_1/2}\, w^{r_1/2 - 1}}{\Gamma(r_1/2)\,\Gamma(r_2/2)\,(1 + r_1 w/r_2)^{(r_1+r_2)/2}} \qquad\text{if } 0 < w$

This is the p.d.f. for a random variable having a Fisher's $f$ distribution with $r_1$ numerator degrees of freedom and $r_2$ denominator degrees of freedom. This distribution is important in some future applications of the theory of statistics.
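As a check (not from the slides), the derived p.d.f. can be compared numerically against SciPy's implementation of the F distribution; agreement for an arbitrary choice of $r_1, r_2$ and a grid of $w$ values is strong evidence the algebra above is right:

```python
# Sketch: derived F p.d.f. vs. scipy.stats.f.
import numpy as np
from scipy import stats
from scipy.special import gamma

def g(w, r1, r2):
    """F p.d.f. as derived in Exercise 7."""
    return (gamma((r1 + r2) / 2) * (r1 / r2) ** (r1 / 2) * w ** (r1 / 2 - 1)
            / (gamma(r1 / 2) * gamma(r2 / 2)
               * (1 + r1 * w / r2) ** ((r1 + r2) / 2)))

w = np.linspace(0.1, 5, 50)
print(np.allclose(g(w, 5, 10), stats.f(dfn=5, dfd=10).pdf(w)))   # True
```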

8. Suppose the random variable $F$ has an $f$ distribution with $r_1$ numerator degrees of freedom and $r_2$ denominator degrees of freedom.

(a) If $r_1 = 5$ and $r_2 = 10$, then $P(F < 3.33) = 0.95$.

(b) If $r_1 = 5$ and $r_2 = 10$, then $P(F > 5.64) = 0.01$.

(c) $f_{0.025}(4, 8) = 5.05$

(d) $f_{0.975}(4, 8) = 1/f_{0.025}(8, 4) = 1/8.98 = 0.111$

(e) $f_{0.025}(8, 4) = 8.98$

(f) $f_{0.975}(8, 4) = 1/f_{0.025}(4, 8) = 1/5.05 = 0.198$
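These table values can be reproduced with SciPy (not from the slides). Here $f_\alpha(r_1, r_2)$ denotes the upper-$\alpha$ critical value, i.e. the $(1-\alpha)$ quantile, which is the convention assumed in the reconstruction of the elided subscripts above:

```python
# Sketch: reproduce the F-table values from Exercise 8.
from scipy import stats

print(stats.f(dfn=5, dfd=10).ppf(0.95))      # ~3.33  -> P(F < 3.33) = 0.95
print(stats.f(dfn=5, dfd=10).ppf(0.99))      # ~5.64  -> P(F > 5.64) = 0.01
print(stats.f(dfn=4, dfd=8).ppf(0.975))      # ~5.05  =  f_0.025(4, 8)
print(stats.f(dfn=8, dfd=4).ppf(0.975))      # ~8.98  =  f_0.025(8, 4)
print(1 / stats.f(dfn=8, dfd=4).ppf(0.975))  # ~0.111 =  f_0.975(4, 8)
```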