Expected Value and Variance: E(aX+b) = aE(X) + b, E(X+Y) = E(X) + E(Y)


1 Expected Value and Variance
E(aX+b) = aE(X) + b
E(X+Y) = E(X) + E(Y)
E(XY) = E(X) E(Y)        (X, Y independent)
Var(X) = E(X^2) - {E(X)}^2
Var(aX+b) = a^2 Var(X)
Var(X+b) = Var(X)
The expected value of aX + b is equal to a times the expected value of X, plus b. The expected value of X + Y is equal to the sum of the expected value of X and the expected value of Y. When the random variables X and Y are independent, the expected value of the product of X and Y is the product of the expected value of X and the expected value of Y. The variance of X is the difference between the expected value of X squared and the square of the expected value of X. The variance of aX + b is the product of a squared and the variance of X. I'll show you a simple example that can help your comprehension of the last relation. First, let a be equal to 1. When we add b to the random variable X, the new probability density function is shifted by b (to the right for b > 0). Obviously, the shapes of the two probability density functions are the same, so their variances are equal. This means that no matter how large a number you add to the random variable, the variance never changes.
[Figure: densities f(X) and f(X+b), centered at X0 and X0+b; densities f(X) and f(aX), with mean and standard deviation (μ, σ) and (aμ, aσ).]
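A quick way to sanity-check these rules is simulation. Below is a minimal NumPy sketch; the distribution of X, the seed, the sample size, and the choice a = 3, b = 5 are all arbitrary illustrative values, not part of the slide:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=2.0, scale=1.5, size=1_000_000)  # any distribution works
a, b = 3.0, 5.0

# E(aX + b) = a E(X) + b
print((a * X + b).mean(), a * X.mean() + b)

# Var(aX + b) = a^2 Var(X)
print(np.var(a * X + b), a**2 * np.var(X))

# adding a constant b alone leaves the variance unchanged
print(np.var(X + b), np.var(X))
```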

2 Expected Value and Variance
C(X,Y) = E[{X-E(X)}{Y-E(Y)}]
R: correlation coefficient, R = C(X,Y)/(σX σY)
Var(X+Y) = Var(X) + Var(Y) + 2C(X,Y)
Var(X+Y) = Var(X) + Var(Y)        (X, Y independent)
Var(a1X1 + a2X2 + ... + anXn)
  = a1^2 Var(X1) + a2^2 Var(X2) + ... + an^2 Var(Xn)
    + 2a1a2 C(X1,X2) + 2a1a3 C(X1,X3) + ... + 2a1an C(X1,Xn)
    + 2a2a3 C(X2,X3) + ... + 2a(n-1)an C(Xn-1,Xn)
  = a1^2 Var(X1) + a2^2 Var(X2) + ... + an^2 Var(Xn)        (Xi, Xj independent)
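The general formula is the quadratic form aᵀΣa, where Σ is the covariance matrix of the Xi. A minimal sketch, where the coefficient vector and covariance matrix are made-up illustrative values:

```python
import numpy as np

a = np.array([1.0, 2.0, -1.0])            # coefficients a1, a2, a3
cov = np.array([[4.0,  1.0,  0.5],        # Cov(Xi, Xj); diagonal = Var(Xi)
                [1.0,  9.0, -2.0],
                [0.5, -2.0, 16.0]])

# Var(a1 X1 + a2 X2 + a3 X3) = sum_i ai^2 V(Xi) + 2 sum_{i<j} ai aj C(Xi, Xj)
var_sum = a @ cov @ a
print(var_sum)  # 4 + 4*9 + 16 + 2*(1*2*1 + 1*(-1)*0.5 + 2*(-1)*(-2)) = 67.0
```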

3 E(aX+b), E(X+Y), E(XY)
E(aX+b) = ∫(aX+b) f(X) dX = ∫aX f(X) dX + ∫b f(X) dX = a E(X) + b
E(X+Y) = ∬(X+Y) f(X,Y) dY dX
       = ∬X f(X,Y) dY dX + ∬Y f(X,Y) dY dX
       = ∫X fX(X) dX + ∫Y fY(Y) dY
       = E(X) + E(Y)
E(XY) = ∬XY f(X,Y) dY dX = ∬XY fX(X) fY(Y) dY dX
      = ∫X fX(X) dX ∫Y fY(Y) dY = E(X) E(Y)        (X, Y independent)
Before deriving these results, we have to extend the expected value of one variable to that of two variables. f(X,Y) is called the joint probability density function and will be explained in Section 3.3.
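Under independence the joint density factors, so the double integral splits into a product of single integrals. SymPy can confirm this symbolically; the sketch below assumes two independent Uniform(0,1) variables, chosen only because their densities are trivially simple:

```python
import sympy as sp

x, y = sp.symbols('x y')
fX, fY = sp.Integer(1), sp.Integer(1)    # Uniform(0,1) densities on (0, 1)
f = fX * fY                              # independence: f(x,y) = fX(x) fY(y)

EX = sp.integrate(x * fX, (x, 0, 1))     # 1/2
EY = sp.integrate(y * fY, (y, 0, 1))     # 1/2

EXY = sp.integrate(x * y * f, (x, 0, 1), (y, 0, 1))
print(EXY, EX * EY)                      # both 1/4: E(XY) = E(X) E(Y)

EXpY = sp.integrate((x + y) * f, (x, 0, 1), (y, 0, 1))
print(EXpY, EX + EY)                     # both 1:   E(X+Y) = E(X) + E(Y)
```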

4 Expected Value of X/Y
E(X/Y) = ∬(X/Y) f(X,Y) dY dX = ∬(X/Y) fX(X) fY(Y) dY dX        (X, Y independent)
       = ∫X fX(X) dX ∫(1/Y) fY(Y) dY
       = E(X) ∫(1/Y) fY(Y) dY
       = E(X) E(1/Y)
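Note the practical point in this result: for independent X and Y, E(X/Y) equals E(X)·E(1/Y), which is generally not E(X)/E(Y), because E(1/Y) ≠ 1/E(Y). A Monte Carlo sketch; the two distributions, seed, and sample size are arbitrary choices (Y is kept away from zero so the ratio is well behaved):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(10.0, 2.0, size=1_000_000)
Y = rng.uniform(1.0, 3.0, size=1_000_000)   # independent of X, bounded away from 0

print(np.mean(X / Y))                # E(X/Y)
print(X.mean() * np.mean(1.0 / Y))   # E(X) E(1/Y)  -- matches the line above
print(X.mean() / Y.mean())           # E(X)/E(Y)    -- noticeably different
```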

5 Var(X), Var(aX+b)
Var(X) = E[{X-E(X)}^2] = E[X^2 - 2X E(X) + {E(X)}^2]
       = E(X^2) - 2E(X E(X)) + E({E(X)}^2)
       = E(X^2) - 2E(X) E(X) + {E(X)}^2
       = E(X^2) - {E(X)}^2
Var(aX+b) = E[{aX+b - E(aX+b)}^2] = E[{aX+b - aE(X) - b}^2]
          = E[a^2 {X - E(X)}^2] = a^2 Var(X)
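The shortcut Var(X) = E(X^2) - {E(X)}^2 is handy for hand computation. A tiny exact check on a fair six-sided die, using Python's Fraction to avoid any rounding:

```python
from fractions import Fraction

values = range(1, 7)          # faces of a fair die
p = Fraction(1, 6)            # each face has probability 1/6

EX  = sum(p * v for v in values)        # E(X)   = 7/2
EX2 = sum(p * v * v for v in values)    # E(X^2) = 91/6
print(EX2 - EX**2)                      # Var(X) = 91/6 - 49/4 = 35/12
```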

6 Var(X+Y)
Var(X+Y) = E[{X+Y - E(X+Y)}^2] = E[{X+Y - E(X) - E(Y)}^2]
         = E[{X - E(X) + Y - E(Y)}^2]
         = E[{X - E(X)}^2 + {Y - E(Y)}^2 + 2{X - E(X)}{Y - E(Y)}]
         = Var(X) + Var(Y) + 2C(X,Y)
Covariance: C(X,Y) = E[{X - E(X)}{Y - E(Y)}]
When X and Y are independent:
C(X,Y) = E[{X - E(X)}{Y - E(Y)}]
       = E[XY - E(X)Y - E(Y)X + E(X)E(Y)]
       = E[XY] - E(X)E(Y) - E(Y)E(X) + E(X)E(Y)
       = E[XY] - E(X)E(Y) = 0        (X, Y independent)
Hence Var(X+Y) = Var(X) + Var(Y)        (X, Y independent)
and, for mutually independent Xi,
Var(a1X1 + a2X2 + ... + anXn) = Var(a1X1) + Var(a2X2) + ... + Var(anXn)
  = a1^2 Var(X1) + a2^2 Var(X2) + ... + an^2 Var(Xn)

7 Example (expected value and variance of the mean): independent — Exercise
There are 20 balls and a box. The weight [g] of each ball follows N(100, 4) and that of the empty box follows N(300, 10), where N(μ, σ) denotes mean μ and standard deviation σ. What kind of distribution does the weight of the box with 20 balls follow? The weights of the box and of each ball are independent.
The total weight has mean 20×100 + 300 = 2300 and variance 20×4^2 + 10^2 = 420, so it follows N(2300, √420) = N(2300, 2√105).
Let Xi be random variables, each with expected value μ and variance σ^2. Let's calculate the expected value and variance of X̄, the mean of all the random variables.
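A simulation sketch of the box-and-balls answer, assuming (as the stated result N(2300, 2√105) implies) that N(μ, σ) gives the mean and standard deviation; the seed and trial count are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 200_000

balls = rng.normal(100.0, 4.0, size=(n_trials, 20)).sum(axis=1)  # 20 balls, sd 4 g each
box   = rng.normal(300.0, 10.0, size=n_trials)                   # empty box, sd 10 g
total = balls + box                                              # independent parts

print(total.mean())   # ~2300
print(total.std())    # ~sqrt(420) = 2*sqrt(105) = 20.49...
```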

8 E(X̄), Var(X̄)
X̄ is composed of n independent random variables: X̄ = (X1 + X2 + ... + Xn)/n.
E(X̄) = (1/n){E(X1) + ... + E(Xn)} = nμ/n = μ
Var(X̄) = (1/n^2){Var(X1) + ... + Var(Xn)} = nσ^2/n^2 = σ^2/n
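A simulation sketch of these two facts; n = 25 and the values of μ and σ are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, n = 50.0, 12.0, 25

# many independent samples of size n; one sample mean per row
xbar = rng.normal(mu, sigma, size=(100_000, n)).mean(axis=1)

print(xbar.mean())   # ~mu          : E(X bar) = mu
print(xbar.var())    # ~sigma^2 / n : Var(X bar) = 144/25 = 5.76
```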

9 ST = SA + SE
Total sum of squares (ST) = between-class sum of squares (SA) + within-class sum of squares (SE)
Correlation ratio (contribution ratio): η^2 = SA/ST
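A numeric sketch of the decomposition; the three classes of data below are made up purely for illustration:

```python
import numpy as np

groups = [np.array([3.0, 5.0, 4.0]),        # class A (made-up data)
          np.array([8.0, 7.0, 9.0, 8.0]),   # class B
          np.array([2.0, 3.0])]             # class C

allx = np.concatenate(groups)
gm = allx.mean()                            # grand mean

ST = ((allx - gm) ** 2).sum()                              # total sum of squares
SA = sum(len(g) * (g.mean() - gm) ** 2 for g in groups)    # between-class
SE = sum(((g - g.mean()) ** 2).sum() for g in groups)      # within-class

print(ST, SA + SE)   # equal: ST = SA + SE
print(SA / ST)       # correlation ratio (contribution ratio)
```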

10 ST = SA + SE: Proof
For data xij in class i, with class mean x̄i and grand mean x̄:
ST = Σij (xij - x̄)^2 = Σij (xij - x̄i + x̄i - x̄)^2
   = Σij (xij - x̄i)^2 + Σij (x̄i - x̄)^2 + 2Σij (xij - x̄i)(x̄i - x̄)
The cross term vanishes because Σj (xij - x̄i) = 0 within each class, hence ST = SE + SA.

11 3.3. Multiple Random Variables
3.3.1 Joint and conditional probability function
Joint Probability Mass Function (joint PMF)
Joint Distribution Function
In the former sections of chapter 3, we have learned characteristics of random variables and some useful probability distributions that have only one random variable X. In this section, we will learn characteristics of multiple random variables. The concept of a random variable and its probability distribution can be extended to two or more random variables. In order to identify numerical events, a sample space may be mapped into two or more dimensions of the real space; implicitly this requires two or more random variables. You already know that there are two types of random variables: discrete and continuous. As the discrete case is easier to understand, I'll explain it first. The probability distribution may be described with the joint probability mass function: fX,Y(x,y) is the joint probability mass function, the probability that X is x and Y is y. The distribution function is then defined as the probability that X is less than or equal to x and Y is less than or equal to y, which is simply the sum of the probabilities associated with all points (xi, yi) in the subset {xi ≤ x, yi ≤ y}. The joint probability function must satisfy the following conditions in order to comply with the axioms of probability: FX,Y(-∞,-∞) = 0 and FX,Y(∞,∞) = 1; FX,Y(-∞,y) = 0 and FX,Y(∞,y) = FY(y); FX,Y(x,-∞) = 0 and FX,Y(x,∞) = FX(x); and FX,Y(x,y) is non-negative and a non-decreasing function of x and y.

12 Conditional Probability Mass Function
fX|Y(x|y) = fX,Y(x,y) / fY(y)        (3.59)
fY|X(y|x) = fX,Y(x,y) / fX(x)        (3.59a)
Marginal Probability Mass Function
fX(x) = Σi fX,Y(x, yi)        (3.60)
fY(y) = Σi fX,Y(xi, y)        (3.60a)
The conditional probability mass function of X when Y = y is defined as the quotient of the joint probability mass function fX,Y(x,y) over the marginal probability function fY(y), which I will define shortly. The marginal probability mass function fX(x) is defined as the sum of the joint probability function of x and yi over all yi, and fY(y) is defined as the sum of the joint probability function of xi and y over all xi. If X and Y are statistically independent, the conditional probability mass function reduces to the marginal:
fX|Y(x|y) = fX(x)        (3.61)

13 pX,Y(X,Y): Joint PMF
            X1     X2     pY(Y)
    Y1      0.1    0.2    0.3
    Y2      0.3    0.4    0.7
    pX(X)   0.4    0.6    1.0
pX(X), pY(Y): marginal probability mass functions.
pX(X|Y1): conditional probability mass function
    pX(X1|Y1) = 0.1/0.3 = 1/3
    pX(X2|Y1) = 0.2/0.3 = 2/3
[Bar charts of the joint, marginal, and conditional PMFs omitted.]
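The same arithmetic in code, with the joint PMF stored as a dictionary; the X1/X2 and Y1/Y2 labels follow the table above:

```python
pXY = {('X1', 'Y1'): 0.1, ('X2', 'Y1'): 0.2,
       ('X1', 'Y2'): 0.3, ('X2', 'Y2'): 0.4}

# marginals: sum the joint PMF over the other variable
pX = {x: sum(p for (xi, _), p in pXY.items() if xi == x) for x in ('X1', 'X2')}
pY = {y: sum(p for (_, yi), p in pXY.items() if yi == y) for y in ('Y1', 'Y2')}
print(pX)   # X1: 0.4, X2: 0.6 (up to float rounding)
print(pY)   # Y1: 0.3, Y2: 0.7

# conditional PMF of X given Y = Y1: joint divided by marginal
pX_given_Y1 = {x: pXY[(x, 'Y1')] / pY['Y1'] for x in ('X1', 'X2')}
print(pX_given_Y1)   # X1: ~0.333, X2: ~0.667
```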

14 Example
I'll show you an example of two discrete random variables. In a two-phase production process, defects are attributed to either the first or the second phase. The table below gives the observed frequencies of (x, y), where x and y are the numbers of defects in the first and second phases, respectively. [The frequency table was shown as an image; the corresponding probabilities appear on the next slide.]

15 Joint Probability Mass Function and Joint Distribution Function
Discrete PMF and CDF (同時確率関数と同時分布関数)
f(x,y):
                       x
              0      1      2      3     fY(y)   FY(y)
        0     0.14   0.14   0.04   0.03   0.35    0.35
        1     0.13   0.12   0.02   0.02   0.29    0.64
   y    2     0.10   0.08   0.02   0.01   0.21    0.85
        3     0.04   0.03   0.01   0.01   0.09    0.94
        4     0.03   0.03   0.00   0.00   0.06    1.00
     fX(x)    0.44   0.40   0.09   0.07   1.00
     FX(x)    0.44   0.84   0.93   1.00
The joint probability mass function is derived by dividing frequencies by the total. In this case we get this table by dividing the frequencies in the former table by the total, 300. The joint distribution function is derived by summing up the probabilities of all the points where X is less than or equal to x and Y is less than or equal to y. For example, F(1,3) is the sum of the probabilities of all points with X ≤ 1 and Y ≤ 3:
F(1,3) = 0.14+0.14+0.13+0.12+0.10+0.08+0.04+0.03 = 0.78
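A sketch of the same construction in NumPy. The count matrix is reconstructed from the slide's probabilities by multiplying by the stated total of 300 (the original frequency table itself was an image):

```python
import numpy as np

# observed counts n(x, y): rows y = 0..4, columns x = 0..3
counts = np.array([[42, 42, 12, 9],
                   [39, 36,  6, 6],
                   [30, 24,  6, 3],
                   [12,  9,  3, 3],
                   [ 9,  9,  0, 0]])

f = counts / counts.sum()    # joint PMF: frequencies divided by the total (300)
fX = f.sum(axis=0)           # marginal of X: 0.44, 0.40, 0.09, 0.07
fY = f.sum(axis=1)           # marginal of Y: 0.35, 0.29, 0.21, 0.09, 0.06

# joint CDF F(x, y): cumulative sums over both axes
F = f.cumsum(axis=0).cumsum(axis=1)
print(F[3, 1])               # F(1, 3) = 0.78, as on the slide
```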

16 Marginal and Conditional Probability Mass Functions
fX|Y(x|y), fY|X(y|x)
[The worked marginal and conditional tables were shown as images.]

17 P(X<Y), P(X=Y)
Using the joint PMF table above, P(X<Y) is the sum of f(x,y) over all points with x < y:
P(X<Y) = 0.13+0.10+0.04+0.03+0.08+0.03+0.03+0.01 = 0.45
P(X=Y) is the sum of the diagonal entries:
P(X=Y) = 0.14+0.12+0.02+0.01 = 0.29
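A short sketch that evaluates both event probabilities directly from the table above, using boolean masks over the value grids:

```python
import numpy as np

f = np.array([[0.14, 0.14, 0.04, 0.03],   # rows: y = 0..4, columns: x = 0..3
              [0.13, 0.12, 0.02, 0.02],
              [0.10, 0.08, 0.02, 0.01],
              [0.04, 0.03, 0.01, 0.01],
              [0.03, 0.03, 0.00, 0.00]])

y, x = np.indices(f.shape)     # y is the row index, x the column index
print(f[x < y].sum())          # P(X < Y) = 0.45
print(f[x == y].sum())         # P(X = Y) = 0.29
```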

18 Covariance and Correlation Coefficient
共分散 Covariance: Cov(X,Y) = E[{X-E(X)}{Y-E(Y)}] = E(XY) - E(X)E(Y)
相関係数 Correlation Coefficient: ρ = Cov(X,Y) / (σX σY)

19 Covariance and Correlation Coefficient (2)
E(X) = 0×0.44 + 1×0.40 + 2×0.09 + 3×0.07 = 0.79
E(X^2) = 0^2×0.44 + 1^2×0.40 + 2^2×0.09 + 3^2×0.07 = 1.39,  σX = √(1.39 - 0.79^2) = 0.8752
E(Y) = 0×0.35 + 1×0.29 + 2×0.21 + 3×0.09 + 4×0.06 = 1.22
E(Y^2) = 0^2×0.35 + 1^2×0.29 + 2^2×0.21 + 3^2×0.09 + 4^2×0.06 = 2.90,  σY = √(2.90 - 1.22^2) = 1.1881
E(XY) (terms with x = 0, y = 0, or f(x,y) = 0 omitted):
E(XY) = 1×1×0.12 + 1×2×0.08 + 1×3×0.03 + 1×4×0.03
      + 2×1×0.02 + 2×2×0.02 + 2×3×0.01
      + 3×1×0.02 + 3×2×0.01 + 3×3×0.01 = 0.88
Cov(X,Y) = E(XY) - E(X)E(Y) = 0.88 - 0.79×1.22 = -0.0838
ρ = -0.0838 / (0.8752×1.1881) = -0.0806
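The same computation in vectorized form, reading the moments straight off the joint PMF table from slide 15:

```python
import numpy as np

f = np.array([[0.14, 0.14, 0.04, 0.03],   # rows: y = 0..4, columns: x = 0..3
              [0.13, 0.12, 0.02, 0.02],
              [0.10, 0.08, 0.02, 0.01],
              [0.04, 0.03, 0.01, 0.01],
              [0.03, 0.03, 0.00, 0.00]])
xv = np.arange(4)                          # values of X
yv = np.arange(5)                          # values of Y

EX, EY = f.sum(axis=0) @ xv, f.sum(axis=1) @ yv      # 0.79, 1.22
sX = np.sqrt(f.sum(axis=0) @ xv**2 - EX**2)          # 0.8752...
sY = np.sqrt(f.sum(axis=1) @ yv**2 - EY**2)          # 1.1881...

EXY = yv @ f @ xv                                    # E(XY) = 0.88
cov = EXY - EX * EY                                  # -0.0838
print(cov / (sX * sY))                               # correlation ~ -0.0806
```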

20 Continuous Joint Bivariate X, Y
同時確率密度関数 Joint Probability Density Function (joint PDF):
fX,Y(x,y) dx dy = P(x < X ≤ x+dx, y < Y ≤ y+dy)        (3.62)
同時分布関数 Joint Distribution Function:
FX,Y(x,y) = P(X ≤ x, Y ≤ y) = ∫_{-∞}^{x} ∫_{-∞}^{y} fX,Y(u,v) dv du        (3.63)
fX,Y(x,y) = ∂²FX,Y(x,y) / ∂x∂y        (3.64)
P(a < X ≤ b, c < Y ≤ d) = ∫_{a}^{b} ∫_{c}^{d} fX,Y(x,y) dy dx        (3.65)

21 Fig. Joint PDF
[Figure: surface plot of a joint probability density function.]

22 Conditional PDF & Marginal PDF
条件付き確率密度関数 Conditional Probability Density Function:
fX|Y(x|y) = fX,Y(x,y) / fY(y)        (3.66)
If X and Y are statistically independent:
fX|Y(x|y) = fX(x)        (3.68)
周辺確率密度関数 Marginal Probability Density Function:
fX(x) = ∫ fX,Y(x,y) dy        (3.69)
fY(y) = ∫ fX,Y(x,y) dx        (3.70)
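A numeric sketch of the marginal and conditional definitions using SciPy's quadrature routines. The density f(x,y) = x + y on the unit square is a standard textbook choice, used here only as an illustration, not the slide's own example:

```python
from scipy import integrate

f = lambda x, y: x + y    # a valid joint PDF on 0 <= x, y <= 1

# the density integrates to 1 (dblquad integrates over y first, then x)
total, _ = integrate.dblquad(lambda y, x: f(x, y), 0, 1, 0, 1)
print(total)              # 1.0

# marginal (3.69): fX(x) = integral of f(x, y) dy = x + 1/2
fX = lambda x: integrate.quad(lambda y: f(x, y), 0, 1)[0]
print(fX(0.3))            # 0.8

# conditional (3.66): f(y | x) = f(x, y) / fX(x); it integrates to 1 in y
f_y_given_x = lambda y, x: f(x, y) / fX(x)
print(integrate.quad(lambda y: f_y_given_x(y, 0.3), 0, 1)[0])   # 1.0
```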

23 Example of PDF and CDF
[3-D surface plots of a joint PDF and the corresponding joint CDF over roughly -3 ≤ x, y ≤ 3; axis ticks and color-scale legend omitted.]

24 Conditional Probability Density Function (in Japanese)
周辺確率密度関数 Marginal Probability Density Function
条件付き確率密度関数 Conditional Probability Density Function
When X and Y are statistically independent: fX|Y(x|y) = fX(x)

25 Example of a Joint PDF
Ex. Let X and Y represent the times to failure, in years, of subsystems A and B, respectively. Suppose X and Y possess the joint density shown on the slide. [The density formula was rendered as an image.]

26 Fig. 3.14 Joint & Marginal PDF




