RANDOM VARIABLES, EXPECTATIONS, VARIANCES ETC. - THEORY 1.

1 RANDOM VARIABLES, EXPECTATIONS, VARIANCES ETC. - THEORY 1

2 Variable Recall: Variable: a characteristic of a population or sample that is of interest to us. Random variable: a function defined on the sample space S that associates a real number with each outcome in S.

3 DISCRETE RANDOM VARIABLES If the set of all possible values of a r.v. X is countable, then X is called a discrete r.v. The function f(x)=P(X=x) for x = x₁, x₂, … that assigns a probability to each value x is called the probability density function (p.d.f.) or probability mass function (p.m.f.).

4 Example Discrete Uniform distribution. Example: throw a fair die and let X be the face shown; then P(X=1)=…=P(X=6)=1/6.
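
A minimal Python sketch (not part of the original slides) that encodes this pmf with exact fractions, checks that the probabilities sum to 1, and computes the mean; the names are illustrative only:

    from fractions import Fraction

    # A fair die as a discrete uniform distribution on {1, ..., 6}.
    pmf = {x: Fraction(1, 6) for x in range(1, 7)}

    assert sum(pmf.values()) == 1                 # the probabilities sum to 1
    print(sum(x * p for x, p in pmf.items()))     # E(X) = 7/2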

5 CONTINUOUS RANDOM VARIABLES When the sample space is uncountable (continuous). Example: Continuous Uniform(a,b), with pdf f(x) = 1/(b−a) for a < x < b and 0 otherwise.

6 CUMULATIVE DISTRIBUTION FUNCTION (C.D.F.) The CDF of a r.v. X is defined as F(x)=P(X≤x). Note that P(a<X≤b)=F(b)−F(a). A function F(x) is a CDF for some r.v. X iff it satisfies: F(x) is non-decreasing; F(x) is continuous from the right; lim F(x)=0 as x→−∞ and lim F(x)=1 as x→∞.

7 Example Consider tossing three fair coins. Let X = number of heads observed. S={TTT, TTH, THT, HTT, THH, HTH, HHT, HHH}; P(X=0)=P(X=3)=1/8, P(X=1)=P(X=2)=3/8.

x          F(x)
(−∞, 0)    0
[0, 1)     1/8
[1, 2)     1/2
[2, 3)     7/8
[3, ∞)     1
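
The same example can be reproduced by brute-force enumeration; the following Python sketch (added here for illustration) tabulates the pmf and cdf of X:

    from fractions import Fraction
    from itertools import product

    # Enumerate the 8 equally likely outcomes of three fair coin tosses and
    # tabulate the pmf and cdf of X = number of heads.
    outcomes = list(product("HT", repeat=3))
    pmf = {}
    for outcome in outcomes:
        x = outcome.count("H")
        pmf[x] = pmf.get(x, Fraction(0)) + Fraction(1, len(outcomes))

    cdf, running = {}, Fraction(0)
    for x in sorted(pmf):
        running += pmf[x]
        cdf[x] = running

    print(pmf)   # pmf values 1/8, 3/8, 3/8, 1/8 for x = 0, 1, 2, 3
    print(cdf)   # cdf values 1/8, 1/2, 7/8, 1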

8 Example Let

9 JOINT DISTRIBUTIONS In many applications there is more than one random variable of interest, say X₁, X₂, …, Xₖ. JOINT DISCRETE DISTRIBUTIONS The joint probability mass function (joint pmf) of the k-dimensional discrete rv X=(X₁, X₂, …, Xₖ) is f(x₁, x₂, …, xₖ) = P(X₁=x₁, X₂=x₂, …, Xₖ=xₖ).

10 JOINT DISCRETE DISTRIBUTIONS A function f(x₁, x₂, …, xₖ) is the joint pmf for some vector-valued rv X=(X₁, X₂, …, Xₖ) iff the following properties are satisfied: f(x₁, x₂, …, xₖ) ≥ 0 for all (x₁, x₂, …, xₖ), and the sum of f(x₁, x₂, …, xₖ) over all possible values equals 1.

11 Example Tossing two fair dice → 36 possible sample points. Let X = sum of the two dice and Y = |difference of the two dice|. E.g., for (3,3), X=6 and Y=0; for both (4,1) and (1,4), X=5 and Y=3.

12 Example Joint pmf f(x,y) of (X,Y):

y\x    2      3      4      5      6      7      8      9      10     11     12
0      1/36          1/36          1/36          1/36          1/36          1/36
1             1/18          1/18          1/18          1/18          1/18
2                    1/18          1/18          1/18          1/18
3                           1/18          1/18          1/18
4                                  1/18          1/18
5                                         1/18

Empty cells are equal to 0. E.g. P(X=7, Y≤4) = f(7,0)+f(7,1)+f(7,2)+f(7,3)+f(7,4) = 0+1/18+0+1/18+0 = 1/9.
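
A short Python check (not from the slides) that builds this joint pmf by enumerating all 36 outcomes and reproduces P(X=7, Y≤4) = 1/9:

    from fractions import Fraction
    from itertools import product

    # Joint pmf of X = sum of two fair dice and Y = |difference|.
    joint = {}
    for d1, d2 in product(range(1, 7), repeat=2):
        key = (d1 + d2, abs(d1 - d2))
        joint[key] = joint.get(key, Fraction(0)) + Fraction(1, 36)

    # P(X = 7, Y <= 4) -- should reproduce the 1/9 computed on the slide.
    print(sum(p for (x, y), p in joint.items() if x == 7 and y <= 4))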

13 MARGINAL DISCRETE DISTRIBUTIONS If the pair (X₁,X₂) of discrete random variables has the joint pmf f(x₁,x₂), then the marginal pmfs of X₁ and X₂ are f₁(x₁) = Σ over x₂ of f(x₁,x₂) and f₂(x₂) = Σ over x₁ of f(x₁,x₂).

14 Example In the previous example, the marginal pmfs are obtained by summing each column (for X) or each row (for Y) of the joint pmf table; e.g. f_X(7) = 0+1/18+0+1/18+0+1/18 = 1/6 and f_Y(0) = 6·(1/36) = 1/6.
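
As an illustration (added, not from the slides), the marginal pmfs of the dice example can be computed by summing the joint pmf over the other variable:

    from fractions import Fraction
    from itertools import product

    # Marginal pmfs of X = sum and Y = |difference|, obtained by summing the
    # joint pmf over the other variable.
    joint = {}
    for d1, d2 in product(range(1, 7), repeat=2):
        key = (d1 + d2, abs(d1 - d2))
        joint[key] = joint.get(key, Fraction(0)) + Fraction(1, 36)

    marg_x, marg_y = {}, {}
    for (x, y), p in joint.items():
        marg_x[x] = marg_x.get(x, Fraction(0)) + p
        marg_y[y] = marg_y.get(y, Fraction(0)) + p

    print(marg_x[7], marg_y[0])   # 1/6 and 1/6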

15 JOINT DISCRETE DISTRIBUTIONS JOINT CDF: F(x₁,x₂) = P(X₁≤x₁, X₂≤x₂). A function F(x₁,x₂) is a joint cdf iff it is non-decreasing and right-continuous in each argument, F(x₁,x₂) → 0 as either argument → −∞, F(x₁,x₂) → 1 as both arguments → ∞, and F(b₁,b₂) − F(a₁,b₂) − F(b₁,a₂) + F(a₁,a₂) ≥ 0 for all a₁<b₁, a₂<b₂.

16 JOINT CONTINUOUS DISTRIBUTIONS A k-dimensional vector-valued rv X=(X₁, X₂, …, Xₖ) is said to be continuous if there is a function f(x₁, x₂, …, xₖ), called the joint probability density function (joint pdf) of X, such that the joint cdf can be written as F(x₁, x₂, …, xₖ) = ∫ from −∞ to x₁ … ∫ from −∞ to xₖ of f(t₁, …, tₖ) dtₖ … dt₁.

17 JOINT CONTINUOUS DISTRIBUTIONS A function f(x₁, x₂, …, xₖ) is the joint pdf for some vector-valued rv X=(X₁, X₂, …, Xₖ) iff the following properties are satisfied: f(x₁, x₂, …, xₖ) ≥ 0 for all (x₁, x₂, …, xₖ) and ∫ … ∫ f(x₁, x₂, …, xₖ) dx₁ … dxₖ = 1.

18 JOINT CONTINUOUS DISTRIBUTIONS If the pair (X₁,X₂) of continuous random variables has the joint pdf f(x₁,x₂), then the marginal pdfs of X₁ and X₂ are f₁(x₁) = ∫ f(x₁,x₂) dx₂ and f₂(x₂) = ∫ f(x₁,x₂) dx₁, where each integral runs over the whole real line.

19 JOINT DISTRIBUTIONS If X₁, X₂, …, Xₖ are independent of each other, then the joint pdf can be written as f(x₁, x₂, …, xₖ) = f₁(x₁) f₂(x₂) ⋯ fₖ(xₖ), and the joint cdf can be written as F(x₁, x₂, …, xₖ) = F₁(x₁) F₂(x₂) ⋯ Fₖ(xₖ).

20 CONDITIONAL DISTRIBUTIONS If X₁ and X₂ are discrete or continuous random variables with joint pdf f(x₁,x₂), then the conditional pdf of X₂ given X₁=x₁ is defined by f(x₂|x₁) = f(x₁,x₂)/f₁(x₁), provided f₁(x₁) > 0. For independent rvs, f(x₂|x₁) = f₂(x₂).
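
A small Python sketch (illustrative, not part of the slides) that applies this definition to the dice example, computing the conditional pmf of Y given X = 7:

    from fractions import Fraction
    from itertools import product

    # Conditional pmf of Y = |difference| given X = sum = 7,
    # i.e. f(y|x) = f(x,y) / f_X(x).
    joint = {}
    for d1, d2 in product(range(1, 7), repeat=2):
        key = (d1 + d2, abs(d1 - d2))
        joint[key] = joint.get(key, Fraction(0)) + Fraction(1, 36)

    f_x7 = sum(p for (x, y), p in joint.items() if x == 7)          # f_X(7) = 1/6
    cond = {y: p / f_x7 for (x, y), p in joint.items() if x == 7}   # y = 1, 3, 5 each get 1/3
    print(cond)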

21 Example Statistical Analysis of Employment Discrimination Data (example from Dudewicz & Mishra, 1988; data from Dawson, Hankey and Myers, 1982). % promoted (number of employees):

Pay grade    Affected class    Others
5            100 (6)           84 (80)
7            88 (8)            87 (195)
9            93 (29)           88 (335)
10           7 (102)           8 (695)
11           7 (15)            11 (185)
12           10 (10)           7 (165)
13           0 (2)             9 (81)
14           0 (1)             7 (41)

The affected class might be a minority group or, e.g., women.

22 Example, cont. Does this data indicate discrimination against the affected class in promotions in this company? Let X=(X₁,X₂,X₃), where X₁ is the pay grade of an employee, X₂ is an indicator of whether the employee is in the affected class or not, and X₃ is an indicator of whether the employee was promoted or not. x₁∈{5,7,9,10,11,12,13,14}; x₂∈{0,1}; x₃∈{0,1}.

23 Example, cont. E.g., in pay grade 10 of this occupation (X₁=10) there were 102 members of the affected class and 695 members of the other classes. Seven percent of the affected class in pay grade 10 had been promoted, that is, (102)(0.07) ≈ 7 individuals out of 102 had been promoted. Out of 1950 employees, only 173 are in the affected class; this is not atypical in such studies. (Pay grade 10: affected class 7% (102), others 8% (695).)

24 Example, cont. E.g., the probability of a randomly selected employee being in pay grade 10, being in the affected class, and being promoted: P(X₁=10, X₂=1, X₃=1) = 7/1950 = 0.0036 (probability function of a discrete 3-dimensional r.v.). E.g., the probability of a randomly selected employee being in pay grade 10 and promoted: P(X₁=10, X₃=1) = (7+56)/1950 = 0.0323 (note: 8% of 695 → 56) (marginal probability function of X₁ and X₃).

25 Example, cont. E.g., the probability that an employee is in the other class (X₂=0) given that the employee is in pay grade 10 (X₁=10) and was promoted (X₃=1): P(X₂=0 | X₁=10, X₃=1) = P(X₁=10, X₂=0, X₃=1)/P(X₁=10, X₃=1) = (56/1950)/(63/1950) = 0.89 (conditional probability). The probability that an employee is in the affected class (X₂=1) given that the employee is in pay grade 10 (X₁=10) and was promoted (X₃=1): P(X₂=1 | X₁=10, X₃=1) = (7/1950)/(63/1950) = 0.11.
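
These probabilities can be reproduced directly from the counts in the table; the Python sketch below (added for illustration) assumes the promoted counts are obtained by rounding the stated percentages:

    # Counts from the pay-grade-10 row: 7% of the 102 affected-class employees
    # and 8% of the 695 others were promoted; 1950 employees in total.
    total = 1950
    promoted_affected = round(0.07 * 102)   # 7
    promoted_others = round(0.08 * 695)     # 56

    p_joint = promoted_affected / total                                  # P(X1=10, X2=1, X3=1) ~ 0.0036
    p_marg = (promoted_affected + promoted_others) / total               # P(X1=10, X3=1) ~ 0.0323
    p_cond = promoted_affected / (promoted_affected + promoted_others)   # P(X2=1 | X1=10, X3=1) ~ 0.11
    print(round(p_joint, 4), round(p_marg, 4), round(p_cond, 2))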

26 Production problem Two companies manufacture a certain type of sophisticated electronic equipment for the government; to avoid lawsuits, let's call them company C and company D. In the past, company C has had 5% good output, whereas D has had 50% good output (i.e., 95% of C's output and 50% of D's output is not of acceptable quality). The government has just ordered 10,100 of these devices from company D and 11,000 from C (maybe for political reasons, maybe company D does not have a large enough capacity for more orders). Before the production of these devices starts, government scientists develop a new manufacturing method that they believe will almost double the percentage of good devices received. Companies C and D are given this information, but its use is optional: they must each use the new method for at least 100 of their devices, but its use beyond that point is left to their discretion.

27 Production problem, cont. When the devices are received and tested, the following table is observed:

Production method    Standard      New
Bad                  5950          9005
Good                 5050 (46%)    1095 (11%)

Officials blame the scientists and the companies for producing with the new method, which looks clearly inferior. The scientists still claim that the new method has almost doubled the percentage of good items. Which side is right?

28 Production problem, cont. Answer: the scientists are right! The new method nearly doubled the percentage of good items for both companies. Company D knew its production under the standard method was already good, so it used the new method for only the minimum allowed.

Company     C                          D
Method      Standard    New            Standard      New
Bad         950         9000           5000          5
Good        50 (5%)     1000 (10%)     5000 (50%)    95 (95%)

This is called Simpson's paradox. Do not combine the results for the two companies in such cases.
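
The following Python sketch (added here, using the counts from the table above) makes the paradox explicit: the new method wins within each company but loses in the pooled data:

    # Good/bad counts by company and production method, taken from the table above.
    counts = {
        ("C", "standard"): (50, 950),      # (good, bad)
        ("C", "new"):      (1000, 9000),
        ("D", "standard"): (5000, 5000),
        ("D", "new"):      (95, 5),
    }

    def pct_good(good, bad):
        return round(100 * good / (good + bad), 1)

    # Within each company the new method roughly doubles the percentage of good items...
    for company in ("C", "D"):
        print(company,
              pct_good(*counts[(company, "standard")]),
              pct_good(*counts[(company, "new")]))

    # ...yet in the pooled data the new method looks far worse (Simpson's paradox).
    for method in ("standard", "new"):
        good = sum(counts[(c, method)][0] for c in ("C", "D"))
        bad = sum(counts[(c, method)][1] for c in ("C", "D"))
        print("pooled", method, pct_good(good, bad))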

29 Describing the Population We are interested in describing the population by computing various parameters, for instance the population mean and the population variance.

30 EXPECTED VALUES Let X be a rv with pdf f_X(x) and g(X) be a function of X. Then the expected value (or the mean or the mathematical expectation) of g(X) is E[g(X)] = Σ g(x) f_X(x) in the discrete case and E[g(X)] = ∫ g(x) f_X(x) dx in the continuous case, provided the sum or the integral exists, i.e., −∞ < E[g(X)] < ∞.

31 EXPECTED VALUES E[g(X)] is finite if E[|g(X)|] is finite.

32 Population Mean (Expected Value) Given a discrete random variable X with values xᵢ that occur with probabilities p(xᵢ), the population mean of X is μ = E(X) = Σ xᵢ p(xᵢ).

33 Population Variance Let X be a discrete random variable with possible values xᵢ that occur with probabilities p(xᵢ), and let E(X) = μ. The variance of X is defined by σ² = V(X) = E[(X − μ)²] = Σ (xᵢ − μ)² p(xᵢ). The variance is in squared units; the standard deviation σ = √σ² is in the original units.

34 EXPECTED VALUE The expected value or mean value of a continuous random variable X with pdf f(x) is μ = E(X) = ∫ x f(x) dx. The variance of a continuous random variable X with pdf f(x) is σ² = V(X) = ∫ (x − μ)² f(x) dx = E(X²) − μ².

35 EXAMPLE The pmf for the number of defective items in a lot is as follows. Find the expected number and the variance of defective items.

36 EXAMPLE Let X be a random variable with pdf f(x) = 2(1 − x), 0 < x < 1. Find E(X) and Var(X).
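
A quick numerical check (not part of the slides) using a midpoint Riemann sum; the exact answers, E(X) = 1/3 and Var(X) = 1/18, follow from direct integration:

    # Numerical check of E(X) and Var(X) for f(x) = 2(1 - x) on (0, 1).
    n = 100_000
    h = 1.0 / n
    xs = [(i + 0.5) * h for i in range(n)]

    def f(x):
        return 2 * (1 - x)

    mean = sum(x * f(x) * h for x in xs)
    second_moment = sum(x * x * f(x) * h for x in xs)
    print(mean, second_moment - mean ** 2)   # ~0.3333 and ~0.0556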

37 Laws of Expected Value Let X be a rv and a, b, and c be constants. Then, for any two functions g₁(x) and g₂(x) whose expectations exist, E[a g₁(X) + b g₂(X) + c] = a E[g₁(X)] + b E[g₂(X)] + c.

38 Laws of Expected Value and Variance Let X be a rv and c be a constant. Laws of Expected Value: E(c) = c; E(X + c) = E(X) + c; E(cX) = cE(X). Laws of Variance: V(c) = 0; V(X + c) = V(X); V(cX) = c²V(X).

39 EXPECTED VALUE If X and Y are independent, E(XY) = E(X)E(Y). The covariance of X and Y is defined as Cov(X,Y) = E[(X − E(X))(Y − E(Y))] = E(XY) − E(X)E(Y).

40 EXPECTED VALUE If X and Y are independent, Cov(X,Y) = 0. The converse is generally not true! An important special case is the normal distribution: if (X,Y) ~ Bivariate Normal, then X and Y are independent iff Cov(X,Y) = 0.
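
A classic illustration (added, not from the slides): X uniform on {−1, 0, 1} and Y = X² have zero covariance even though Y is completely determined by X:

    from fractions import Fraction

    # Zero covariance does not imply independence.
    support = [-1, 0, 1]
    p = Fraction(1, 3)

    e_x = sum(x * p for x in support)              # E(X)  = 0
    e_y = sum(x * x * p for x in support)          # E(Y)  = 2/3
    e_xy = sum(x * x * x * p for x in support)     # E(XY) = E(X^3) = 0
    print(e_xy - e_x * e_y)                        # Cov(X, Y) = 0

    # But P(Y = 1 | X = 1) = 1 while P(Y = 1) = 2/3, so X and Y are dependent.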

41 EXPECTED VALUE If X₁ and X₂ are independent, Var(X₁ ± X₂) = Var(X₁) + Var(X₂).

42 CONDITIONAL EXPECTATION AND VARIANCE The conditional expectation of X₂ given X₁ = x₁ is E(X₂|x₁) = Σ x₂ f(x₂|x₁) in the discrete case and E(X₂|x₁) = ∫ x₂ f(x₂|x₁) dx₂ in the continuous case. The conditional variance is Var(X₂|x₁) = E(X₂²|x₁) − [E(X₂|x₁)]².

43 CONDITIONAL EXPECTATION AND VARIANCE E(X) = E[E(X|Y)] and Var(X) = E[Var(X|Y)] + Var[E(X|Y)] (EVVE rule). Proofs available in Casella & Berger (1990), pp. 154 & 158.

44 Example - Advanced An insect lays a large number of eggs, each surviving with probability p. Consider a large number of mothers. X: number of survivors in a litter; Y: number of eggs laid. Assume X|Y ~ Binomial(Y, p), Y|Λ ~ Poisson(Λ), and Λ ~ Exponential with mean β. Find the expected number of survivors, i.e. E(X).

45 Example - solution E(X) = E[E(X|Y)] = E(Yp) = p E(Y) = p E[E(Y|Λ)] = p E(Λ) = pβ.
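
A Monte Carlo sanity check (added for illustration; it assumes the hierarchical model stated above, with beta = 10 and p = 0.3 chosen arbitrarily):

    import numpy as np

    # Simulate Lambda ~ Exponential(mean beta), Y | Lambda ~ Poisson(Lambda),
    # X | Y ~ Binomial(Y, p), and compare the sample mean of X with p*beta.
    rng = np.random.default_rng(0)
    beta, p, n = 10.0, 0.3, 200_000

    lam = rng.exponential(scale=beta, size=n)   # mean-beta exponential
    y = rng.poisson(lam)                        # eggs laid, one value per mother
    x = rng.binomial(y, p)                      # survivors in each litter

    print(x.mean(), p * beta)                   # both should be close to 3.0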

46 SOME MATHEMATICAL EXPECTATIONS Population Mean: μ = E(X). Population Variance (a measure of the deviation from the population mean): σ² = Var(X) = E[(X − μ)²]. Population Standard Deviation: σ = √σ². Moments: the k-th moment is μ'ₖ = E(Xᵏ) and the k-th central moment is μₖ = E[(X − μ)ᵏ].

47 SKEWNESS A measure of lack of symmetry in the pdf: Skewness = μ₃/σ³ = E[(X − μ)³]/σ³. If the distribution of X is symmetric around its mean μ, then μ₃ = 0 and hence Skewness = 0.

48 KURTOSIS A measure of the peakedness of the pdf; it describes the shape of the distribution: Kurtosis = μ₄/σ⁴ = E[(X − μ)⁴]/σ⁴. Kurtosis = 3 → Normal; Kurtosis > 3 → Leptokurtic (peaked and fat tails); Kurtosis < 3 → Platykurtic (less peaked and thinner tails).
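
As an added illustration, the sketch below estimates skewness and kurtosis from a large Exponential(1) sample; the exact values for this distribution are 2 and 9, an example of a leptokurtic law:

    import random

    # Sample-based skewness and kurtosis of an Exponential(1) distribution.
    random.seed(0)
    sample = [random.expovariate(1.0) for _ in range(200_000)]

    n = len(sample)
    mean = sum(sample) / n
    m2 = sum((x - mean) ** 2 for x in sample) / n
    m3 = sum((x - mean) ** 3 for x in sample) / n
    m4 = sum((x - mean) ** 4 for x in sample) / n

    print(m3 / m2 ** 1.5, m4 / m2 ** 2)   # skewness ~ 2, kurtosis ~ 9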

49 KURTOSIS What is the range of kurtosis? Claim: Kurtosis ≥ 1. Why? Proof: by the Cauchy–Schwarz (or Jensen) inequality, E[(X − μ)⁴] ≥ (E[(X − μ)²])² = σ⁴, so μ₄/σ⁴ ≥ 1.

50 Problems 1. True or false: The mean, median and mode of a normal distribution with mean µ and standard deviation σ coincide.

51 Problems 2. True or false: In a symmetrical population, the mean, median, and mode coincide. (Kendall & Stuart, 1969, p. 85)

52 Problems 3. True or false: "The mean, median and mode occur in the same order (or reverse order) as in the dictionary; and the median is nearer to the mean than to the mode, just as the corresponding words are nearer together in the dictionary." (Kendall & Stuart, 1969, p. 39)

53 Problems 4. If X, Y, Z and W are random variables, find (show the derivations): a) Cov(X+Y, Z+W) b) Cov(X−Y, Z)

54 Problems 5. a) Calculate the skewness for the given pdf and comment. b) Calculate the kurtosis for the following pdf and comment:

55 Problems 5. c) Consider the discrete random variable X with pdf given below:

x       −3     0     2
f(x)    1/4          1/8

i) Is the distribution of X symmetric around the mean? ii) Show that the 3rd central moment, and hence the skewness, are 0. What does this imply?

56 Problem 6. Let X₁, X₂, X₃ be three independent r.v.s, each with the same variance σ². Define new r.v.s W₁, W₂, W₃ by W₁ = X₁; W₂ = X₁ + X₂; W₃ = X₂ + X₃. Find Cor(W₁,W₂), Cor(W₂,W₃), Cor(W₁,W₃).

