Presentation on theme: "Review of Some Basic Statistical Concepts (Appendix 1A)" — Presentation transcript:

1 Review of Some Basic Statistical Concepts (Appendix 1A). Copyright © 1977 John Wiley & Sons, Inc. All rights reserved.

2 Random Variable. A random variable is a variable whose value is unknown until it is observed; its value results from an experiment. The term random variable implies the existence of some known or unknown probability distribution defined over the set of all possible values of that variable. In contrast, an arbitrary variable does not have a probability distribution associated with its values.

3 Controlled experiment: the values of the explanatory variables are chosen with great care in accordance with an appropriate experimental design. Uncontrolled experiment: the values of the explanatory variables are nonexperimental observations over which the analyst has no control.

4 Discrete Random Variable. A discrete random variable can take only a finite number of values, which can be counted using the positive integers. Example: the prize money from the following lottery is a discrete random variable: first prize $1,000; second prize $50; third prize $5.75. It has only four possible outcomes (a finite number, counted 1, 2, 3, 4): $0.00, $5.75, $50.00, $1,000.00.

5 Continuous Random Variable. A continuous random variable can take any real value (not just whole numbers) in at least one interval on the real line. Examples: gross national product (GNP), the money supply, interest rates, the price of eggs, household income, expenditure on clothing.

6 Dummy Variable. A discrete random variable that is restricted to two possible values (usually 0 and 1) is called a dummy variable (also a binary or indicator variable). Dummy variables account for qualitative differences: gender (0 = male, 1 = female), race (0 = white, 1 = nonwhite), citizenship (0 = U.S., 1 = not U.S.), income class (0 = poor, 1 = rich).

7 A list of all of the possible values taken by a discrete random variable, along with their chances of occurring, is called a probability function or probability density function (pdf). For one fair die:

die          x   f(x)
one dot      1   1/6
two dots     2   1/6
three dots   3   1/6
four dots    4   1/6
five dots    5   1/6
six dots     6   1/6

8 A discrete random variable X has pdf f(x), which gives the probability that X takes the value x: $f(x) = P(X = x_i)$, with $0 \le f(x) \le 1$. If X takes on the n values $x_1, x_2, \ldots, x_n$, then $f(x_1) + f(x_2) + \cdots + f(x_n) = 1$. Therefore, for example, in a throw of one die: $(1/6)+(1/6)+(1/6)+(1/6)+(1/6)+(1/6) = 1$.
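
As a quick check of these properties, here is a minimal Python sketch (the name die_pdf is illustrative, not from the slides) that stores the pdf of one fair die as a dictionary and verifies that every probability lies in [0, 1] and that the probabilities sum to one.

```python
from fractions import Fraction

# pdf of a single fair die as a dictionary {value: probability}
die_pdf = {x: Fraction(1, 6) for x in range(1, 7)}

# every probability lies between 0 and 1 ...
assert all(0 <= p <= 1 for p in die_pdf.values())

# ... and the probabilities sum to one
assert sum(die_pdf.values()) == 1
print(sum(die_pdf.values()))  # 1
```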

9 In a throw of two dice, the discrete random variable X (the sum of the two dice) has pdf:

x:     2     3     4     5     6     7     8     9     10    11    12
f(x):  1/36  2/36  3/36  4/36  5/36  6/36  5/36  4/36  3/36  2/36  1/36

[Figure: the pdf f(x) shown as a bar chart, the height of each bar representing the probability of each possible outcome of the two dice.]
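
A small enumeration sketch can reproduce this pdf; the snippet below (variable names are invented for illustration) tallies all 36 equally likely outcomes of two dice and reports the probabilities as exact fractions.

```python
from collections import Counter
from fractions import Fraction
from itertools import product

# enumerate all 36 equally likely (die1, die2) outcomes and tally the sums
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
pdf = {x: Fraction(c, 36) for x, c in sorted(counts.items())}

print(pdf[7])             # 1/6  (i.e. 6/36)
print(sum(pdf.values()))  # 1
```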

10 A continuous random variable uses the area under a curve, rather than the height f(x), to represent probability. [Figure: the density f(x) of per capita income X in the United States, with two marked income levels ($34,000 and $55,000) and shaded areas under the curve of 0.8676 (green) and 0.1324 (red).]

11 Since a continuous random variable has an uncountably infinite number of values, the probability of any one of them occurring is zero: $P[X = a] = P[a \le X \le a] = 0$. Probability is represented by area, and height alone has no area; an interval for X is needed to get an area under the curve.

12 The area under a curve is the integral of the equation that generates the curve: $P[a < X < b] = \int_a^b f(x)\,dx$. For continuous random variables it is the integral of f(x), and not f(x) itself, which defines the area and, therefore, the probability.
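
To make the "probability as area" idea concrete, this sketch approximates $\int_a^b f(x)\,dx$ numerically for a standard normal density; the choice of density and interval is illustrative, not taken from the slides.

```python
import numpy as np

# P[a < X < b] as the area under f(x) between a and b,
# approximated numerically for a standard normal density
a, b = -1.0, 1.0
x = np.linspace(a, b, 10_001)
f = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

prob = np.trapz(f, x)   # numerical integral of f(x) dx from a to b
print(round(prob, 4))   # ~0.6827
```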

13 Rules of Summation. Rule 1: $\sum_{i=1}^{n} x_i = x_1 + x_2 + \cdots + x_n$. Rule 2: $\sum_{i=1}^{n} k x_i = k \sum_{i=1}^{n} x_i$. Rule 3: $\sum_{i=1}^{n} (x_i + y_i) = \sum_{i=1}^{n} x_i + \sum_{i=1}^{n} y_i$. Note that summation is a linear operator, which means it operates term by term.

14 Rules of Summation (continued). Rule 4: $\sum_{i=1}^{n} (a x_i + b y_i) = a \sum_{i=1}^{n} x_i + b \sum_{i=1}^{n} y_i$. Rule 5: $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i = \frac{x_1 + x_2 + \cdots + x_n}{n}$. The definition of $\bar{x}$ given in Rule 5 implies the following important fact: $\sum_{i=1}^{n} (x_i - \bar{x}) = 0$.

15 Rules of Summation (continued). Rule 6: $\sum_{i=1}^{n} f(x_i) = f(x_1) + f(x_2) + \cdots + f(x_n)$. Notation: $\sum_{x_i} f(x_i) = \sum_{i} f(x_i) = \sum_{i=1}^{n} f(x_i)$. Rule 7: $\sum_{i=1}^{n} \sum_{j=1}^{m} f(x_i, y_j) = \sum_{i=1}^{n} \left[ f(x_i, y_1) + f(x_i, y_2) + \cdots + f(x_i, y_m) \right]$. The order of summation does not matter: $\sum_{i=1}^{n} \sum_{j=1}^{m} f(x_i, y_j) = \sum_{j=1}^{m} \sum_{i=1}^{n} f(x_i, y_j)$.
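
The summation rules are easy to verify numerically. The sketch below (with arbitrary made-up data) checks Rule 4 and the fact that deviations from the sample mean sum to zero.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(1, 10, size=5).astype(float)
y = rng.integers(1, 10, size=5).astype(float)
a, b = 2.0, -3.0

# Rule 4: sum(a*x + b*y) equals a*sum(x) + b*sum(y)
assert np.isclose(np.sum(a * x + b * y), a * np.sum(x) + b * np.sum(y))

# Consequence of Rule 5: deviations from the sample mean sum to zero
assert np.isclose(np.sum(x - x.mean()), 0.0)
```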

16 The Mean of a Random Variable. The mean, or arithmetic average, of a random variable is its mathematical expectation or expected value, E(X).

17 Expected Value. There are two entirely different, but mathematically equivalent, ways of determining the expected value. 1. Empirically: the expected value of a random variable X is the average value of the random variable over an infinite number of repetitions of the experiment. In other words, draw an infinite number of samples and average the values of X that you get.

18 Expected Value. 2. Analytically: the expected value of a discrete random variable X is determined by weighting all the possible values of X by the corresponding probability density function values f(x) and summing them up. In other words: $E(X) = x_1 f(x_1) + x_2 f(x_2) + \cdots + x_n f(x_n)$.

19 Empirical vs. Analytical. In the empirical case, as the sample size goes to infinity the values of X occur with a frequency equal to the corresponding f(x) in the analytical expression. As the sample size goes to infinity, the empirical and analytical methods produce the same value.

20 Empirical (sample) mean: $\bar{x} = \sum_{i=1}^{n} x_i / n$, where n is the number of sample observations. Analytical mean: $E(X) = \sum_{i=1}^{n} x_i f(x_i)$, where n is the number of possible values of $x_i$. Notice how the meaning of n changes.
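
One way to see the two meanings of n is a simulation. The sketch below (the roll count and seed are arbitrary choices) compares the analytical mean of one fair die with the empirical mean of a large number of simulated rolls.

```python
import numpy as np

rng = np.random.default_rng(42)

# analytical mean of one fair die: sum of x * f(x)
values = np.arange(1, 7)
probs = np.full(6, 1 / 6)
analytical = np.sum(values * probs)      # 3.5

# empirical mean from a large number of simulated rolls
rolls = rng.integers(1, 7, size=1_000_000)
empirical = rolls.mean()

print(analytical, round(empirical, 3))   # 3.5 and a value close to 3.5
```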

21 The expected value of X: $E(X) = \sum_{i=1}^{n} x_i f(x_i)$. The expected value of X-squared: $E(X^2) = \sum_{i=1}^{n} x_i^2 f(x_i)$. The expected value of X-cubed: $E(X^3) = \sum_{i=1}^{n} x_i^3 f(x_i)$. It is important to notice that $f(x_i)$ does not change!

22 For the pdf f(x) = .1, .3, .3, .2, .1 on x = 0, 1, 2, 3, 4:
$E(X) = 0(.1) + 1(.3) + 2(.3) + 3(.2) + 4(.1) = 1.9$
$E(X^2) = 0^2(.1) + 1^2(.3) + 2^2(.3) + 3^2(.2) + 4^2(.1) = 0 + .3 + 1.2 + 1.8 + 1.6 = 4.9$
$E(X^3) = 0^3(.1) + 1^3(.3) + 2^3(.3) + 3^3(.2) + 4^3(.1) = 0 + .3 + 2.4 + 5.4 + 6.4 = 14.5$
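
These three expectations can be reproduced in a few lines; the sketch below uses the same pdf as the slide (array names are illustrative).

```python
import numpy as np

x = np.array([0, 1, 2, 3, 4], dtype=float)
f = np.array([0.1, 0.3, 0.3, 0.2, 0.1])   # pdf from the slide

print(np.sum(x * f))      # E(X)   = 1.9
print(np.sum(x**2 * f))   # E(X^2) = 4.9
print(np.sum(x**3 * f))   # E(X^3) = 14.5
```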

23 The expected value of a function of X: $E[g(X)] = \sum_{i=1}^{n} g(x_i) f(x_i)$. If $g(X) = g_1(X) + g_2(X)$, then $E[g(X)] = \sum_{i=1}^{n} [g_1(x_i) + g_2(x_i)] f(x_i) = \sum_{i=1}^{n} g_1(x_i) f(x_i) + \sum_{i=1}^{n} g_2(x_i) f(x_i)$, so $E[g(X)] = E[g_1(X)] + E[g_2(X)]$.

24 Adding and Subtracting Random Variables. $E(X + Y) = E(X) + E(Y)$ and $E(X - Y) = E(X) - E(Y)$.

25 Adding a constant to a variable adds that constant to its expected value: $E(X + a) = E(X) + a$. Multiplying a variable by a constant multiplies its expected value by that constant: $E(bX) = b\,E(X)$.

26 Variance. var(X) is the average of the squared deviations around the mean of X, that is, the expected value of the squared deviations around the expected value of X: $\mathrm{var}(X) = E[(X - E(X))^2]$.

27 $\mathrm{var}(X) = E[(X - EX)^2] = E[X^2 - 2X\,EX + (EX)^2] = E(X^2) - 2\,EX\,EX + (EX)^2 = E(X^2) - 2(EX)^2 + (EX)^2 = E(X^2) - (EX)^2$. Therefore $\mathrm{var}(X) = E(X^2) - (EX)^2$.

28 Variance of a discrete random variable X: $\mathrm{var}(X) = \sum_{i=1}^{n} (x_i - EX)^2 f(x_i)$. The standard deviation is the square root of the variance.

29 Calculating the variance for a discrete random variable X:

x_i   f(x_i)   x_i f(x_i)   x_i - EX          (x_i - EX)^2 f(x_i)
2     .1       .2           2 - 4.3 = -2.3    5.29(.1) = .529
3     .3       .9           3 - 4.3 = -1.3    1.69(.3) = .507
4     .1       .4           4 - 4.3 = -0.3     .09(.1) = .009
5     .2      1.0           5 - 4.3 =  0.7     .49(.2) = .098
6     .3      1.8           6 - 4.3 =  1.7    2.89(.3) = .867

$EX = \sum_{i=1}^{n} x_i f(x_i) = .2 + .9 + .4 + 1.0 + 1.8 = 4.3$
$\mathrm{var}(X) = \sum_{i=1}^{n} (x_i - EX)^2 f(x_i) = .529 + .507 + .009 + .098 + .867 = 2.01$
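
The same table can be verified with a short script (array names are illustrative):

```python
import numpy as np

x = np.array([2, 3, 4, 5, 6], dtype=float)
f = np.array([0.1, 0.3, 0.1, 0.2, 0.3])   # pdf from the slide

ex = np.sum(x * f)                  # E(X) = 4.3
var = np.sum((x - ex) ** 2 * f)     # sum of (x - EX)^2 f(x) = 2.01
std = np.sqrt(var)                  # standard deviation

print(ex, round(var, 2), round(std, 3))   # 4.3, 2.01, ~1.418
```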

30 If $Z = a + cX$, then $\mathrm{var}(Z) = \mathrm{var}(a + cX) = E\{[(a + cX) - E(a + cX)]^2\} = c^2\,\mathrm{var}(X)$.

31 Covariance. The covariance between two random variables, X and Y, measures the linear association between them: $\mathrm{cov}(X,Y) = E[(X - EX)(Y - EY)]$. Note that variance is a special case of covariance: $\mathrm{cov}(X,X) = \mathrm{var}(X) = E[(X - EX)^2]$.

32 $\mathrm{cov}(X,Y) = E[(X - EX)(Y - EY)] = E[XY - X\,EY - Y\,EX + EX\,EY] = E(XY) - EX\,EY - EY\,EX + EX\,EY = E(XY) - 2\,EX\,EY + EX\,EY = E(XY) - EX\,EY$. Therefore $\mathrm{cov}(X,Y) = E(XY) - EX\,EY$.

33 Covariance example. Joint pdf with marginals (rows X = 0, 1; columns Y = 1, 2):

         Y = 1   Y = 2   f(x)
X = 0     .45     .15    .60
X = 1     .05     .35    .40
f(y)      .50     .50

$EX = 0(.60) + 1(.40) = .40$
$EY = 1(.50) + 2(.50) = 1.50$
$E(XY) = (0)(1)(.45) + (0)(2)(.15) + (1)(1)(.05) + (1)(2)(.35) = .75$
$EX \cdot EY = (.40)(1.50) = .60$
$\mathrm{cov}(X,Y) = E(XY) - EX\,EY = .75 - .60 = .15$
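
For a numerical check, the sketch below rebuilds the joint table as an array (the layout, rows for X and columns for Y, mirrors the table above; the variable names are invented) and recomputes E(X), E(Y), E(XY), and the covariance.

```python
import numpy as np

# joint pdf f(x, y); rows are X = 0, 1 and columns are Y = 1, 2
joint = np.array([[0.45, 0.15],
                  [0.05, 0.35]])
x_vals = np.array([0.0, 1.0])
y_vals = np.array([1.0, 2.0])

fx = joint.sum(axis=1)   # marginal of X: [.60, .40]
fy = joint.sum(axis=0)   # marginal of Y: [.50, .50]

ex = np.sum(x_vals * fx)                         # 0.40
ey = np.sum(y_vals * fy)                         # 1.50
exy = np.sum(np.outer(x_vals, y_vals) * joint)   # E(XY) = 0.75

print(round(exy - ex * ey, 2))                   # cov(X, Y) = 0.15
```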

34 Joint pdf. A joint probability density function, f(x,y), provides the probabilities associated with the joint occurrence of all of the possible pairs of X and Y.

35 Joint pdf f(x,y) from a survey of College City, NY, where X is the number of vacation homes owned and Y is the number of college graduates in the household:

              Y = 1           Y = 2
X = 0    f(0,1) = .45    f(0,2) = .15
X = 1    f(1,1) = .05    f(1,2) = .35

36 Calculating the expected value of functions of two random variables: $E[g(X,Y)] = \sum_i \sum_j g(x_i, y_j) f(x_i, y_j)$. For example, $E(XY) = \sum_i \sum_j x_i y_j f(x_i, y_j) = (0)(1)(.45) + (0)(2)(.15) + (1)(1)(.05) + (1)(2)(.35) = .75$.

37 Marginal pdf. The marginal probability density functions, f(x) and f(y), of discrete random variables can be obtained by summing the joint pdf f(x,y) over the values of Y to obtain f(x), and over the values of X to obtain f(y): $f(x_i) = \sum_j f(x_i, y_j)$ and $f(y_j) = \sum_i f(x_i, y_j)$.

38 Marginal pdfs from the joint table:

         Y = 1   Y = 2   marginal pdf for X
X = 0     .45     .15    f(X = 0) = .60
X = 1     .05     .35    f(X = 1) = .40

marginal pdf for Y:  f(Y = 1) = .50,  f(Y = 2) = .50

39 Conditional pdf. The conditional probability density functions, f(x|y) of X given Y = y and f(y|x) of Y given X = x, are obtained by dividing f(x,y) by f(y) to get f(x|y), and by f(x) to get f(y|x): $f(x|y) = \dfrac{f(x,y)}{f(y)}$ and $f(y|x) = \dfrac{f(x,y)}{f(x)}$.

40 Conditional pdfs from the joint table:

f(Y = 1 | X = 0) = .45/.60 = .75      f(Y = 2 | X = 0) = .15/.60 = .25
f(Y = 1 | X = 1) = .05/.40 = .125     f(Y = 2 | X = 1) = .35/.40 = .875
f(X = 0 | Y = 1) = .45/.50 = .90      f(X = 1 | Y = 1) = .05/.50 = .10
f(X = 0 | Y = 2) = .15/.50 = .30      f(X = 1 | Y = 2) = .35/.50 = .70
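
The conditional tables can be reproduced by dividing the joint array by the appropriate marginal; a minimal sketch, assuming the same row/column layout as before:

```python
import numpy as np

joint = np.array([[0.45, 0.15],    # X = 0 row, columns Y = 1, 2
                  [0.05, 0.35]])   # X = 1 row

fx = joint.sum(axis=1)             # marginal of X
fy = joint.sum(axis=0)             # marginal of Y

f_y_given_x = joint / fx[:, None]  # each row divided by f(x)
f_x_given_y = joint / fy[None, :]  # each column divided by f(y)

print(f_y_given_x)  # [[0.75, 0.25], [0.125, 0.875]]
print(f_x_given_y)  # [[0.9, 0.3], [0.1, 0.7]]
```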

41 Independence. X and Y are independent random variables if their joint pdf, f(x,y), is the product of their respective marginal pdfs, f(x) and f(y): $f(x_i, y_j) = f(x_i) f(y_j)$. For independence, this must hold for all pairs of i and j.

42 Not independent. For the joint table above:

         Y = 1   Y = 2   marginal pdf for X
X = 0     .45     .15    f(X = 0) = .60
X = 1     .05     .35    f(X = 1) = .40

marginal pdf for Y:  f(Y = 1) = .50,  f(Y = 2) = .50

The products of the marginals, .50 × .60 = .30 and .50 × .40 = .20, are the values the joint pdf would have to take for X and Y to be independent; since the actual joint probabilities differ, X and Y are not independent.
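
Checking independence amounts to comparing the joint pdf with the outer product of the marginals, as in this short sketch (same assumed layout as the earlier snippets):

```python
import numpy as np

joint = np.array([[0.45, 0.15],
                  [0.05, 0.35]])

fx = joint.sum(axis=1)   # [.60, .40]
fy = joint.sum(axis=0)   # [.50, .50]

product_of_marginals = np.outer(fx, fy)   # [[.30, .30], [.20, .20]]

# False: the joint pdf differs from the product of the marginals,
# so X and Y are not independent
print(np.allclose(joint, product_of_marginals))
```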

43 Correlation. The correlation between two random variables X and Y is their covariance divided by the square roots of their respective variances: $\rho(X,Y) = \dfrac{\mathrm{cov}(X,Y)}{\sqrt{\mathrm{var}(X)\,\mathrm{var}(Y)}}$. Correlation is a pure number falling between -1 and 1.

44 Correlation example. From the joint table above, EX = .40, EY = 1.50, and cov(X,Y) = .15.
$E(X^2) = 0^2(.60) + 1^2(.40) = .40$, so $\mathrm{var}(X) = E(X^2) - (EX)^2 = .40 - (.40)^2 = .24$.
$E(Y^2) = 1^2(.50) + 2^2(.50) = .50 + 2.00 = 2.50$, so $\mathrm{var}(Y) = E(Y^2) - (EY)^2 = 2.50 - (1.50)^2 = .25$.
$\rho(X,Y) = \dfrac{\mathrm{cov}(X,Y)}{\sqrt{\mathrm{var}(X)\,\mathrm{var}(Y)}} = \dfrac{.15}{\sqrt{(.24)(.25)}} \approx .61$.
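
The correlation of about .61 can be recomputed directly from the joint table; the sketch below follows the same shortcut formulas for the variances (variable names are illustrative).

```python
import numpy as np

joint = np.array([[0.45, 0.15],
                  [0.05, 0.35]])
x_vals = np.array([0.0, 1.0])
y_vals = np.array([1.0, 2.0])
fx, fy = joint.sum(axis=1), joint.sum(axis=0)

ex, ey = x_vals @ fx, y_vals @ fy
var_x = (x_vals**2) @ fx - ex**2                           # 0.24
var_y = (y_vals**2) @ fy - ey**2                           # 0.25
cov = np.sum(np.outer(x_vals, y_vals) * joint) - ex * ey   # 0.15

rho = cov / np.sqrt(var_x * var_y)
print(round(rho, 3))                                       # ~0.612
```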

45 Zero Covariance & Correlation. Independent random variables have zero covariance and, therefore, zero correlation. The converse is not true.

46 The expected value of a weighted sum of random variables is the sum of the expectations of the individual terms: $E[c_1 X + c_2 Y] = c_1 EX + c_2 EY$. Since expectation is a linear operator, it can be applied term by term. In general, for random variables $X_1, \ldots, X_n$: $E[c_1 X_1 + \cdots + c_n X_n] = c_1 EX_1 + \cdots + c_n EX_n$.

47 The variance of a weighted sum of random variables is the sum of the variances, each times the square of its weight, plus twice the covariance of each pair of random variables times the product of their weights.
Weighted sum: $\mathrm{var}(c_1 X + c_2 Y) = c_1^2\,\mathrm{var}(X) + c_2^2\,\mathrm{var}(Y) + 2 c_1 c_2\,\mathrm{cov}(X,Y)$.
Weighted difference: $\mathrm{var}(c_1 X - c_2 Y) = c_1^2\,\mathrm{var}(X) + c_2^2\,\mathrm{var}(Y) - 2 c_1 c_2\,\mathrm{cov}(X,Y)$.
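
The weighted-sum rule can be illustrated by simulation. In the sketch below the two correlated variables, their common component, the weights, and the sample size are all invented for illustration; the two sides agree up to sampling noise.

```python
import numpy as np

rng = np.random.default_rng(7)
c1, c2 = 2.0, 3.0

# two correlated variables built from a shared component (illustrative)
z = rng.standard_normal(1_000_000)
x = z + rng.standard_normal(1_000_000)
y = z + rng.standard_normal(1_000_000)

lhs = np.var(c1 * x + c2 * y)
rhs = (c1**2 * np.var(x) + c2**2 * np.var(y)
       + 2 * c1 * c2 * np.cov(x, y)[0, 1])

print(round(lhs, 2), round(rhs, 2))   # nearly identical, sampling noise aside
```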

48 The Normal Distribution. If $Y \sim N(\mu, \sigma^2)$, its pdf is $f(y) = \dfrac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left[-\dfrac{(y-\mu)^2}{2\sigma^2}\right]$. [Figure: the bell-shaped density f(y) plotted against y.]

49 The Standardized Normal. $Z = (y - \mu)/\sigma$, so $Z \sim N(0, 1)$ with pdf $f(z) = \dfrac{1}{\sqrt{2\pi}} \exp\!\left(-\dfrac{z^2}{2}\right)$.

50 If $Y \sim N(\mu, \sigma^2)$, then $P[Y > a] = P\!\left[\dfrac{Y - \mu}{\sigma} > \dfrac{a - \mu}{\sigma}\right] = P\!\left[Z > \dfrac{a - \mu}{\sigma}\right]$. [Figure: the normal density f(y) with the area to the right of a shaded.]

51 If $Y \sim N(\mu, \sigma^2)$, then $P[a < Y < b] = P\!\left[\dfrac{a - \mu}{\sigma} < \dfrac{Y - \mu}{\sigma} < \dfrac{b - \mu}{\sigma}\right] = P\!\left[\dfrac{a - \mu}{\sigma} < Z < \dfrac{b - \mu}{\sigma}\right]$. [Figure: the normal density f(y) with the area between a and b shaded.]
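
Standardization makes these probabilities computable with nothing more than the standard normal cdf. A minimal sketch using Python's math.erf, with illustrative values of mu, sigma, a, and b:

```python
from math import erf, sqrt

def std_normal_cdf(z):
    """P[Z <= z] for a standard normal, via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# P[a < Y < b] for Y ~ N(mu, sigma^2), by standardizing the endpoints
mu, sigma = 10.0, 2.0   # illustrative values
a, b = 8.0, 13.0

prob = std_normal_cdf((b - mu) / sigma) - std_normal_cdf((a - mu) / sigma)
print(round(prob, 4))   # P[-1 < Z < 1.5], roughly 0.7745
```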

52 Linear combinations of jointly normally distributed random variables are themselves normally distributed. If $Y_1 \sim N(\mu_1, \sigma_1^2),\ Y_2 \sim N(\mu_2, \sigma_2^2),\ \ldots,\ Y_n \sim N(\mu_n, \sigma_n^2)$ and $W = c_1 Y_1 + c_2 Y_2 + \cdots + c_n Y_n$, then $W \sim N[E(W), \mathrm{var}(W)]$.

53 Chi-Square. If $Z_1, Z_2, \ldots, Z_m$ denote m independent N(0,1) random variables, and $V = Z_1^2 + Z_2^2 + \cdots + Z_m^2$, then $V \sim \chi^2_{(m)}$: V is chi-square with m degrees of freedom. Mean: $E[V] = E[\chi^2_{(m)}] = m$. Variance: $\mathrm{var}[V] = \mathrm{var}[\chi^2_{(m)}] = 2m$.
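
The mean m and variance 2m can be checked by simulating V as a sum of squared N(0,1) draws; the degrees of freedom and sample size below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 5   # degrees of freedom (illustrative)

# V = Z1^2 + ... + Zm^2 for independent N(0,1) draws
z = rng.standard_normal((1_000_000, m))
v = np.sum(z**2, axis=1)

print(round(v.mean(), 2))   # close to m  = 5
print(round(v.var(), 2))    # close to 2m = 10
```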

54 Student-t. If $Z \sim N(0,1)$ and $V \sim \chi^2_{(m)}$, and if Z and V are independent, then $t = \dfrac{Z}{\sqrt{V/m}} \sim t_{(m)}$: t is Student-t with m degrees of freedom. Mean: $E[t] = E[t_{(m)}] = 0$ (symmetric about zero). Variance: $\mathrm{var}[t] = \mathrm{var}[t_{(m)}] = m/(m-2)$.

55 F Statistic. If $V_1 \sim \chi^2_{(m_1)}$ and $V_2 \sim \chi^2_{(m_2)}$, and if $V_1$ and $V_2$ are independent, then $F = \dfrac{V_1 / m_1}{V_2 / m_2} \sim F_{(m_1, m_2)}$: F is an F statistic with $m_1$ numerator degrees of freedom and $m_2$ denominator degrees of freedom.

