
1 Virtual University of Pakistan Lecture No. 26 Statistics and Probability Miss Saleha Naghmi Habibullah

2 IN THE LAST LECTURE, YOU LEARNT Mathematical Expectation, Variance & Moments of Continuous Probability Distributions Bivariate Probability Distribution (Discrete case)

3 TOPICS FOR TODAY BIVARIATE Probability Distributions (Discrete and Continuous) Properties of Expected Values in the case of Bivariate Probability Distributions

4 You will recall that, in the last lecture, we began the discussion of the example in which we were drawing 2 balls out of an urn containing 3 black, 2 red and 3 green balls, and that, in this example, we were interested in computing quite a few quantities.

5 I encouraged you to compute the probabilities of the various possible combinations of x and y values, and, in particular, I motivated you to think about the 3 probabilities that were equal to zero. Let us re-commence the discussion of this particular example:

6 EXAMPLE: An urn contains 3 black, 2 red and 3 green balls and 2 balls are selected at random from it. If X is the number of black balls and Y is the number of red balls selected, then find

7 i)the joint probability distribution f(x, y); ii)P(X + Y ≤ 1); iii)the marginal probability distributions g(x) and h(y); iv)the conditional probability distribution f(x | 1); v)P(X = 0 | Y = 1).

8 As indicated in the last lecture, using the rule of combinations in conjunction with the classical definition of probability, the probability of the first cell came out to be 3/28. By similar calculations, we obtain all the remaining probabilities, and, as such, we obtain the following bivariate table:

9 The joint probability distribution of X and Y:

          y = 0    y = 1    y = 2
x = 0     3/28     6/28     1/28
x = 1     9/28     6/28     0
x = 2     3/28     0        0

The three zero probabilities correspond to the cells (1, 2), (2, 1) and (2, 2), since no more than 2 balls are drawn in all.

10 This joint p.d. of the two r.v.'s (X, Y) can be represented by the formula

f(x, y) = C(3, x) C(2, y) C(3, 2 − x − y) / C(8, 2), for x, y = 0, 1, 2 and 0 ≤ x + y ≤ 2.

11 ii)To compute P(X + Y ≤ 1), we see that x + y ≤ 1 for the cells (0, 0), (0, 1) and (1, 0). Therefore P(X + Y ≤ 1) = f(0, 0) + f(0, 1) + f(1, 0) = 3/28 + 6/28 + 9/28 = 18/28 = 9/14
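The cell probabilities and the probability just computed can be checked with a short script; this is an illustrative sketch using exact rational arithmetic, not part of the original lecture:

```python
from math import comb
from fractions import Fraction

# Joint p.d. of X (number of black balls) and Y (number of red balls)
# when 2 balls are drawn from an urn with 3 black, 2 red, 3 green balls:
# f(x, y) = C(3, x) C(2, y) C(3, 2 - x - y) / C(8, 2)
def f(x, y):
    if x + y > 2:          # impossible: only 2 balls are drawn
        return Fraction(0)
    return Fraction(comb(3, x) * comb(2, y) * comb(3, 2 - x - y), comb(8, 2))

print(f(0, 0))   # 3/28, the first cell

total = sum(f(x, y) for x in range(3) for y in range(3))
print(total)     # 1, as required of a probability distribution

p = sum(f(x, y) for x in range(3) for y in range(3) if x + y <= 1)
print(p)         # 9/14
```

Using Fraction avoids floating-point rounding, so the printed values agree exactly with the hand computation.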


13 iii)The marginal p.d.'s, obtained by summing the joint probabilities over the other variable, are:

g(0) = 10/28, g(1) = 15/28, g(2) = 3/28, and
h(0) = 15/28, h(1) = 12/28, h(2) = 1/28.

14 iv)By definition, the conditional p.d. f(x | 1) is f(x | 1) = P(X = x | Y = 1) = f(x, 1) / h(1)

15 Now h(1) = f(0, 1) + f(1, 1) + f(2, 1) = 6/28 + 6/28 + 0 = 12/28.

16 Therefore f(x | 1) = f(x, 1) / (12/28), for x = 0, 1, 2.

17 Hence the conditional p.d. of X given that Y = 1, is f(0 | 1) = 1/2, f(1 | 1) = 1/2, and f(2 | 1) = 0.

18 v)Finally, P(X = 0 | Y = 1) = f(0 | 1) = 1/2
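As a check on parts iv) and v), the conditional distribution can be recomputed from the combinatorial joint p.d.; this sketch is an illustration, not part of the lecture:

```python
from math import comb
from fractions import Fraction

# Joint p.d. of the urn example (3 black, 2 red, 3 green; 2 balls drawn).
def f(x, y):
    if x + y > 2:
        return Fraction(0)
    return Fraction(comb(3, x) * comb(2, y) * comb(3, 2 - x - y), comb(8, 2))

# Marginal of Y at y = 1, then the conditional p.d. f(x | 1) = f(x, 1) / h(1).
h1 = sum(f(x, 1) for x in range(3))
cond = {x: f(x, 1) / h1 for x in range(3)}

print(h1)                 # 3/7 (i.e. 12/28)
for x in range(3):
    print(x, cond[x])     # 0 1/2, then 1 1/2, then 2 0
```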


21 Next, we consider the concept of BIVARIATE CONTINUOUS PROBABILITY DISTRIBUTION:

22 CONTINUOUS BIVARIATE DISTRIBUTIONS: The bivariate probability density function of continuous r.v.’s X and Y is an integrable function f(x,y) satisfying the following properties:

23 (i) f(x, y) ≥ 0, for all (x, y), and
(ii) the total volume under the surface is unity, i.e. the double integral of f(x, y) over the entire XY-plane, from −∞ to +∞ in each variable, is equal to 1.
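These properties can be illustrated numerically. The sketch below uses a hypothetical density, f(x, y) = x + y on the unit square (chosen purely for illustration; it is not the lecture's example), and approximates the double integrals with midpoint Riemann sums:

```python
# Hypothetical density for illustration: f(x, y) = x + y on 0 < x < 1, 0 < y < 1.
# It is non-negative on the region, and its total volume is 1.
def f(x, y):
    return x + y

n = 400
h = 1.0 / n

# Property (ii): volume under the whole surface, by a midpoint Riemann sum.
total = sum(f((i + 0.5) * h, (j + 0.5) * h)
            for i in range(n) for j in range(n)) * h * h
print(round(total, 6))   # 1.0

# A probability is the volume over a sub-rectangle, here 0 < x < 1/2, 0 < y < 1/2.
prob = sum(f((i + 0.5) * h, (j + 0.5) * h)
           for i in range(n // 2) for j in range(n // 2)) * h * h
print(round(prob, 6))    # 0.125
```

The midpoint rule is exact for a linear integrand, so both sums match the exact integrals up to floating-point rounding.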

24 Let us try to understand the graphic picture of a bivariate continuous probability distribution:

25 The region of the XY-plane depicted by the interval (x1 < X < x2; y1 < Y < y2) is shown graphically:

26 [Figure: the rectangular region in the XY-plane with corners (x1, y1), (x1, y2), (x2, y1) and (x2, y2).]

27 Just as, in the univariate continuous case, the probability function f(x) gives us a curve under which we compute areas in order to find various probabilities, in the bivariate continuous case the probability function f(x, y) gives a SURFACE,

28 and, when we compute the probability that the random variable X lies between x1 and x2 AND, simultaneously, the random variable Y lies between y1 and y2, we are computing the VOLUME under the surface given by f(x, y) over this region.

29 The MARGINAL p.d.f. of the continuous r.v. X is g(x) = ∫ f(x, y) dy, and that of the r.v. Y is h(y) = ∫ f(x, y) dx, each integral being taken from −∞ to +∞.

30 That is, the marginal p.d.f. of either of the variables is obtained by integrating out the other variable from the joint p.d.f. between the limits −∞ and +∞.

31 The CONDITIONAL p.d.f. of the continuous r.v. X, given that Y takes the value y, is defined to be f(x | y) = f(x, y) / h(y), where f(x, y) and h(y) are respectively the joint p.d.f. of X and Y, and the marginal p.d.f. of Y, and h(y) > 0.

32 Similarly, the conditional p.d.f. of the continuous r.v. Y given that X = x, is f(y | x) = f(x, y) / g(x), provided that g(x) > 0.
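As an illustration of the marginal and conditional definitions, the sketch below uses a hypothetical density, f(x, y) = x + y on the unit square (not the lecture's example); the y variable is integrated out numerically:

```python
# Hypothetical density for illustration: f(x, y) = x + y on the unit square.
# Integrating y out gives the marginal g(x) = x + 1/2, and the conditional
# p.d.f. of X given Y = y is f(x | y) = f(x, y) / h(y) = (x + y) / (y + 1/2).
def f(x, y):
    return x + y

def g(x, n=1000):
    # integrate f(x, y) over y in (0, 1) by the midpoint rule
    h = 1.0 / n
    return sum(f(x, (j + 0.5) * h) for j in range(n)) * h

print(round(g(0.3), 6))            # 0.8, i.e. x + 1/2 at x = 0.3
cond = f(0.3, 0.5) / (0.5 + 0.5)   # f(x | y) at x = 0.3, y = 0.5
print(round(cond, 6))              # 0.8
```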

33 It is worth noting that the conditional p.d.f.'s satisfy all the requirements of a UNIVARIATE density function.

34 Finally:

35 Two continuous r.v.’s X and Y are said to be Statistically Independent, if and only if their joint density f(x,y) can be factorized in the form f(x,y) = g(x)h(y) for all possible values of X and Y.
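The factorization criterion can be illustrated with two hypothetical densities on the unit square (neither is from the lecture): f1(x, y) = 4xy factorizes as g(x) h(y) with g(x) = 2x and h(y) = 2y, so its X and Y are independent, while f2(x, y) = x + y has marginals g(x) = x + 1/2 and h(y) = y + 1/2 whose product does not reproduce the joint density:

```python
# Check the factorization f(x, y) = g(x) h(y) at a sample point.
# f1(x, y) = 4xy with g(x) = 2x, h(y) = 2y        -> independent
# f2(x, y) = x + y with g(x) = x + 1/2, h(y) = y + 1/2 -> dependent
x, y = 0.3, 0.6

factorizes_f1 = abs(4 * x * y - (2 * x) * (2 * y)) < 1e-12
factorizes_f2 = abs((x + y) - (x + 0.5) * (y + 0.5)) < 1e-12

print(factorizes_f1)   # True
print(factorizes_f2)   # False
```

A single sample point can only refute factorization; establishing it requires the identity to hold for all x and y, as the definition states.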

36 EXAMPLE Given the following joint p.d.f.

37 a)Verify that f(x, y) satisfies the conditions of a joint p.d.f. b)Find P(X < 3/2, Y < 5/2).

38 c)Find the marginal p.d.f. g(x) and h(y). d)Find the conditional p.d.f. f(x | y) and f(y | x).

39 SOLUTION a)The joint density f(x, y) will be a p.d.f. if (i)f(x, y) ≥ 0 and (ii)the double integral of f(x, y) over the entire region equals 1.

40 Now f(x, y) is clearly greater than zero for all x and y in the given region, and the double integral of f(x, y) over that region comes out to be equal to 1.

42 Thus f(x,y) has the properties of a joint p.d.f.

43 b)To determine the probability of a value of the r.v. (X, Y) falling in the region X < 3/2, Y < 5/2, we find P(X < 3/2, Y < 5/2) by integrating the joint p.d.f. f(x, y) over this region.

45 c)The marginal p.d.f. of X is obtained by integrating y out of the joint p.d.f. between the limits of y, and, similarly, the marginal p.d.f. of Y is obtained by integrating x out of the joint p.d.f. d)The conditional p.d.f.'s f(x | y) and f(y | x) are then obtained by dividing the joint p.d.f. by h(y) and g(x) respectively.

50 Next, we consider two important properties of mathematical expectation which are valid in the case of BIVARIATE probability distributions:

51 PROPERTY NO. 1 The expected value of the sum of any two random variables is equal to the sum of their expected values, i.e. E(X + Y) = E(X) + E(Y).

52 PROPERTY NO. 2 The expected value of the product of two independent r.v.'s is equal to the product of their expected values, i.e. E(XY) = E(X) E(Y). (Property No. 1 also holds for the difference of r.v.'s, i.e. E(X – Y) = E(X) – E(Y).)

53 It should be noted that these properties are also valid for continuous random variables, in which case the summations are replaced by integrals.

54 Let us now verify these properties for the following example:

55 EXAMPLE: Let X and Y be two discrete r.v.’s with the following joint p.d.

56 Find E(X), E(Y), E(X + Y), and E(XY).

57 SOLUTION To determine the expected values of X and Y, we first find the marginal p.d. g(x) and h(y) by adding over the columns and rows of the two-way table as below:

58 Adding over the columns and rows of the table, the marginal p.d.'s come out to be g(2) = 0.40, g(4) = 0.60, and h(1) = 0.25, h(3) = 0.50, h(5) = 0.25.

59 Now E(X) = Σ xj g(xj) = 2 × 0.40 + 4 × 0.60 = 0.80 + 2.40 = 3.2, and E(Y) = Σ yi h(yi) = 1 × 0.25 + 3 × 0.50 + 5 × 0.25 = 0.25 + 1.50 + 1.25 = 3.0. Hence E(X) + E(Y) = 3.2 + 3.0 = 6.2.


61 In order to compute E(XY) directly, we apply the formula E(XY) = Σ Σ xj yi f(xj, yi).
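The two properties can be sketched in code for this example. Since the transcript omits the two-way table itself, the joint probabilities below are an assumption, taken as the product of the stated marginals (which makes X and Y independent, in line with the lecture's conclusion):

```python
# The marginal p.d.'s stated in the lecture:
g = {2: 0.40, 4: 0.60}            # g(x)
h = {1: 0.25, 3: 0.50, 5: 0.25}   # h(y)

# ASSUMPTION: the joint cells f(x, y) are taken to be the product g(x) h(y);
# the transcript omits the two-way table, and this choice makes X and Y
# statistically independent.
f = {(x, y): g[x] * h[y] for x in g for y in h}

EX   = sum(x * px for x, px in g.items())             # E(X)
EY   = sum(y * py for y, py in h.items())             # E(Y)
EXpY = sum((x + y) * p for (x, y), p in f.items())    # E(X + Y) directly
EXY  = sum(x * y * p for (x, y), p in f.items())      # E(XY) = ΣΣ x y f(x, y)

print(round(EX, 6), round(EY, 6))          # 3.2 3.0
print(round(EXpY, 6), round(EX + EY, 6))   # 6.2 6.2
print(round(EXY, 6), round(EX * EY, 6))    # 9.6 9.6
```

E(X + Y) = E(X) + E(Y) would hold for any joint table with these marginals; E(XY) = E(X) E(Y) relies on the independence assumption above.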

62 In the next lecture, we will discuss these two formulae in detail, and, interestingly, we will find that not only does E(X + Y) equal E(X) + E(Y), but also E(XY) equals E(X) E(Y), a property that is guaranteed to hold when the random variables X and Y are statistically independent.

63 IN TODAY’S LECTURE, YOU LEARNT BIVARIATE Probability Distributions (Discrete and Continuous) Properties of Expected Values in the case of Bivariate Probability Distributions

64 IN THE NEXT LECTURE, YOU WILL LEARN Properties of Expected Values in the case of Bivariate Probability Distributions (Detailed discussion) Covariance & Correlation Some Well-known Discrete Probability Distributions: Discrete Uniform Distribution

