Presentation transcript: Section 6.1, Maximum Likelihood and Method of Moments Estimation

1 Section 6.1. Let $X_1, X_2, \dots, X_n$ be a random sample from a distribution described by p.m.f./p.d.f. $f(x;\theta)$, where the value of $\theta$ is unknown; then $\theta$ is called a parameter. The set $\Omega$ of possible values of $\theta$ is called the parameter space. We can use the actual observed values $x_1, x_2, \dots, x_n$ of $X_1, X_2, \dots, X_n$ in order to estimate $\theta$. The function $u(X_1, X_2, \dots, X_n)$ used to estimate $\theta$ is called an estimator, and the actual observed value $u(x_1, x_2, \dots, x_n)$ of the estimator is called an estimate.

$$L(\theta) = f(x_1;\theta)\,f(x_2;\theta)\cdots f(x_n;\theta) = \prod_{i=1}^{n} f(x_i;\theta)$$

is called the likelihood function. If the likelihood function $L(\theta)$ is maximized when $\theta = u(x_1, x_2, \dots, x_n)$, then $\hat\theta = u(X_1, X_2, \dots, X_n)$ is called the maximum likelihood estimator (mle) of $\theta$. (When attempting to maximize $L(\theta)$ using derivatives, it is often easier to maximize $\ln L(\theta)$.)

2 The preceding discussion can be generalized in a natural way by replacing the parameter $\theta$ by two or more parameters $(\theta_1, \theta_2, \dots)$.

3 1. Consider each of the following examples (in the textbook).

(a) Consider a random sample $X_1, X_2, \dots, X_n$ from a Bernoulli distribution with success probability $p$; this sample will consist of 0s and 1s. Imagine that a penny is spun very fast on its side on a flat, hard surface, and let each $X_i$ be 1 when the penny comes to rest with heads facing up, and 0 otherwise. What would be an "intuitively reasonable" formula for an estimate of $p$? The sample proportion

$$\frac{\sum_{i=1}^{n} X_i}{n} = \overline{X}.$$

Look at the derivation of the maximum likelihood estimator of $p$ at the beginning of Section 6.1:

$$L(p) = p^{\sum_{i=1}^{n} x_i}\,(1-p)^{\,n-\sum_{i=1}^{n} x_i} \quad\text{for } 0 \le p \le 1,$$

$$\ln L(p) = \Big(\sum_{i=1}^{n} x_i\Big)\ln p + \Big(n-\sum_{i=1}^{n} x_i\Big)\ln(1-p).$$

4 Differentiating,

$$\frac{d[\ln L(p)]}{dp} = \frac{\sum_{i=1}^{n} x_i}{p} - \frac{n-\sum_{i=1}^{n} x_i}{1-p},$$

$$\frac{d[\ln L(p)]}{dp} = 0 \;\Rightarrow\; p = \frac{\sum_{i=1}^{n} x_i}{n} = \bar{x},$$

$$\frac{d^2[\ln L(p)]}{dp^2} = -\frac{\sum_{i=1}^{n} x_i}{p^2} - \frac{n-\sum_{i=1}^{n} x_i}{(1-p)^2}.$$

Notice that the second derivative is negative for all $0 < p < 1$, which implies $p = \bar{x}$ does indeed maximize $\ln L(p)$. So $\bar{x}$ is an estimate of $p$, and

$$\hat{p} = \frac{\sum_{i=1}^{n} X_i}{n} = \overline{X}$$

is the maximum likelihood estimator of $p$.
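As a quick numerical illustration (my addition, not part of the text), the sketch below simulates a Bernoulli sample and confirms that a bounded numerical maximizer of $\ln L(p)$ lands on the sample mean; the sample size, seed, and true $p = 0.3$ are arbitrary choices.

```python
# Sketch: numerically confirm that the Bernoulli log-likelihood
# is maximized at the sample mean.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
x = rng.binomial(1, 0.3, size=200)        # simulated 0/1 sample, true p = 0.3

def neg_log_lik(p):
    # -ln L(p) = -[ (sum x_i) ln p + (n - sum x_i) ln(1 - p) ]
    s, n = x.sum(), len(x)
    return -(s * np.log(p) + (n - s) * np.log(1 - p))

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 1 - 1e-6), method="bounded")
print(res.x, x.mean())                    # the two values agree (closed-form MLE)
```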

5 (b) Consider a random sample $X_1, X_2, \dots, X_n$ from a geometric distribution with success probability $p$; this sample will consist of positive integers. Decide if there is an "intuitively reasonable" formula for an estimate of $p$, and then look at the derivation of the maximum likelihood estimator of $p$ in Text Example 6.1-2. Imagine that a penny is spun very fast on its side on a flat, hard surface, and let each $X_i$ be the number of spins until the penny comes to rest with heads facing up. Then $\sum_{i=1}^{n} X_i$ is the total number of spins and $n$ is the total number of heads, so an intuitively reasonable estimate is

$$\frac{n}{\sum_{i=1}^{n} X_i} = \frac{1}{\overline{X}}.$$

The likelihood is

$$L(p) = p^{\,n}(1-p)^{\sum_{i=1}^{n} x_i - n} \quad\text{for } 0 \le p \le 1,$$

$$\ln L(p) = n\ln p + \Big(\sum_{i=1}^{n} x_i - n\Big)\ln(1-p).$$

6 Differentiating,

$$\frac{d[\ln L(p)]}{dp} = \frac{n}{p} - \frac{\sum_{i=1}^{n} x_i - n}{1-p},$$

$$\frac{d[\ln L(p)]}{dp} = 0 \;\Rightarrow\; p = \frac{n}{\sum_{i=1}^{n} x_i} = \frac{1}{\bar{x}},$$

$$\frac{d^2[\ln L(p)]}{dp^2} = -\frac{n}{p^2} - \frac{\sum_{i=1}^{n} x_i - n}{(1-p)^2}.$$

Notice that the second derivative is negative for all $0 < p < 1$, which implies $p = 1/\bar{x}$ does indeed maximize $\ln L(p)$. So $1/\bar{x}$ is an estimate of $p$, and

$$\hat{p} = \frac{n}{\sum_{i=1}^{n} X_i} = \frac{1}{\overline{X}}$$

is the maximum likelihood estimator of $p$.

7 (c) Consider a random sample $X_1, X_2, \dots, X_n$ from an exponential distribution with mean $\theta$; this sample will consist of positive real numbers. Decide if there is an "intuitively reasonable" formula for an estimate of $\theta$, and then look at the derivation of the maximum likelihood estimator of $\theta$ in Text Example 6.1-1. The intuitive estimate is

$$\frac{\sum_{i=1}^{n} X_i}{n} = \overline{X}.$$

The likelihood is

$$L(\theta) = \frac{1}{\theta^{\,n}}\exp\!\Big(\!-\frac{\sum_{i=1}^{n} x_i}{\theta}\Big) \quad\text{for } 0 < \theta < \infty,$$

$$\ln L(\theta) = -\frac{\sum_{i=1}^{n} x_i}{\theta} - n\ln\theta,$$

$$\frac{d[\ln L(\theta)]}{d\theta} = \frac{\sum_{i=1}^{n} x_i}{\theta^2} - \frac{n}{\theta}.$$

8 Setting the derivative equal to zero,

$$\frac{d[\ln L(\theta)]}{d\theta} = 0 \;\Rightarrow\; \theta = \frac{\sum_{i=1}^{n} x_i}{n} = \bar{x},$$

$$\frac{d^2[\ln L(\theta)]}{d\theta^2} = -\frac{2\sum_{i=1}^{n} x_i}{\theta^3} + \frac{n}{\theta^2}.$$

Substituting $\theta = \frac{1}{n}\sum_{i=1}^{n} x_i$ into the second derivative results in

$$-\frac{n^3}{\Big(\sum_{i=1}^{n} x_i\Big)^{2}},$$

which is negative, implying $\theta = \bar{x}$ does indeed maximize $\ln L(\theta)$. So $\bar{x}$ is an estimate of $\theta$, and

$$\hat\theta = \frac{\sum_{i=1}^{n} X_i}{n} = \overline{X}$$

is the maximum likelihood estimator of $\theta$.

9 2. (a) Consider the situation described immediately following Text Example 6.1-4. Suppose $X_1, X_2, \dots, X_n$ is a random sample from a gamma$(\theta, 1)$ distribution. Note how difficult it is to obtain the maximum likelihood estimator of $\theta$. Then, find the method of moments estimator of $\theta$.

$$L(\theta) = \frac{\Big(\prod_{i=1}^{n} x_i\Big)^{\theta-1}\exp\!\Big(\!-\sum_{i=1}^{n} x_i\Big)}{[\Gamma(\theta)]^{\,n}} \quad\text{for } 0 < \theta < \infty,$$

$$\ln L(\theta) = (\theta-1)\ln\Big(\prod_{i=1}^{n} x_i\Big) - \sum_{i=1}^{n} x_i - n\ln\Gamma(\theta),$$

$$\frac{d[\ln L(\theta)]}{d\theta} = \ln\Big(\prod_{i=1}^{n} x_i\Big) - n\,\frac{\Gamma'(\theta)}{\Gamma(\theta)}.$$

When this derivative is set equal to 0, it is not possible to solve for $\theta$, since there is no easy formula for $\Gamma'(\theta)$ or $\Gamma(\theta)$.
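Although there is no closed form, the likelihood equation $\ln\prod x_i - n\,\Gamma'(\theta)/\Gamma(\theta) = 0$, i.e. $\psi(\theta) = \frac{1}{n}\sum \ln x_i$ where $\psi$ is the digamma function, is easy to solve numerically. A minimal sketch (my addition, assuming SciPy is available; the simulated shape 3.0 and the root bracket are arbitrary choices):

```python
# Sketch: solve the gamma(theta, 1) likelihood equation
# psi(theta) = (1/n) * sum(ln x_i) with a root-finder, and compare
# with the sample mean (the method of moments estimate found below).
import numpy as np
from scipy.special import digamma
from scipy.optimize import brentq

rng = np.random.default_rng(2)
x = rng.gamma(shape=3.0, scale=1.0, size=500)

target = np.log(x).mean()
theta_mle = brentq(lambda t: digamma(t) - target, 1e-6, 100.0)
print(theta_mle, x.mean())               # MLE and x-bar are both near 3
```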

10 A method of moments estimator $\tilde\theta$ of a parameter $\theta$ is one obtained by setting moments of the distribution from which a sample is taken equal to the corresponding moments of the sample, that is,

$$E(X) = \sum_{i=1}^{n}\frac{X_i}{n}, \qquad E(X^2) = \sum_{i=1}^{n}\frac{X_i^2}{n}, \qquad E(X^3) = \sum_{i=1}^{n}\frac{X_i^3}{n}, \quad\text{etc.}$$

When the number of equations is equal to the number of unknowns, the unique solution is the method of moments estimator.

If $E[u(X_1, X_2, \dots, X_n)] = \theta$, then the statistic $u(X_1, X_2, \dots, X_n)$ is called an unbiased estimator of $\theta$; otherwise, the estimator is said to be biased. (Text Definition 6.1-1)

11 2. (a) (continued) For a gamma$(\theta,1)$ distribution, $E(X) = \theta(1) = \theta$. Setting the first distribution moment equal to the first sample moment,

$$E(X) = \sum_{i=1}^{n}\frac{X_i}{n} \;\Rightarrow\; \theta = \sum_{i=1}^{n}\frac{X_i}{n},$$

so

$$\tilde\theta = \frac{\sum_{i=1}^{n} X_i}{n} = \overline{X}$$

is the method of moments estimator of $\theta$.

12 (b) Suppose $X_1, X_2, \dots, X_n$ is a random sample from a gamma$(\theta, 3)$ distribution. Again, note how difficult it is to obtain the maximum likelihood estimator of $\theta$. Then, find the method of moments estimator of $\theta$.

$$L(\theta) = \frac{\Big(\prod_{i=1}^{n} x_i\Big)^{\theta-1}\exp\!\Big(\!-\frac{1}{3}\sum_{i=1}^{n} x_i\Big)}{[\Gamma(\theta)\,3^{\theta}]^{\,n}} \quad\text{for } 0 < \theta < \infty,$$

$$\ln L(\theta) = (\theta-1)\ln\Big(\prod_{i=1}^{n} x_i\Big) - \frac{1}{3}\sum_{i=1}^{n} x_i - n\ln\Gamma(\theta) - \theta\,n\ln 3,$$

$$\frac{d[\ln L(\theta)]}{d\theta} = \ln\Big(\prod_{i=1}^{n} x_i\Big) - n\,\frac{\Gamma'(\theta)}{\Gamma(\theta)} - n\ln 3.$$

When this derivative is set equal to 0, it is not possible to solve for $\theta$, since there is no easy formula for $\Gamma'(\theta)$ or $\Gamma(\theta)$.

13 (b) (continued) For a gamma$(\theta,3)$ distribution, $E(X) = \theta(3) = 3\theta$. Setting

$$E(X) = \sum_{i=1}^{n}\frac{X_i}{n} \;\Rightarrow\; 3\theta = \sum_{i=1}^{n}\frac{X_i}{n},$$

so

$$\tilde\theta = \frac{\sum_{i=1}^{n} X_i}{3n} = \frac{\overline{X}}{3}$$

is the method of moments estimator of $\theta$.

14 3. Study Text Example 6.1-5, find the maximum likelihood estimator, and compare the maximum likelihood estimator with the method of moments estimator.

$$L(\theta) = \theta^{\,n}\Big(\prod_{i=1}^{n} x_i\Big)^{\theta-1} \quad\text{for } 0 < \theta < \infty,$$

$$\ln L(\theta) = n\ln\theta + (\theta-1)\ln\Big(\prod_{i=1}^{n} x_i\Big),$$

$$\frac{d[\ln L(\theta)]}{d\theta} = \frac{n}{\theta} + \ln\Big(\prod_{i=1}^{n} x_i\Big),$$

$$\frac{d[\ln L(\theta)]}{d\theta} = 0 \;\Rightarrow\; \theta = \frac{-n}{\ln\prod_{i=1}^{n} x_i} = \frac{-n}{\sum_{i=1}^{n}\ln x_i}.$$

15 The second derivative is

$$\frac{d^2[\ln L(\theta)]}{d\theta^2} = -\frac{n}{\theta^2}.$$

Notice that the second derivative is negative for all $0 < \theta$, which implies

$$\theta = \frac{-n}{\sum_{i=1}^{n}\ln x_i}$$

does indeed maximize $\ln L(\theta)$. Hence

$$\hat\theta = \frac{-n}{\sum_{i=1}^{n}\ln X_i}$$

is the maximum likelihood estimator of $\theta$, which is not equal to the method of moments estimator of $\theta$ derived in Text Example 6.1-5.
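A small simulation (my addition, not in the text) comparing the two estimators: the density $f(x;\theta) = \theta x^{\theta-1}$ on $0 < x < 1$ is the beta$(\theta, 1)$ density, and the method-of-moments form $\bar{x}/(1-\bar{x})$ used below follows from $E(X) = \theta/(\theta+1)$; I believe this matches the estimator in Text Example 6.1-5, but that should be checked against the text.

```python
# Sketch: for f(x; theta) = theta * x**(theta - 1) on (0, 1),
# compare the mle  -n / sum(ln x_i)  with a method of moments estimate.
import numpy as np

rng = np.random.default_rng(3)
theta = 2.5
x = rng.beta(theta, 1.0, size=2000)       # beta(theta, 1) has this density

theta_mle = -len(x) / np.log(x).sum()
theta_mom = x.mean() / (1.0 - x.mean())   # from E(X) = theta / (theta + 1)
print(theta_mle, theta_mom)               # both near 2.5, but not equal
```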

16 4. (a) A random sample $X_1, X_2, \dots, X_n$ is taken from a N$(\theta_1, \theta_2)$ distribution, where $\theta_1 = \mu$ and $\theta_2 = \sigma^2$. The following are concerned with the estimation of one of, or both of, the parameters in this distribution. Study Text Example 6.1-3, and note that in order to verify that the solution point truly maximizes the likelihood function, we must verify that

$$\frac{\partial^2 \ln L}{\partial\theta_1^2}\,\frac{\partial^2 \ln L}{\partial\theta_2^2} - \Big(\frac{\partial^2 \ln L}{\partial\theta_1\,\partial\theta_2}\Big)^{\!2} > 0 \quad\text{at the solution point}$$

and that

$$\frac{\partial^2 \ln L}{\partial\theta_1^2} < 0 \quad\text{at the solution point.}$$

The second partial derivatives are

$$\frac{\partial^2 \ln L}{\partial\theta_1^2} = -\frac{n}{\theta_2}, \qquad \frac{\partial^2 \ln L}{\partial\theta_2^2} = \frac{n}{2\theta_2^2} - \frac{1}{\theta_2^3}\sum_{i=1}^{n}(x_i-\theta_1)^2, \qquad \frac{\partial^2 \ln L}{\partial\theta_1\,\partial\theta_2} = -\frac{1}{\theta_2^2}\sum_{i=1}^{n}(x_i-\theta_1).$$

17 When $\theta_1 = \bar{x}$ and $\theta_2 = \frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^2$, then

$$\frac{\partial^2 \ln L}{\partial\theta_1^2} = -\frac{n^2}{\sum_{i=1}^{n}(x_i-\bar{x})^2}, \qquad \frac{\partial^2 \ln L}{\partial\theta_2^2} = -\frac{n^3}{2\Big[\sum_{i=1}^{n}(x_i-\bar{x})^2\Big]^{2}}, \qquad \frac{\partial^2 \ln L}{\partial\theta_1\,\partial\theta_2} = 0.$$

We see then that

$$\frac{\partial^2 \ln L}{\partial\theta_1^2}\,\frac{\partial^2 \ln L}{\partial\theta_2^2} - \Big(\frac{\partial^2 \ln L}{\partial\theta_1\,\partial\theta_2}\Big)^{\!2} = \frac{n^5}{2\Big[\sum_{i=1}^{n}(x_i-\bar{x})^2\Big]^{3}} > 0$$

and that

$$\frac{\partial^2 \ln L}{\partial\theta_1^2} = -\frac{n^2}{\sum_{i=1}^{n}(x_i-\bar{x})^2} < 0.$$

(b) Study Text Example 6.1-6.
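The second-derivative test above can be sanity-checked numerically. This sketch (my addition, not from the text) evaluates the three second partials at the solution point $(\bar{x},\, \frac{1}{n}\sum(x_i-\bar{x})^2)$ for a simulated normal sample; the sample parameters are arbitrary.

```python
# Sketch: check the Hessian conditions for the N(theta1, theta2)
# log-likelihood (theta2 = sigma^2) at the mle solution point.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(5.0, 2.0, size=50)
n = len(x)
t1, t2 = x.mean(), x.var()               # solution point: x-bar, (1/n)*sum((x-x-bar)^2)

d11 = -n / t2                                         # d2 lnL / d theta1^2
d22 = n / (2 * t2**2) - np.sum((x - t1)**2) / t2**3   # d2 lnL / d theta2^2
d12 = -np.sum(x - t1) / t2**2                         # cross partial (0 at theta1 = x-bar)

print(d11 < 0, d11 * d22 - d12**2 > 0)   # both True: a local maximum
```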

18 (Recall from slide 10: the method of moments estimator, and Text Definition 6.1-1 of an unbiased estimator.)

19 4. (continued)

(c) Study Text Example 6.1-4.

(d) Do Text Exercise 6.1-2. With $\mu$ known and $\theta = \sigma^2$,

$$L(\theta) = \prod_{i=1}^{n}\frac{1}{(2\pi\theta)^{1/2}}\exp\!\Big(\!-\frac{(x_i-\mu)^2}{2\theta}\Big) = \frac{1}{(2\pi\theta)^{n/2}}\exp\!\Big(\!-\frac{\sum_{i=1}^{n}(x_i-\mu)^2}{2\theta}\Big),$$

$$\ln L(\theta) = -\frac{n}{2}\ln(2\pi\theta) - \frac{1}{2\theta}\sum_{i=1}^{n}(x_i-\mu)^2,$$

$$\frac{d\ln L}{d\theta} = -\frac{n}{2\theta} + \frac{1}{2\theta^2}\sum_{i=1}^{n}(x_i-\mu)^2 = 0.$$

20 Solving,

$$\frac{d\ln L}{d\theta} = 0 \;\Rightarrow\; \theta = \frac{1}{n}\sum_{i=1}^{n}(x_i-\mu)^2,$$

$$\frac{d^2\ln L}{d\theta^2} = \frac{n}{2\theta^2} - \frac{1}{\theta^3}\sum_{i=1}^{n}(x_i-\mu)^2.$$

When $\theta = \frac{1}{n}\sum_{i=1}^{n}(x_i-\mu)^2$, then

$$\frac{d^2\ln L}{d\theta^2} = -\frac{n^3}{2\Big[\sum_{i=1}^{n}(x_i-\mu)^2\Big]^{2}} < 0,$$

which implies that $L$ has been maximized. Hence

$$\hat\theta = \frac{1}{n}\sum_{i=1}^{n}(X_i-\mu)^2$$

is the mle (maximum likelihood estimator) of $\theta = \sigma^2$. Now,

$$E(\hat\theta) = E\Big[\frac{1}{n}\sum_{i=1}^{n}(X_i-\mu)^2\Big] = \frac{\sigma^2}{n}\,E\Big[\sum_{i=1}^{n}\frac{(X_i-\mu)^2}{\sigma^2}\Big].$$

(Now look at Corollary 5.4-3.)

21 Since $\sum_{i=1}^{n}(X_i-\mu)^2/\sigma^2$ has a $\chi^2(n)$ distribution (Corollary 5.4-3), the expected value in brackets is $n$:

$$E(\hat\theta) = E\Big[\frac{1}{n}\sum_{i=1}^{n}(X_i-\mu)^2\Big] = \frac{\sigma^2}{n}\,E\Big[\sum_{i=1}^{n}\frac{(X_i-\mu)^2}{\sigma^2}\Big] = \frac{\sigma^2}{n}\,(n) = \sigma^2.$$

Consequently,

$$\hat\theta = \frac{1}{n}\sum_{i=1}^{n}(X_i-\mu)^2$$

is an unbiased estimator of $\theta = \sigma^2$.
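A quick Monte Carlo check (my addition, not in the text): with $\mu$ known, the estimator $\frac{1}{n}\sum(X_i-\mu)^2$ averages to $\sigma^2$ across many samples. The parameters and replication count are arbitrary.

```python
# Sketch: verify unbiasedness of (1/n) * sum((X_i - mu)^2) by simulation.
import numpy as np

rng = np.random.default_rng(4)
mu, sigma, n = 10.0, 3.0, 5
est = [(1/n) * np.sum((rng.normal(mu, sigma, n) - mu)**2)
       for _ in range(100_000)]
print(np.mean(est), sigma**2)             # both approximately 9.0
```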

22 4. (continued)

(e) Return to Text Exercise 6.1-2, and find the maximum likelihood estimator of $\theta = \sigma$; then decide whether or not this estimator is unbiased.

$$L(\theta) = \prod_{i=1}^{n}\frac{1}{(2\pi)^{1/2}\,\theta}\exp\!\Big(\!-\frac{(x_i-\mu)^2}{2\theta^2}\Big) = \frac{1}{(2\pi)^{n/2}\,\theta^{\,n}}\exp\!\Big(\!-\frac{\sum_{i=1}^{n}(x_i-\mu)^2}{2\theta^2}\Big),$$

$$\ln L(\theta) = -\frac{n}{2}\ln(2\pi) - n\ln\theta - \frac{1}{2\theta^2}\sum_{i=1}^{n}(x_i-\mu)^2,$$

$$\frac{d\ln L}{d\theta} = -\frac{n}{\theta} + \frac{1}{\theta^3}\sum_{i=1}^{n}(x_i-\mu)^2 = 0.$$

23 Solving,

$$\frac{d\ln L}{d\theta} = 0 \;\Rightarrow\; \theta = \Big[\frac{1}{n}\sum_{i=1}^{n}(x_i-\mu)^2\Big]^{1/2},$$

$$\frac{d^2\ln L}{d\theta^2} = \frac{n}{\theta^2} - \frac{3}{\theta^4}\sum_{i=1}^{n}(x_i-\mu)^2.$$

When $\theta = \Big[\frac{1}{n}\sum_{i=1}^{n}(x_i-\mu)^2\Big]^{1/2}$, then

$$\frac{d^2\ln L}{d\theta^2} = -\frac{2n^2}{\sum_{i=1}^{n}(x_i-\mu)^2} < 0,$$

which implies that $L$ has been maximized. Hence

$$\hat\theta = \Big[\frac{1}{n}\sum_{i=1}^{n}(X_i-\mu)^2\Big]^{1/2}$$

is the mle (maximum likelihood estimator) of $\theta = \sigma$.

24 Recall from part (d) that $\hat\theta = \frac{1}{n}\sum_{i=1}^{n}(X_i-\mu)^2$ is the mle of $\theta = \sigma^2$, and we proved that this estimator is unbiased. We have now found that $\hat\theta = \big[\frac{1}{n}\sum_{i=1}^{n}(X_i-\mu)^2\big]^{1/2}$ is the mle of $\theta = \sigma$. It can be shown that the mle of a function of a parameter $\theta$ is that same function of the mle of $\theta$. However, the expected value of a function is not generally equal to that function of the expected value, so we now want to know whether or not this estimator of $\sigma$ is unbiased.

25 We have

$$E(\hat\theta) = E\Big[\Big(\frac{1}{n}\sum_{i=1}^{n}(X_i-\mu)^2\Big)^{1/2}\Big] = \frac{\sigma}{\sqrt{n}}\,E\Big[\Big(\sum_{i=1}^{n}\frac{(X_i-\mu)^2}{\sigma^2}\Big)^{1/2}\Big] = \;?$$

We need to find the expected value of the square root of a random variable having a chi-square distribution. Suppose the random variable $U$ has a $\chi^2(r)$ distribution. Then, using a technique similar to that used in Text Exercises 5.2-3 and 5.5-17, we can show that

$$E(U^{1/2}) = \sqrt{2}\,\frac{\Gamma([r+1]/2)}{\Gamma(r/2)}$$

(and you will do this as part of Text Exercise 6.1-14).

26 Applying this result with $r = n$,

$$E(\hat\theta) = \frac{\sigma}{\sqrt{n}}\,E\Big[\Big(\sum_{i=1}^{n}\frac{(X_i-\mu)^2}{\sigma^2}\Big)^{1/2}\Big] = \frac{\sigma}{\sqrt{n}}\,\sqrt{2}\,\frac{\Gamma([n+1]/2)}{\Gamma(n/2)} = \frac{\sigma\,\sqrt{2}\,\Gamma([n+1]/2)}{\sqrt{n}\,\Gamma(n/2)} \ne \sigma.$$

So

$$\hat\theta = \Big[\frac{1}{n}\sum_{i=1}^{n}(X_i-\mu)^2\Big]^{1/2}$$

is a biased estimator of $\theta = \sigma$. An unbiased estimator of $\theta = \sigma$ is

$$\tilde\theta = \frac{\sqrt{n}\,\Gamma(n/2)}{\sqrt{2}\,\Gamma([n+1]/2)}\Big[\frac{1}{n}\sum_{i=1}^{n}(X_i-\mu)^2\Big]^{1/2} = \frac{\Gamma(n/2)}{\sqrt{2}\,\Gamma([n+1]/2)}\Big[\sum_{i=1}^{n}(X_i-\mu)^2\Big]^{1/2}.$$

(This is similar to what you need to do as part of Text Exercise 6.1-14.)
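The correction factor is a ratio of gamma functions; a numerical sketch (my addition, not from the text) computes it via `gammaln` for numerical stability and verifies both the biased and corrected estimators by simulation. The parameters are arbitrary.

```python
# Sketch: verify the bias of sqrt((1/n) * sum((X_i - mu)^2)) and its
# gamma-function correction by simulation.
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(5)
mu, sigma, n = 0.0, 2.0, 5
c = np.exp(gammaln(n/2) - gammaln((n+1)/2)) / np.sqrt(2)   # Gamma(n/2) / (sqrt(2)*Gamma((n+1)/2))

raw, corrected = [], []
for _ in range(100_000):
    s = np.sum((rng.normal(mu, sigma, n) - mu)**2)
    raw.append(np.sqrt(s / n))            # the biased mle of sigma
    corrected.append(c * np.sqrt(s))      # the unbiased version
print(np.mean(raw), np.mean(corrected), sigma)   # raw < 2.0, corrected near 2.0
```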

27 5. (a) Suppose $X_1, X_2, \dots, X_n$ is a random sample from any distribution with finite mean $\mu$ and finite variance $\sigma^2$. If the distribution from which the random sample is taken is a normal distribution, then from Corollary 5.5-1 we know that $\overline{X}$ has a N$(\mu, \sigma^2/n)$ distribution. This implies that $\overline{X}$ is an unbiased estimator of $\mu$ with variance $\sigma^2/n$; show that this is still true even when the distribution from which the random sample is taken is not a normal distribution.

No matter what distribution the random sample is taken from, Theorem 5.3-3 tells us that since

$$\overline{X} = \frac{\sum_{i=1}^{n} X_i}{n} = \sum_{i=1}^{n}\frac{1}{n}X_i,$$

then

$$E(\overline{X}) = \sum_{i=1}^{n}\frac{1}{n}\,\mu = \mu \qquad\text{and}\qquad \operatorname{Var}(\overline{X}) = \sum_{i=1}^{n}\Big(\frac{1}{n}\Big)^{2}\sigma^2 = n\Big(\frac{1}{n}\Big)^{2}\sigma^2 = \frac{\sigma^2}{n}.$$

The Central Limit Theorem tells us that when $n$ is sufficiently large, the distribution of $\overline{X}$ is approximately a normal distribution.

28 (b) If the distribution from which the random sample is taken is a normal distribution, then from Text Example 6.1-4 we know that $S^2$ is an unbiased estimator of $\sigma^2$. In Text Exercise 6.1-13, you will show that $S^2$ is an unbiased estimator of $\sigma^2$ even when the distribution from which the random sample is taken is not a normal distribution. Show that $\operatorname{Var}(S^2)$ depends on the distribution from which the random sample is taken.

$$\operatorname{Var}(S^2) = E[(S^2-\sigma^2)^2] = E\Bigg[\Bigg(\frac{\sum_{i=1}^{n} X_i^2 - \dfrac{\big(\sum_{i=1}^{n} X_i\big)^{2}}{n}}{n-1} - \sigma^2\Bigg)^{\!2}\Bigg].$$

We see that $\operatorname{Var}(S^2)$ will depend on $E(X_i)$, $E(X_i^2)$, $E(X_i^3)$, and $E(X_i^4)$. We know that $E(X_i) = \mu$ and $E(X_i^2) = \sigma^2 + \mu^2$ no matter what type of distribution the random sample is taken from, but $E(X_i^3)$ and $E(X_i^4)$ will depend on the type of distribution the random sample is taken from.

29 If the random sample is taken from a normal distribution,

$$\operatorname{Var}(S^2) = \operatorname{Var}\Big[\frac{1}{n-1}\sum_{i=1}^{n}(X_i-\overline{X})^2\Big] = \frac{\sigma^4}{(n-1)^2}\operatorname{Var}\Big[\sum_{i=1}^{n}\frac{(X_i-\overline{X})^2}{\sigma^2}\Big] = \frac{\sigma^4}{(n-1)^2}\,(2)(n-1) = \frac{2\sigma^4}{n-1}.$$

(Now look at Theorem 5.5-2.)
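A Monte Carlo illustration (my addition, not in the text): for normal data $\operatorname{Var}(S^2) \approx 2\sigma^4/(n-1)$, while a skewed distribution with the same variance gives a noticeably different value, showing the dependence on the parent distribution. The parameters are arbitrary.

```python
# Sketch: Var(S^2) matches 2*sigma^4/(n-1) for normal data but not
# for exponential data with the same variance.
import numpy as np

rng = np.random.default_rng(6)
n, reps = 10, 200_000
s2_norm = rng.normal(0, 2, (reps, n)).var(axis=1, ddof=1)    # sigma^2 = 4
s2_exp = rng.exponential(2, (reps, n)).var(axis=1, ddof=1)   # sigma^2 = 4 as well
print(s2_norm.var(), 2 * 16 / (n - 1))   # both approximately 3.56
print(s2_exp.var())                      # noticeably larger
```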

30 6. (a) Let $X_1, X_2$ be a random sample of size $n = 2$ from a gamma$(\theta, 1)$ distribution (i.e., $\theta > 0$). Show that $\overline{X}$ and $S^2$ are each an unbiased estimator of $\theta$.

First, we recall that $E(\overline{X}) = \mu$ and $E(S^2) = \sigma^2$ for a random sample taken from any distribution with mean $\mu$ and variance $\sigma^2$. Next, we observe that for a gamma$(\theta, \beta)$ distribution, $\mu = \theta\beta$ and $\sigma^2 = \theta\beta^2$. Consequently, with $\beta = 1$, we have $E(\overline{X}) = \theta$ and $E(S^2) = \theta$, so that $\overline{X}$ and $S^2$ are each an unbiased estimator of $\theta$.

(b) Decide which of $\overline{X}$ and $S^2$ is the better unbiased estimator of $\theta$. When we are faced with choosing between two (or more) unbiased estimators, we generally prefer the estimator with the smaller variance. Recall that $\operatorname{Var}(\overline{X}) = \sigma^2/n = \theta/2$.

31 To find $\operatorname{Var}(S^2)$, we observe (as done in Class Exercise 5.5-3(a)) that

$$S^2 = \frac{(X_1-\overline{X})^2 + (X_2-\overline{X})^2}{2-1} = \Big(X_1 - \frac{X_1+X_2}{2}\Big)^{\!2} + \Big(X_2 - \frac{X_1+X_2}{2}\Big)^{\!2} = \Big(\frac{X_1-X_2}{2}\Big)^{\!2} + \Big(\frac{X_2-X_1}{2}\Big)^{\!2} = \frac{(X_1-X_2)^2}{2} = \frac{X_1^2 - 2X_1X_2 + X_2^2}{2}.$$

Then

$$\operatorname{Var}(S^2) = E[(S^2)^2] - [E(S^2)]^2 = E\Big[\Big(\frac{X_1^2 - 2X_1X_2 + X_2^2}{2}\Big)^{\!2}\Big] - \theta^2.$$

32 6. (continued)

$$\operatorname{Var}(S^2) = E\Big[\Big(\frac{X_1^2 - 2X_1X_2 + X_2^2}{2}\Big)^{\!2}\Big] - \theta^2 = E\Big[\frac{X_1^4 + 4X_1^2X_2^2 + X_2^4 - 4X_1^3X_2 + 2X_1^2X_2^2 - 4X_1X_2^3}{4}\Big] - \theta^2$$

$$= \frac{2E(X^4) + 6E(X^2)E(X^2) - 8E(X^3)E(X) - 4\theta^2}{4} = \frac{2M''''(0) + 6[M''(0)]^2 - 8M'''(0)\,\theta - 4\theta^2}{4}.$$

33 For the gamma$(\theta,1)$ distribution,

$$M(t) = \frac{1}{(1-t)^{\theta}}, \quad M'(t) = \frac{\theta}{(1-t)^{\theta+1}}, \quad M''(t) = \frac{\theta(\theta+1)}{(1-t)^{\theta+2}}, \quad M'''(t) = \frac{\theta(\theta+1)(\theta+2)}{(1-t)^{\theta+3}}, \quad M''''(t) = \frac{\theta(\theta+1)(\theta+2)(\theta+3)}{(1-t)^{\theta+4}},$$

so

$$M''(0) = \theta(\theta+1), \qquad M'''(0) = \theta(\theta+1)(\theta+2), \qquad M''''(0) = \theta(\theta+1)(\theta+2)(\theta+3).$$

34 6. (continued)

$$\operatorname{Var}(S^2) = \frac{2M''''(0) + 6[M''(0)]^2 - 8M'''(0)\,\theta - 4\theta^2}{4} = \frac{2\theta(\theta+1)(\theta+2)(\theta+3) + 6\theta^2(\theta+1)^2 - 8\theta^2(\theta+1)(\theta+2) - 4\theta^2}{4}$$

$$= \frac{6\theta(\theta+1)(\theta+2) + 6\theta^2(\theta+1)^2 - 6\theta^2(\theta+1)(\theta+2) - 4\theta^2}{4} = \frac{6\theta(\theta+1)(\theta+2) - 6\theta^2(\theta+1) - 4\theta^2}{4} = \frac{12\theta(\theta+1) - 4\theta^2}{4} = \frac{8\theta^2 + 12\theta}{4} = 2\theta^2 + 3\theta > \frac{\theta}{2} = \operatorname{Var}(\overline{X}).$$

Consequently, the better unbiased estimator of $\theta$ is $\overline{X}$. (Note that the same type of approach used here would be needed to prove the result in part (d) of Text Exercise 6.1-4.)
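A Monte Carlo confirmation (my addition, not in the text) of the two variances for $n = 2$ draws from gamma$(\theta, 1)$; the choice $\theta = 3$ is arbitrary.

```python
# Sketch: for n = 2 from gamma(theta, 1), Var(X-bar) = theta/2 while
# Var(S^2) = 2*theta^2 + 3*theta, so X-bar is the better unbiased estimator.
import numpy as np

rng = np.random.default_rng(7)
theta, reps = 3.0, 200_000
x = rng.gamma(theta, 1.0, size=(reps, 2))
xbar = x.mean(axis=1)
s2 = x.var(axis=1, ddof=1)
print(np.mean(xbar), np.mean(s2))         # both near 3 (unbiased)
print(np.var(xbar), theta / 2)            # approximately 1.5
print(np.var(s2), 2*theta**2 + 3*theta)   # approximately 27
```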

35 7. (a) Let $X_1, X_2, \dots, X_n$ be a random sample from a U$(0,\theta)$ distribution. Find the expected value of $\overline{X}$; then find a constant $c$ so that $W_1 = c\overline{X}$ is an unbiased estimator of $\theta$.

First, we recall that $E(\overline{X}) = \mu$ for a random sample taken from any distribution with mean $\mu$ and variance $\sigma^2$. Next, we observe that for a U$(0,\theta)$ distribution, $\mu = \theta/2$ and $\sigma^2 = \theta^2/12$. Consequently, we have $E(\overline{X}) = \theta/2$, and $E(c\overline{X}) = \theta$ if we let $c = 2$. Therefore, $W_1 = 2\overline{X}$ is an unbiased estimator of $\theta$.

36 (b) Find the expected value of $Y_n = \max(X_1, X_2, \dots, X_n)$, which is the largest order statistic; then find a constant $c$ so that $W_2 = cY_n$ is an unbiased estimator of $\theta$.

Since $Y_n$ is the largest order statistic, its p.d.f. is

$$g_n(y) = n\,[F(y)]^{n-1} f(y) \quad\text{for } 0 < y < \theta,$$

where $f(x)$ and $F(x)$ are respectively the p.d.f. and d.f. for the distribution from which the random sample is taken:

$$f(x) = \frac{1}{\theta} \ \text{ for } 0 < x < \theta, \qquad F(x) = \begin{cases} 0 & \text{if } x \le 0 \\ x/\theta & \text{if } 0 < x \le \theta \\ 1 & \text{if } \theta < x. \end{cases}$$

$$E(Y_n) = \int_0^{\theta} y\,n\,[F(y)]^{n-1} f(y)\,dy = \int_0^{\theta} y\,n\,\Big[\frac{y}{\theta}\Big]^{n-1}\frac{1}{\theta}\,dy = \cdots$$

37 Continuing,

$$E(Y_n) = \int_0^{\theta}\frac{n\,y^{\,n}}{\theta^{\,n}}\,dy = \frac{n\,y^{\,n+1}}{(n+1)\,\theta^{\,n}}\bigg|_{y=0}^{\theta} = \frac{n}{n+1}\,\theta.$$

We have $E(cY_n) = \theta$ if we let $c = \frac{n+1}{n}$. Therefore,

$$W_2 = \frac{n+1}{n}\,Y_n$$

is an unbiased estimator of $\theta$.

38 7. (continued)

(c) Decide which of $W_1$ or $W_2$ is the better unbiased estimator of $\theta$. When we are faced with choosing between two (or more) unbiased estimators, we generally prefer the estimator with the smaller variance.

$$\operatorname{Var}(W_1) = \operatorname{Var}(2\overline{X}) = 4\operatorname{Var}(\overline{X}) = \frac{4\sigma^2}{n} = \frac{4(\theta^2/12)}{n} = \frac{\theta^2}{3n}.$$

$$\operatorname{Var}(W_2) = \operatorname{Var}\Big(\frac{n+1}{n}\,Y_n\Big) = \Big(\frac{n+1}{n}\Big)^{\!2}\operatorname{Var}(Y_n) = \Big(\frac{n+1}{n}\Big)^{\!2}\{E(Y_n^2) - [E(Y_n)]^2\}.$$

Here $[E(Y_n)]^2$ we already know; $E(Y_n^2)$ we still need to find.

39 Computing the second moment,

$$E(Y_n^2) = \int_0^{\theta} y^2\,n\Big[\frac{y}{\theta}\Big]^{n-1}\frac{1}{\theta}\,dy = \int_0^{\theta}\frac{n\,y^{\,n+1}}{\theta^{\,n}}\,dy = \frac{n\,y^{\,n+2}}{(n+2)\,\theta^{\,n}}\bigg|_{y=0}^{\theta} = \frac{n}{n+2}\,\theta^2.$$

Hence

$$\operatorname{Var}(W_2) = \Big(\frac{n+1}{n}\Big)^{\!2}\{E(Y_n^2) - [E(Y_n)]^2\} = \Big(\frac{n+1}{n}\Big)^{\!2}\Big[\frac{n}{n+2}\,\theta^2 - \Big(\frac{n}{n+1}\,\theta\Big)^{\!2}\Big] = \Big[\frac{(n+1)^2}{n(n+2)} - 1\Big]\theta^2 = \frac{\theta^2}{n(n+2)} \le \frac{\theta^2}{3n} = \operatorname{Var}(W_1).$$

Consequently, the better unbiased estimator of $\theta$ is $W_2$.
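A simulation comparison (my addition, not in the text) of the two unbiased estimators for U$(0, \theta)$; the values $\theta = 6$ and $n = 10$ are arbitrary.

```python
# Sketch: both W1 = 2 * X-bar and W2 = ((n+1)/n) * max(X_i) are unbiased
# for theta, but W2 has variance theta^2/(n(n+2)) versus theta^2/(3n).
import numpy as np

rng = np.random.default_rng(8)
theta, n, reps = 6.0, 10, 200_000
x = rng.uniform(0, theta, size=(reps, n))
w1 = 2 * x.mean(axis=1)
w2 = (n + 1) / n * x.max(axis=1)
print(np.mean(w1), np.mean(w2))               # both near 6
print(np.var(w1), theta**2 / (3 * n))         # approximately 1.2
print(np.var(w2), theta**2 / (n * (n + 2)))   # approximately 0.3
```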

40 7. (continued)

(d) Explain why $Y_n$ is the mle of $\theta$. Since $f(x) = 1/\theta$ for $0 < x < \theta$,

$$L(\theta) = \prod_{i=1}^{n} f(x_i;\theta) = \frac{1}{\theta^{\,n}}.$$

Since $\theta > 0$, the value of $\theta$ which maximizes $L(\theta)$ is the smallest possible positive value of $\theta$. But $\theta$ must be no smaller than each of the observed values $x_1, x_2, \dots, x_n$; otherwise $L(\theta) = 0$. Consequently, $L(\theta)$ is maximized when $\theta = \max(x_1, x_2, \dots, x_n)$. It follows that the mle of $\theta$ must be the largest order statistic $Y_n = \max(X_1, X_2, \dots, X_n)$.

41 8. (a) Let $X$ represent a random variable having the U$(\theta - 1/2,\ \theta + 1/2)$ distribution from which the random sample $X_1, X_2, X_3$ is taken, and let $Y_1 < Y_2 < Y_3$ be the order statistics for the random sample. (Text Exercise 8.3-14 is closely related to this Exercise.) We consider the statistics

$$W_1 = \overline{X} = \frac{X_1+X_2+X_3}{3} \ \text{(the sample mean)}, \qquad W_2 = Y_2 \ \text{(the sample median)}, \qquad W_3 = \frac{Y_1+Y_3}{2} \ \text{(the sample midrange)}.$$

Find $E(W_1)$ and $\operatorname{Var}(W_1)$. For a U$(\theta - 1/2,\ \theta + 1/2)$ distribution, we have

$$\mu = \frac{b+a}{2} = \frac{(\theta + 1/2) + (\theta - 1/2)}{2} = \theta.$$

42 For a U$(\theta - 1/2,\ \theta + 1/2)$ distribution, we also have

$$\sigma^2 = \frac{(b-a)^2}{12} = \frac{(\theta + 1/2 - [\theta - 1/2])^2}{12} = \frac{1}{12}.$$

Consequently, with $n = 3$,

$$E(W_1) = E(\overline{X}) = \theta \qquad\text{and}\qquad \operatorname{Var}(W_1) = \operatorname{Var}(\overline{X}) = \frac{1}{12n} = \frac{1}{36}.$$

43 8. (continued)

(b) Find $E(W_2)$ and $\operatorname{Var}(W_2)$. (This can be done easily by using Class Exercise #8.3-4(b).)

$$E(W_2) = E(Y_2) = \frac{b+a}{2} = \frac{(\theta + 1/2) + (\theta - 1/2)}{2} = \theta, \qquad \operatorname{Var}(W_2) = \operatorname{Var}(Y_2) = \frac{(b-a)^2}{20} = \frac{(\theta + 1/2 - [\theta - 1/2])^2}{20} = \frac{1}{20}.$$

44 8. (continued)

(c) Find $E(W_3)$ and $\operatorname{Var}(W_3)$. (This can be done easily by using Class Exercise #8.3-4(b).)

$$E(W_3) = E\Big(\frac{Y_1+Y_3}{2}\Big) = \frac{E(Y_1)+E(Y_3)}{2} = \frac{\dfrac{b+3a}{4} + \dfrac{3b+a}{4}}{2} = \frac{b+a}{2} = \theta.$$

$$\operatorname{Var}(W_3) = \operatorname{Var}\Big(\frac{Y_1+Y_3}{2}\Big) = \frac{1}{4}\operatorname{Var}(Y_1+Y_3) = \frac{\operatorname{Var}(Y_1) + \operatorname{Var}(Y_3) + 2\operatorname{Cov}(Y_1,Y_3)}{4}.$$

45 Using $\operatorname{Var}(Y_1) = \operatorname{Var}(Y_3) = \frac{3(b-a)^2}{80}$, $E(Y_1Y_3) = \frac{(b-a)^2}{5} + ab$, and $E(Y_1) = \frac{b+3a}{4}$, $E(Y_3) = \frac{3b+a}{4}$:

$$\operatorname{Var}(W_3) = \frac{\operatorname{Var}(Y_1) + \operatorname{Var}(Y_3) + 2[E(Y_1Y_3) - E(Y_1)E(Y_3)]}{4} = \frac{\dfrac{3(b-a)^2}{80} + \dfrac{3(b-a)^2}{80} + 2\Big[\dfrac{(b-a)^2}{5} + ab - \dfrac{b+3a}{4}\cdot\dfrac{3b+a}{4}\Big]}{4}.$$

Substituting $a = \theta - 1/2$ and $b = \theta + 1/2$:

$$\operatorname{Var}(W_3) = \frac{\dfrac{3}{80} + \dfrac{3}{80} + 2\Big[\dfrac{1}{5} + \theta^2 - \dfrac{1}{4} - \dfrac{(4\theta-1)(4\theta+1)}{16}\Big]}{4} = \frac{1}{40}.$$

(d) Why is each of $W_1$, $W_2$, and $W_3$ an unbiased estimator of $\theta$? We have seen that $E(W_1) = E(W_2) = E(W_3) = \theta$.

(e) Decide which of $W_1$, $W_2$, and $W_3$ is the best estimator of $\theta$. $W_3$ is the best estimator, since it has the smallest variance ($1/40 < 1/36$ and $1/40 < 1/20$).
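A Monte Carlo check (my addition, not in the text) of the three variances for $n = 3$; the value $\theta = 4$ is arbitrary.

```python
# Sketch: for n = 3 from U(theta - 1/2, theta + 1/2), the mean, median,
# and midrange are all unbiased with variances near 1/36, 1/20, and 1/40.
import numpy as np

rng = np.random.default_rng(9)
theta, reps = 4.0, 300_000
x = np.sort(rng.uniform(theta - 0.5, theta + 0.5, size=(reps, 3)), axis=1)
w1 = x.mean(axis=1)                      # sample mean
w2 = x[:, 1]                             # sample median
w3 = (x[:, 0] + x[:, 2]) / 2             # sample midrange
for w, v in [(w1, 1/36), (w2, 1/20), (w3, 1/40)]:
    print(np.mean(w), np.var(w), v)      # mean near 4; variance matches v
```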

