STATISTICAL INFERENCE

1 STATISTICAL INFERENCE
Statistical inference may be divided into two major areas:
- PARAMETER ESTIMATION
  - Point estimation
  - Interval estimation: confidence intervals for means, variances, and proportions
- HYPOTHESIS TESTING: the decision-making procedure about a hypothesis

2 POINT ESTIMATION A statistic used to estimate a population parameter θ is called a point estimator for θ and is denoted by θ̂. The numerical value assumed by this statistic when evaluated for a given sample is called a point estimate for θ. Note the difference between the terms ESTIMATOR and ESTIMATE: the estimator θ̂ is the statistic used to generate the estimate, and it is a random variable; an estimate is a number.
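To make the estimator/estimate distinction concrete, here is a minimal sketch (not from the slides; the normal population and its parameter values are assumed purely for illustration): the rule "take the sample mean" is the estimator, and the number it returns for one particular sample is the estimate.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# The estimator is a rule (here: the sample mean) applied to a random sample;
# the estimate is the number it produces for one particular sample.
def sample_mean(x):
    return np.mean(x)

sample = rng.normal(loc=10.0, scale=2.0, size=25)   # hypothetical population: N(10, 2^2)
estimate = sample_mean(sample)                      # a single number
print(f"point estimate of the mean from this sample: {estimate:.3f}")

# A different random sample gives a different estimate, which is why the
# estimator itself is treated as a random variable.
another_sample = rng.normal(loc=10.0, scale=2.0, size=25)
print(f"estimate from a second sample: {sample_mean(another_sample):.3f}")
```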

3 We want the estimator θ̂ to generate estimates that can be expected to be close in value to θ.
We would like θ̂:
- to be UNBIASED for θ
- to have a small variance for large sample sizes

In general, if X is a random variable with probability distribution f(x; θ), characterized by the unknown parameter θ, and if X1, X2, ..., Xn is a random sample of size n from X, then the statistic θ̂ = h(X1, X2, ..., Xn) is called a point estimator of θ. Note that θ̂ is a random variable, because it is a function of random variables.
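The sketch below (again an illustration with an assumed normal population, not part of the slides) draws many samples and applies the estimator X̄ to each one, showing directly that θ̂ has a distribution of its own; its mean and spread across samples are what the following slides call bias and variance.

```python
import numpy as np

rng = np.random.default_rng(seed=6)
mu, sigma, n, reps = 10.0, 2.0, 25, 100_000   # hypothetical population parameters

# Each row is one random sample; applying the estimator to every row gives
# one realization of theta_hat per sample -- its sampling distribution.
samples = rng.normal(mu, sigma, size=(reps, n))
theta_hat = samples.mean(axis=1)

print(f"mean of theta_hat over samples     : {theta_hat.mean():.3f}  (close to mu = {mu})")
print(f"std. dev. of theta_hat over samples: {theta_hat.std():.3f}  "
      f"(close to sigma/sqrt(n) = {sigma / np.sqrt(n):.3f})")
```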

4 Definition: A point estimate of some population parameter θ is a single numerical value of a statistic θ̂. Definition: The point estimator θ̂ is an unbiased estimator for the parameter θ if E(θ̂) = θ. If the estimator is not unbiased, then the difference E(θ̂) − θ is called the bias of the estimator θ̂.
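A hedged Monte Carlo sketch of the bias idea (the normal population and the variance-estimator example are assumptions, not taken from the slides): the sample variance with divisor n − 1 is unbiased for σ², while the divisor-n version has bias −σ²/n.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
mu, sigma, n, reps = 5.0, 3.0, 10, 200_000   # hypothetical normal population

# Two competing estimators of sigma^2:
#   s2_unbiased divides by (n - 1) and satisfies E(s2_unbiased) = sigma^2
#   s2_biased   divides by n and underestimates sigma^2 on average
samples = rng.normal(mu, sigma, size=(reps, n))
s2_unbiased = samples.var(axis=1, ddof=1)
s2_biased = samples.var(axis=1, ddof=0)

print(f"true sigma^2            : {sigma**2:.3f}")
print(f"mean of (n-1)-estimator : {s2_unbiased.mean():.3f}   (bias ~ 0)")
print(f"mean of n-estimator     : {s2_biased.mean():.3f}   (bias ~ {-sigma**2 / n:.3f})")
```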

5 VARIANCE AND MEAN SQUARE ERROR OF A POINT ESTIMATOR
A logical principle of estimation, when selecting among several estimators, is to choose the estimator that has minimum variance. Definition: If we consider all unbiased estimators of θ, the one with the smallest variance is called the minimum variance unbiased estimator (MVUE). Sometimes the MVUE is called the UMVUE, where the first U represents “Uniformly”, meaning “for all θ”.

6 MEAN SQUARE ERROR Definition: The mean square error of an estimator θ̂ of the parameter θ is defined as MSE(θ̂) = E[(θ̂ − θ)²]. The mean square error can be rewritten as follows: MSE(θ̂) = Var(θ̂) + [E(θ̂) − θ]² = Var(θ̂) + (bias)². The mean square error is an important criterion for comparing two estimators.
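As a quick numerical check of the decomposition MSE = Var + (bias)², the sketch below (illustrative assumptions: a normal population and a deliberately biased estimator 0.9·X̄ of μ) computes the MSE directly and from its two pieces; the two numbers should agree up to simulation noise.

```python
import numpy as np

rng = np.random.default_rng(seed=2)
mu, sigma, n, reps = 4.0, 2.0, 15, 200_000   # hypothetical normal population

samples = rng.normal(mu, sigma, size=(reps, n))
theta_hat = 0.9 * samples.mean(axis=1)       # a deliberately biased estimator of mu

mse_direct = np.mean((theta_hat - mu) ** 2)                    # E[(theta_hat - mu)^2]
decomposed = theta_hat.var() + (theta_hat.mean() - mu) ** 2    # Var + bias^2

print(f"MSE computed directly : {mse_direct:.4f}")
print(f"variance + bias^2     : {decomposed:.4f}")
```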

7 Let θ̂₁ and θ̂₂ be two estimators of the parameter θ, and let MSE(θ̂₁) and MSE(θ̂₂) be their mean square errors. Then the relative efficiency of θ̂₂ to θ̂₁ is defined as MSE(θ̂₁) / MSE(θ̂₂). If this relative efficiency is less than one, we would conclude that θ̂₁ is a more efficient estimator of θ than θ̂₂. Example: Suppose we wish to estimate the mean μ of a population. We have a random sample of n observations X1, X2, ..., Xn and we wish to compare two possible estimators for μ: the sample mean X̄ and a single observation from the sample, say Xi.

8 Note that both X̄ and Xi are unbiased estimators of μ; consequently, the MSE of each estimator is simply its variance: Var(X̄) = σ²/n and Var(Xi) = σ². Since σ²/n < σ² for sample sizes n ≥ 2, we would conclude that the sample mean is a better estimator of μ than a single observation Xi.
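A small simulation of this comparison (the population values are assumed for illustration): the empirical MSEs should land near σ²/n for X̄ and σ² for a single observation, so the ratio MSE(X̄)/MSE(Xi) ≈ 1/n is well below one.

```python
import numpy as np

rng = np.random.default_rng(seed=3)
mu, sigma, n, reps = 0.0, 1.0, 5, 200_000    # hypothetical population with known mu

samples = rng.normal(mu, sigma, size=(reps, n))
mse_mean = np.mean((samples.mean(axis=1) - mu) ** 2)   # MSE of the sample mean
mse_single = np.mean((samples[:, 0] - mu) ** 2)        # MSE of a single observation X_1

print(f"MSE of X-bar (≈ sigma^2/n = {sigma**2 / n:.3f}): {mse_mean:.3f}")
print(f"MSE of X_1   (≈ sigma^2   = {sigma**2:.3f}): {mse_single:.3f}")
print(f"relative efficiency MSE(X-bar)/MSE(X_1): {mse_mean / mse_single:.3f}")
```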

9 EXERCISES 1. Suppose we have a random sample of size 2n from a population denoted by X, with E(X) = μ and Var(X) = σ². Let θ̂₁ and θ̂₂ be two estimators of μ. Which is the better estimator of μ? Explain your choice. 2. Let X1, X2, ..., X7 denote a random sample from a population having mean μ and variance σ². Consider the following estimators of μ:

10 (a) Is either estimator unbiased?
(b) Which estimator is “best”? 3. Suppose that θ̂₁, θ̂₂, and θ̂₃ are estimators of θ. We know that [details given on the original slide]. Compare these three estimators. Which do you prefer? Why?
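The estimator formulas for exercises 1–3 are not reproduced in this transcript, so no worked solution is attempted here. The sketch below is a generic Monte Carlo harness of the kind one could use to check such exercises empirically; the two candidate estimators in it are hypothetical stand-ins, not the ones defined on the slides.

```python
import numpy as np

rng = np.random.default_rng(seed=4)

def compare_estimators(estimators, sample_draw, true_theta, reps=50_000):
    """Monte Carlo comparison: empirical bias and MSE of each estimator."""
    results = {}
    for name, est in estimators.items():
        values = np.array([est(sample_draw()) for _ in range(reps)])
        bias = values.mean() - true_theta
        mse = np.mean((values - true_theta) ** 2)
        results[name] = (bias, mse)
    return results

# Hypothetical setting: normal population with mu = 2, sigma = 1, sample size 7,
# and two illustrative estimators (NOT the ones defined on the original slide).
mu, sigma, n = 2.0, 1.0, 7
draw = lambda: rng.normal(mu, sigma, size=n)
candidates = {
    "sample mean": lambda x: x.mean(),
    "average of first two obs.": lambda x: (x[0] + x[1]) / 2,
}
for name, (bias, mse) in compare_estimators(candidates, draw, mu).items():
    print(f"{name:28s} bias ≈ {bias:+.4f}   MSE ≈ {mse:.4f}")
```

Both stand-in estimators are unbiased, so the comparison comes down to MSE, exactly as on the earlier slides.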

11 4. In a binomial experiment exactly x successes are observed in n independent trials. The following two statistics, T1 and T2, are proposed as estimators of the proportion parameter p. Determine and compare the MSE for T1 and T2 (see the sketch after exercise 5 below). 5. Let X1, X2, X3, and X4 be a random sample of size 4 from a population whose distribution is exponential with unknown parameter θ.

12 a. Which of the following statistics are unbiased estimators of θ?
b. Among the unbiased estimators of θ, determine the one with the smallest variance.
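For exercise 4, the definitions of T1 and T2 are not shown in this transcript. Purely as an illustration of how such a comparison could be run, the sketch below assumes two common proportion estimators, T1 = X/n and T2 = (X + 1)/(n + 2), and compares their empirical bias and MSE; the slide's actual formulas should be substituted to answer the exercise.

```python
import numpy as np

rng = np.random.default_rng(seed=5)
p, n, reps = 0.3, 20, 200_000    # hypothetical true proportion and trial count

x = rng.binomial(n, p, size=reps)        # number of successes in each experiment
t1 = x / n                               # hypothetical T1: the usual sample proportion
t2 = (x + 1) / (n + 2)                   # hypothetical T2: a shrinkage-style estimator

for name, t in [("T1 = X/n", t1), ("T2 = (X+1)/(n+2)", t2)]:
    bias = t.mean() - p
    mse = np.mean((t - p) ** 2)
    print(f"{name:18s} bias ≈ {bias:+.4f}   MSE ≈ {mse:.4f}")
```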

