Virtual University of Pakistan Lecture No. 33 of the course on Statistics and Probability by Miss Saleha Naghmi Habibullah
IN THE LAST LECTURE, YOU LEARNT
TOPICS FOR TODAY: Sampling Distribution of X̄₁ − X̄₂ (continued); Sampling Distribution of p̂₁ − p̂₂; Point Estimation; Desirable Qualities of a Good Point Estimator: Unbiasedness, Consistency
We illustrate the real-life application of the sampling distribution of X̄₁ − X̄₂ with the help of the following example:
EXAMPLE Car batteries produced by company A have a mean life of 4.3 years with a standard deviation of 0.6 years. A similar battery produced by company B has a mean life of 4.0 years and a standard deviation of 0.4 years.
What is the probability that a random sample of 49 batteries from company A will have a mean life of at least 0.5 years more than the mean life of a sample of 36 batteries from company B?
SOLUTION We are given the following data: Population A: μ₁ = 4.3 years, σ₁ = 0.6 years, sample size n₁ = 49; Population B: μ₂ = 4.0 years, σ₂ = 0.4 years, sample size n₂ = 36.
Both sample sizes (n₁ = 49, n₂ = 36) are large enough to assume that the sampling distribution of the differences X̄₁ − X̄₂ is approximately normal, with
Mean: μ(X̄₁ − X̄₂) = μ₁ − μ₂ = 4.3 − 4.0 = 0.3 years, and
Standard deviation: σ(X̄₁ − X̄₂) = √(σ₁²/n₁ + σ₂²/n₂) = √(0.36/49 + 0.16/36) ≈ 0.109 years.
Thus the variable Z = [(X̄₁ − X̄₂) − (μ₁ − μ₂)] / √(σ₁²/n₁ + σ₂²/n₂) is approximately N(0, 1).
We are required to find the probability that the mean life of a sample of 49 batteries produced by company A will be at least 0.5 years longer than the mean life of a sample of 36 batteries produced by company B, i.e. P(X̄₁ − X̄₂ ≥ 0.5).
Converting 0.5 to a z-value, we find that:
z = (0.5 − 0.3) / 0.109 = 1.84
[Figure: the normal curve of X̄₁ − X̄₂ centred at 0.3, with the value 0.5 lying at z = 1.84.]
Hence, using the table of areas under the normal curve, we find:
P(X̄₁ − X̄₂ ≥ 0.5) = P(Z ≥ 1.84) = 0.5 − 0.4671 = 0.0329 ≈ 0.033.
In other words, (given that the real difference between the mean lifetimes of batteries of company A and batteries of company B is 4.3 - 4.0 = 0.3 years), the probability that a sample of 49 batteries produced by company A will have a mean life of at least 0.5 years longer than the mean life of a sample of 36 batteries produced by company B, is only 3.3%.
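For students who wish to verify the arithmetic on a computer, the following is a minimal sketch in Python (assuming the SciPy library is available; the variable names are illustrative and not part of the lecture):

from math import sqrt
from scipy.stats import norm

# Population parameters and sample sizes for Company A and Company B
mu1, sigma1, n1 = 4.3, 0.6, 49
mu2, sigma2, n2 = 4.0, 0.4, 36

# Mean and standard error of the sampling distribution of X1bar - X2bar
mean_diff = mu1 - mu2                              # 0.3 years
se_diff = sqrt(sigma1**2 / n1 + sigma2**2 / n2)    # about 0.109 years

# P(X1bar - X2bar >= 0.5) using the standard normal approximation
z = (0.5 - mean_diff) / se_diff                    # about 1.84
prob = 1 - norm.cdf(z)                             # about 0.033
print(round(z, 2), round(prob, 3))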
Next, we consider the Sampling Distribution of the Differences between Proportions.
Suppose there are two binomial populations with proportions of successes p1 and p2 respectively.
We illustrate the sampling distribution of p̂₁ − p̂₂ with the help of the following example:
EXAMPLE: It is claimed that 30% of the households in Community A and 20% of the households in Community B have at least one teenager. A simple random sample of 100 households from each community yields an observed difference between the sample proportions of p̂₁ − p̂₂ = 0.21.
What is the probability of observing a difference this large or larger if the claims are true?
In order to convert p̂₁ − p̂₂ to Z, we need the values of μ(p̂₁ − p̂₂) as well as σ(p̂₁ − p̂₂). It can be mathematically proved that:
μ(p̂₁ − p̂₂) = p₁ − p₂ = 0.30 − 0.20 = 0.10, and
σ(p̂₁ − p̂₂) = √(p₁q₁/n₁ + p₂q₂/n₂) = √((0.3)(0.7)/100 + (0.2)(0.8)/100) = √0.0037 ≈ 0.06.
Hence, for the observed difference of 0.21, z = (0.21 − 0.10) / 0.06 = 1.83.
By consulting the Area Table of the standard normal distribution, we find that the area between z = 0 and z = 1.83 is 0.4664. Hence, the area to the right of z = 1.83 is 0.0336. This probability is shown in the following figure:
[Figure: the normal curve of p̂₁ − p̂₂ centred at 0.10, with the observed value 0.21 at z = 1.83; the area between z = 0 and z = 1.83 is 0.4664, and the area to the right is 0.0336.]
Thus, if the claims are true, the probability of observing a difference as large as or larger than the one actually observed is only 0.0336, i.e. 3.36%.
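A similar Python sketch (again assuming SciPy, and taking the observed difference of 0.21 from the figure above) reproduces this probability:

from math import sqrt
from scipy.stats import norm

# Claimed population proportions and sample sizes
p1, p2, n1, n2 = 0.30, 0.20, 100, 100
observed_diff = 0.21                                       # observed p1hat - p2hat

# Mean and standard error of the sampling distribution of p1hat - p2hat
mean_diff = p1 - p2                                        # 0.10
se_diff = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)    # about 0.0608 (rounded to 0.06 in the lecture)

# P(p1hat - p2hat >= 0.21) under the claimed proportions
z = (observed_diff - mean_diff) / se_diff                  # about 1.81 (1.83 with the rounded standard error)
prob = 1 - norm.cdf(z)                                     # about 0.035 (0.0336 in the lecture, due to rounding)
print(round(z, 2), round(prob, 4))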
The students are encouraged to try to interpret this result with reference to the situation at hand, as, in attempting to solve a statistical problem, it is very important not just to apply various formulae and obtain numerical results, but to interpret the results with reference to the problem under consideration.
Does the result indicate that at least one of the two claims is untrue, or does it imply something else?
Before we close the basic discussion regarding sampling distributions, we would like to draw the students’ attention to the following two important points:
1) We have discussed various sampling distributions with reference to the simplest technique of random sampling, i.e. simple random sampling. With reference to simple random sampling, it should be kept in mind that this technique is appropriate when the population is homogeneous.
2) Let us consider the reason why the standard deviation of the sampling distribution of any statistic is known as its standard error:
To answer this question, consider the fact that any statistic, considered as an estimate of the corresponding population parameter, should be as close in magnitude to the parameter as possible.
The difference between the value of the statistic and the value of the parameter can be regarded as an error --- and is called ‘sampling error’.
Geometrically, each one of these errors can be represented by a horizontal line segment below the X-axis, as shown below:
[Figure: Sampling Distribution of X̄, with the sampling errors shown as horizontal line segments below the X-axis.]
The above diagram clearly indicates that there are various magnitudes of this error, depending on how far or how close the values of our statistic are in different samples.
The standard deviation of X̄ gives us a ‘standard’ value of this error, and hence the term ‘Standard Error’.
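To make the idea of the standard error concrete, the following purely illustrative simulation (assuming NumPy, with made-up population values) compares the standard deviation of many sample means with σ/√n:

import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 50.0, 10.0, 25      # hypothetical population mean, standard deviation, sample size

# Draw 10,000 samples of size n and compute each sample mean
sample_means = rng.normal(mu, sigma, size=(10_000, n)).mean(axis=1)

# The standard deviation of the sample means (the standard error)
# should be close to sigma / sqrt(n) = 2.0
print(sample_means.std())          # approximately 2.0
print(sigma / np.sqrt(n))          # exactly 2.0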
Having presented the basic ideas regarding sampling distributions, we now begin the discussion regarding POINT ESTIMATION:
POINT ESTIMATION Point estimation of a population parameter provides as an estimate a single value calculated from the sample that is likely to be close in magnitude to the unknown parameter.
The difference between ‘Estimate’ and ‘Estimator’: An estimate is a numerical value of the unknown parameter obtained by applying a rule or a formula, called an estimator, to a sample X1, X2, …, Xn of size n, taken from a population.
In other words, an estimator stands for the rule or method that is used to estimate a parameter whereas an estimate stands for the numerical value obtained by substituting the sample observations in the rule or the formula.
For instance, the sample mean X̄ = (X₁ + X₂ + … + Xₙ)/n is an estimator of the population mean μ, whereas the particular numerical value of X̄ computed from an observed sample is an estimate of μ.
It is important to note that an estimator is always a statistic which is a function of the sample observations and hence is a random variable as the sample observations are likely to vary from sample to sample. In other words:
In repeated sampling, an estimator is a random variable, and has a probability distribution, which is known as its sampling distribution.
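The distinction, and the fact that an estimator varies from sample to sample, can be illustrated with a short Python sketch (the numbers here are hypothetical):

import numpy as np

def sample_mean(x):
    # The ESTIMATOR: a rule that can be applied to any sample
    return sum(x) / len(x)

rng = np.random.default_rng(1)
mu = 4.0                                   # hypothetical (and in practice unknown) population mean

# Two different random samples, substituted into the same rule,
# give two different numerical ESTIMATES of mu
sample1 = rng.normal(mu, 0.5, size=10)
sample2 = rng.normal(mu, 0.5, size=10)
print(sample_mean(sample1), sample_mean(sample2))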
Having presented the basic definition of a point estimator, we now consider some desirable qualities of a good point estimator:
In this regard, the point to be understood is that a point estimator is considered a good estimator if it satisfies various criteria. Three of these criteria are:
DESIRABLE QUALITIES OF A GOOD POINT ESTIMATOR: unbiasedness, consistency, efficiency
The concept of unbiasedness is explained below:
Let us illustrate the concept of unbiasedness by considering the example of the annual Ministry of Transport test that was presented in the last lecture:
EXAMPLE Let us examine the case of an annual Ministry of Transport test to which all cars, irrespective of age, have to be submitted. The test looks for faulty brakes, steering, lights and suspension, and it is discovered after the first year that approximately the same number of cars have 0, 1, 2, 3, or 4 faults.
The above situation is equivalent to the following:
If we let X denote the number of faults in a car, then X can take the values 0, 1, 2, 3, and 4, and the probability of each of these X values is 1/5.
Hence, we have the following probability distribution:
X:     0     1     2     3     4
P(X):  1/5   1/5   1/5   1/5   1/5
MEAN OF THE POPULATION DISTRIBUTION: μ = Σ X P(X) = (0 + 1 + 2 + 3 + 4)(1/5) = 2.
We are interested in considering the results that would be obtained if a sample of only two cars is tested.
The students will recall that we obtained 5² = 25 different possible samples, and, computing the mean of each possible sample, we obtained the following sampling distribution of X̄:
We computed the mean of this sampling distribution, and found that the mean of the sample means, i.e. E(X̄), comes out to be equal to 2 --- exactly the same as the mean of the population!
We find that E(X̄) = μ, i.e. the mean of the sampling distribution of X̄ is equal to the population mean.
By virtue of this property, we say that the sample mean X̄ is an UNBIASED estimator of the population mean μ.
It should be noted that this property, E(X̄) = μ, always holds, regardless of the sample size.
Visual Representation of the Concept of Unbiasedness: E(X̄) = μ implies that the distribution of X̄ is centred at μ.
What this means is that, although many of the individual sample means are either under-estimates or over-estimates of the true population mean, in the long run, the over-estimates balance the under-estimates so that the mean value of the sample means comes out to be equal to the population mean.
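For the Ministry of Transport example, this balancing can be checked directly by enumerating all 25 possible samples of size 2 (a short illustrative sketch in Python):

from itertools import product

population = [0, 1, 2, 3, 4]                      # number of faults, each with probability 1/5
samples = list(product(population, repeat=2))     # all 5**2 = 25 equally likely samples of size 2

sample_means = [(a + b) / 2 for a, b in samples]
print(sum(sample_means) / len(sample_means))      # 2.0 -- equal to the population mean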
Let us now consider some other estimators which possess the desirable property of being unbiased:
The sample median is also an unbiased estimator of μ when the population is normally distributed (i.e. if X is normally distributed, then the expected value of the sample median is equal to μ).
Also, as far as p̂, the proportion of successes in the sample, is concerned, we have E(p̂) = p.
Considering the binomial random variable X (which denotes the number of successes in n trials), we have E(X) = np, so that E(p̂) = E(X/n) = E(X)/n = np/n = p. Hence, the sample proportion p̂ is an unbiased estimator of the population parameter p.
But, as far as the sample variance S² = Σ(Xᵢ − X̄)²/n is concerned, it can be mathematically proved that E(S²) = ((n − 1)/n) σ² ≠ σ². Hence, the sample variance S² is a biased estimator of σ².
Since unbiasedness is a desirable quality, we would like the sample variance to be an unbiased estimator of σ². In order to achieve this end, the formula of the sample variance is modified as follows:
Modified formula for the sample variance: s² = Σ(Xᵢ − X̄)²/(n − 1). Since E(s²) = σ², s² is an unbiased estimator of σ².
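Continuing the same enumeration of the 25 samples of size 2, the following sketch illustrates numerically why the divisor n − 1 is used: dividing by n underestimates σ² on average, whereas dividing by n − 1 does not:

from itertools import product

population = [0, 1, 2, 3, 4]
mu = sum(population) / len(population)                                  # 2.0
sigma2 = sum((x - mu) ** 2 for x in population) / len(population)      # 2.0

n = 2
samples = list(product(population, repeat=n))     # all 25 equally likely samples of size 2

def variance(xs, divisor):
    xbar = sum(xs) / len(xs)
    return sum((x - xbar) ** 2 for x in xs) / divisor

avg_S2 = sum(variance(s, n) for s in samples) / len(samples)        # divisor n     -> 1.0 (biased)
avg_s2 = sum(variance(s, n - 1) for s in samples) / len(samples)    # divisor n - 1 -> 2.0 (unbiased)
print(sigma2, avg_S2, avg_s2)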
Why is unbiasedness considered a desirable property of an estimator? In order to obtain an answer to this question, consider the following:
With reference to the estimation of the population mean μ, we note that, in an actual study, the probability is very high that the mean of our sample, i.e. X̄, will either be less than or more than μ. Hence, in an actual study, we can never guarantee that our X̄ will coincide with μ.
Unbiasedness implies that, although in an actual study we cannot guarantee that our sample mean will coincide with μ, our estimation procedure (i.e. formula) is such that, in repeated sampling, the average value of our statistic will be equal to μ.
The next desirable quality of a good point estimator is consistency: an estimator is said to be consistent if, as the sample size n increases indefinitely, the estimator converges in probability to the parameter being estimated (i.e. the probability that the estimator lies within any fixed small distance of the parameter approaches 1).
It should be noted that consistency is a large-sample property.
Another point to be noted is that a consistent estimator may or may not be unbiased.
The sample proportion p̂ is also a consistent estimator of the parameter p of a population that has a binomial distribution.
The median is not a consistent estimator of μ when the population has a skewed distribution.
The sample variance S², though a biased estimator, is a consistent estimator of the population variance σ².
Generally speaking, it can be proved that a statistic whose STANDARD ERROR decreases to zero as the sample size increases (and whose bias, if any, also vanishes) will be consistent.
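As an illustration, the standard error of the sample mean is σ/√n, which shrinks as n grows; the following sketch (using the fault-count population above, where μ = 2 and σ² = 2) shows the sample mean settling down near μ as the sample size increases:

import numpy as np

rng = np.random.default_rng(2)
population = [0, 1, 2, 3, 4]        # fault-count population: mu = 2, sigma squared = 2

for n in (10, 100, 1_000, 10_000):
    sample = rng.choice(population, size=n)           # simple random sample (with replacement)
    se = np.sqrt(2 / n)                               # standard error of the sample mean
    print(n, round(se, 4), round(sample.mean(), 3))   # the standard error shrinks and the sample mean approaches 2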
IN TODAY’S LECTURE, YOU LEARNT: Sampling Distribution of X̄₁ − X̄₂ (continued); Sampling Distribution of p̂₁ − p̂₂; Point Estimation; Desirable Qualities of a Good Point Estimator: Unbiasedness, Consistency
IN THE NEXT LECTURE, YOU WILL LEARN: Desirable Qualities of a Good Point Estimator: Efficiency; Methods of Point Estimation: The Method of Moments, The Method of Least Squares, The Method of Maximum Likelihood; Interval Estimation: Confidence Interval for μ