
1 Chapter 8, continued...

2 III. Interpretation of Confidence Intervals Remember, we don’t know the population mean. We take a sample to estimate µ, then construct a confidence interval (CI) to provide some measure of accuracy for that estimate. An accurate interpretation for a 95% CI: “Before sampling, there is a 95% chance that the interval x̄ ± 1.96(σ/√n) will include µ.”

3 More interpretation. In other words, if 100 samples are taken, each of size n, on average 95 of these intervals will contain µ. Important: this statement can only be made before we sample, when x-bar is still an undetermined random variable. After we sample, x-bar is no longer a random variable, so no probability statement applies: the particular interval we computed either contains µ or it does not.

4 An example of interpretation. Suppose that the CJW company samples 100 customers and finds this month’s customer service mean is 82, with a population standard deviation of 20. We wish to construct a 95% confidence interval. Thus, α = .05 and z.025 = 1.96.

5 Before vs. After sampling Before we sample, there is a 95% chance that µ will be in the interval x̄ ± 1.96(σ/√n). After sampling we create an interval: 82 ± 3.92, or (78.08 to 85.92). We can only say that under repeated sampling, 95% of similarly constructed intervals would contain the true µ. This one particular interval may or may not contain µ.
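For readers who want to verify the arithmetic, here is a minimal Python sketch (not part of the original slides) that reproduces the CJW interval; the function name z_interval and the use of the math module are illustrative choices, not something from the text.

    import math

    def z_interval(x_bar, sigma, n, z=1.96):
        """Large-sample CI: x_bar +/- z * sigma / sqrt(n)."""
        margin = z * sigma / math.sqrt(n)
        return x_bar - margin, x_bar + margin

    lower, upper = z_interval(82, 20, 100)        # margin = 1.96 * 20/10 = 3.92
    print(f"95% CI: ({lower:.2f}, {upper:.2f})")  # prints (78.08, 85.92)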

6 IV. Interval Estimate of µ: Small Sample A small sample is one in which n < 30. If the population has a normal probability distribution, we can use the following methods. However, if you can’t assume a normal population, you must increase the sample size to n ≥ 30 so the Central Limit Theorem can be invoked.

7 A. The t-distribution William Sealy Gosset (“Student”) originated the t-distribution. An Oxford graduate in math and chemistry, he worked for Guinness Brewing in Dublin and developed a new small-sample theory of statistics while working on small-scale materials and temperature experiments. “The probable error of a mean” was published in 1908, but it wasn’t until 1925 that Sir Ronald A. Fisher called attention to it and its many applications.

8 The idea behind the t. Each specific t-distribution is associated with a different number of degrees of freedom. Degrees of freedom: the # of observations allowed to vary in calculating a statistic, which here equals n-1. As the degrees of freedom increase (n → ∞), the t-distribution gets closer to the standard normal distribution.
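A quick way to see this convergence, not shown in the slides, is to compare two-sided 95% critical values from the t-distribution with the normal value 1.96 as the degrees of freedom grow; the sketch below assumes scipy is available.

    from scipy import stats

    print(f"z.025 = {stats.norm.ppf(0.975):.3f}")   # 1.960
    for df in (5, 19, 30, 100, 1000):
        t_crit = stats.t.ppf(0.975, df)             # upper 2.5% point with df degrees of freedom
        print(f"df = {df:>4}: t.025 = {t_crit:.3f}")

The printed t values shrink toward 1.960, which is the sense in which the t-distribution approaches the standard normal.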

9 B. An Example. Suppose n = 20 and you are constructing a 99% (α = .01) confidence interval. First we need to be able to read a t-table to find t.005. See Table 8.3 in the text.

10 The t-table. [Figure: the t-distribution centered at 0, with upper-tail area α/2 to the right of tα/2.] We need to find t.005 with 19 degrees of freedom in a t-table like Table 8.3. I see where you’re going!
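If a t-table isn’t handy, the same critical value can be pulled from software; this is a small sketch assuming scipy, which the slides do not reference.

    from scipy import stats

    t_005 = stats.t.ppf(1 - 0.005, 19)         # upper 0.5% point of the t-distribution with 19 df
    print(f"t.005 with 19 df = {t_005:.3f}")   # about 2.861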

11 Our Example. How do I get back to the brewery?
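The worked numbers for this closing example are not in the transcript, so the sketch below only shows the general small-sample recipe, x̄ ± t(α/2, n-1)·s/√n, with placeholder sample values that are purely hypothetical.

    import math
    from scipy import stats

    x_bar, s, n = 75.0, 12.0, 20                 # hypothetical sample mean, sample std dev, sample size
    alpha = 0.01                                 # 99% confidence, matching slides 9-10
    t_crit = stats.t.ppf(1 - alpha / 2, n - 1)   # t.005 with 19 df, about 2.861
    margin = t_crit * s / math.sqrt(n)
    print(f"99% CI: ({x_bar - margin:.2f}, {x_bar + margin:.2f})")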

