Testing Hypotheses About Proportions


1 Testing Hypotheses About Proportions
Unit 4 Testing Hypotheses About Proportions Copyright © 2009 Pearson Education, Inc.

2 Aspects of a Hypothesis Test
Population(s); Parameter(s): symbols and words; Type of Test; Conditions; Hypotheses; Calculations/Formula/P-Value; Conclusion

3 Trials in the United States
What is it we assume (theoretically) in all trials in the United States? If the evidence is not strong enough to reject the presumption of innocence, the jury returns with a verdict of “not guilty.” The jury does not say that the defendant is innocent. All it says is that there is not enough evidence to convict, to reject innocence. The defendant may, in fact, be innocent, but the jury has no way to be sure.

4 What to Do with an “Innocent” Defendant (cont.)
Said statistically, we will fail to reject the null hypothesis. We never declare the null hypothesis to be true, because we simply do not know whether it’s true or not.

5 A Trial as a Hypothesis Test
Think about the logic of jury trials: To prove someone is guilty, we start by assuming they are innocent. We retain that hypothesis until the facts make it unlikely beyond a reasonable doubt. Then, and only then, we reject the hypothesis of innocence and declare the person guilty.

6 Hypotheses Our starting hypothesis is called the null hypothesis.
The null hypothesis, which we denote by H0, specifies a population model parameter of interest and proposes a value for that parameter. We write the null hypothesis in the form H0: parameter = hypothesized value. The alternative hypothesis, which we denote by HA, contains the values of the parameter that we consider plausible when we reject the null hypothesis.

7 Alternative Alternatives
There are three possible alternative hypotheses: HA: parameter < hypothesized value HA: parameter ≠ hypothesized value HA: parameter > hypothesized value

8 Testing Hypotheses The null hypothesis specifies a population model parameter of interest and proposes a value for that parameter. We might have, for example, H0: p = 0.20. We want to compare our data to what we would expect given that H0 is true. We can do this by finding out how many standard deviations away from the proposed value we are. We then ask how likely it would be to get results like ours if the null hypothesis were true. Your null hypothesis never has a “statistic” symbol in it.
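The comparison above can be sketched in a few lines. This uses the slide's H0: p = 0.20; the sample of n = 400 with 100 successes is a hypothetical count chosen for illustration:

```python
from math import sqrt

# Hypothesized value from H0: p = 0.20 (the slide's example);
# the sample of n = 400 with 100 successes is hypothetical.
p0 = 0.20
n, successes = 400, 100

p_hat = successes / n            # observed sample proportion: 0.25
sd = sqrt(p0 * (1 - p0) / n)     # SD of p_hat if H0 were true: 0.02
z = (p_hat - p0) / sd            # distance from p0, measured in SDs

print(z)  # about 2.5 standard deviations above the hypothesized value
```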

9 P-Values We can use the model proposed by our hypothesis to calculate the probability that the event we’ve witnessed could happen, which quantifies exactly how surprised we are to see our results. This probability is called a P-value. Definition: The P-value is the probability of observing a statistic at least that extreme, given that the null hypothesis is true.

10 Conclusion using P-Values
When the data are consistent with the model from the null hypothesis, the P-value is high and we are unable to reject the null hypothesis. We can’t claim to have proved it; instead we “fail to reject the null hypothesis.” If the P-value is low enough, we’ll “reject the null hypothesis,” since what we observed would be very unlikely were the null model true. As a general rule of thumb, unless otherwise stated, we use 5% as the cutoff for the P-value. This 5% is known as the alpha level, or level of significance.

11 The Steps of Hypothesis Testing (cont.)
Population: State where each sample of data comes from. Parameter: What the data from the sample(s) represent (p or mu); include units if quantitative. Type of Test: Name the test (several choices to be discussed). Conditions: (Same as confidence intervals). Hypotheses: Null and alternative. Calculations and Formulas: STAT Crunch. Conclusion:

12 One-Proportion z-Test
The conditions for the one-proportion z-test are the same as for the one-proportion z-interval. We test the hypothesis H0: p = p0 using the statistic z = (p̂ − p0) / SD(p̂), where SD(p̂) = sqrt(p0 q0 / n) and q0 = 1 − p0. When the conditions are met and the null hypothesis is true, this statistic follows the standard Normal model, so we can use that model to obtain a P-value.
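A minimal sketch of the whole one-proportion z-test, using Python's math module (erf) for the standard Normal CDF; the counts in the example are hypothetical, chosen for illustration:

```python
from math import erf, sqrt

def normal_cdf(z):
    """Standard Normal CDF via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def one_proportion_z_test(successes, n, p0):
    """Return (z, two-sided P-value) for H0: p = p0."""
    p_hat = successes / n
    sd = sqrt(p0 * (1 - p0) / n)            # SD(p_hat) under H0
    z = (p_hat - p0) / sd
    p_value = 2 * (1 - normal_cdf(abs(z)))  # HA: p != p0
    return z, p_value

# Hypothetical example: H0: p = 0.20, with 100 successes in 400 trials
z, p_value = one_proportion_z_test(100, 400, 0.20)
print(z, p_value)  # z about 2.5, P-value about 0.012
```

With a P-value near 0.012, below the usual 5% cutoff, we would reject the null hypothesis here.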

13 The Reasoning of Hypothesis Testing (cont.)
Mechanics Under “mechanics” we place the actual calculation of our test statistic from the data. Different tests will have different formulas and different test statistics. Usually, the mechanics are handled by a statistics program or calculator, but it’s good to know the formulas.

14 The Reasoning of Hypothesis Testing (cont.)
Mechanics The ultimate goal of the calculation is to obtain a P-value. The P-value is the probability that the observed statistic value (or an even more extreme value) could occur if the null model were correct. If the P-value is small enough, we’ll reject the null hypothesis. Note: The P-value is a conditional probability—it’s the probability that the observed results could have happened if the null hypothesis is true.

15 The Reasoning of Hypothesis Testing (cont.)
Conclusion The conclusion in a hypothesis test is always a statement about the null hypothesis. The conclusion must state either that we reject or that we fail to reject the null hypothesis. And, as always, the conclusion should be stated in context.

16 Alternative Alternatives (cont.)
HA: parameter ≠ value is known as a two-sided alternative because we are equally interested in deviations on either side of the null hypothesis value. For two-sided alternatives, the P-value is the probability of deviating in either direction from the null hypothesis value.

17 Alternative Alternatives (cont.)
The other two alternative hypotheses are called one-sided alternatives. A one-sided alternative focuses on deviations from the null hypothesis value in only one direction. Thus, the P-value for one-sided alternatives is the probability of deviating only in the direction of the alternative away from the null hypothesis value.
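The three alternatives map directly onto three tail-area calculations. A sketch (the z value is a hypothetical test statistic):

```python
from math import erf, sqrt

def normal_cdf(z):
    """Standard Normal CDF via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def p_value(z, alternative):
    """P-value of a z statistic under each alternative hypothesis."""
    if alternative == "less":       # HA: parameter < hypothesized value
        return normal_cdf(z)        # lower-tail area only
    if alternative == "greater":    # HA: parameter > hypothesized value
        return 1 - normal_cdf(z)    # upper-tail area only
    if alternative == "two-sided":  # HA: parameter != hypothesized value
        return 2 * (1 - normal_cdf(abs(z)))  # both tails
    raise ValueError(alternative)

z = 2.5  # hypothetical statistic
print(p_value(z, "greater"))    # deviation in one direction only
print(p_value(z, "two-sided"))  # twice the one-sided tail area
```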

18 P-Values and Decisions: What to Tell About a Hypothesis Test
How small should the P-value be in order for you to reject the null hypothesis? It turns out that our decision criterion is context-dependent. When we’re screening for a disease and want to be sure we treat all those who are sick, we may be willing to reject the null hypothesis of no disease with a fairly large P-value. A longstanding hypothesis, believed by many to be true, needs stronger evidence (and a correspondingly small P-value) to reject it. Another factor in choosing a P-value is the importance of the issue being tested.

19 P-Values and Decisions (cont.)
Your conclusion about any null hypothesis should be accompanied by the P-value of the test. If possible, it should also include a confidence interval for the parameter of interest. Don’t just declare the null hypothesis rejected or not rejected. Report the P-value to show the strength of the evidence against the hypothesis. This will let each reader decide whether or not to reject the null hypothesis.

20 What Can Go Wrong? (cont.)
Don’t base your null hypothesis on what you see in the data. Think about the situation you are investigating and develop your null hypothesis appropriately. Don’t base your alternative hypothesis on the data, either. Again, you need to Think about the situation.

21 What Can Go Wrong? (cont.)
Don’t make your null hypothesis what you want to show to be true. You can reject the null hypothesis, but you can never “accept” or “prove” the null. Don’t forget to check the conditions. We need randomization, independence, and a sample that is large enough to justify the use of the Normal model. If you fail to reject the null hypothesis, don’t think a bigger sample would be more likely to lead to rejection. Each sample is different, and a larger sample won’t necessarily duplicate your current observations.

22 What have we learned? We can use what we see in a random sample to test a particular hypothesis about the world. Hypothesis testing complements our use of confidence intervals. Testing a hypothesis involves proposing a model, and seeing whether the data we observe are consistent with that model or so unusual that we must reject it. We do this by finding a P-value—the probability that data like ours could have occurred if the model is correct.

23 What have we learned? (cont.)
We’ve learned the process of hypothesis testing, from developing the hypotheses to stating our conclusion in the context of the original question. We know that confidence intervals and hypothesis tests go hand in hand in helping us think about models. A hypothesis test makes a yes/no decision about the plausibility of a parameter value. A confidence interval shows us the range of plausible values for the parameter.

24 Zero In on the Null Null hypotheses have special requirements.
To perform a hypothesis test, the null must be a statement about the value of a parameter for a model. We then use this value to compute the probability that the observed sample statistic—or something even farther from the null value—will occur.

25 Zero In on the Null (cont.)
How do we choose the null hypothesis? The appropriate null arises directly from the context of the problem—it is not dictated by the data, but instead by the situation. A good way to identify both the null and alternative hypotheses is to think about the Why of the situation. To write a null hypothesis, you can’t just choose any parameter value you like. The null must relate to the question at hand—it is context dependent.

26 Zero In on the Null (cont.)
There is a temptation to state your claim as the null hypothesis. However, you cannot prove a null hypothesis true. So, it makes more sense to use what you want to show as the alternative. This way, when you reject the null, you are left with what you want to show.

27 How to Think About P-Values
A P-value is a conditional probability—the probability of the observed statistic given that the null hypothesis is true. The P-value is NOT the probability that the null hypothesis is true. It’s not even the conditional probability that the null hypothesis is true given the data. Be careful to interpret the P-value correctly.

28 What to Do with a High P-Value
When we see a small P-value, we could continue to believe the null hypothesis and conclude that we just witnessed a rare event. But instead, we trust the data and use it as evidence to reject the null hypothesis. Big P-values, however, just mean that what we observed isn’t surprising. That is, the results are in line with our assumption that the null hypothesis models the world, so we have no reason to reject it. A big P-value doesn’t prove that the null hypothesis is true, but it certainly offers no evidence that it is false. Thus, when we see a large P-value, all we can say is that we “don’t reject the null hypothesis.”

29 Alpha Levels Sometimes we need to make a firm decision about whether or not to reject the null hypothesis. When the P-value is small, it tells us that our data are rare given the null hypothesis. How rare is “rare”?

30 Alpha Levels (cont.) We can define “rare event” arbitrarily by setting a threshold for our P-value. If our P-value falls below that point, we’ll reject H0. We call such results statistically significant. The threshold is called an alpha level, denoted by α.

31 Alpha Levels (cont.) Common alpha levels are 0.10, 0.05, and 0.01.
You have the option—almost the obligation—to consider your alpha level carefully and choose an appropriate one for the situation. The alpha level is also called the significance level. When we reject the null hypothesis, we say that the test is “significant at that level.”

32 Alpha Levels (cont.) What can you say if the P-value does not fall below α? You should say that “The data have failed to provide sufficient evidence to reject the null hypothesis.” Don’t say that you “accept the null hypothesis.”

33 Alpha Levels (cont.) The P-value gives the reader far more information than just stating that you reject or fail to reject the null. In fact, by providing a P-value to the reader, you allow that person to make his or her own decisions about the test. What you consider to be statistically significant might not be the same as what someone else considers statistically significant. There is more than one alpha level that can be used, but each test will give only one P-value.

34 What Not to Say About Significance
What do we mean when we say that a test is statistically significant? All we mean is that the test statistic had a P-value lower than our alpha level. Don’t be lulled into thinking that statistical significance carries with it any sense of practical importance or impact.

35 What Not to Say About Significance (cont.)
For large samples, even small, unimportant (“insignificant”) deviations from the null hypothesis can be statistically significant. On the other hand, if the sample is not large enough, even large, financially or scientifically “significant” differences may not be statistically significant. It’s good practice to report the magnitude of the difference between the observed statistic value and the null hypothesis value (in the data units) along with the P-value on which we base statistical significance.
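The sample-size effect described above can be seen numerically. This sketch uses a hypothetical 1-percentage-point deviation from H0: p = 0.50 at two sample sizes:

```python
from math import erf, sqrt

def two_sided_p(p_hat, p0, n):
    """Two-sided P-value for a one-proportion z-test."""
    sd = sqrt(p0 * (1 - p0) / n)
    z = abs(p_hat - p0) / sd
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

# The same tiny deviation (0.51 vs. 0.50), two very different sample sizes
print(two_sided_p(0.51, 0.50, 1_000))    # not significant at 0.05
print(two_sided_p(0.51, 0.50, 100_000))  # highly significant
```

The deviation is identical in practical terms; only the sample size changes the verdict.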

36 Making Errors Here’s some shocking news for you: nobody’s perfect. Even with lots of evidence we can still make the wrong decision. When we perform a hypothesis test, we can make mistakes in two ways: The null hypothesis is true, but we mistakenly reject it. (Type I error) The null hypothesis is false, but we fail to reject it. (Type II error)
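A quick simulation sketch of the Type I error: when H0 is actually true, a test at α = 0.05 still rejects in roughly 5% of samples (the population, sample size, and seed here are arbitrary choices for illustration):

```python
import random
from math import sqrt

random.seed(1)
p0, n, z_star = 0.5, 100, 1.96   # H0 is true: the real p equals p0
trials, rejections = 2000, 0
for _ in range(trials):
    successes = sum(random.random() < p0 for _ in range(n))
    p_hat = successes / n
    z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
    if abs(z) > z_star:          # two-sided test at alpha = 0.05
        rejections += 1          # Type I error: H0 true but rejected

print(rejections / trials)       # rejection rate near 0.05
```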

37 Making Errors (cont.) Which type of error is more serious depends on the situation at hand. In other words, the gravity of the error is context dependent. Here’s an illustration of the four situations in a hypothesis test:

                     H0 is true          H0 is false
Reject H0            Type I error        Correct decision
Fail to reject H0    Correct decision    Type II error

38 Comparing Two Proportions
Chapter 22 Comparing Two Proportions

39 Comparing Two Proportions
Comparisons between two percentages are much more common than questions about isolated percentages. And they are more interesting. We often want to know how two groups differ, whether a treatment is better than a placebo control, or whether this year’s results are better than last year’s.

40 Another Ruler In order to examine the difference between two proportions, we need another ruler—the standard deviation of the sampling distribution model for the difference between two proportions. Recall that standard deviations don’t add, but variances do. In fact, the variance of the sum or difference of two independent random variables is the sum of their individual variances.

41 The Standard Deviation of the Difference Between Two Proportions
Proportions observed in independent random samples are independent. Thus, we can add their variances. So… the standard deviation of the difference between two sample proportions is SD(p̂1 − p̂2) = sqrt(p1 q1 / n1 + p2 q2 / n2). Thus, the standard error (using the observed proportions) is SE(p̂1 − p̂2) = sqrt(p̂1 q̂1 / n1 + p̂2 q̂2 / n2).
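Sketching that calculation with hypothetical counts (the add-the-variances step is what makes this work):

```python
from math import sqrt

def se_diff(p1_hat, n1, p2_hat, n2):
    """Standard error of p1_hat - p2_hat: for independent samples
    the variances add, then we take the square root."""
    var1 = p1_hat * (1 - p1_hat) / n1
    var2 = p2_hat * (1 - p2_hat) / n2
    return sqrt(var1 + var2)

# Hypothetical samples: 50/100 in group 1, 40/100 in group 2
print(se_diff(0.50, 100, 0.40, 100))  # sqrt(0.0025 + 0.0024), about 0.07
```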

42 Assumptions and Conditions
Independence Assumptions: Randomization Condition: The data in each group should be drawn independently and at random from a homogeneous population or generated by a randomized comparative experiment. The 10% Condition: If the data are sampled without replacement, the sample should not exceed 10% of the population. Independent Groups Assumption: The two groups we’re comparing must be independent of each other.

43 Assumptions and Conditions (cont.)
Sample Size Condition: Each of the groups must be big enough… Success/Failure Condition: Both groups are big enough that at least 10 successes and at least 10 failures have been observed in each.
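The countable conditions above can be sketched as a checklist function. The randomization and independent-groups assumptions can't be checked from counts alone, so this handles only the Success/Failure and 10% Conditions (the example counts and population sizes are hypothetical):

```python
def conditions_met(successes1, n1, successes2, n2, pop1, pop2):
    """Check the countable conditions for comparing two proportions.
    (Randomization and independence must be judged from the design.)"""
    big_enough = all(x >= 10 for x in (
        successes1, n1 - successes1,   # successes/failures, group 1
        successes2, n2 - successes2))  # successes/failures, group 2
    ten_percent = n1 <= 0.10 * pop1 and n2 <= 0.10 * pop2
    return big_enough and ten_percent

# Hypothetical groups drawn from large populations
print(conditions_met(50, 100, 40, 100, 10_000, 10_000))  # True
print(conditions_met(5, 100, 40, 100, 10_000, 10_000))   # False: only 5 successes
```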

44 The Sampling Distribution
We already know that for large enough samples, each of our proportions has an approximately Normal sampling distribution. The same is true of their difference.

45 The Sampling Distribution (cont.)
Provided that the sampled values are independent, the samples are independent, and the sample sizes are large enough, the sampling distribution of p̂1 − p̂2 is modeled by a Normal model with Mean: p1 − p2 and Standard deviation: SD(p̂1 − p̂2) = sqrt(p1 q1 / n1 + p2 q2 / n2).

46 Two-Proportion z-Interval
When the conditions are met, we are ready to find the confidence interval for the difference of two proportions: The confidence interval is (p̂1 − p̂2) ± z* × SE(p̂1 − p̂2), where SE(p̂1 − p̂2) = sqrt(p̂1 q̂1 / n1 + p̂2 q̂2 / n2). The critical value z* depends on the particular confidence level, C, that you specify.
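A sketch of the interval calculation, with hypothetical counts and the usual z* = 1.96 for 95% confidence:

```python
from math import sqrt

def two_proportion_interval(x1, n1, x2, n2, z_star=1.96):
    """Confidence interval for p1 - p2 (default z* = 1.96, i.e. 95%)."""
    p1, p2 = x1 / n1, x2 / n2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    diff = p1 - p2
    return diff - z_star * se, diff + z_star * se

# Hypothetical counts: 50/100 vs. 40/100
low, high = two_proportion_interval(50, 100, 40, 100)
print(low, high)  # interval straddles 0, so 0 is a plausible difference
```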

47 Everyone into the Pool The typical hypothesis test for the difference in two proportions is the one of no difference. In symbols, H0: p1 – p2 = 0. Since we are hypothesizing that there is no difference between the two proportions, that means that the standard deviations for each proportion are the same. Since this is the case, we combine (pool) the counts to get one overall proportion: p̂pooled = (Success1 + Success2) / (n1 + n2).
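Putting the pooling step together with the rest of the test (a sketch with hypothetical counts; the Normal CDF again comes from math.erf):

```python
from math import erf, sqrt

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided test of H0: p1 - p2 = 0, pooling the counts."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)     # one overall proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: 50/100 vs. 40/100
z, p_value = two_proportion_z_test(50, 100, 40, 100)
print(z, p_value)  # z about 1.42; P-value about 0.155, not significant at 0.05
```

Note that the pooled proportion is used only in the test's standard error; the confidence interval earlier keeps the two sample proportions separate.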

48 What Can Go Wrong? Don’t use two-sample proportion methods when the samples aren’t independent. These methods give wrong answers when the independence assumption is violated. Don’t apply inference methods when there was no randomization. Our data must come from representative random samples or from a properly randomized experiment. Don’t interpret a significant difference in proportions causally. Be careful not to jump to conclusions about causality.

49 What have we learned? We’ve now looked at inference for the difference in two proportions. Perhaps the most important thing to remember is that the concepts and interpretations are essentially the same—only the mechanics have changed slightly.

50 What have we learned? Hypothesis tests and confidence intervals for the difference in two proportions are based on Normal models. Both require us to find the standard error of the difference in two proportions. We do that by adding the variances of the two sample proportions, assuming our two groups are independent. When we test a hypothesis that the two proportions are equal, we pool the sample data; for confidence intervals we don’t pool.

