## Presentation transcript: Chapter 21 slides on P-values, alpha levels, errors, and power (Copyright © 2010 Pearson Education, Inc.)

Slide 21 - 4: Null Hypothesis. To perform a hypothesis test, the null must be a statement about the value of a parameter for a model. We then use this value to compute the probability that the observed sample statistic, or something even farther from the null value, might occur. You cannot prove a null hypothesis true; use what you want to show as the alternative.

Slide 21 - 5: Example. The diabetes drug Avandia was approved to treat Type 2 diabetes in 1999. But in 2007 an article in the New England Journal of Medicine (NEJM) raised concerns that the drug might carry an increased risk of heart attack. This study combined results from a number of separate studies to obtain an overall sample of 4485 diabetes patients taking Avandia. People with Type 2 diabetes are known to have about a 20.2% chance of suffering a heart attack within a seven-year period. According to the article's author, the risk found in the NEJM study was equivalent to a 28.9% chance of heart attack over 7 years. The FDA is the government agency responsible for relabeling Avandia to warn of the risk if the drug is judged to be unsafe. Although the statistical methods the FDA used are more sophisticated, we can get an idea of their reasoning with the tools we have learned. What null hypothesis and alternative hypothesis about 7-year heart attack risk would you test? Explain.

Slide 21 - 6: How to Think About P-Values. A P-value is a conditional probability: P(observed statistic, or something more extreme, given that the null hypothesis is true). The P-value is NOT the probability that the null hypothesis is true. It's not even the conditional probability that the null hypothesis is true given the data.
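To make the definition concrete, here is a minimal sketch of an upper-tail one-proportion z-test in Python. The function name and the data are illustrative assumptions, not the NEJM analysis: it computes the probability of a sample proportion this far (or farther) above the null value, assuming the null hypothesis is true.

```python
from math import sqrt, erfc

def one_prop_z_test(successes, n, p0):
    """Upper-tail one-proportion z-test.

    Returns z and the one-sided P-value: the probability of a sample
    proportion at least this large, computed as if H0 (p = p0) were true.
    """
    p_hat = successes / n
    se0 = sqrt(p0 * (1 - p0) / n)      # SD of p_hat when H0 is true
    z = (p_hat - p0) / se0
    p_value = 0.5 * erfc(z / sqrt(2))  # upper-tail standard normal area
    return z, p_value

# Hypothetical numbers (not the NEJM data): 117 heart attacks among
# 500 patients, tested against a baseline risk of 20%.
z, p = one_prop_z_test(117, 500, 0.20)
print(f"z = {z:.2f}, one-sided P-value = {p:.3f}")  # z = 1.90, P ~ 0.029
```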

Slide 21 - 7: Example. An NEJM paper reported that the 7-year risk of heart attack in diabetes patients taking the drug Avandia was increased from the baseline of 20.2% to an estimated 28.9%, and said the P-value was 0.03. How should the P-value be interpreted in this context?

Slide 21 - 8: What to Do with a High P-Value. When we see a small P-value, we could continue to believe the null hypothesis and conclude that we just witnessed a rare event. But instead, we trust the data and use it as evidence to reject the null hypothesis. Big P-values, however, just mean that what we observed isn't surprising. That is, the results are in line with our assumption that the null hypothesis models the world, so we have no reason to reject it. A big P-value doesn't prove that the null hypothesis is true, but it certainly offers no evidence that it is not true. Thus, when we see a large P-value, all we can say is that we don't reject the null hypothesis.

Slide 21 - 9: Example. The question of whether Avandia increased the risk of heart attack was raised by a study in the NEJM. This study estimated the seven-year risk of heart attack to be 28.9% and reported a P-value of 0.03 for a test of whether this risk was higher than the baseline seven-year risk of 20.2%. An earlier study had estimated the 7-year risk to be 26.9% and reported a P-value of 0.27. Why did the researchers in the earlier study not express alarm about the increased risk they had seen?

Slide 21 - 10: Alpha Levels. Sometimes we need to make a firm decision about whether or not to reject the null hypothesis. When the P-value is small, it tells us that our data are rare given the null hypothesis. We can define "rare event" arbitrarily by setting a threshold for our P-value. If our P-value falls below that point, we'll reject H0. We call such results statistically significant. The threshold is called an alpha level, denoted by α.

Slide 21 - 11: Alpha Levels (cont.). Common alpha levels (alpha is also called the significance level) are 0.10, 0.05, and 0.01. Consider your alpha level carefully and choose one appropriate for the situation. When we reject the null hypothesis, we say that the test is significant at that level. What can you say if the P-value does not fall below α? You should say that the data have failed to provide sufficient evidence to reject the null hypothesis. Don't say that you "accept" the null hypothesis. In a jury trial, if we do not find the defendant guilty, we say the defendant is "not guilty"; we don't say that the defendant is innocent.
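A decision rule at a fixed alpha level is easy to express in code. This is a small illustrative helper (the function name and wording are mine, not from the slides) showing how the conclusion should be phrased either way:

```python
def decision(p_value, alpha=0.05):
    """Phrase the conclusion of a test the way the slides recommend."""
    if p_value < alpha:
        return f"Reject H0: statistically significant at alpha = {alpha}."
    return ("Fail to reject H0: the data have failed to provide "
            "sufficient evidence to reject the null hypothesis.")

print(decision(0.03))  # significant at alpha = 0.05 (the 2007 NEJM study)
print(decision(0.27))  # fail to reject (the earlier study)
```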

Slide 21 - 12: Alpha Levels (cont.). The P-value gives the reader far more information than just stating that you reject or fail to reject the null. In fact, by providing a P-value to the reader, you allow that person to make his or her own decisions about the test. What you consider to be statistically significant might not be the same as what someone else considers statistically significant. There is more than one alpha level that can be used, but each test will give only one P-value.

Slide 21 - 13: Significant vs. Important. What do we mean when we say that a test is statistically significant? All we mean is that the test statistic had a P-value lower than our alpha level. Don't be lulled into thinking that statistical significance carries with it any sense of practical importance or impact.

Slide 21 - 14: Significant vs. Important (cont.). For large samples, even small, practically unimportant deviations from the null hypothesis can be statistically significant. On the other hand, if the sample is not large enough, even large, financially or scientifically important differences may not be statistically significant. It's good practice to report the magnitude of the difference between the observed statistic value and the null hypothesis value (in the data units) along with the P-value on which we base statistical significance.

Slide 21 - 15: Confidence Intervals and Hypothesis Tests. Confidence intervals and hypothesis tests are built from the same calculations. They have the same assumptions and conditions. You can approximate a hypothesis test by examining a confidence interval: just ask whether the null hypothesis value is consistent with a confidence interval for the parameter at the corresponding confidence level.

Slide 21 - 16: Confidence Intervals and Hypothesis Tests (cont.). Because confidence intervals are two-sided, they correspond to two-sided tests. In general, a confidence interval with a confidence level of C% corresponds to a two-sided hypothesis test with an α-level of (100 − C)%. The relationship between confidence intervals and one-sided hypothesis tests is a little more complicated: a confidence interval with a confidence level of C% corresponds to a one-sided hypothesis test with an α-level of ½(100 − C)%.
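The correspondence is easy to check numerically. Below is a minimal sketch (illustrative data; function names are my own): a standard 95% confidence interval for a proportion next to the two-sided z-test P-value. For proportions the match is only approximate, because the interval's standard error uses p̂ while the test's uses p0, which is why slide 21-15 says "approximate".

```python
from math import sqrt, erfc

def wald_ci_95(successes, n, z_star=1.96):
    """Standard 95% confidence interval for a proportion."""
    p_hat = successes / n
    se = sqrt(p_hat * (1 - p_hat) / n)   # SE based on p_hat
    return p_hat - z_star * se, p_hat + z_star * se

def two_sided_p_value(successes, n, p0):
    """Two-sided one-proportion z-test P-value."""
    p_hat = successes / n
    se0 = sqrt(p0 * (1 - p0) / n)        # SE based on p0, as H0 assumes
    z = abs(p_hat - p0) / se0
    return erfc(z / sqrt(2))             # area in both tails

# Illustrative data: 117 successes in 500 trials, H0: p = 0.20.
lo, hi = wald_ci_95(117, 500)
print(f"95% CI: ({lo:.3f}, {hi:.3f})")                      # barely contains 0.20
print(f"P-value: {two_sided_p_value(117, 500, 0.20):.3f}")  # barely above 0.05
```

Here 0.20 sits just inside the 95% interval and the two-sided P-value comes out just above 0.05, which is exactly the correspondence the slide describes.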

Slide 21 - 17: Example. The baseline 7-year risk of heart attack for diabetics is 20.2%. In 2007 an NEJM study reported a 95% confidence interval equivalent to 20.8% to 40.0% for the risk among patients taking Avandia. What did this confidence interval suggest to the FDA about the safety of the drug?

Slide 21 - 18: *A 95% Confidence Interval for Small Samples. When the Success/Failure Condition fails, all is not lost. Add four phony observations: two successes and two failures. So instead of p̂ = y/n, we use the adjusted proportion p̃ = (y + 2)/(n + 4).

Slide 21 - 19: *A Better Confidence Interval for Proportions (cont.). Now the adjusted interval is p̃ ± z* sqrt(p̃(1 − p̃)/(n + 4)). The adjusted form gives better performance overall and works much better for proportions near 0 or 1. It has the additional advantage that we no longer need to check the Success/Failure Condition.

Slide 21 - 20: Example. Surgeons compared two methods for a surgical procedure used to alleviate pain on the outside of the wrist. A new method was compared with the traditional freehand procedure. Of 45 operations using the freehand method, three were unsuccessful, for a failure rate of 6.7%. With only 3 failures, the data don't satisfy the Success/Failure Condition, so we can't use a standard confidence interval. What is the confidence interval using the plus-four method?
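Here is a minimal sketch of the plus-four computation applied to the wrist data (the helper name is mine; the method is the one from slides 21-18 and 21-19):

```python
from math import sqrt

def plus_four_ci(successes, n, z_star=1.96):
    """Plus-four 95% CI: add two phony successes and two phony failures."""
    p_tilde = (successes + 2) / (n + 4)
    se = sqrt(p_tilde * (1 - p_tilde) / (n + 4))
    return p_tilde - z_star * se, p_tilde + z_star * se

# Wrist-surgery example: 3 failures in 45 freehand operations.
lo, hi = plus_four_ci(3, 45)
print(f"Adjusted interval for the failure rate: ({lo:.1%}, {hi:.1%})")
# Roughly (1.7%, 18.7%): p_tilde = 5/49, about 10.2%, with a wide margin.
```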

Slide 21 - 21: Making Errors. When we perform a hypothesis test, we can make mistakes in two ways: I. The null hypothesis is true, but we mistakenly reject it (Type I error). II. The null hypothesis is false, but we fail to reject it (Type II error).

Slide 21 - 22: Making Errors (cont.). Which type of error is more serious depends on the situation at hand. In other words, the gravity of the error is context dependent. Here's an illustration of the four situations in a hypothesis test:

| | H0 true | H0 false |
| --- | --- | --- |
| Reject H0 | Type I error | correct decision |
| Fail to reject H0 | correct decision | Type II error |

Slide 21 - 23: Making Errors (cont.). How often will a Type I error occur? Since a Type I error is rejecting a true null hypothesis, the probability of a Type I error is our α level. When H0 is false and we reject it, we have done the right thing. A test's ability to detect a false null hypothesis is called the power of the test.

Slide 21 - 24: Making Errors (cont.). When H0 is false and we fail to reject it, we have made a Type II error. We assign the letter β to the probability of this mistake. It's harder to assess the value of β because we don't know what the value of the parameter really is. There is no single value for β; we can think of a whole collection of βs, one for each incorrect parameter value.

Slide 21 - 25: Making Errors (cont.). One way to focus our attention on a particular β is to think about the effect size. Ask, "How big a difference would matter?" We could reduce β for all alternative parameter values by increasing α. This would reduce β but increase the chance of a Type I error. This tension between Type I and Type II errors is inevitable. The only way to reduce both types of errors is to collect more data. Otherwise, we just wind up trading off one kind of error against the other.

Slide 21 - 26: Example. Back to Avandia. The issue of the NEJM in which that study appeared also included an editorial that said, in part, "A few events either way might have changed the findings for myocardial infarction or for death from cardiovascular causes. In this setting, the possibility that the findings were due to chance cannot be excluded." What kind of error would the researchers have made if, in fact, their findings were due to chance? What could be the consequences of this error?

Slide 21 - 27: Power. The power of a test is the probability that it correctly rejects a false null hypothesis. When the power is high, we can be confident that we've looked hard enough at the situation. The power of a test is 1 − β.

Slide 21 - 28: Power (cont.). Whenever a study fails to reject its null hypothesis, the test's power comes into question. When we calculate power, we imagine that the null hypothesis is false. The value of the power depends on how far the truth lies from the null hypothesis value. The distance between the null hypothesis value, p0, and the truth, p, is called the effect size. Power depends directly on effect size.
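Here is a minimal sketch (my own function names; illustrative numbers, not from the slides) of a power calculation for the upper-tail one-proportion z-test. The test rejects when p̂ crosses a boundary set under H0; the power is the chance of crossing that boundary when the true proportion is p_true, and β is one minus the power.

```python
from math import sqrt, erfc

def norm_upper_tail(z):
    """Area above z under the standard normal curve."""
    return 0.5 * erfc(z / sqrt(2))

def power_one_prop(p0, p_true, n, z_alpha=1.645):
    """Power of the upper-tail one-proportion z-test at alpha = 0.05."""
    se0 = sqrt(p0 * (1 - p0) / n)              # SE if H0 were true
    critical = p0 + z_alpha * se0              # reject when p_hat exceeds this
    se_true = sqrt(p_true * (1 - p_true) / n)  # SE at the true proportion
    return norm_upper_tail((critical - p_true) / se_true)

# Power grows with effect size (p_true - p0) and with sample size n:
for p_true in (0.25, 0.30):
    for n in (100, 400):
        pw = power_one_prop(0.20, p_true, n)
        print(f"p_true = {p_true}, n = {n}: "
              f"power = {pw:.2f}, beta = {1 - pw:.2f}")
```

With these numbers the power climbs from about 0.36 (small effect, small sample) to over 0.99 (large effect, large sample), which previews slides 21-30 through 21-33.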

Slide 21 - 29: Example. The study of Avandia combined results from 47 different trials, a method called meta-analysis. The drug's manufacturer issued a statement that pointed out, "Each study is designed differently and looks at unique questions." By combining data from many studies, meta-analyses can achieve a much larger sample size. How could this larger sample size help?

Slide 21 - 30: A Picture Worth a Thousand Words. The larger the effect size, the easier it should be to see it. Obtaining a larger sample size decreases the probability of a Type II error, so it increases the power. It also makes sense that the more we're willing to accept a Type I error, the less likely we will be to make a Type II error.

Slide 21 - 31: A Picture Worth a Thousand Words (cont.). This diagram (not reproduced in this transcript) shows the relationship between these concepts: the sampling distributions under the null and true parameter values, with regions marked for α, β, and power.

Slide 21 - 32: Reducing Both Type I and Type II Error. The previous figure seems to show that if we reduce Type I error, we must automatically increase Type II error. But we can reduce both types of error by making both curves narrower. How do we make the curves narrower? Increase the sample size.

Slide 21 - 33: Reducing Both Type I and Type II Error (cont.). This figure (not reproduced here) has means that are just as far apart as in the previous figure, but the sample sizes are larger, the standard deviations are smaller, and the error rates are reduced.

Slide 21 - 34: Reducing Both Type I and Type II Error (cont.). Two figures (not reproduced here) compare the errors: the original comparison, and the comparison with a larger sample size.

Slide 21 - 35: What Can Go Wrong? Don't interpret the P-value as the probability that H0 is true. The P-value is about the data, not the hypothesis. It's the probability of observing data this unusual, given that H0 is true, not the other way around. Don't believe too strongly in arbitrary alpha levels. It's better to report your P-value and a confidence interval so that the reader can make her or his own decision.

Slide 21 - 36: Example. More Avandia. The drug manufacturer pointed out in their rebuttal, "Data from the earlier clinical trial did show a small increase in reports of myocardial infarction among the Avandia-treated group… however, the number of events is too small to reach a reliable conclusion about the role the medicine may have played in this finding." Why would this smaller study have been less likely to detect the difference in risk? What are the appropriate statistical concepts for comparing the studies?

Slide 21 - 37: What have we learned? We've learned about the two kinds of errors we might make, and seen why, in the end, we're never sure we've made the right decision. A Type I error is rejecting a true null hypothesis; its probability is α. A Type II error is failing to reject a false null hypothesis; its probability is β. Power is the probability that we reject a null hypothesis when it is false: 1 − β. A larger sample size increases power and reduces the chances of both kinds of errors.

Slide 21 - 38: Examples. A bank wondered if it could get more customers to make payments on delinquent balances by sending them a DVD urging them to set up a payment plan. The bank tested this strategy, and a 90% confidence interval for the success rate is (0.29, 0.45). Their old send-a-letter method had worked 30% of the time.

Example: Can you reject the null hypothesis that the proportion is still 30% at alpha = 0.05? (See the sketch below.)

Example: Given the confidence interval the bank found in their trial of DVDs, what would you recommend that they do? Should they scrap the DVD strategy?

Example: Explain what a Type I error is in this context and what the consequences would be to the bank.

Example: What is a Type II error in this experiment, and what would its consequences be?

Example: For the bank, which situation has the higher power: a strategy that works really well, actually getting 60% of people to pay off their balances, or a strategy that barely increases the payoff rate to 32%? Explain.
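For the first question, slide 21-16 gives the shortcut: a 90% confidence interval corresponds to a one-sided test at α = ½(100 − 90)% = 5%. A tiny sketch of that check (the helper is hypothetical; it only re-expresses the interval rule):

```python
def one_sided_test_from_ci(ci_low, ci_high, p0, direction="greater"):
    """Approximate a one-sided test at alpha = 0.05 using a 90% CI."""
    if direction == "greater":
        # Evidence that p > p0 requires the whole interval to sit above p0.
        return "reject H0" if p0 < ci_low else "fail to reject H0"
    return "reject H0" if p0 > ci_high else "fail to reject H0"

# Bank example: 90% CI (0.29, 0.45) for the DVD success rate; H0: p = 0.30.
print(one_sided_test_from_ci(0.29, 0.45, 0.30))  # 0.30 is inside: fail to reject
```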

Slide 21 - 39: Homework. Pg. 499: 1-15 odd. Day 2: Pg. 499, 17-33 a, b.