More about Tests! Remember, you are not proving or accepting the null hypothesis. Most of the time, the null hypothesis means no difference or no change from the previous parameter. If we reject the null, we conclude that there has been a change or that something affected the outcome. As a statistician, you usually want to find evidence for the alternative hypothesis. For example, think of a pill to help people lose weight. The null hypothesis might be μ = 0 lbs lost and the alternative hypothesis would be μ > 0. The company that produces the pill would want to support the alternative hypothesis to justify its claim that the pill helps people lose weight.
What does p-value mean? The p-value is the probability of observing a statistic as extreme as (or more extreme than) the one obtained, provided the null hypothesis is true. Therefore, if the p-value is very low (below α = .05, say), we feel very comfortable rejecting the null hypothesis. If we reject the null hypothesis, then we say the test is "significant at the α level".
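As a concrete sketch of this idea, the following Python function computes the p-value for a one-proportion z-test using only the standard library. The data (60 successes in 100 trials, testing H0: p = 0.5) are hypothetical numbers chosen for illustration.

```python
from statistics import NormalDist

def p_value_one_prop(p_hat, p0, n, two_sided=True):
    """P-value for a one-proportion z-test: the probability of a
    sample proportion at least this extreme if H0 (p = p0) is true."""
    se = (p0 * (1 - p0) / n) ** 0.5      # standard error under H0
    z = (p_hat - p0) / se                # test statistic
    tail = 1 - NormalDist().cdf(abs(z))  # area in one tail beyond |z|
    return 2 * tail if two_sided else tail

# Hypothetical data: 60 successes in 100 trials, testing H0: p = 0.5
p = p_value_one_prop(0.60, 0.5, 100)
print(round(p, 4))  # → 0.0455, just under .05, so significant at the .05 level
```

Since 0.0455 < .05, this (made-up) sample would lead us to reject H0 at the .05 level.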
Confidence Intervals and Hypothesis Tests A confidence interval can be used instead of a test. If the parameter value stated in the null hypothesis falls inside the interval, then we fail to reject the null hypothesis. Critical values z* for common significance levels:

α        1-sided    2-sided
.05      1.64       1.96
.01      2.33       2.58
.001     3.09       3.29
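The critical values in the table above come from the inverse of the standard normal CDF; a short sketch in Python (using the standard library's `statistics.NormalDist`) reproduces them:

```python
from statistics import NormalDist

def critical_value(alpha, two_sided=False):
    """z* such that the rejection region has total area alpha.
    A two-sided test splits alpha between the two tails."""
    z = NormalDist()
    return z.inv_cdf(1 - alpha / 2) if two_sided else z.inv_cdf(1 - alpha)

for a in (0.05, 0.01, 0.001):
    print(a, round(critical_value(a), 2), round(critical_value(a, True), 2))
# → 0.05 1.64 1.96
# → 0.01 2.33 2.58
# → 0.001 3.09 3.29
```

This also shows why a two-sided test at level α matches a (1 − α) confidence interval: both use the same z* cutting off α/2 in each tail.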
Making Errors on Hypothesis Tests There are two types of errors a statistician can make when doing tests: I. The null hypothesis is true, but we mistakenly reject it. II. The null hypothesis is false, but we fail to reject it. These two types of errors are called Type I and Type II errors.
Possible outcomes (Reality vs. Outcome of test):

                       H0 is true      H0 is false
Reject H0              Type I error    OK
Fail to reject H0      OK              Type II error
The probability of making a Type I error = α. That is, the probability of rejecting a true null hypothesis is α. Why is that? Because we reject whenever the p-value falls below α, and when the null is true that happens exactly α of the time. The probability of making a Type II error = β. We will not have to calculate β, but we need to know a few facts about what will affect β. The probability of a Type II error can be lowered by: having a larger α; having a larger sample size; the true parameter being significantly different from the value in the null hypothesis. (The distance between the true parameter and the null hypothesis value is called the effect size.)
Power of a test: The power of a test is the probability that the test correctly rejects a false null hypothesis. Power = 1 − β. The power of a test is increased by lowering β; therefore, the following will increase the power: having a larger α; having a larger sample size; the true parameter being significantly different from the value in the null hypothesis (the effect size is large).
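The three facts above can be checked numerically. The sketch below computes the power of a one-sided one-proportion z-test (H0: p = p0 vs. Ha: p > p0) under hypothetical values of n, α, and the true proportion, and confirms that each change raises the power.

```python
from statistics import NormalDist

def power_one_prop(p0, p_true, n, alpha=0.05):
    """Power of a one-sided (Ha: p > p0) one-proportion z-test:
    the chance of rejecting H0 when the true proportion is p_true."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha)
    # Smallest sample proportion that leads to rejecting H0
    p_hat_cut = p0 + z_crit * (p0 * (1 - p0) / n) ** 0.5
    # Chance the sample proportion exceeds that cutoff when p = p_true
    se_true = (p_true * (1 - p_true) / n) ** 0.5
    return 1 - z.cdf((p_hat_cut - p_true) / se_true)

base = power_one_prop(0.5, 0.6, 50)           # baseline (made-up numbers)
print(power_one_prop(0.5, 0.6, 200) > base)   # larger n -> more power
print(power_one_prop(0.5, 0.6, 50, 0.10) > base)  # larger alpha -> more power
print(power_one_prop(0.5, 0.7, 50) > base)    # larger effect size -> more power
```

Each comparison prints `True`, matching the list above: more data, a looser α, or a bigger effect size all make a false null easier to reject.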
Example: A bank wants to encourage more customers to make payments on delinquent balances by sending them a videotape urging them to set up a payment plan. Based on their results, the statistician for the bank found a 90% confidence interval for the success rate to be (0.29, 0.45). Their old send-a-letter method had worked 30% of the time. a) Is this evidence that the video worked better than the old method? Why or why not? b) What is a Type I error in the context of this situation? c) What is a Type II error in the context of this situation? d) If the videotape strategy really works well, actually getting 60% of the customers to set up a payment plan, would the power of the test be higher or lower compared to a 32% payoff rate? Explain.
Example: Soon after the Euro was introduced as currency in Europe, it was widely reported that someone spun a Euro coin 250 times and got heads 140 times. We wish to test a hypothesis about the fairness of spinning a Euro. a) Estimate the true proportion of heads using a 95% confidence interval. b) Does your confidence interval provide evidence that the Euro is unfair? Explain. c) What is the significance level of this test? d) What would a Type I error be in context? e) What would a Type II error be in context?
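Part (a) can be checked with a short calculation; this sketch builds the 95% one-proportion confidence interval p̂ ± z* · √(p̂(1 − p̂)/n) from the reported counts:

```python
from statistics import NormalDist

# Euro example: 140 heads in 250 spins
n, heads = 250, 140
p_hat = heads / n                        # 0.56
z_star = NormalDist().inv_cdf(0.975)     # ≈ 1.96 for 95% confidence
se = (p_hat * (1 - p_hat) / n) ** 0.5    # standard error of p_hat
lo, hi = p_hat - z_star * se, p_hat + z_star * se
print(round(lo, 3), round(hi, 3))        # → 0.498 0.622
```

Note that 0.50 lies just inside this interval, which is worth discussing when answering part (b).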