
1
**Chapter 10 Introduction to Inference**

Will Freeman

2
**10.1: Estimating with Confidence**

Statistical confidence: a confidence interval is a range of values, computed from a sample, that will capture the unknown population mean in a certain percentage of all possible samples. That percentage is the confidence level: the probability that our method gives an interval containing the true mean.

3
10.1 A 90% confidence interval captures the central 90% of the normal sampling distribution of the sample mean

4
Big Candy

5
10.1 To calculate a confidence interval, look up the critical value z* in Table C, then multiply it by the standard deviation of the sampling distribution, σ/√n. The sample mean plus or minus this value is the confidence interval: x̄ ± z*(σ/√n)
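That recipe can be sketched in Python, using the standard library's `statistics.NormalDist` in place of a Table C lookup; the sample numbers (mean 52, σ = 6, n = 25) are invented purely for illustration.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical numbers for illustration: n = 25 measurements with
# sample mean 52, known population standard deviation 6, 90% confidence.
xbar, sigma, n, level = 52.0, 6.0, 25, 0.90

z_star = NormalDist().inv_cdf(0.5 + level / 2)  # critical value (the Table C lookup)
margin = z_star * sigma / sqrt(n)               # z* times sd of the sampling distribution
interval = (xbar - margin, xbar + margin)
```

`inv_cdf(0.95)` returns the same z* ≈ 1.645 that Table C lists for 90% confidence.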

6
10.1 Margin of error: the z*(σ/√n) part of the interval. To decrease the margin of error, one of three things must happen:
- Decrease the confidence level
- Decrease the standard deviation of the data
- Use a larger sample
These changes will also increase the power of a test (we’ll get to this later)
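The three levers can be checked numerically; in this sketch (with a made-up baseline of 95% confidence, σ = 6, n = 25) each change shrinks the margin of error relative to the baseline.

```python
from math import sqrt
from statistics import NormalDist

def margin_of_error(level, sigma, n):
    """Margin of error z* * sigma / sqrt(n) for a given confidence level."""
    z_star = NormalDist().inv_cdf(0.5 + level / 2)
    return z_star * sigma / sqrt(n)

base = margin_of_error(0.95, 6.0, 25)        # hypothetical baseline
lower_conf = margin_of_error(0.90, 6.0, 25)  # 1. lower confidence level
smaller_sd = margin_of_error(0.95, 3.0, 25)  # 2. smaller standard deviation
bigger_n = margin_of_error(0.95, 6.0, 100)   # 3. larger sample
```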

7
10.1 Choosing a sample size: if you want a certain confidence level and a certain margin of error m, you must figure out how large a sample to use. Solve z*(σ/√n) ≤ m for n, which gives n ≥ (z*σ/m)². If you get a decimal, ALWAYS round up, even if the value is only barely above a whole number
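A minimal sketch of the round-up rule, assuming the usual n ≥ (z*σ/m)² formula; the inputs (95% confidence, σ = 6, m = 1.5) are invented.

```python
from math import ceil
from statistics import NormalDist

def sample_size(level, sigma, m):
    """Smallest n with z* * sigma / sqrt(n) <= m; decimals ALWAYS round up."""
    z_star = NormalDist().inv_cdf(0.5 + level / 2)
    return ceil((z_star * sigma / m) ** 2)

n = sample_size(0.95, 6.0, 1.5)  # raw value is about 61.46, so n = 62
```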

8
10.1 Cautions:
- The sample must be an SRS.
- Outliers or strong skewness can wreck these methods; check the data first.
- The sampling distribution of x̄ must be approximately normal. The population itself does not have to be normal if the sample is large enough (central limit theorem), but be careful with small samples.
- You must know the standard deviation σ of the population (usually unrealistic)

9
**10.2: Tests of Significance**

Significance tests help us determine the validity of claims. For example, if somebody claims to make 80% of free throws, we can take a sample of them shooting free throws and determine how likely a sample like ours would be if the 80% claim were true

10
10.2 To make this test we need a null hypothesis. The null hypothesis assumes that the claim is true; so for our example the null hypothesis would be free throws made = 80%. The alternative hypothesis is the hypothesis we suspect is true instead of the null hypothesis. It could be free throws made ≠ 80%, < 80%, or > 80%

11
10.2 Say the alternative hypothesis is free throws made < 80%. We take a sample and find that the person made 12 of 20, where the 80% claim predicts 16 of 20. We (unrealistically) know that the standard deviation is 5

12
Big Candy

13
10.2 The magnitude of the sample’s z-score, z = (x̄ − μ₀)/(σ/√n) = (12 − 16)/(5/√20) ≈ −3.58, is much larger than the critical value z* ≈ −1.645 for a .05 significance level, so we can reject the null hypothesis. The other way is to run a Z-Test on the calculator: enter μ₀, the population standard deviation, the sample mean, and n, then select the correct alternative hypothesis and press Enter. If the p-value it gives you (the probability of getting a sample mean at least this extreme purely by chance, assuming the null hypothesis is true) is less than the significance level you chose, then the null hypothesis can be rejected. For our example the p-value is about .0002, which is much less than .05
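One way to check those numbers without the calculator, under my reading of the example (null mean of 16 makes out of 20, sample mean 12, σ = 5 treated as known):

```python
from math import sqrt
from statistics import NormalDist

# Reading of the slide's example: under H0 the shooter makes 80% of 20
# shots (mean 16); the sample shows 12 makes; sigma = 5 is the
# (unrealistically) known population standard deviation.
mu0, xbar, sigma, n = 16.0, 12.0, 5.0, 20

z = (xbar - mu0) / (sigma / sqrt(n))  # test statistic, about -3.58
p_value = NormalDist().cdf(z)         # one-sided p-value for Ha: mean < mu0
```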

14
10.2 Be careful to pay attention to whether you are using a one-sided or two-sided test. For a two-sided test use the confidence level at the bottom of Table C; for a one-sided test use the tail probability at the top of the table.
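The same distinction in code: for α = .05, a one-sided test puts all of α in one tail, while a two-sided test splits α between both tails (which is why the two-sided critical value matches the 95% confidence column).

```python
from statistics import NormalDist

alpha = 0.05
z_one_sided = NormalDist().inv_cdf(1 - alpha)      # all of alpha in one tail
z_two_sided = NormalDist().inv_cdf(1 - alpha / 2)  # alpha split between two tails
```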

15
Time for a break… This probably hasn’t made much sense so far. Hopefully it has… we’re only halfway there

16
Even more break Big candy helps the mind make sense of all these words

17
**10.3: Making sense of all this**

You have to remember what stat is for: real-life use. Sometimes something that has statistical significance has no practical significance. You always have to factor in the costs of acting on your results. If the null hypothesis is false, could you lose money? Are people so sure of the null hypothesis that you need a very high confidence level to convince them it is wrong?

18
10.3 For example, you have a null hypothesis that it takes 100 days for a broken bone to heal. With a new miracle treatment you find, with 95% certainty, that bones heal in 98 days. The treatment costs $17,324. The results may be statistically significant, but in reality 2 days isn’t worth that much money. So who really cares.

19
Big Candy

20
10.3 Be careful that you are using a valid, normal sample when using the inference toolbox (p. 571). Otherwise all the time you just spent means nothing

21
**10.4: Inference as Decision**

There are two ways we can screw up. The first is that we reject the null hypothesis when it is actually true (a Type I error), and the second is that we accept the null hypothesis when it is actually false (a Type II error). Here is a very artistic diagram to illustrate this exceedingly complicated concept

22
**10.4: I don’t know what I’m talking about**

The probability that you get a Type I error is just the significance level α. That part is pretty easy. Type II is the part that is mildly perplexing

23
10.4 There are a couple of steps for finding the probability of a Type II error:
1. Find the values of x̄ for which the test fails to reject the null hypothesis at your significance level (the complement of the rejection region).
2. Assuming the alternative mean value is the true mean, calculate the probability that x̄ lands in that fail-to-reject region.
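Those two steps can be sketched for the earlier free-throw setup; the numbers (null mean 16, alternative mean 12, σ = 5, n = 20, α = .05, one-sided test) are my assumed reading of that example.

```python
from math import sqrt
from statistics import NormalDist

# Assumed one-sided test from the free-throw example: Ha is mean < mu0.
mu0, mu_alt, sigma, n, alpha = 16.0, 12.0, 5.0, 20, 0.05
sd = sigma / sqrt(n)  # standard deviation of the sampling distribution

# Step 1: boundary of the rejection region -- reject H0 when xbar < cutoff.
cutoff = mu0 + NormalDist().inv_cdf(alpha) * sd

# Step 2: Type II error = P(fail to reject H0 | true mean is mu_alt).
beta = 1 - NormalDist(mu_alt, sd).cdf(cutoff)
power = 1 - beta  # probability of correctly rejecting a false H0
```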

24
Big Candy

25
10.4 The answer is the probability of a Type II error. 1 − (probability of a Type II error) is the power of the test: the probability of correctly rejecting a false null hypothesis. Anything above 80% is good.

26
Help I’m confused. This may have made sense, but possibly not. So have fun.

27
Big Candy
