Analyze Phase Introduction to Hypothesis Testing


1 Analyze Phase Introduction to Hypothesis Testing
Now we will continue in the Analyze Phase with “Introduction to Hypothesis Testing”.

2 Hypothesis Testing (ND)
Welcome to Analyze. The Analyze Phase roadmap: "X" Sifting; Inferential Statistics; Intro to Hypothesis Testing (Hypothesis Testing Purpose, Tests for Central Tendency, Tests for Variance, ANOVA); Hypothesis Testing ND P1; Hypothesis Testing ND P2; Hypothesis Testing NND P1; Hypothesis Testing NND P2; Wrap Up & Action Items. The core fundamentals of this phase are Hypothesis Testing, Tests for Central Tendency, Tests for Variance and ANOVA.

3 Six Sigma Goals and Hypothesis Testing
Our goal is to improve our Process Capability; this translates to the need to move the process Mean (or proportion) and reduce the Standard Deviation. Because it is too expensive or too impractical (not to mention theoretically impossible) to collect population data, we will make decisions based on sample data. Because we are dealing with sample data, there is some uncertainty about the true population parameters. Hypothesis Testing helps us make fact-based decisions about whether the population parameters actually differ or whether the observed differences are just due to expected sample variation. Please read the slide.

4 Purpose of Hypothesis Testing
The purpose of appropriate Hypothesis Testing is to integrate the Voice of the Process with the Voice of the Business to make data-based decisions to resolve problems. Hypothesis Testing can help avoid the high costs of experimental efforts by using existing data; this can be likened to local store costs versus mini bar expenses. There may be a need to eventually use experimentation, but careful data analysis can indicate a direction for experimentation if necessary. The probability of occurrence is based on a pre-determined statistical confidence. Decisions are based on:
Beliefs (past experience)
Preferences (current needs)
Evidence (statistical data)
Risk (acceptable level of failure)
Please read the slide.

5 The Basic Concept for Hypothesis Tests
Recall from the discussion on classes and causes of distributions that a data set may seem Normal, yet still be made up of multiple distributions. Hypothesis Testing can help establish a statistical difference between factors from different distributions. Because we do not have the capability to test an entire population, using a sample is the closest we can get to the population. Since we are using sample data and not the entire population, we need methods that allow us to infer whether the sample is a fair representation of the population. When we use a proper sample size, Hypothesis Testing gives us a way to detect the likelihood that a sample came from a particular distribution. Typical questions include: Did our sample come from a population with a Mean of 100? Is our sample variance significantly different from the variance of the population? Is it different from a target? Did my sample come from this population? Or this? Or this?
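As a concrete illustration of the first question above, here is a minimal sketch of a one-sample t-test against a hypothesized Mean of 100. Python with SciPy is assumed here (the course itself uses MINITAB), and the sample values are invented:

```python
# Minimal sketch: did our sample come from a population with a Mean of 100?
# Assumes SciPy is installed; the sample values below are invented.
from scipy import stats

sample = [98.2, 101.5, 99.7, 102.3, 100.9, 97.8, 101.1, 99.4]

# H0: population Mean = 100;  Ha: population Mean != 100
t_stat, p_value = stats.ttest_1samp(sample, popmean=100)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

alpha = 0.05
print("Reject H0" if p_value < alpha else "Fail to reject H0")
```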

6 Significant Difference
Are the two distributions "significantly" different from each other? How sure are we of our decision? How does the number of observations affect our confidence in detecting a difference between population Means? Do you see a difference between Sample 1 and Sample 2? There may be a real difference between the samples shown; however, we may not be able to determine a statistical difference. Our confidence is established statistically, which has an effect on the necessary sample size. Our ability to detect a difference is directly linked to sample size, which in turn determines whether we practically care about such a small difference.

7 Detecting Significance
Statistics provide a methodology to detect differences. Examples might include differences in suppliers, shifts or equipment. Two types of significant difference occur and must be well understood: practical and statistical. Failure to tie these two together is one of the most common errors in statistics. HO: The sky is not falling. HA: The sky is falling. We will discuss the difference between practical and statistical significance throughout this session. We can affect the outcome of a statistical test simply by changing the sample size.

8 Practical vs. Statistical
Practical Difference: the difference which results in an improvement of practical or economic value to the company; for example, an improvement in yield from 96 to 99 percent. Statistical Difference: a difference or change to the process that probably (with some defined degree of confidence) did not happen by chance. Examples might include differences in suppliers, markets or servers. We will see that it is possible to realize a statistically significant difference without realizing a practically significant difference. Let's take a moment to explore the concept of Practical Differences versus Statistical Differences.

9 Detecting Significance
During the Measure Phase, it is important that the nature of the problem be well understood. In understanding the problem, the practical difference to be achieved must match the statistical difference. The difference can be either a change in the Mean or in the variance. Detection of a difference is then accomplished using statistical Hypothesis Testing. [Figure: Mean Shift; Variation Reduction.] An important concept to understand is the process of detecting a significant change. How much of a shift in the Mean will offset the cost of making a change to the process? This is not necessarily the full shift from the Business Case of your project. Realistically, how small or how large a delta is required? The larger the delta, the smaller the necessary sample, because there will be very little overlap of the distributions. The smaller the delta, the larger the sample size has to be to detect a statistical difference.

10 Hypothesis Testing A Hypothesis Test is an a priori theory relating to differences between variables. A statistical test or Hypothesis Test is performed to prove or disprove the theory. A Hypothesis Test converts the practical problem into a statistical problem. Since relatively small sample sizes are used to estimate population parameters, there is always a chance of collecting a non-representative sample. Inferential statistics allows us to estimate the probability of getting a non-representative sample. Please read the slide.

11 Dice Example
You have rolled dice before, haven't you? You know, dice that you would find in a board game or in Las Vegas. Assume we suspect a single die is "fixed," meaning it has been altered in some form or fashion to make a certain number appear more often than it rightfully should. Consider how we would go about determining whether the die was in fact loaded. We could throw it a number of times and track how many times each face occurred. With a standard die, we would expect each face to occur 1/6, or 16.67%, of the time. If we threw the die 5 times and got 5 ones, what would you conclude? How sure can you be?
Pr(1 one) = 0.1667
Pr(5 ones) = (0.1667)^5 = 0.00013
There are approximately 1.3 chances out of 10,000 that we could have gotten 5 ones with a standard die. Therefore, we would say we are willing to take roughly a 0.013% chance of being wrong about our hypothesis that the die was "loaded," since the results do not come close to our predicted outcome.
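A quick sketch (again in Python, as an assumed stand-in for the course's tools) that verifies this arithmetic both directly and by simulation:

```python
# Check the dice probabilities directly and by simulation (fair die assumed).
import random

p_one = 1 / 6                    # Pr(a single roll shows a one)
p_five = p_one ** 5              # Pr(five ones in five independent rolls)
print(f"Pr(1 one)  = {p_one:.4f}")    # 0.1667
print(f"Pr(5 ones) = {p_five:.5f}")   # 0.00013

# Simulation: how often do 5 rolls of a fair die all come up ones?
trials = 1_000_000
hits = sum(
    all(random.randint(1, 6) == 1 for _ in range(5))
    for _ in range(trials)
)
print(f"Simulated  = {hits / trials:.5f}")  # close to 0.00013
```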

12 Hypothesis Testing Decisions
When it comes to Hypothesis Testing, you must look at three focus points to help validate your claim:
Type I Error (α)
Type II Error (β)
Sample Size (n)

13 Statistical Hypotheses
A hypothesis is a predetermined theory about the nature of, or relationships between, variables. Statistical tests can prove (with a certain degree of confidence) that a relationship exists. We have two alternatives for the hypothesis. The "null hypothesis" (Ho) assumes that there are no differences or relationships; this is the default assumption of all statistical tests. The "alternative hypothesis" (Ha) states that there is a difference or relationship. Making a decision does not FIX a problem; taking action does.
P-value > α: fail to reject Ho (no difference or relationship).
P-value < α: reject Ho in favor of Ha (there is a difference or relationship).
With Hypothesis Testing the primary assumption is that the null hypothesis is true. Therefore, statistically you can only reject or fail to reject the null hypothesis. If the null is rejected, this means you have data that supports the alternative hypothesis.
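To make the decision rule concrete, here is a hedged sketch of a relationship test (Pearson correlation via SciPy; the data are invented for illustration):

```python
# Sketch of the P-value decision rule applied to a relationship test.
from scipy import stats

x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 13.9, 16.1]

r, p_value = stats.pearsonr(x, y)   # H0: no relationship between x and y
alpha = 0.05
if p_value < alpha:
    print(f"Reject H0: data support a relationship (r = {r:.2f})")
else:
    print("Fail to reject H0: no evidence of a relationship")
```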

14 Steps to Statistical Hypothesis Test
State the Practical Problem. State the Statistical Problem. HO: ___ = ___ HA: ___ ≠, >, < ___. Select the appropriate statistical test and risk levels (α = .05, β = .10). Establish the sample size required to detect the difference. State the Statistical Solution. State the Practical Solution. Noooot THAT practical solution! There are six steps to Hypothesis Testing:
1. State the Practical Problem.
2. State the Statistical Problem.
3. Select the appropriate statistical test and risk levels. Your alpha may change depending on the problem at hand. An alpha of 0.05 is common in most manufacturing. In transactional projects, an alpha of 0.10 is common when dealing with human behavior; being 90% confident that a change to a sales procedure will produce results is most likely a good approach. A not-so-common alpha is 0.01, which is only used when it is necessary to make the null hypothesis very difficult to reject.
4. Establish the sample size required to detect the difference.
5. State the Statistical Solution.
6. State the Practical Solution.

15 How Likely is Unlikely?
Any differences between observed data and claims made under H0 may be real or due to chance. Hypothesis Tests determine the probabilities of these differences occurring solely due to chance and call them P-values. The α level of a test (the level of significance) represents the yardstick against which P-values are measured; H0 is rejected if the P-value is less than the α level. The most commonly used levels are 5%, 10% and 1%. Please read the slide.

16 Hypothesis Testing Risk
The alpha risk or Type I Error (generally called the "Producer's Risk") is the probability that we could be wrong in saying that something is "different." It is an assessment of the likelihood that the observed difference could have occurred by random chance. Alpha is the primary decision-making tool of most statistical tests.

Statistical Conclusions vs. Actual Conditions:
                     Ho is True (Not Different)   Ho is False (Different)
Fail to Reject Ho    Correct Decision             Type II Error
Reject Ho            Type I Error                 Correct Decision

Alpha risk can also be explained as the risk of implementing a change when you should not. Alpha risk is typically lower than beta risk because you are more hesitant to make a mistake about claiming the significance of an X (and therefore spending money) as compared to overlooking an X (which is never revealed). There are two types of error: Type I, with an associated risk equal to alpha (the first letter in the Greek alphabet), and Type II, with an associated risk equal to beta. The formula reads: α = P(Type I error) = P(rejecting the null hypothesis when the null hypothesis is true).
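One way to internalize alpha is by simulation: when Ho is actually true, a test run at α = 0.05 should reject it about 5% of the time. A sketch under those assumptions (Python with NumPy/SciPy; all numbers invented):

```python
# Simulate the Type I error rate: both samples come from the SAME
# population, so H0 is true and every rejection is a false alarm.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
alpha, trials, rejections = 0.05, 10_000, 0

for _ in range(trials):
    a = rng.normal(loc=100, scale=5, size=20)
    b = rng.normal(loc=100, scale=5, size=20)
    if stats.ttest_ind(a, b).pvalue < alpha:
        rejections += 1                      # Type I error

print(f"Observed Type I error rate: {rejections / trials:.3f}")  # ~0.05
```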

17 Alpha (α) risks are expressed relative to a reference distribution.
Distributions include:
t-distribution
z-distribution
χ²-distribution
F-distribution
The α-level is represented by the clouded areas of the reference distribution. Sample results in this area lead to rejection of H0; results in the central region are accepted as chance differences. [Figure: reference distribution showing the Region of DOUBT in the tails and the central "accept as chance differences" region.] Please read the slide.

18 Hypothesis Testing Risk
The beta risk or Type II Error (also called the "Consumer's Risk") is the probability that we could be wrong in saying that two or more things are the same when, in fact, they are different.

Statistical Conclusions vs. Actual Conditions:
                     Ho is True (Not Different)   Ho is False (Different)
Fail to Reject Ho    Correct Decision             Type II Error
Reject Ho            Type I Error                 Correct Decision

Another way to describe beta risk is failing to recognize an improvement. Chances are the sample size was inappropriate or the data was imprecise and/or inaccurate. The formula reads: β = P(Type II error) = P(failing to reject the null hypothesis given that the null hypothesis is false).
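A companion sketch for beta (same assumed Python setup): here Ho is false, since the Means really differ, and beta is the fraction of tests that still fail to reject Ho:

```python
# Simulate the Type II error rate: the populations really differ by 3
# units, so every failure to reject H0 is a missed detection.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2)
alpha, trials, misses = 0.05, 10_000, 0

for _ in range(trials):
    a = rng.normal(loc=100, scale=5, size=10)
    b = rng.normal(loc=103, scale=5, size=10)   # a real 3-unit shift
    if stats.ttest_ind(a, b).pvalue >= alpha:
        misses += 1                             # Type II error

beta = misses / trials
print(f"beta ≈ {beta:.2f}, power = 1 - beta ≈ {1 - beta:.2f}")
```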

19 Beta Risk
Beta Risk is the probability of failing to reject the null hypothesis when a difference exists. [Figure: the distribution if H0 is true, with the rejection region α = Pr(Type I error) = 0.05 beyond the critical value of the test statistic, overlapping the distribution if Ha is true, whose area on the "Accept H0" side of the critical value is β = Pr(Type II error).] Beta and sample size are very closely related. When calculating sample size in MINITAB™, we always enter the "power" of the test, which is one minus beta. In doing so, we are establishing a sample size that will allow the proper overlap of distributions.
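The power/sample-size relationship described here can be reproduced outside MINITAB™. A sketch using statsmodels (an assumption; any power calculator would do), entering power = 1 − β just as the narration describes:

```python
# Solve for the sample size per group needed to detect a given delta.
from statsmodels.stats.power import TTestIndPower

delta, sigma = 5.0, 5.0             # difference to detect, std deviation
effect_size = delta / sigma         # standardized effect (Cohen's d)

n = TTestIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,                     # Type I risk
    power=0.90,                     # 1 - beta, i.e. Type II risk of 0.10
)
print(f"Required sample size per group: {n:.1f}")   # roughly 22 per group
```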

20 Distinguishing between Two Samples
Recall from the Central Limit Theorem that as the number of individual observations increases, the Standard Error decreases. In this example, when n = 2 we cannot distinguish the difference between the Means (> 5% overlap, P-value > 0.05). When n = 30, we can distinguish between the Means (< 5% overlap, P-value < 0.05): there is a significant difference. [Figure: theoretical distributions of Means for n = 2 and n = 30, each with d = 5 and S = 1.] Please read the slide.
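The arithmetic behind this slide is just the shrinking Standard Error; a small sketch with the slide's S = 1:

```python
# Standard error of the mean for the slide's two cases (S = 1).
import math

S = 1.0                       # sample standard deviation of individuals
for n in (2, 30):
    se = S / math.sqrt(n)     # standard error of the mean
    print(f"n = {n:>2}: standard error = {se:.3f}")
# n =  2: 0.707 -> wide distributions of Means, large overlap
# n = 30: 0.183 -> narrow distributions, a delta is easy to detect
```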

21 Delta Sigma—The Ratio between d and S
Delta (d) is the size of the difference between two Means, or between one Mean and a target value. Sigma (S) is the sample Standard Deviation of the distribution of individuals of one or both of the samples under question. When the ratio of d to S is large, we don't need statistics because the differences are so obvious. If the variance of the data is large, it is difficult to establish differences, and we need larger sample sizes to reduce uncertainty. [Figure: large Delta; large S.] All samples are estimates of the population. All statistics based on samples are estimates of the equivalent population parameters. All estimates could be wrong! We want to be 95% confident in all of our estimates!

22 Typical Questions on Sampling
Question: "How many samples should we take?"
Answer: "Well, that depends on the size of your delta and Standard Deviation."
Question: "How should we conduct the sampling?"
Answer: "Well, that depends on what you want to know."
Question: "Was the sample we took large enough?"
Answer: "Well, that depends on the size of your delta and Standard Deviation."
Question: "Should we take some more samples just to be sure?"
Answer: "No, not if you took the correct number of samples the first time!"
These are typical questions you will hear during sampling, and the most common answer is "It depends," primarily because someone could say a sample of 30 is perfect when that may actually be too many. The point is that you don't know what the right sample size is without doing the calculation.

23 The Perfect Sample Size
The perfect sample size is the minimum sample size required to provide exactly 5% overlap (risk) in order to distinguish the Delta. Note: if you are working with Non-normal Data, multiply your calculated sample size by 1.1. [Figure: two overlapping population distributions over the range 40 to 70.] Please read the slide.
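One common closed-form approximation for this minimum sample size (not taken from the slide; shown as an assumed illustration) combines the alpha and beta risks with the delta-to-sigma ratio:

```python
# Approximate sample size: n = ((z_{alpha/2} + z_beta) * S / delta)^2.
# This is a textbook approximation, not the slide's own method.
import math
from scipy.stats import norm

alpha, beta = 0.05, 0.10
delta, S = 5.0, 5.0                  # difference to detect, std deviation

z_alpha = norm.ppf(1 - alpha / 2)    # two-sided Type I risk -> 1.960
z_beta = norm.ppf(1 - beta)          # Type II risk          -> 1.282
n = ((z_alpha + z_beta) * S / delta) ** 2
print(f"Approximate minimum sample size: {math.ceil(n)}")  # 11

# Slide note: for Non-normal Data, multiply the result by 1.1.
print(f"Non-normal adjustment: {math.ceil(n * 1.1)}")
```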

24 Hypothesis Testing Roadmap
Here is a Hypothesis Testing roadmap for Continuous Data that is Normal. It is a great reference tool while you are conducting Hypothesis Tests. For one sample, use the 1-Sample t-test or the 1-Sample Variance test. For two or more samples, run a Test of Equal Variance first; then use the 2-Sample T test (the equal-variance or unequal-variance form, depending on that result) for two samples, or One-Way ANOVA for more than two.

25 Hypothesis Testing Roadmap
Here is another Hypothesis Testing roadmap for Continuous Data, this time for Non-normal Data: run a Test of Equal Variance, then use the Median Test, Mann-Whitney for comparing two samples, or Several Median Tests for more than two samples.

26 Hypothesis Testing Roadmap
This is the Hypothesis Testing roadmap for Attribute Data. For one factor with one sample, use the One Sample Proportion test. For one factor with two samples, use the Two Sample Proportion test (Minitab: Stat - Basic Stats - 2 Proportions); if the P-value < 0.05, the proportions are different. For one factor with two or more samples, use the Chi Square Test (Contingency Table) (Minitab: Stat - Tables - Chi-Square Test); if the P-value < 0.05, at least one proportion is different. For two factors, the Chi Square Test also applies; if the P-value < 0.05, the factors are not independent.
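As a software-agnostic companion to the Minitab paths above, here is a sketch of the two-sample-proportion branch in Python (statsmodels assumed; the counts are invented):

```python
# Two-sample proportion test: are two suppliers' defect rates different?
from statsmodels.stats.proportion import proportions_ztest

defects = [12, 25]          # defect counts for supplier A and supplier B
totals = [400, 410]         # units inspected from each supplier

z_stat, p_value = proportions_ztest(count=defects, nobs=totals)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
print("Proportions are different" if p_value < 0.05
      else "No evidence the proportions differ")
```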

27 Common Pitfalls to Avoid
While using Hypothesis Testing, the following facts should be borne in mind at the conclusion stage:
The decision is about Ho and NOT Ha.
The conclusion statement is whether the contention of Ha was upheld.
The null hypothesis (Ho) is on trial.
When a decision has been made, nothing has been proved; it is just a decision. All decisions can lead to errors (Types I and II).
If the decision is to "Reject Ho," the conclusion should read: "There is sufficient evidence at the α level of significance to show that [state the alternative hypothesis Ha]."
If the decision is to "Fail to Reject Ho," the conclusion should read: "There is not sufficient evidence at the α level of significance to show that [state the alternative hypothesis]."
Please read the slide.

28 Summary
At this point, you should be able to:
Articulate the purpose of Hypothesis Testing
Explain the concepts of Central Tendency
Be familiar with the types of Hypothesis Tests
Please read the slide.

