2 Definition of Type I error (α)
- Concluding a statistically significant effect size from the sample when no such effect exists in the whole population (H0 is true)
- Rejecting the null hypothesis (H0) when it is true
- False positive result
- Level of significance: the P value threshold
- Usually allowed at 5% - an arbitrary but accepted rate!
- If you repeatedly sample and test from the same population, you would expect to reach 1 wrong conclusion (a statistically significant result) in 20 such tests if the null hypothesis is in fact true
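The "1 in 20" claim above can be checked with a minimal simulation (not from the slides; sample sizes and seed are arbitrary choices): repeatedly draw two samples from the same population, so H0 is true by construction, and count how often a two-sample test comes out "significant" at α = 0.05.

```python
# Simulate the Type I error rate: both groups come from the SAME
# population, so every "significant" result is a false positive.
import math
import random
import statistics

random.seed(1)
ALPHA = 0.05
N_TESTS = 2000
N_PER_GROUP = 30
false_positives = 0

for _ in range(N_TESTS):
    a = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
    b = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
    # Welch-style t statistic; with 30 per group a normal approximation
    # to the p-value is adequate for illustration.
    se = math.sqrt(statistics.variance(a) / N_PER_GROUP +
                   statistics.variance(b) / N_PER_GROUP)
    t = (statistics.mean(a) - statistics.mean(b)) / se
    p = 2 * (1 - statistics.NormalDist().cdf(abs(t)))
    if p < ALPHA:
        false_positives += 1

print(f"False positive rate: {false_positives / N_TESTS:.3f}")
```

The printed rate hovers around 0.05, i.e. roughly 1 false positive per 20 tests, exactly as the pre-set significance level dictates.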
3 Definition of Type II error (β)
- Concluding a non-statistically significant effect size from the sample when the effect does exist in the whole population (H1 is true)
- Failing to reject the null hypothesis (H0) when it is not true
- False negative result
- Power of study = 100% - false negative rate (β)
- Maximum accepted false negative rate: 20%; minimum power of study: 80%
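The power/β relationship above can be made concrete with a short sketch (not from the slides; the formula is the standard normal approximation for comparing two means, and the effect size, SD and group size are assumed values for illustration):

```python
# Approximate power of a two-sample comparison of means,
# using the standard normal approximation.
from statistics import NormalDist

def power_two_sample(delta, sigma, n_per_group, alpha=0.05):
    """Approximate power to detect a true mean difference `delta`."""
    z = NormalDist()
    se = sigma * (2 / n_per_group) ** 0.5     # SE of the difference in means
    z_crit = z.inv_cdf(1 - alpha / 2)          # two-sided 5% cut-off (1.96)
    # Probability the observed difference lands beyond the critical value
    return 1 - z.cdf(z_crit - abs(delta) / se)

power = power_two_sample(delta=0.5, sigma=1.0, n_per_group=64)
beta = 1 - power
print(f"Power: {power:.2f}, Type II error rate (beta): {beta:.2f}")
```

With 64 subjects per group and a true difference of half a standard deviation, power comes out at about 0.80, i.e. β ≈ 20%, the maximum accepted false negative rate.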
4 What does a significant result mean?
- Remember we are trying to find enough evidence to reject the null hypothesis
- i.e. to show that the observed effect size has exceeded our pre-set threshold of expected chance events, hence a statistically significant result
- Chance findings fall into the central region; findings that fall in the two tail regions are too extreme to be explained by chance
- Having observed a significant result, are we:
  - lucky to have found that the null hypothesis is indeed incorrect? A genuine finding
  - or unlucky to have observed an unlikely result/event? A Type I error
5 Can we eliminate these errors completely?
- Type I error
  - Not really: there is always a pre-set (5%) chance of a Type I error
  - You could minimise it by reducing the level of significance from 5% to 1%, but you would then need a larger sample size
- Type II error
  - Not really: there is always a pre-set (20%) chance of a Type II error
  - You could minimise it by reducing the error rate from 20% to 10%, but again at the cost of a larger sample size
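The sample-size trade-off above can be quantified with the standard normal-approximation formula for two groups (a sketch, not from the slides; the 0.5-SD effect size is an assumed example): n per group = 2((z_{1-α/2} + z_{1-β})·σ/δ)².

```python
# How tightening alpha from 5% to 1% inflates the required sample size.
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha, power):
    """Subjects needed per group (normal approximation, two-sided test)."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # significance requirement
    z_b = z.inv_cdf(power)           # power requirement
    return ceil(2 * ((z_a + z_b) * sigma / delta) ** 2)

# Detect a difference of 0.5 SD with 80% power:
print(n_per_group(0.5, 1.0, alpha=0.05, power=0.80))  # 63 per group
print(n_per_group(0.5, 1.0, alpha=0.01, power=0.80))  # 94 per group
```

Halving the risk of each error is possible, but every tightening of α or β shows up directly as extra subjects to recruit.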
6 Can we ever work out if a Type I error exists in published results?
- Yes: if the result is statistically significant, there is a possibility of a Type I error
- Since we allowed a 5% chance of this error, there is always a possibility of it occurring, and there is no way to remove this doubt
- We could look for hints of an unusual/unlikely result by comparing results from other similar studies
- Beware the issue of testing multiple outcomes
- P < 0.05 means it is unlikely (less than 5%) that the observed effect size arose by chance
- P < 0.01 means it is unlikely (less than 1%) that the observed effect size arose by chance
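The "multiple outcomes issue" mentioned above is easy to put in numbers (a sketch assuming 20 independent tests, which is an idealisation): even when every null hypothesis is true, testing many outcomes at α = 0.05 makes at least one false positive quite likely.

```python
# Family-wise false positive risk when testing many outcomes at alpha = 0.05,
# assuming the tests are independent and every H0 is true.
alpha = 0.05
n_tests = 20
p_at_least_one = 1 - (1 - alpha) ** n_tests
expected_false_positives = alpha * n_tests

print(f"P(at least one false positive) = {p_at_least_one:.2f}")      # 0.64
print(f"Expected number of false positives = {expected_false_positives:.1f}")  # 1.0
```

So across 20 outcome tests there is roughly a 64% chance of at least one spurious "significant" result, which is why a lone significant secondary outcome deserves scepticism.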
7 Can we ever work out if a Type II error exists in published results?
- Yes: if the result is not statistically significant, there is a possibility of a Type II error
- The key point is to identify whether the authors performed a sample size calculation:
  - with a pre-defined power before the study started
  - on the one primary outcome about which they made a conclusive statement
  - whether they recruited the desired number at the analysis stage
  - whether they pre-defined a clinically worthwhile effect size to be detected, and whether it was observed
- If so, the chance of a Type II error would be the pre-set value, i.e. 20%
8 Type I error (α) & Type II error (β)

                         No True Difference          True Difference
No Observed Difference   Well Designed Trial (100-α)  Type II Error (β)
Observed Difference      Type I Error (α)             Well Powered Trial (100-β)
9 Mind twisting quizzes!
- In a published paper, a statistically significant result of a fully powered 2-group study was reported on a single primary outcome. What can you say about this result regarding Type I/II error?
- In a published paper, a non-statistically significant result of a fully powered 2-group study was reported on a single primary outcome. What can you say about this result regarding Type I/II error?
- You have reviewed 20 studies of a similar kind regarding the same outcome measure. You found 5 studies with statistically significant results, while the others were not significant. What is your view on these results?