
F-tests: Testing hypotheses


1 F-tests: Testing hypotheses

2 Starting Point
Central aim of statistical tests: determining the likelihood of a value in a sample, given that the null hypothesis is true: P(value|H0)
H0: no statistically significant difference between sample and population (or between samples)
H1: a statistically significant difference between sample and population (or between samples)
Significance level: P(value|H0) < 0.05
The central aim of statistical methods is to determine the likelihood of a value in a sample under the assumption that the null hypothesis is true. The H0 states that there is no statistically significant difference between your sample and a reference population (or between two samples). The H1 states the opposite, i.e. that there IS a statistically significant difference between your sample and a reference population (or between two samples). It is important to note that this does not refer to the probability of the value itself, but to the probability of the value under the assumption that H0 is true. To decide when to reject H0, we set a significance level, normally p < 0.05: if the likelihood of a value in our sample is 5% or less, given that H0 is true, we reject H0.
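To make the decision rule concrete, here is a minimal sketch (the sample values and the hypothesised population mean of 100 are made up, not from the slides) using scipy's one-sample t-test. The reported p-value is the probability of data at least this extreme assuming H0 is true, and we reject H0 when it falls below 0.05.

    # Illustrative sketch: is the sample mean different from a hypothesised
    # population mean of 100? (All numbers are made up.)
    from scipy import stats

    sample = [104, 98, 110, 105, 99, 107, 103, 101, 108, 106]

    result = stats.ttest_1samp(sample, popmean=100)

    # result.pvalue is P(t-statistic at least this extreme | H0 is true).
    alpha = 0.05
    decision = "reject H0" if result.pvalue < alpha else "retain H0"
    print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}: {decision}")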

3 Types of Error

                            Population: H0 true        Population: H1 true
    Sample: accept H0       1-α (correct)              β-error (Type II error)
    Sample: accept H1       α-error (Type I error)     1-β (correct)

There are two types of error that you can make. A Type I (or alpha) error is a false positive result, i.e. you accept H1 in your data even though H0 is true. Conversely, a Type II (or beta) error is a false negative result, i.e. you accept H0 even though H1 is true. The two green cells on the slide describe the remaining probability that, given alpha (or beta), you make the correct decision when you accept H0 (a true negative result) or reject H0, i.e. accept H1 (a true positive result). The way we decide whether a given value is highly unlikely (i.e. statistically significant) is to look at the underlying distribution.
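For a simple case, α, β and power (1-β) can be computed directly. The sketch below is illustrative only: it assumes a one-sided z-test with known population SD, and all the numbers (mu0, mu1, sigma, n) are made up rather than taken from the slides.

    # Illustrative sketch: alpha, beta and power for a one-sided z-test.
    from math import sqrt
    from scipy import stats

    mu0, mu1 = 0.0, 1.0      # means under H0 and under H1 (assumed)
    sigma, n = 2.0, 25       # population SD and sample size (assumed)
    se = sigma / sqrt(n)     # standard error of the sample mean

    alpha = 0.05
    cutoff = stats.norm.ppf(1 - alpha, loc=mu0, scale=se)  # reject H0 above this

    beta = stats.norm.cdf(cutoff, loc=mu1, scale=se)  # P(retain H0 | H1 true)
    power = 1 - beta                                  # P(reject H0 | H1 true)

    print(f"cutoff = {cutoff:.2f}, beta = {beta:.3f}, power = {power:.3f}")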

4 F-tests / Analysis of Variance (ANOVA)
T-tests allow inferences about 2 sample means. But what if you have more than 2 conditions? e.g. placebo, drug 20mg, drug 40mg, drug 60mg:
Placebo vs. 20mg, Placebo vs. 40mg, Placebo vs. 60mg, 20mg vs. 40mg, 20mg vs. 60mg, 40mg vs. 60mg
The chance of making a Type I error increases as you do more t-tests: with six tests each at α = 0.05, the chance of at least one false positive is roughly 1 - 0.95^6 ≈ 26% (see the sketch below). ANOVA controls this error by testing all means at once - it can compare k means. Drawback = loss of specificity.
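A minimal sketch of that calculation in Python, assuming the six tests are independent (an approximation, since pairwise comparisons share groups):

    # Familywise error rate: probability of at least one false positive
    # across m tests, each run at significance level alpha, assuming
    # independent tests (an approximation for pairwise t-tests).
    alpha = 0.05
    m = 6  # number of pairwise comparisons for 4 groups: 4 * 3 / 2

    fwer = 1 - (1 - alpha) ** m
    print(f"Familywise error rate for {m} tests: {fwer:.3f}")  # ~0.265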

5 F-tests / Analysis of Variance (ANOVA)
Different types of ANOVA exist depending upon the experimental design (independent samples, repeated measures, multi-factorial).
Assumptions:
• observations within each sample are independent
• samples are drawn from normally distributed populations
• samples have equal variances (homogeneity of variance)
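A minimal sketch of how the last two assumptions might be checked in Python, using scipy's Shapiro-Wilk and Levene tests; the three groups below are made-up data, not from the slides:

    # Checking ANOVA assumptions on made-up data for three groups.
    from scipy import stats

    group_a = [20, 22, 19, 24, 21, 23]
    group_b = [25, 27, 26, 29, 24, 28]
    group_c = [30, 28, 31, 27, 29, 32]

    # Normality within each group (Shapiro-Wilk; H0 = data are normal).
    for name, g in [("A", group_a), ("B", group_b), ("C", group_c)]:
        res = stats.shapiro(g)
        print(f"group {name}: Shapiro-Wilk p = {res.pvalue:.3f}")

    # Homogeneity of variances (Levene; H0 = variances are equal).
    lev = stats.levene(group_a, group_b, group_c)
    print(f"Levene's test p = {lev.pvalue:.3f}")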

6 F-tests / Analysis of Variance (ANOVA)
F = (obtained difference between sample means) / (difference expected by chance (error))
or, in terms of variance:
F = (variance (differences) between sample means) / (variance (differences) expected by chance (error))
The difference between sample means is easy to express for 2 samples (e.g. X1 = 20, X2 = 30, difference = 10), but if X3 = 35 the concept of "differences between sample means" gets tricky.

7 F-tests / Analysis of Variance (ANOVA)
The solution is to use variance, which is related to the standard deviation: Standard deviation = √Variance
E.g.

            Set 1    Set 2
             20       28
             30       30
             35       31
    s²      58.3      2.33

These 2 variances provide a relatively accurate representation of the size of the differences within each set.
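A quick check of those two values in Python, using the standard library's sample variance (n - 1 denominator):

    # Sample variance and standard deviation for the two example sets.
    from statistics import variance, stdev

    set_1 = [20, 30, 35]
    set_2 = [28, 30, 31]

    print(variance(set_1), stdev(set_1))  # -> about 58.33 and 7.64
    print(variance(set_2), stdev(set_2))  # -> about 2.33 and 1.53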

8 F-tests / Analysis of Variance (ANOVA)
Simple ANOVA example: total variability is partitioned into two parts.
Between-treatments variance measures differences due to:
1. Treatment effects
2. Chance
Within-treatments variance measures differences due to:
1. Chance

9 F-tests / Analysis of Variance (ANOVA)
When the treatment has no effect, differences between groups/treatments are entirely due to chance. Numerator and denominator will be similar, so the F-ratio should have a value around 1.00. When the treatment does have an effect, the between-treatment differences (numerator) should be larger than chance (denominator), and the F-ratio should be noticeably larger than 1.00.
F = MSbetween / MSwithin

10 F-tests / Analysis of Variance (ANOVA)
Simple independent-samples ANOVA example: F(3, 8) = 9.00, p < 0.05.
There is a difference somewhere, but the omnibus F-test does not say where - you have to use post-hoc tests (essentially t-tests corrected for multiple comparisons) to examine this further.
[Table on the original slide: mean, SD and n for the Placebo, Drug A, Drug B and Drug C groups.]
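The group scores behind this example survive only as an image, so here is a hedged sketch with made-up scores for four groups of three (which gives the same design, df = 3 and 8, although the resulting F will not be the slide's 9.00), using scipy's one-way ANOVA:

    # One-way independent-samples ANOVA on made-up data: 4 groups of 3
    # scores -> df_between = 3, df_within = 8.
    from scipy import stats

    placebo = [10, 12, 11]
    drug_a  = [14, 15, 13]
    drug_b  = [17, 16, 18]
    drug_c  = [20, 19, 21]

    res = stats.f_oneway(placebo, drug_a, drug_b, drug_c)
    print(f"F(3, 8) = {res.statistic:.2f}, p = {res.pvalue:.4f}")

    # A significant F only says "a difference exists somewhere"; post-hoc
    # tests (e.g. pairwise t-tests with a Bonferroni correction, or Tukey's
    # HSD in recent scipy / statsmodels) are needed to locate it.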

11 One-way ANOVA computation
Table 1: Changes in catalase activity in Chlorella vulgaris during exposure to different environmental stresses. N = nitrogen, P = phosphorus, T = temperature and S = salinity.

                Low N    Low P    High T   Low T    High S   Total
    Mean         6.90    11.00    13.40    12.00      –      10.06
    St. dev.     1.83     2.13     2.49     4.50     3.74     4.01
    Variance     3.33     4.54     6.22    20.27    14.00    16.06

[The individual catalase readings for each treatment are shown on the original slide.]
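The raw readings survive only as an image, but the summary rows above are enough to reconstruct the one-way ANOVA, under the assumption (not stated in this transcript) of an equal sample size of n = 10 readings per treatment, which is consistent with the Total column. The missing fifth treatment mean can then be backed out from the grand mean. A hedged sketch:

    # One-way ANOVA reconstructed from summary statistics (group means and
    # variances), assuming n = 10 readings per treatment (an assumption).
    from scipy import stats

    n = 10                                         # assumed readings per treatment
    grand_mean = 10.06                             # "Total" column of Table 1
    means = [6.90, 11.00, 13.40, 12.00]            # four of the five treatment means
    variances = [3.33, 4.54, 6.22, 20.27, 14.00]   # all five treatment variances

    # With equal n, the grand mean is the average of the group means,
    # so the missing fifth mean can be recovered (comes out to 7.00).
    means.append(len(variances) * grand_mean - sum(means))

    k = len(means)                                 # number of treatments
    N = k * n                                      # total observations

    ss_between = n * sum((m - grand_mean) ** 2 for m in means)
    ss_within = (n - 1) * sum(variances)

    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (N - k)
    f_ratio = ms_between / ms_within

    p = stats.f.sf(f_ratio, k - 1, N - k)          # upper-tail F probability
    print(f"F({k - 1}, {N - k}) = {f_ratio:.2f}, p = {p:.2g}")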

12

13

14 [Worked one-way ANOVA computation for the five treatments (Low N, Low P, High T, Low T, High S); working shown as an image on the original slide.]

15

16

17 Two-way ANOVA computation
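The worked two-way computation on the following slides survives only as images. As a hedged sketch of the same idea, a two-way ANOVA with two crossed factors (main effects plus their interaction) can be run in Python with statsmodels; the factor names, levels and scores below are made up:

    # Two-way ANOVA sketch on made-up data with two crossed factors.
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    data = pd.DataFrame({
        "nutrient": ["low", "low", "low", "high", "high", "high"] * 2,
        "temp":     ["low"] * 6 + ["high"] * 6,
        "activity": [9, 7, 8, 13, 12, 14, 11, 10, 12, 19, 18, 20],
    })

    # Main effect of each factor plus the nutrient x temp interaction.
    model = ols("activity ~ C(nutrient) * C(temp)", data=data).fit()
    table = sm.stats.anova_lm(model, typ=2)
    print(table)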

18

19

20

21 Gravetter & Wallnau - Statistics for the behavioural sciences
References Gravetter & Wallnau - Statistics for the behavioural sciences Last years presentation, thank you to: Louise Whiteley & Elisabeth Rounis Google

