Understanding Statistics in Research


1 Understanding Statistics in Research

2 Reminders
Final drafts of the APA Style Paper are due in lecture next Wednesday (Nov 19).

3 Statistics
Descriptive Statistics: describe the data you collected from the sample
Inferential Statistics: making inferences about the population from the data collected from the sample; generalize results from the study to the population

4 Inferential Statistics
Example experiment: Group A gets a treatment to improve memory; Group B gets no treatment (control). After the treatment period, test both groups for memory.
Results: Group A’s average memory score is 80%; Group B’s average memory score is 76%.
Is the 4% difference a “real” difference (statistically significant), or is it just sampling error? Would this difference be found in the population? This is what inferential statistics tests.
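As a concrete (hypothetical) version of this question, the sketch below simulates two groups of made-up memory scores with means near 80 and 76 and runs an independent-samples t-test with scipy. The group size of 25, the spread of 10, and the seed are assumptions, not values from the slides.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=80, scale=10, size=25)  # hypothetical treatment scores
group_b = rng.normal(loc=76, scale=10, size=25)  # hypothetical control scores

# Independent-samples t-test: is the observed difference bigger than
# what sampling error alone would plausibly produce?
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"Group A mean = {group_a.mean():.1f}, Group B mean = {group_b.mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # reject H0 if p < alpha (e.g. 0.05)
```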

5 Inferential Statistics
Inferential statistics is based on stating hypotheses and testing them. There are five main steps to testing hypotheses.

6 Testing hypotheses
Step 1: State your hypotheses
Step 2: Set your criteria
Step 3: Collect your data from your sample
Step 4: Compute your test statistics
Step 5: Make a decision about your hypotheses: reject your null hypothesis, or fail to reject your null hypothesis

7 Testing hypotheses
Step 1: State your hypotheses
Null hypothesis (H0): “There are no differences between the groups.” This is the hypothesis that you are testing!
Alternative hypothesis (Ha): “There are effects/differences between the groups.” This is what you expect to find!

8 State your hypotheses
The null hypothesis is always the opposite of the alternative hypothesis.
Example:
Ha: The training group will be different from the control group.
H0: The training group will not be different from the control group.
Ha: The training group will perform better than the control group.
H0: The training group will not perform better than the control group (equal or worse).

9 State your hypotheses
You are not attempting to prove your alternative hypotheses; you are testing the null hypothesis.
If you reject the null hypothesis, then you are left with support for the alternative(s).

10 State your hypotheses
In the memory example experiment:
Alternative (Ha): mean of Group A ≠ mean of Group B (or: Group A > Group B)

11 State your hypotheses
In the memory example experiment:
Alternative (Ha): mean of Group A ≠ mean of Group B (or: Group A > Group B)
Null (H0): mean of Group A = mean of Group B (or: Group A ≤ Group B)

12 State your hypotheses
In the memory example experiment:
Alternative (Ha): mean of Group A ≠ mean of Group B (or: Group A > Group B)
Null (H0): mean of Group A = mean of Group B (or: Group A ≤ Group B)
Which hypothesis do we test?
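The ≠ and > alternatives correspond to two-sided and one-sided tests. A minimal sketch of the difference on hypothetical scores, assuming a scipy version (1.6+) whose ttest_ind accepts an alternative argument:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(80, 10, 25)  # hypothetical Group A (treatment) scores
group_b = rng.normal(76, 10, 25)  # hypothetical Group B (control) scores

# Two-sided: H0: mean A = mean B   vs.  Ha: mean A != mean B
t_two, p_two = stats.ttest_ind(group_a, group_b, alternative="two-sided")

# One-sided: H0: mean A <= mean B  vs.  Ha: mean A > mean B
t_one, p_one = stats.ttest_ind(group_a, group_b, alternative="greater")

print(f"two-sided p = {p_two:.3f}, one-sided p = {p_one:.3f}")
```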

13 Testing hypotheses
Step 1: State your hypotheses
Step 2: Set your decision criteria
Your alpha level will tell you what to decide: reject the null hypothesis, or fail to reject the null hypothesis

14 Set your decision criteria
Because you set the criteria, it is possible that you could make the wrong decision.
Type I error: saying there is a difference when there really isn’t one (rejecting the null when you should fail to reject). The alpha level is the probability of making this type of error.
Type II error: saying there is not a difference when there really is one (failing to reject when you should have rejected).

15 Types of Errors
Real world (truth) vs. the experimenter’s conclusion:
Reject H0 (conclude there are differences) when H0 is correct (there really are no differences): Type I error.
Fail to reject H0 (conclude there are no differences) when H0 is wrong (there really are differences): Type II error.
Reject H0 when H0 is wrong, or fail to reject H0 when H0 is correct: correct decision.

16 Types of Errors
Real-world example: Courtroom Analogy
The jury finds the defendant guilty when the defendant really is innocent: Type I error.
The jury finds the defendant not guilty when the defendant really is guilty: Type II error.

17 Types of Errors
Type I error: concluding that there is an effect (a difference between groups) when there really isn’t.
Alpha level (α), sometimes called the “significance level”: pick a low level of alpha; in psychology, 0.05 and 0.01 are most common.
Type II error: concluding that there isn’t an effect when there really is.
Beta (β): related to the statistical power of a test (1 − β), i.e., how likely you are to detect a difference if it is there.
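One way to see what α = 0.05 buys you is a quick simulation: when the null hypothesis is actually true, roughly 5% of experiments will still come out “significant.” This sketch assumes numpy and scipy; the group size, population mean, and SD are made-up numbers.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha = 0.05
n_experiments = 10_000
false_positives = 0

for _ in range(n_experiments):
    # H0 is true here: both groups are drawn from the same population
    group_a = rng.normal(75, 10, 30)
    group_b = rng.normal(75, 10, 30)
    _, p = stats.ttest_ind(group_a, group_b)
    if p < alpha:              # rejecting H0 anyway = Type I error
        false_positives += 1

print(f"Type I error rate ~ {false_positives / n_experiments:.3f}")  # close to 0.05
```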

18 Testing hypotheses
Step 1: State your hypotheses
Step 2: Set your decision criteria
Step 3: Collect your data from your sample(s)
Step 4: Compute your test statistics
Descriptive statistics (means, standard deviations, etc.)
Inferential statistics (t-tests, ANOVA, etc.)
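Step 4 has two halves. A minimal sketch of the descriptive half on made-up scores (the inferential half, the t-test itself, is sketched under slides 4 and 30); the data here are hypothetical.

```python
import numpy as np

# Hypothetical memory scores collected in step 3
group_a = np.array([82, 79, 85, 77, 81, 84, 78, 80])  # treatment
group_b = np.array([75, 78, 74, 77, 79, 73, 76, 78])  # control

# Step 4, descriptive side: summarize each sample
for name, scores in (("A", group_a), ("B", group_b)):
    print(f"Group {name}: mean = {scores.mean():.2f}, "
          f"SD = {scores.std(ddof=1):.2f}, n = {scores.size}")
```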

19 Testing hypotheses
Step 1: State your hypotheses
Step 2: Set your decision criteria
Step 3: Collect your data from your sample(s)
Step 4: Compute your test statistics
Step 5: Make a decision about your hypotheses
Reject H0: statistically significant differences
Fail to reject H0: no statistically significant differences

20 Decision about your hypotheses
What does a statistically significant difference mean?
You reject the null hypothesis and support the alternative hypothesis.
The differences/effects you found are above what you’d expect by “chance.”

21 Decision about your hypotheses
The level of “chance” is determined by how much sampling error there is: the more sampling error, the less likely you are to have a significant finding.
Sampling error is the difference between the sample and the population. It is influenced by:
Sample size
Population variability

22 Sampling Error
[Figure: a population distribution with the population mean marked and a single observation drawn (n = 1); the sampling error is the difference between the population mean and the sample mean.]

23 Sampling Error
[Figure: the same population distribution with a sample of n = 2; the sample mean is the average of the two observations, and the sampling error is again the difference between the population mean and the sample mean.]

24 Sampling Error
[Figure: a sample of n = 10 from the same population. As the sample size increases, the sampling error (population mean − sample mean) decreases.]

25 Sampling Error
[Figure: two populations, one with large variability and one with small variability. With small population variability there is a smaller range of possible sample means, so the sampling error decreases.]

26 Sampling Error
Influenced by:
Sample size: as the sample size increases, the sampling error decreases
Population variability: as the population variability decreases, the sampling error decreases

27 Distribution of sample means
These two factors (population variability and sample size) combine to impact the distribution of sample means: the distribution of all possible sample means of a particular sample size that can be drawn from the population.
[Figure: several samples of size n (A, B, C, D) drawn from the population, each with its own sample mean; the average sampling error across the sample means is what we call “chance.”]
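A small simulation sketch of such a distribution of sample means, under an assumed population with mean 100 and SD 15 (both made-up numbers): the spread of the sample means shrinks roughly as σ/√n as the sample size grows.

```python
import numpy as np

rng = np.random.default_rng(3)
pop_mean, pop_sd = 100, 15  # assumed population parameters

for n in (1, 2, 10, 100):
    # Draw 10,000 samples of size n and keep each sample's mean
    sample_means = rng.normal(pop_mean, pop_sd, size=(10_000, n)).mean(axis=1)
    avg_error = np.abs(sample_means - pop_mean).mean()
    print(f"n = {n:3d}: SD of sample means = {sample_means.std():5.2f} "
          f"(sigma/sqrt(n) = {pop_sd / np.sqrt(n):5.2f}), "
          f"average sampling error = {avg_error:.2f}")
```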

28 What is Significance?
Rejecting the null hypothesis.
A statistically significant difference means the researcher is concluding that there is a difference above and beyond chance, with the probability of making a Type I error at 5% (assuming an alpha level of 0.05).
It is not the same thing as theoretical significance: it is only a statistical difference, and doesn’t mean that it is an important difference.

29 What is Non-significance?
Failing to reject the null hypothesis.
You can’t “accept” the null hypothesis, because you can’t prove that there is no effect.
Usually not as interesting as rejecting the null.
Typically, check to see if you made a Type II error (failed to detect a difference that is really there):
Check the statistical power (the probability of finding an effect if there is one) of your test; the sample size may be too small, or the effects you’re looking for may be really small.
Check your method; there may be too much variability.
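A simulation is one way to do that power check. The sketch below estimates power for one assumed scenario (means of 80 vs. 76, SD of 10, 25 participants per group, α = 0.05; all hypothetical numbers) by counting how often a t-test detects the difference.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
alpha, n_per_group, n_sims = 0.05, 25, 5_000
detected = 0

for _ in range(n_sims):
    # Ha is true here: the groups really differ by 4 points
    group_a = rng.normal(80, 10, n_per_group)
    group_b = rng.normal(76, 10, n_per_group)
    _, p = stats.ttest_ind(group_a, group_b)
    detected += p < alpha

power = detected / n_sims  # probability of finding the effect when it is there
print(f"estimated power = {power:.2f}, beta (Type II error rate) = {1 - power:.2f}")
```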

30 Testing hypotheses: Next lecture
Tests that calculate significance:
Generic statistical test
t-test: 2 group means
Analysis of Variance (ANOVA): more than 2 group means
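As a preview, a minimal sketch of both tests on made-up scores for three hypothetical groups, using scipy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
training = rng.normal(80, 10, 20)  # hypothetical scores, three groups
control = rng.normal(76, 10, 20)
placebo = rng.normal(77, 10, 20)

# t-test: compares 2 group means
t_stat, p_t = stats.ttest_ind(training, control)

# One-way ANOVA: compares more than 2 group means
f_stat, p_f = stats.f_oneway(training, control, placebo)

print(f"t-test: t = {t_stat:.2f}, p = {p_t:.3f}")
print(f"ANOVA:  F = {f_stat:.2f}, p = {p_f:.3f}")
```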

