Using Statistics in Research Psych 231: Research Methods in Psychology.

1 Using Statistics in Research Psych 231: Research Methods in Psychology

2 Announcements Final drafts of the class experiment are due in labs the week after Thanksgiving. Don't forget to look over the grading checklist in the PIP packet.

3 Inferential Statistics Purpose: to make claims about populations based on data collected from samples. What's the big deal? Example experiment: Group A gets a treatment to improve memory; Group B gets no treatment (control). After the treatment period, both groups are tested for memory. Results: Group A's average memory score is 80%, Group B's is 76%. Is the 4% difference a "real" difference (statistically significant), or is it just sampling error?
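
As a concrete illustration of this question, the sketch below simulates two groups of hypothetical memory scores (centered near 80% and 76%) and runs an independent-samples t-test. The group sizes, variability, and random seed are all made-up values for illustration, not data from the actual class experiment.

```python
# A minimal sketch of the memory example: are two sample means this far
# apart just by sampling error? (All numbers here are hypothetical.)
import numpy as np
from scipy import stats

rng = np.random.default_rng(231)                   # arbitrary seed for reproducibility
group_a = rng.normal(loc=80, scale=10, size=25)    # hypothetical treatment scores (%)
group_b = rng.normal(loc=76, scale=10, size=25)    # hypothetical control scores (%)

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"Mean A = {group_a.mean():.1f}%, Mean B = {group_b.mean():.1f}%")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# If p < .05 we call the difference "statistically significant";
# otherwise we treat it as plausibly just sampling error.
```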

4 Testing Hypotheses Step 1: State your hypotheses. Step 2: Set your decision criteria. Step 3: Collect your data from your sample(s). Step 4: Compute your test statistics. Step 5: Make a decision about your null hypothesis: "Reject H0" or "Fail to reject H0."

5 Testing Hypotheses Step 1: State your hypotheses. Null hypothesis (H0): "there are no differences (effects)." Alternative hypothesis(es): generally, "not all groups are equal." The null hypothesis is the one you test. You aren't out to prove the alternative hypothesis (although it feels like that's what you want to do); if you reject the null hypothesis, you're left with support for the alternative(s) (NOT proof!).

6 Testing Hypotheses Step 1: State your hypotheses. In our memory example experiment: Null H0: mean of Group A = mean of Group B. Alternative HA: mean of Group A ≠ mean of Group B (or, more precisely, Group A > Group B). Our theory is that the treatment should improve memory; that's the alternative hypothesis, and it is NOT the one we test with inferential statistics. Instead, we test H0.
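
The choice between the two-tailed alternative (A ≠ B) and the directional one (A > B) shows up directly in how a test is run. A hedged sketch, assuming a recent SciPy version (the `alternative` keyword requires SciPy >= 1.6) and the same made-up score arrays as before:

```python
# Two-sided vs. directional alternative hypotheses for the memory example.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(80, 10, 25)   # hypothetical treatment scores
group_b = rng.normal(76, 10, 25)   # hypothetical control scores

# H_A: mean A != mean B  (two-tailed)
t2, p_two_sided = stats.ttest_ind(group_a, group_b, alternative="two-sided")
# H_A: mean A > mean B   (directional)
t1, p_one_sided = stats.ttest_ind(group_a, group_b, alternative="greater")

print(f"two-sided p = {p_two_sided:.3f}, one-sided p = {p_one_sided:.3f}")
# Either way, the test is of H0 (equal means); H_A is only supported
# indirectly, by rejecting H0.
```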

7 Testing Hypotheses Step 2: Set your decision criteria. Your alpha level will be your guide for when to "reject the null hypothesis" or "fail to reject the null hypothesis." Either decision could be the correct conclusion or an incorrect one, and there are two different ways to go wrong. Type I error: saying that there is a difference when there really isn't one (the probability of making this error is the alpha level). Type II error: saying that there is not a difference when there really is one.
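
The decision criterion itself is just a threshold comparison. A minimal sketch, assuming the conventional alpha of .05 and a p-value produced by some test (the function name is hypothetical, for illustration only):

```python
# Step 2 as code: fix alpha before looking at the data, then compare.
ALPHA = 0.05                      # conventional criterion in psychology

def decide(p_value, alpha=ALPHA):
    """Return the hypothesis-test decision for a given p-value."""
    if p_value < alpha:
        return "Reject H0 (statistically significant)"
    return "Fail to reject H0 (not statistically significant)"

print(decide(0.03))   # -> Reject H0 (statistically significant)
print(decide(0.20))   # -> Fail to reject H0 (not statistically significant)
```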

8 Error types

                               Real world ('truth')
  Experimenter's conclusion    H0 is correct       H0 is wrong
  Reject H0                    Type I error        correct decision
  Fail to reject H0            correct decision    Type II error

9 Error types: Courtroom analogy

                       Real world ('truth')
  Jury's decision      Defendant is innocent    Defendant is guilty
  Find guilty          Type I error             correct decision
  Find not guilty      correct decision         Type II error

10 Error types Type I error: concluding that there is an effect (a difference between groups) when there really isn't one. The probability of a Type I error (alpha) is sometimes called the "significance level." We try to minimize this (keep it low) by picking a low alpha level; in psychology, 0.05 and 0.01 are most common. Type II error: concluding that there isn't an effect when there really is one. This is related to the statistical power of a test: how likely you are to detect a difference if it is there.
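
One way to see why alpha is the Type I error rate is to simulate many experiments in which H0 is actually true: about 5% of them will still come out "significant" at alpha = .05. A small simulation sketch (all parameters are hypothetical):

```python
# Simulate many experiments where H0 is TRUE (both groups drawn from the
# same population) and count how often we would wrongly reject H0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha, n_experiments, n_per_group = 0.05, 10_000, 25

false_alarms = 0
for _ in range(n_experiments):
    a = rng.normal(75, 10, n_per_group)    # same population...
    b = rng.normal(75, 10, n_per_group)    # ...so there is no real effect
    if stats.ttest_ind(a, b).pvalue < alpha:
        false_alarms += 1

print(f"Type I error rate ≈ {false_alarms / n_experiments:.3f}")  # ≈ 0.05
```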

11 Testing Hypotheses Step 1: State your hypotheses. Step 2: Set your decision criteria. Step 3: Collect your data from your sample(s). Step 4: Compute your test statistics: descriptive statistics (means, standard deviations, etc.) and inferential statistics (t-tests, ANOVAs, etc.). Step 5: Make a decision about your null hypothesis: reject H0 ("statistically significant differences") or fail to reject H0 ("not statistically significant differences").
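
Putting steps 3 through 5 together for the hypothetical memory data: compute descriptive statistics, then an inferential test, then apply the decision rule. A sketch, with all numbers invented for illustration:

```python
# Steps 3-5 for the hypothetical memory experiment.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
group_a = rng.normal(80, 10, 25)   # Step 3: "collected" (here simulated) data
group_b = rng.normal(76, 10, 25)

# Step 4a: descriptive statistics
for name, g in [("A", group_a), ("B", group_b)]:
    print(f"Group {name}: M = {g.mean():.1f}, SD = {g.std(ddof=1):.1f}")

# Step 4b: inferential statistic (independent-samples t-test)
result = stats.ttest_ind(group_a, group_b)

# Step 5: decision about H0
alpha = 0.05
if result.pvalue < alpha:
    print("Reject H0: statistically significant difference")
else:
    print("Fail to reject H0: no statistically significant difference")
```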

12 Statistical significance "Statistically significant differences" is what we call the result when we "reject the null hypothesis." Essentially it means that the observed difference is above what you'd expect by chance, where "chance" is determined by estimating how much sampling error there is. Factors affecting "chance": sample size and population variability.

13 Sampling error, n = 1 [figure: population distribution with the population mean and a single sampled score; sampling error = population mean - sample mean]

14 Sampling error, n = 2 [figure: population distribution with the population mean, two sampled scores, and their sample mean; sampling error = population mean - sample mean]

15 Sampling error, n = 10 [figure: population distribution with the population mean, ten sampled scores, and their sample mean; sampling error = population mean - sample mean] Generally, as the sample size increases, the sampling error decreases.
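
The pattern in slides 13-15 can be checked with a quick simulation: draw samples of increasing size from the same population and watch the typical sampling error (population mean - sample mean) shrink. A sketch with an arbitrary, made-up population:

```python
# Average absolute sampling error for different sample sizes.
import numpy as np

rng = np.random.default_rng(1)
pop_mean, pop_sd = 100, 15          # hypothetical population parameters
n_draws = 5_000                     # samples per sample size

for n in (1, 2, 10, 100):
    sample_means = rng.normal(pop_mean, pop_sd, size=(n_draws, n)).mean(axis=1)
    avg_error = np.mean(np.abs(pop_mean - sample_means))
    print(f"n = {n:3d}: average |sampling error| ≈ {avg_error:.2f}")
# The average error shrinks as n grows (roughly in proportion to 1/sqrt(n)).
```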

16 Sampling error Typically, the narrower the population distribution, the narrower the range of possible samples, and the smaller the "chance." [figure: two population distributions, one with small population variability and one with large population variability]

17 Sampling error These two factors combine to impact the distribution of sample means. The distribution of sample means is the distribution of all possible sample means of a particular sample size that can be drawn from the population. [figure: many samples of size n drawn from the population, each with its own sample mean (X̄A, X̄B, X̄C, X̄D, ...); together they form the distribution of sample means, whose average sampling error is the "chance" we test against]
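
Both factors show up in the spread of the distribution of sample means: its standard deviation (the standard error) is roughly the population SD divided by the square root of the sample size. A simulation sketch with hypothetical values:

```python
# Build an (approximate) distribution of sample means by repeated sampling
# and compare its spread to the sigma / sqrt(n) rule.
import numpy as np

rng = np.random.default_rng(3)
n, n_samples = 10, 10_000

for pop_sd in (5, 20):                               # small vs. large variability
    means = rng.normal(100, pop_sd, size=(n_samples, n)).mean(axis=1)
    print(f"pop SD = {pop_sd:2d}: SD of sample means ≈ {means.std(ddof=1):.2f} "
          f"(sigma/sqrt(n) = {pop_sd / np.sqrt(n):.2f})")
# Larger population variability or smaller n means a wider distribution of
# sample means, so a bigger difference is needed before it exceeds "chance".
```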

18 Significance "A statistically significant difference" means the researcher is concluding that there is a difference above and beyond chance, with the probability of making a Type I error at 5% (assuming an alpha level of 0.05). Note that "statistical significance" is not the same thing as theoretical significance: it only means that there is a statistical difference, not that it is an important difference.

19 Non-Significance Failing to reject the null hypothesis. Generally, we are not interested in "accepting the null hypothesis" (remember, we can't prove things, only disprove them). Usually you check whether you made a Type II error (failed to detect a difference that is really there): check the statistical power of your test (maybe the sample size is too small, or the effects you're looking for are really small), and check your controls (maybe there is too much variability).
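
Statistical power can be estimated the same way as the Type I error rate: simulate many experiments in which the effect is really there and count how often the test detects it. A sketch with an invented effect size, variability, and sample sizes:

```python
# Estimate power = P(reject H0 | a real difference exists) by simulation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(99)
alpha, true_diff, pop_sd, n_sims = 0.05, 4, 10, 5_000   # hypothetical values

for n in (25, 100):
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(80, pop_sd, n)              # treatment: real 4-point effect
        b = rng.normal(80 - true_diff, pop_sd, n)  # control
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    print(f"n = {n}: estimated power ≈ {hits / n_sims:.2f}")
# Low power (small n, or small/noisy effects) means a high Type II error rate.
```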

20 Next time: Inferential Statistical Tests Different statistical tests: a "generic test," the t-test, and Analysis of Variance (ANOVA). Have a great Thanksgiving break!
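
As a preview of those tests, SciPy provides both: `ttest_ind` for comparing two groups and `f_oneway` for a one-way ANOVA across several groups. A sketch with made-up data for three conditions:

```python
# Preview: a t-test for two groups and a one-way ANOVA for three groups,
# using hypothetical data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
g1 = rng.normal(80, 10, 25)   # hypothetical condition means and spread
g2 = rng.normal(76, 10, 25)
g3 = rng.normal(74, 10, 25)

t, p_t = stats.ttest_ind(g1, g2)          # two-group comparison
f, p_f = stats.f_oneway(g1, g2, g3)       # three-group, one-way ANOVA
print(f"t-test: t = {t:.2f}, p = {p_t:.3f}")
print(f"ANOVA:  F = {f:.2f}, p = {p_f:.3f}")
```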

