
Psych 231: Research Methods in Psychology



Presentation transcript: "Psych 231: Research Methods in Psychology"

1 Psych 231: Research Methods in Psychology
Statistics (cont.)

2 Announcements
- Quiz 10 (chapter 7) is due on Nov. 13th at midnight.
- Journal Summary 2 assignment: due in class NEXT week (Wednesday, Nov. 18th) <- moved due date.
- Group projects: plan to have your analyses done before Thanksgiving break; GAs will be available during lab times to help.
- Poster sessions are the last lab sections of the semester (last week of classes), so start thinking about your posters. I will lecture about poster presentations on the Monday before Thanksgiving break.

3 Statistics
Two general kinds of statistics:
- Descriptive statistics: used to describe, simplify, & organize data sets; describing distributions of scores.
- Inferential statistics: used to test claims about the population, based on data gathered from samples. Takes sampling error into account: are the results above and beyond what you'd expect by random chance?
[Figure: a sample is drawn from the population; inferential statistics are used to generalize back from the sample to the population.]
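To make the contrast concrete, here is a minimal Python sketch (the scores are invented for illustration): descriptive statistics summarize each sample, and an inferential test asks whether the difference between samples is bigger than sampling error alone would produce.

    import statistics
    from scipy import stats

    # Hypothetical memory scores (values invented for illustration)
    group_a = [82, 79, 85, 77, 81, 76]   # treatment group
    group_b = [75, 78, 74, 80, 73, 76]   # control group

    # Descriptive statistics: describe and simplify each data set
    print("A: mean", statistics.mean(group_a), "SD", round(statistics.stdev(group_a), 2))
    print("B: mean", statistics.mean(group_b), "SD", round(statistics.stdev(group_b), 2))

    # Inferential statistics: test a claim about the populations,
    # taking sampling error into account
    t, p = stats.ttest_ind(group_a, group_b)
    print("t =", round(t, 2), "p =", round(p, 3))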

4 Inferential Statistics
Purpose: to make claims about populations based on data collected from samples. What's the big deal?
Example experiment:
- Group A gets a treatment to improve memory; Group B gets no treatment (control).
- After the treatment period, test both groups for memory.
- Results: Group A's average memory score is 80%; Group B's is 76%.
Is the 4% difference a "real" difference (statistically significant), or is it just sampling error?
[Figure: two samples drawn from the population; Sample A (treatment), X̄ = 80%; Sample B (no treatment), X̄ = 76%.]
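One way to see what "just sampling error" means is to simulate a world in which the treatment does nothing: draw two samples from the same population many times and count how often a difference of 4 points or more appears by chance alone. This is a sketch under assumed population values (mean 78, SD 10, n = 25 per group), not the actual class data.

    import numpy as np

    rng = np.random.default_rng(0)
    n, reps = 25, 10_000
    count = 0
    for _ in range(reps):
        a = rng.normal(78, 10, n)   # both groups come from the SAME population,
        b = rng.normal(78, 10, n)   # i.e., there is no treatment effect at all
        if abs(a.mean() - b.mean()) >= 4:
            count += 1
    print(f"Chance of a 4-point difference with no real effect: {count / reps:.3f}")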

5 Testing Hypotheses
- Step 1: State your hypotheses: the null hypothesis (H0) and the alternative hypothesis(es) (HA).
- Step 2: Set your decision criteria.
- Step 3: Collect your data from your sample(s).
- Step 4: Compute your test statistics.
- Step 5: Make a decision about your null hypothesis: reject H0 ("statistically significant differences") or fail to reject H0 ("not statistically significant differences").
Type I error (α): concluding that there is an effect (a difference between groups) when there really isn't.
Type II error (β): concluding that there isn't an effect when there really is.
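The five steps map directly onto a short analysis script. The sketch below uses invented data, an alpha level of .05, and an independent-samples t-test as the test statistic; rejecting H0 whenever p < alpha is what fixes the Type I error rate at alpha.

    from scipy import stats

    # Step 1: H0: mu_A = mu_B;  HA: mu_A != mu_B
    # Step 2: set the decision criterion
    alpha = 0.05
    # Step 3: collect data from the samples (hypothetical scores)
    group_a = [80, 84, 77, 83, 79, 81]
    group_b = [74, 78, 73, 77, 76, 75]
    # Step 4: compute the test statistic
    t, p = stats.ttest_ind(group_a, group_b)
    # Step 5: make a decision about H0
    if p < alpha:
        print(f"Reject H0: t = {t:.2f}, p = {p:.3f} (statistically significant)")
    else:
        print(f"Fail to reject H0: t = {t:.2f}, p = {p:.3f}")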

6 Summary to this point
Example experiment (the claims are about populations):
- Group A gets a treatment to improve memory; Group B gets no treatment (control).
- After the treatment period, test both groups for memory.
- Results: Group A's average memory score is 80%; Group B's is 76%.
- H0: μA = μB (there is no difference between Group A and Group B).
- Is the 4% difference a "real" difference (statistically significant), or is it just sampling error?

Experimenter's conclusions vs. the real world ("truth"):

                       H0 is correct       H0 is wrong
  Reject H0            Type I error        correct decision
  Fail to reject H0    correct decision    Type II error

[Figure: two sample distributions, X̄A = 80% and X̄B = 76%.]

7 Statistical significance
"Statistically significant differences": when you "reject your null hypothesis." Essentially this means that the observed difference is above what you'd expect by chance. "Chance" is determined by estimating how much sampling error there is.
Factors affecting "chance":
- Sample size
- Population variability

8 Sampling error
[Figure: a population distribution with the population mean marked and a single sampled score x (n = 1); sampling error = population mean - sample mean.]

9 Sampling error
[Figure: the same population distribution with a sample of n = 2 scores and their sample mean; sampling error = population mean - sample mean.]

10 Sampling error
Generally, as the sample size increases, the sampling error decreases.
[Figure: the population distribution with a sample of n = 10 scores and their sample mean; sampling error = population mean - sample mean.]
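The pattern in these three slides can be checked by simulation. This sketch assumes a population with mean 100 and SD 15 and reports the average absolute sampling error for samples of size 1, 2, and 10.

    import numpy as np

    rng = np.random.default_rng(1)
    pop_mean, pop_sd, reps = 100, 15, 10_000

    for n in (1, 2, 10):
        # draw `reps` samples of size n and record each sample's sampling error
        errors = np.abs(pop_mean - rng.normal(pop_mean, pop_sd, (reps, n)).mean(axis=1))
        print(f"n = {n:2d}: average |pop mean - sample mean| = {errors.mean():.2f}")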

11 Sampling error
Typically, the narrower the population distribution, the narrower the range of possible samples, and the smaller the "chance."
[Figure: two population distributions, one with large population variability and one with small population variability.]

12 Sampling error
These two factors combine to impact the distribution of sample means. The distribution of sample means is the distribution of all possible sample means of a particular sample size that can be drawn from the population.
[Figure: samples of size n (X̄A, X̄B, X̄C, X̄D, ...) drawn from the population are combined to form the distribution of sample means; the average sampling error in that distribution is what we mean by "chance." More detail appears in the review slides at the end (slides 30-41).]

13 "Generic" statistical test
Tests the question: are there differences between groups due to a treatment?
Two possibilities in the "real world": here, H0 is true (no treatment effect), so both samples come from one population.
[Figure: one population producing two sample distributions, X̄A = 80% and X̄B = 76%; the same Type I / Type II error table as on slide 6.]

14 "Generic" statistical test
Tests the question: are there differences between groups due to a treatment?
Two possibilities in the "real world": H0 is true (no treatment effect) or H0 is false (there is a treatment effect). If H0 is false, people who get the treatment change; they form a new population (the "treatment population"), so there are two populations.
[Figure: two populations, each producing a sample distribution, X̄A = 80% and X̄B = 76%; the same Type I / Type II error table as on slide 6.]

15 "Generic" statistical test
Why might the samples (X̄A and X̄B) be different? (What is the source of the variability between groups?)
- ER: random sampling error
- ID: individual differences (if a between-subjects factor)
- TR: the effect of a treatment

16 "Generic" statistical test
- ER: random sampling error
- ID: individual differences (if a between-subjects factor)
- TR: the effect of a treatment
The generic test statistic is a ratio of sources of variability:

  Computed test statistic = observed difference / difference from chance = (TR + ID + ER) / (ID + ER)
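A toy numeric example may help fix the ratio in mind. The numbers below are invented purely to show how the ratio behaves: when the treatment effect (TR) is zero the ratio hovers around 1, and it grows as TR grows relative to ID + ER.

    def generic_test_statistic(tr, id_, er):
        """Ratio of observed variability (TR + ID + ER) to variability expected by chance (ID + ER)."""
        return (tr + id_ + er) / (id_ + er)

    print(generic_test_statistic(tr=0, id_=4, er=6))   # 1.0: nothing beyond chance
    print(generic_test_statistic(tr=20, id_=4, er=6))  # 3.0: the treatment effect stands out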

17 Sampling error
The distribution of sample means is the distribution of all possible sample means of a particular sample size that can be drawn from the population.
[Figure: samples of size n drawn from the population form the distribution of sample means; the average sampling error in that distribution is the "difference from chance" in the denominator of the test statistic.]

18 "Generic" statistical test
Things that affect the computed test statistic, (TR + ID + ER) / (ID + ER):
- Size of the treatment effect: the bigger the effect, the bigger the computed test statistic.
- Difference expected by chance (sampling error), which depends on:
  - Sample size
  - Variability in the population

19 Some inferential statistical tests
- 1 factor with two groups: t-tests
  - Between groups: 2 independent samples
  - Within groups: repeated measures samples (matched, related)
- 1 factor with more than two groups: Analysis of Variance (ANOVA) (either between groups or repeated measures)
- Multi-factorial: factorial ANOVA
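In practice, each design on this slide corresponds to a different routine in a statistics package. A rough map, using scipy as one example (invented data; other packages have equivalent functions):

    from scipy import stats

    a = [12, 15, 11, 14, 13]
    b = [18, 17, 20, 16, 19]
    c = [25, 23, 27, 24, 26]

    stats.ttest_ind(a, b)      # 1 factor, 2 groups, between subjects (independent samples)
    stats.ttest_rel(a, b)      # 1 factor, 2 conditions, within subjects (repeated measures)
    stats.f_oneway(a, b, c)    # 1 factor, more than 2 groups (one-way ANOVA)
    # Factorial designs need a routine that models several factors at once
    # (e.g., an ANOVA function in statsmodels or a similar package).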

20 T-test
- Design: 2 separate experimental conditions.
- Degrees of freedom: based on the size of the sample and the kind of t-test.
- Formula:

  t = observed difference / difference expected by chance = (X̄1 - X̄2) / (difference based on sampling error)

  The computation of the chance term differs for between- and within-subjects t-tests.
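The formula can be computed by hand to see where the pieces come from. This sketch builds an independent-samples t from the observed difference and a pooled estimate of the difference expected by chance, then checks it against scipy; the scores are invented.

    import math
    from scipy import stats

    x1 = [80, 84, 77, 83, 79, 81]
    x2 = [74, 78, 73, 77, 76, 75]
    n1, n2 = len(x1), len(x2)
    m1, m2 = sum(x1) / n1, sum(x2) / n2
    v1 = sum((x - m1) ** 2 for x in x1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in x2) / (n2 - 1)

    # Difference expected by chance: pooled standard error of the difference
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    se_diff = math.sqrt(sp2 * (1 / n1 + 1 / n2))

    t = (m1 - m2) / se_diff
    df = n1 + n2 - 2                 # degrees of freedom for a between-subjects t-test
    print(t, df)
    print(stats.ttest_ind(x1, x2))   # should match the hand computation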

21 T-test: Reporting your results
Report:
- The observed difference between conditions
- The kind of t-test
- The computed t-statistic
- The degrees of freedom for the test
- The "p-value" of the test
Examples:
- "The mean of the treatment group was 12 points higher than the control group. An independent samples t-test yielded a significant difference, t(24) = 5.67, p < 0.05."
- "The mean score of the post-test was 12 points higher than the pre-test. A repeated measures t-test demonstrated that this difference was significant, t(12) = 5.67, p < 0.05."
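If you compute the test in code, the pieces the slide asks you to report are returned by the test function, so a report sentence can be assembled directly. A sketch with invented pre/post scores (the wording still has to fit your own results section):

    from scipy import stats

    pre = [70, 72, 68, 75, 71, 69, 74]
    post = [78, 80, 77, 84, 79, 78, 85]
    result = stats.ttest_rel(pre, post)   # repeated measures t-test
    df = len(pre) - 1
    print(f"A repeated measures t-test showed a significant difference, "
          f"t({df}) = {abs(result.statistic):.2f}, p = {result.pvalue:.3f}.")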

22 Analysis of Variance
- Designs: more than two groups; 1-factor ANOVA or factorial ANOVA; both within- and between-groups factors.
- The test statistic is an F-ratio.
- Degrees of freedom: several to keep track of; the number of them depends on the design.

23 Analysis of Variance
More than two groups: now we can't just compute a simple difference score, since there is more than one difference. So we use variance instead of simply the difference; variance is essentially an average difference.

  F-ratio = observed variance / variance from chance
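A minimal sketch of the F-ratio idea with three invented groups: the ratio compares variability between the group means ("observed variance") to the average variability within groups ("variance from chance").

    import numpy as np
    from scipy import stats

    groups = [np.array([12, 15, 11, 14, 13]),
              np.array([25, 23, 27, 24, 26]),
              np.array([27, 29, 26, 28, 30])]

    grand_mean = np.concatenate(groups).mean()
    k = len(groups)
    n = sum(len(g) for g in groups)

    # "Observed variance": variability of the group means around the grand mean
    ms_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups) / (k - 1)
    # "Variance from chance": average variability of scores within their own group
    ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n - k)

    print("F =", ms_between / ms_within)
    print(stats.f_oneway(*groups))   # same F-ratio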

24 1 factor ANOVA
1 factor with more than two levels: we can't just compute a simple difference score, since there is more than one difference (A - B, B - C, & A - C).

25 1 factor ANOVA
Null hypothesis (the ANOVA tests this one!!):
- H0: all the groups are equal: X̄A = X̄B = X̄C
Alternative hypotheses (do further tests to pick between these):
- HA: not all the groups are equal:
  - X̄A ≠ X̄B ≠ X̄C
  - X̄A ≠ X̄B = X̄C
  - X̄A = X̄B ≠ X̄C
  - X̄A = X̄C ≠ X̄B

26 1 factor ANOVA
Planned contrasts and post-hoc tests: further tests used to rule out the different alternative hypotheses (X̄A ≠ X̄B ≠ X̄C, X̄A = X̄B ≠ X̄C, X̄A ≠ X̄B = X̄C, X̄A = X̄C ≠ X̄B).
- Test 1: A vs. B
- Test 2: A vs. C
- Test 3: B vs. C
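One common way to run the follow-up tests is pairwise t-tests with a correction for making several comparisons. This sketch uses a Bonferroni adjustment (multiplying each p-value by the number of comparisons) on invented data; your lab may prescribe a different post-hoc procedure.

    from itertools import combinations
    from scipy import stats

    groups = {"A": [12, 15, 11, 14, 13],
              "B": [25, 23, 27, 24, 26],
              "C": [27, 29, 26, 28, 30]}

    pairs = list(combinations(groups, 2))
    for name1, name2 in pairs:
        t, p = stats.ttest_ind(groups[name1], groups[name2])
        p_adj = min(p * len(pairs), 1.0)   # Bonferroni correction
        print(f"{name1} vs {name2}: t = {t:.2f}, corrected p = {p_adj:.4f}")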

27 1 factor ANOVA: Reporting your results
Report:
- The observed differences
- The kind of test
- The computed F-ratio
- The degrees of freedom for the test
- The "p-value" of the test
- Any post-hoc or planned comparison results
Example:
- "The mean score of Group A was 12, Group B was 25, and Group C was 27. A 1-way ANOVA was conducted and the results yielded a significant difference, F(2,25) = 5.67, p < 0.05. Post hoc tests revealed that the differences between groups A and B and A and C were statistically reliable (respectively t(1) = 5.67, p < 0.05 & t(1) = 6.02, p < 0.05). Groups B and C did not differ significantly from one another."

28 Factorial ANOVAs
We covered much of this in our experimental design lecture.
- More than one factor; factors may be within or between; the overall design may be entirely within, entirely between, or mixed.
- Many F-ratios may be computed: an F-ratio is computed to test the main effect of each factor, and an F-ratio is computed to test each of the potential interactions between the factors.
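One way to get all of the F-ratios for a factorial design in a single pass is a formula-based ANOVA, shown here as an illustrative sketch with invented data using statsmodels (other packages work similarly). The C(A) * C(B) formula requests both main effects and the A x B interaction.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    rng = np.random.default_rng(2)
    df = pd.DataFrame({
        "A": np.repeat(["a1", "a2"], 20),                   # factor A, 2 levels
        "B": np.tile(np.repeat(["b1", "b2"], 10), 2),       # factor B, 2 levels
    })
    # assumed data: a main effect of A only, plus random noise
    df["y"] = rng.normal(50, 5, len(df)) + (df["A"] == "a2") * 4

    model = ols("y ~ C(A) * C(B)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))   # one F-ratio per main effect and interaction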

29 Factorial ANOVAs: Reporting your results
Report:
- The observed differences: because there may be a lot of these, you may present them in a table instead of directly in the text.
- The kind of design, e.g. "2 x 2 completely between factorial design."
- The computed F-ratios: you may see separate paragraphs for each factor, and for interactions.
- The degrees of freedom for the test: each F-ratio will have its own set of df's.
- The "p-value" of the test: you may want to just say "all tests were tested with an alpha level of 0.05."
- Any post-hoc or planned comparison results: typically only the theoretically interesting comparisons are presented.

30 Distribution of sample means
The following slides are available for a more concrete review of the distribution of sample means.

31 Distribution of sample means
The distribution of sample means is a "theoretical" distribution that sits between the sample and the population:
- Each entry in it is the mean of a group of scores.
- The comparison distribution for a test is this distribution of means.
[Figure: Population -> Distribution of sample means -> Sample.]

32 Distribution of sample means
A simple case:
- Population: 2, 4, 6, 8
- All possible samples of size n = 2
- Assumption: sampling with replacement

33 Distribution of sample means
A simple case: population 2, 4, 6, 8; all possible samples of size n = 2. There are 16 of them:

  Sample   Mean     Sample   Mean
  2, 2      2       6, 2      4
  2, 4      3       6, 4      5
  2, 6      4       6, 6      6
  2, 8      5       6, 8      7
  4, 2      3       8, 2      5
  4, 4      4       8, 4      6
  4, 6      5       8, 6      7
  4, 8      6       8, 8      8
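This small example is easy to reproduce exactly: the sketch below enumerates every possible sample of size 2 (with replacement) from the population {2, 4, 6, 8} and tallies the sample means.

    from collections import Counter
    from itertools import product

    population = [2, 4, 6, 8]
    samples = list(product(population, repeat=2))   # all 16 ordered samples of size n = 2
    means = [sum(s) / 2 for s in samples]

    print(len(samples), "samples")
    print(Counter(means))                                      # how often each sample mean occurs
    print("mean of the sample means:", sum(means) / len(means))  # equals the population mean, 5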

34 Distribution of sample means
In the long run, random selection leads to a predictable pattern. Tallying the 16 sample means gives the distribution of sample means:

  Sample mean:  2  3  4  5  6  7  8
  Frequency:    1  2  3  4  3  2  1

35 Properties of the distribution of sample means
Shape:
- If the population is Normal, then the distribution of sample means will be Normal.
- If the sample size is large (n > 30), the distribution of sample means will be approximately Normal regardless of the shape of the population.

36 Properties of the distribution of sample means
Center: the mean of the distribution of sample means is equal to the mean of the population (same numeric value, but different conceptual values).

37 Properties of the distribution of sample means
Center: the mean of the distribution of sample means is equal to the mean of the population. Consider our earlier example:
- Population: 2, 4, 6, 8, so μ = (2 + 4 + 6 + 8) / 4 = 5.
- Distribution of sample means: the 16 sample means (frequencies 1, 2, 3, 4, 3, 2, 1 for means 2 through 8) sum to 80, so their mean = 80 / 16 = 5.

38 Properties of the distribution of sample means
Spread depends on:
- The standard deviation of the population (σ)
- The sample size (n)
Putting them together, we get the standard deviation of the distribution of sample means, σ/√n, commonly called the standard error.

39 Standard error
The standard error is the average amount that you'd expect a sample mean (of size n) to deviate from the population mean. In other words, it is an estimate of the error that you'd expect by chance (or by sampling).
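The claim that the standard error (σ/√n) is the typical deviation of a sample mean from the population mean can be checked by simulation; the population values below are assumptions chosen just for the demonstration.

    import numpy as np

    rng = np.random.default_rng(3)
    pop_mean, pop_sd, n, reps = 100, 15, 25, 20_000

    sample_means = rng.normal(pop_mean, pop_sd, (reps, n)).mean(axis=1)
    print("SD of the sample means:", round(float(sample_means.std(ddof=1)), 3))
    print("sigma / sqrt(n):       ", round(pop_sd / np.sqrt(n), 3))   # about the same value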

40 Central Limit Theorem
All three of these properties are combined to form the Central Limit Theorem: for any population with mean μ and standard deviation σ, the distribution of sample means for sample size n will approach a normal distribution with a mean of μ and a standard deviation of σ/√n as n approaches infinity (a good approximation if n > 30).
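A quick simulation shows the theorem in action: even starting from a clearly non-normal (exponential) population, the distribution of sample means for n > 30 is centered at μ, has spread close to σ/√n, and looks approximately normal. The specific population is an assumption for illustration.

    import numpy as np

    rng = np.random.default_rng(4)
    n, reps = 40, 50_000                      # n > 30
    # Assumed skewed population: exponential with mean = SD = 1
    sample_means = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)

    print("mean of sample means:", round(float(sample_means.mean()), 3))      # ~ mu = 1.0
    print("SD of sample means:  ", round(float(sample_means.std(ddof=1)), 3))  # ~ 1/sqrt(40) = 0.158
    within_1se = np.mean(np.abs(sample_means - 1.0) < 1.0 / np.sqrt(n))
    print("proportion within 1 standard error:", round(float(within_1se), 3))  # ~ 0.68 if normal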

41 Distribution of sample means
Keep your distributions straight by taking care with your notation:
- Population: mean μ, standard deviation σ
- Sample: mean X̄, standard deviation s
- Distribution of sample means: mean equal to the population mean μ, standard deviation σ/√n (the standard error)

