Statistics (cont.) Psych 231: Research Methods in Psychology

Announcements
Quiz 10 (chapter 7) is due on Nov. 13th at midnight.
Journal Summary 2 assignment is due in class NEXT week (Wednesday, Nov. 18th) - note the moved due date.
Group projects: plan to have your analyses done before Thanksgiving break; GAs will be available during lab times to help. Poster sessions are in the last lab sections of the semester (last week of classes), so start thinking about your posters. I will lecture about poster presentations on the Monday before Thanksgiving break.

Statistics
Two general kinds of statistics:
Descriptive statistics: used to describe, simplify, and organize data sets; describing distributions of scores.
Inferential statistics: used to test claims about the population, based on data gathered from samples. They take sampling error into account: are the results above and beyond what you'd expect by random chance? Inferential statistics are used to generalize from the sample back to the population.

Inferential Statistics
Purpose: to make claims about populations based on data collected from samples.
What's the big deal? Example experiment: Group A gets a treatment to improve memory; Group B gets no treatment (control). After the treatment period, both groups are tested for memory.
Results: Group A's average memory score is 80%; Group B's is 76%.
Is the 4% difference a "real" difference (statistically significant), or is it just sampling error?
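As a rough, hypothetical illustration of that question (the group sizes, mean, and standard deviation below are invented, not taken from the lecture), the following sketch draws two groups from the same population over and over and checks how often a 4-point difference shows up by sampling error alone:

```python
# Hypothetical sketch: how often does a 4-point difference appear between two
# groups drawn from the SAME population (i.e., when there is no real effect)?
import numpy as np

rng = np.random.default_rng(0)
population_mean, population_sd, n = 78, 12, 25   # made-up values for illustration

diffs = []
for _ in range(10_000):
    group_a = rng.normal(population_mean, population_sd, n)
    group_b = rng.normal(population_mean, population_sd, n)
    diffs.append(group_a.mean() - group_b.mean())

prop = np.mean(np.abs(diffs) >= 4)
print(f"Proportion of simulated experiments with a 4+ point difference: {prop:.3f}")
```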

Testing Hypotheses
Step 1: State your hypotheses: the null hypothesis (H0) and the alternative hypothesis(ses) (HA).
Step 2: Set your decision criteria.
Step 3: Collect your data from your sample(s).
Step 4: Compute your test statistics.
Step 5: Make a decision about your null hypothesis: reject H0 ("statistically significant differences") or fail to reject H0 ("no statistically significant differences").
Type I error (α): concluding that there is an effect (a difference between groups) when there really isn't.
Type II error (β): concluding that there isn't an effect when there really is.

Summary to this point
Decision table: the experimenter's conclusion (reject H0 or fail to reject H0) is compared against the real world ('truth': H0 is correct or H0 is wrong). Rejecting H0 when H0 is correct is a Type I error; failing to reject H0 when H0 is wrong is a Type II error.
Example experiment: Group A gets a treatment to improve memory, Group B gets no treatment (control); after the treatment period both groups are tested for memory. Results: Group A's average memory score is 80%, Group B's is 76%. Is the 4% difference a "real" difference (statistically significant), or is it just sampling error?
(Figure: two sample distributions with means X̄A = 80% and X̄B = 76%.)
The hypotheses are about populations: H0 is that there is no difference between Group A and Group B, i.e., H0: μA = μB.

Statistical significance
"Statistically significant differences" are what you report when you "reject your null hypothesis." Essentially this means that the observed difference is above what you'd expect by chance. "Chance" is determined by estimating how much sampling error there is. Factors affecting "chance": sample size and population variability.

Sampling error, n = 1
(Figure: population distribution with the population mean marked, a single sampled score x, and the sampling error: population mean minus sample mean.)

Sampling error, n = 2
(Figure: population distribution with two sampled scores, their sample mean, and the sampling error: population mean minus sample mean.)

Sampling error, n = 10
(Figure: population distribution with ten sampled scores, their sample mean, and the sampling error.)
Generally, as the sample size increases, the sampling error decreases.
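The same point can be checked with a small simulation (my own sketch; the population mean and standard deviation are arbitrary): the average sampling error shrinks as the sample size grows.

```python
# Hypothetical sketch: average |population mean - sample mean| shrinks as n grows.
import numpy as np

rng = np.random.default_rng(1)
population_mean, population_sd = 50, 10   # arbitrary illustrative population

for n in (1, 2, 10, 100):
    errors = [abs(population_mean - rng.normal(population_mean, population_sd, n).mean())
              for _ in range(5_000)]
    print(f"n = {n:3d}: average sampling error = {np.mean(errors):.2f}")
```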

Sampling error
Typically, the narrower the population distribution, the narrower the range of possible samples, and the smaller the "chance."
(Figure: a narrow distribution illustrating small population variability next to a wide distribution illustrating large population variability.)

Sampling error
These two factors combine to impact the distribution of sample means. The distribution of sample means is the distribution of all possible sample means of a particular sample size (n) that can be drawn from the population.
(Figure: a population, several samples of size n with means X̄A, X̄B, X̄C, X̄D, and the resulting distribution of sample means; the average sampling error corresponds to "chance.")

“Generic” statistical test
Tests the question: are there differences between groups due to a treatment? There are two possibilities in the "real world."
Possibility 1: H0 is true (no treatment effect). There is only one population, and both sample distributions (X̄A = 80%, X̄B = 76%) are drawn from it.

“Generic” statistical test
Possibility 2: H0 is false (there is a treatment effect). People who get the treatment change; they form a new population (the "treatment population").
(Figure: under H0 true, the two sample distributions with means X̄A = 80% and X̄B = 76% come from one population; under H0 false, they come from two different populations.)

“Generic” statistical test
Why might the samples (X̄A and X̄B) be different? What is the source of the variability between groups?
ER: random sampling error
ID: individual differences (if a between-subjects factor)
TR: the effect of a treatment

“Generic” statistical test
The generic test statistic is a ratio of sources of variability:
computed test statistic = observed difference / difference expected by chance = (TR + ID + ER) / (ID + ER)
where TR is the effect of the treatment, ID is individual differences (if a between-subjects factor), and ER is random sampling error.
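Written out as an equation in the notation above:

$$\text{test statistic} \;=\; \frac{\text{observed difference}}{\text{difference expected by chance}} \;=\; \frac{TR + ID + ER}{ID + ER}$$

When there is no treatment effect, TR is near zero and the ratio hovers around 1; a real treatment effect inflates the numerator, and hence the ratio, above 1.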

Sampling error
The distribution of sample means is the distribution of all possible sample means of a particular sample size (n) that can be drawn from the population.
(Figure: population, samples of size n with means X̄A, X̄B, X̄C, X̄D, and the distribution of sample means; the average sampling error is the "difference expected by chance.")

“Generic” statistical test
Things that affect the computed test statistic (TR + ID + ER) / (ID + ER):
The size of the treatment effect: the bigger the effect, the bigger the computed test statistic.
The difference expected by chance (sampling error), which depends on the sample size and on the variability in the population.

Some inferential statistical tests
One factor with two groups: t-tests. Between groups: 2-independent-samples t-test. Within groups: repeated-measures t-test (matched, related samples).
One factor with more than two groups: analysis of variance (ANOVA), either between groups or repeated measures.
Multi-factorial designs: factorial ANOVA.

T-test
Design: two separate experimental conditions.
Degrees of freedom: based on the size of the sample and the kind of t-test.
Formula: t = observed difference / difference expected by chance = (X̄1 - X̄2) / (an estimate based on sampling error). The computation of the chance term differs for between-subjects and within-subjects t-tests.
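For concreteness, here is a minimal sketch (invented data, not from the lecture) of how both kinds of t-test could be computed with SciPy:

```python
# Hypothetical data: treatment vs. control scores, for illustration only.
from scipy import stats

treatment = [82, 79, 88, 75, 91, 84, 77, 85]
control   = [74, 70, 79, 68, 81, 76, 72, 73]

# Between-groups design: two independent samples.
t_ind, p_ind = stats.ttest_ind(treatment, control)

# Within-groups design: repeated measures (e.g., pre-test vs. post-test on the same people).
pre  = [60, 72, 68, 75, 63, 70, 66, 71]
post = [68, 80, 75, 79, 70, 78, 71, 77]
t_rel, p_rel = stats.ttest_rel(post, pre)

print(f"Independent-samples: t = {t_ind:.2f}, p = {p_ind:.3f}")
print(f"Repeated-measures:   t = {t_rel:.2f}, p = {p_rel:.3f}")
```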

T-test
Reporting your results: the observed difference between conditions, the kind of t-test, the computed t-statistic, the degrees of freedom for the test, and the "p-value" of the test. For example:
"The mean of the treatment group was 12 points higher than the control group. An independent samples t-test yielded a significant difference, t(24) = 5.67, p < 0.05."
"The mean score of the post-test was 12 points higher than the pre-test. A repeated measures t-test demonstrated that this difference was significant, t(12) = 5.67, p < 0.05."

Analysis of Variance
Designs: more than two groups; 1-factor ANOVA or factorial ANOVA; both within-groups and between-groups factors.
The test statistic is an F-ratio.
Degrees of freedom: there are several to keep track of, and the number of them depends on the design.

Analysis of Variance
With more than two groups we can't just compute a simple difference score, since there is more than one difference. So we use variance instead of a simple difference; variance is essentially an average difference.
F-ratio = observed variance / variance expected by chance.
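As a small illustration (again with invented scores for three groups), that F-ratio could be computed with SciPy's one-way ANOVA:

```python
# Hypothetical scores for three groups (A, B, C), for illustration only.
from scipy import stats

group_a = [12, 10, 14, 11, 13, 9, 12, 15]
group_b = [25, 22, 27, 24, 26, 23, 28, 21]
group_c = [27, 29, 26, 30, 25, 28, 31, 27]

f_ratio, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_ratio:.2f}, p = {p_value:.4f}")
```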

1 factor ANOVA
One factor with more than two levels. Now we can't just compute a simple difference score, since there is more than one difference: A - B, B - C, and A - C.

1 factor ANOVA
Null hypothesis (H0): all the groups are equal, X̄A = X̄B = X̄C. This is the hypothesis the ANOVA tests.
Alternative hypotheses (HA): not all the groups are equal. Further tests are needed to pick between the specific patterns:
X̄A ≠ X̄B ≠ X̄C
X̄A ≠ X̄B = X̄C
X̄A = X̄B ≠ X̄C
X̄A = X̄C ≠ X̄B

1 factor ANOVA
Planned contrasts and post-hoc tests: further tests used to rule out the different alternative hypotheses (X̄A ≠ X̄B ≠ X̄C, X̄A ≠ X̄B = X̄C, X̄A = X̄B ≠ X̄C, X̄A = X̄C ≠ X̄B). For example: Test 1: A ≠ B; Test 2: A ≠ C; Test 3: B = C.
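One simple way such pairwise follow-up tests might be run is sketched below (my own example with invented data, using a Bonferroni-corrected alpha; other post-hoc procedures such as Tukey's HSD are also common):

```python
# Hypothetical pairwise follow-up tests with a Bonferroni-corrected alpha.
from itertools import combinations
from scipy import stats

groups = {
    "A": [12, 10, 14, 11, 13, 9, 12, 15],
    "B": [25, 22, 27, 24, 26, 23, 28, 21],
    "C": [27, 29, 26, 30, 25, 28, 31, 27],
}

pairs = list(combinations(groups, 2))   # ("A","B"), ("A","C"), ("B","C")
alpha = 0.05 / len(pairs)               # Bonferroni-corrected criterion

for name1, name2 in pairs:
    t, p = stats.ttest_ind(groups[name1], groups[name2])
    verdict = "differ" if p < alpha else "do not differ"
    print(f"{name1} vs. {name2}: t = {t:.2f}, p = {p:.4f} -> {verdict} (alpha = {alpha:.4f})")
```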

1 factor ANOVA
Reporting your results: the observed differences, the kind of test, the computed F-ratio, the degrees of freedom for the test, the "p-value" of the test, and any post-hoc or planned comparison results. For example:
"The mean score of Group A was 12, Group B was 25, and Group C was 27. A 1-way ANOVA was conducted and the results yielded a significant difference, F(2,25) = 5.67, p < 0.05. Post hoc tests revealed that the differences between groups A and B and A and C were statistically reliable (respectively t(1) = 5.67, p < 0.05 and t(1) = 6.02, p < 0.05). Groups B and C did not differ significantly from one another."

Factorial ANOVAs
We covered much of this in our experimental design lecture.
More than one factor; factors may be within or between subjects; the overall design may be entirely within, entirely between, or mixed.
Many F-ratios may be computed: an F-ratio is computed to test the main effect of each factor, and an F-ratio is computed to test each of the potential interactions between the factors.

Factorial ANOVAs
Reporting your results:
The observed differences; because there may be a lot of these, they may be presented in a table instead of directly in the text.
The kind of design, e.g. "2 x 2 completely between factorial design."
The computed F-ratios; you may see separate paragraphs for each factor and for the interactions.
The degrees of freedom for the test; each F-ratio will have its own set of df's.
The "p-value" of the test; you may want to just say "all tests were tested with an alpha level of 0.05."
Any post-hoc or planned comparison results; typically only the theoretically interesting comparisons are presented.
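As a sketch only (the factor names and scores below are invented), a 2 x 2 completely between-subjects factorial ANOVA could be computed with statsmodels, which returns an F-ratio for each main effect and for the interaction:

```python
# Hypothetical 2 x 2 completely between-subjects factorial ANOVA (invented data).
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "score":   [12, 14, 11, 13, 22, 25, 24, 23, 15, 13, 16, 14, 30, 28, 31, 29],
    "drug":    ["no"] * 8 + ["yes"] * 8,           # factor A, between subjects
    "therapy": (["no"] * 4 + ["yes"] * 4) * 2,     # factor B, between subjects
})

# One F-ratio per main effect plus one for the interaction.
model = smf.ols("score ~ C(drug) * C(therapy)", data=df).fit()
print(anova_lm(model, typ=2))
```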

Distribution of sample means
The following slides are available for a little more concrete review of the distribution of sample means discussion.

Distribution of sample means
The distribution of sample means is a "theoretical" distribution that sits between the sample and the population. Each value in it is the mean of a group of scores, and the comparison distribution used in hypothesis testing is this distribution of means.
(Figure: population, distribution of sample means, and sample.)

Distribution of sample means
A simple case. Population: the four scores 2, 4, 6, and 8. Consider all possible samples of size n = 2. Assumption: sampling with replacement.

Distribution of sample means
A simple case (continued). List all possible samples of size n = 2 from the population {2, 4, 6, 8}, together with the mean of each sample: there are 16 of them.
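A short sketch of my own that enumerates those 16 samples and their means:

```python
# Enumerate every sample of size n = 2 (with replacement) from the population {2, 4, 6, 8}.
from itertools import product
from statistics import mean

population = [2, 4, 6, 8]
samples = list(product(population, repeat=2))   # 4 x 4 = 16 possible samples
sample_means = [mean(s) for s in samples]

print(f"{len(samples)} samples")                # 16
for s, m in zip(samples, sample_means):
    print(s, "-> mean =", m)
print("Mean of the sample means:", mean(sample_means))   # equals the population mean, 5
```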

Distribution of sample means
(Figure: the 16 sample means collected into a frequency distribution.) In the long run, the random selection of tiles leads to a predictable pattern.

Properties of the distribution of sample means
Shape: if the population is Normal, then the distribution of sample means will be Normal. If the sample size is large (n > 30), the distribution of sample means will be approximately Normal regardless of the shape of the population.
(Figure: a population distribution and the corresponding distribution of sample means for n > 30.)

Properties of the distribution of sample means
Center: the mean of the distribution of sample means is equal to the mean of the population. They have the same numeric value but different conceptual meanings.
(Figure: population and distribution of sample means sharing the same mean.)

Properties of the distribution of sample means
Center: the mean of the distribution of sample means is equal to the mean of the population. Consider our earlier example: the population is {2, 4, 6, 8}, so μ = (2 + 4 + 6 + 8) / 4 = 5, and the mean of the 16 sample means is also 5.

Properties of the distribution of sample means
Spread: the spread depends on the standard deviation of the population and on the sample size. Putting them together, we get the standard deviation of the distribution of sample means, commonly called the standard error.
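In standard notation, the standard error this slide refers to is:

$$\sigma_{\bar{X}} \;=\; \frac{\sigma}{\sqrt{n}}$$

where σ is the population standard deviation and n is the sample size.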

Standard error
The standard error is the average amount that you'd expect a sample mean (for a sample of size n) to deviate from the population mean. In other words, it is an estimate of the error that you'd expect by chance (i.e., by sampling).

Central Limit Theorem
All three of these properties are combined to form the Central Limit Theorem: for any population with mean μ and standard deviation σ, the distribution of sample means for sample size n will approach a normal distribution with a mean of μ and a standard deviation of σ/√n as n approaches infinity (a good approximation if n > 30).
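To make the theorem concrete, here is a small simulation of my own (the skewed population and the sample sizes are arbitrary choices): the sample means center on μ, and their spread approaches σ/√n as n grows.

```python
# Hypothetical demonstration of the Central Limit Theorem with a skewed population.
import numpy as np

rng = np.random.default_rng(2)
population = rng.exponential(scale=2.0, size=100_000)   # skewed, non-normal population
mu, sigma = population.mean(), population.std()

for n in (2, 10, 30, 100):
    sample_means = np.array([rng.choice(population, size=n).mean() for _ in range(5_000)])
    print(f"n = {n:3d}: mean of sample means = {sample_means.mean():.2f} (mu = {mu:.2f}), "
          f"SD of sample means = {sample_means.std():.2f} (sigma/sqrt(n) = {sigma/np.sqrt(n):.2f})")
```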

Distribution of sample means
Keep your distributions straight by taking care with your notation. Sample: mean X̄, standard deviation s. Population: mean μ, standard deviation σ. Distribution of sample means: its own mean (equal to μ) and its own standard deviation, the standard error.