Experimental Design & Analysis

Presentation on theme: "Experimental Design & Analysis"— Presentation transcript:

1 Experimental Design & Analysis
Hypothesis Testing and Analysis of Variance January 30, 2007 DOCTORAL SEMINAR, SPRING SEMESTER 2007

2 Outline
Statistical inferences
Null hypothesis testing
Sources of variance
Treatment variance vs. error variance
Sums of squares

3 Statistical Inference
Means: the mean is a measure of central tendency, written μ for a population or x̄ for a sample.
Sum of squares: the sum of the squared deviations of a set of scores or values from their mean. (1) Subtract the mean from each score; (2) square each resulting difference; (3) add up the squared differences.
Mean square, or variance: the mean square of a sample is a measure of variability. Divide the sum of squares by the degrees of freedom; written σ² or s².
Standard deviation: the square root of the variance, written σ or s.
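The definitions above can be sketched in a few lines of Python; the scores are hypothetical, chosen only to illustrate the calculation:

```python
# Hypothetical scores, used to illustrate mean, sum of squares,
# variance (mean square), and standard deviation as defined above.
scores = [4, 6, 8, 10, 12]

mean = sum(scores) / len(scores)               # central tendency
ss = sum((y - mean) ** 2 for y in scores)      # sum of squared deviations
df = len(scores) - 1                           # degrees of freedom
variance = ss / df                             # mean square, s^2
sd = variance ** 0.5                           # standard deviation, s

print(mean, ss, variance, sd)  # 8.0 40.0 10.0 3.1622...
```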

4 Statistical Inference
Student's t-test is used to compare two groups when sample sizes are small (N < 30); with larger samples the normal-curve z test is used. Let E be the experimental condition and C the control condition; let m be the means, s the standard deviations, and n the per-group sample size:
t = (mE - mC) / √[(s²E + s²C) / n]
The computed t is compared against the critical value found in t tables.
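This formula can be implemented directly; the function name and the scores below are hypothetical illustrations, not from the seminar:

```python
import math

def t_statistic(exp, ctl):
    """t = (mE - mC) / sqrt((sE^2 + sC^2) / n), the slide's formula
    for two groups of equal size n."""
    n = len(exp)  # assumes equal group sizes, as the formula does
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):  # sample variance: sum of squares / (n - 1)
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return (mean(exp) - mean(ctl)) / math.sqrt((var(exp) + var(ctl)) / n)

# Hypothetical experimental and control scores:
t = t_statistic([5, 6, 7, 8, 9], [3, 4, 5, 6, 7])
print(t)  # 2.0; compare against the critical value from a t table
```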

5 Statistical Inference
Hypothesis testing: inferences about the population based on parameters estimated from a sample. The null hypothesis H0, μ1 = μ2 = μ3 = etc., states that there are no treatment effects. Observing an effect requires falsifying the null hypothesis H0. How different are two mean scores?

6 Statistical Inference
Differences due to treatment: a systematic source of difference, by virtue of the experimental condition.
Differences due to error: a random source of difference, because participants are randomly assigned to groups.

7 Statistical Inferences
Evaluation of the null hypothesis compares two sources of variability:
(differences between experimental groups of subjects) / (differences among subjects within the same groups)
= (treatment effects + experimental error) / experimental error
If there are no treatment effects, the ratio reduces to experimental error / experimental error = 1. Is the observed ratio far enough above 1 to indicate a treatment effect?

8 Null Hypothesis Significance Testing
5 steps of NHST: (1) state H0, H1, and the α level; (2) determine the rejection region and state the rejection rule; (3) compute the test statistic; (4) make a decision (reject H0 or fail to reject H0); (5) conclude in terms of the problem scenario.

9 Statistical Inference
When H0 is true, the expected F value is 1, although random variation can make it > 1 or < 1. When H0 is false, F is expected to be > 1. Since random variation alone can produce F > 1, the problem we face is determining how much greater than 1 it must be before we conclude, on the basis of improbability, that its size indicates a real difference among the treatments. The F distribution enables us to determine how improbable an experimental outcome is under the assumption that our null hypothesis is true. If the value of F is so large that it is improbable (p < .05 or p < .01) given a true null hypothesis, we conclude that the null hypothesis is false and that the hypothetical population means are not equal.
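A small Monte Carlo sketch (a hypothetical illustration, not part of the slides) shows both claims: under a true H0 the simulated F values cluster near 1, and the proportion exceeding an observed F estimates how improbable that outcome is:

```python
import random

def f_statistic(groups):
    """One-way ANOVA F ratio, MS_between / MS_within (equal group sizes)."""
    a, n = len(groups), len(groups[0])
    means = [sum(g) / n for g in groups]
    grand = sum(means) / a
    ms_between = n * sum((m - grand) ** 2 for m in means) / (a - 1)
    ms_within = sum((y - m) ** 2
                    for g, m in zip(groups, means) for y in g) / (a * (n - 1))
    return ms_between / ms_within

# Simulate a true H0: all 3 groups of 5 scores come from the same population,
# so any F > 1 reflects random variation alone.
random.seed(1)
null_fs = [f_statistic([[random.gauss(0, 1) for _ in range(5)] for _ in range(3)])
           for _ in range(10_000)]

# Empirical p-value: how improbable is an observed F under H0?
# 3.89 is the tabled .05 critical value for 2 and 12 degrees of freedom.
p = sum(f >= 3.89 for f in null_fs) / len(null_fs)
print(p)  # close to .05, matching the tabled tail probability
```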

10 Sources of Variance Partitioning variance
In order to distinguish systematic variability (treatment effects) from random error, we must partition variance by calculating three component deviation scores: the total deviation, the between-groups deviation, and the within-groups deviation.

11 Component Deviations
[Figure: individual scores plotted on a 2-10 scale, decomposing each score's total deviation from the grand mean (Yij - YT) into a within-group deviation (Yij - Yj) and a between-group deviation (Yj - YT). From Keppel & Wickens, p. 23]

12 Sources of Variance
What is the grand mean, YT? What are the group means?
[Table: scores for five participants in each of three conditions (Treatment A1, A2, A3); the individual scores did not survive transcription. From the group sums A1, A2, A3 and the grand total T, compute the group means Y1, Y2, Y3 and the grand mean YT. From Keppel & Wickens, p. 24]

13 Calculating Sums of Squares
Identify bracket terms*
Using sums:
[Y] = ΣY²ij = … = 1,890
[A] = ΣA²j / n = ( ) / 5 = 1,710
[T] = T² / an = 150² / 15 = 22,500 / 15 = 1,500
Using means:
[A] = nΣȲ²j = (5)( ) = 1,710
[T] = anȲ²T = (3)(5)(10²) = (15)(100) = 1,500
*See the authors' note on bracket terms, Keppel and Wickens, p. 31.

14 Sources of Variance
Calculate sums of squares:
Total sum of squares: SSTotal = Σ(Yij - YT)²
Between-groups sum of squares: SSBetween = Σ(Yj - YT)²
Within-groups sum of squares: SSWithin = Σ(Yij - Yj)²
The components partition additively: SSTotal = SSBetween + SSWithin

15 Calculating Sums of Squares
Write the sums of squares using bracket terms:
SST = [Y] - [T] = 1,890 - 1,500 = 390
SSA = [A] - [T] = 1,710 - 1,500 = 210
SSS/A = [Y] - [A] = 1,890 - 1,710 = 180
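The bracket-term arithmetic can be checked in code. The individual scores below are hypothetical, chosen only so that the group sums and squared totals reproduce the slide's values (T = 150, [Y] = 1,890, [A] = 1,710); they are not necessarily Keppel and Wickens's data:

```python
# Hypothetical scores consistent with the slide's totals (a = 3 groups, n = 5).
groups = [
    [2, 3, 6, 9, 10],      # A1: sum 30, mean 6
    [11, 13, 15, 17, 19],  # A2: sum 75, mean 15
    [3, 6, 9, 12, 15],     # A3: sum 45, mean 9
]
a, n = len(groups), len(groups[0])

Y = sum(y ** 2 for g in groups for y in g)       # [Y] = sum of squared scores
A = sum(sum(g) ** 2 for g in groups) / n         # [A] = sum of A_j^2, over n
T = sum(sum(g) for g in groups) ** 2 / (a * n)   # [T] = T^2, over an

ss_total = Y - T      # SS_T   = [Y] - [T] = 390
ss_between = A - T    # SS_A   = [A] - [T] = 210
ss_within = Y - A     # SS_S/A = [Y] - [A] = 180
print(Y, A, T, ss_total, ss_between, ss_within)
```

Note that the partition identity holds by construction: ss_between + ss_within equals ss_total.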

16 Analysis of Variance
Sums of squares provide the building blocks for analysis of variance and significance testing. Analysis of variance involves two steps: (1) calculate variance estimates, known as mean squares, by dividing each component sum of squares by its degrees of freedom; (2) take the ratio of the between-group mean square to the within-group mean square, which gives the F statistic.

17 Analysis of Variance
Source   df        Mean Square      F
A        a - 1     SSA / dfA        MSA / MSS/A
S/A      a(n - 1)  SSS/A / dfS/A
Total    an - 1
a = number of levels of factor A; n = sample size of a group; df = degrees of freedom
Check the F value in a table of critical values organized by degrees of freedom to determine the significance level: numerator degrees of freedom = a - 1, denominator degrees of freedom = a(n - 1).
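Carrying the running example through this table (a = 3, n = 5, with the sums of squares from slide 15) can be sketched as:

```python
# ANOVA table for the running example: 3 groups of 5 participants,
# using the sums of squares computed on slide 15.
a, n = 3, 5
ss_a, ss_sa = 210, 180

df_a = a - 1            # numerator df = 2
df_sa = a * (n - 1)     # denominator df = 12
ms_a = ss_a / df_a      # MS_A   = 210 / 2  = 105.0
ms_sa = ss_sa / df_sa   # MS_S/A = 180 / 12 = 15.0
F = ms_a / ms_sa        # F = 105 / 15 = 7.0

# The tabled .05 critical value for 2 and 12 df is 3.89, so F = 7.0
# exceeds it and H0 is rejected at p < .05.
print(df_a, df_sa, ms_a, ms_sa, F)
```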

