Solution A. No, the F distribution has TWO parameters. B. The mean and variance are NOT the parameters of the F distribution. C. Yes, these are the right parameters. D. Reject: we have a good candidate in C. We go for C.
Lecture 3 BETWEEN SUBJECTS FACTORIAL EXPERIMENTS
Analysis of variance (ANOVA) Analysis of variance is a statistical technique used to make comparisons among the mean scores for the conditions making up complex experiments, with three or more treatment conditions.
Factors and levels In analysis of variance (ANOVA), a FACTOR is a set of related treatments, categories or conditions, which are termed the LEVELS of the factor. A factor is an independent variable. Last week, we used ANOVA to analyse the results of an experiment with ONE treatment factor. There were five levels: four different drug conditions and a comparison or control condition. The dependent variable was performance on a skilled task.
ANOVA models The making of any statistical test presupposes the correctness of an interpretation of the data (usually in the form of an equation) known as a MODEL. There are different ANOVA models for different types of experimental design. Last week, I described the simplest kind of ANOVA, namely, the ONE-WAY ANOVA. The one-way ANOVA is appropriate for BETWEEN SUBJECTS experiments with one treatment factor.
Accounting for variability The building block for any variance estimate is a DEVIATION of some sort. The TOTAL DEVIATION of any score from the grand mean (GM) can be divided into 2 components: 1. a BETWEEN GROUPS component; 2. a WITHIN GROUPS component. [Diagram: total deviation = between groups deviation + within groups deviation, measured from the grand mean.]
Breakdown (partition) of the total sum of squares If you sum the squares of the deviations over all 50 scores, you obtain an expression which breaks down the total variability in the scores into between groups and within groups components.
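The partition described above can be verified numerically. The sketch below uses invented scores (5 groups of 10, matching the 50-score layout of the lecture's example); the group means and SD are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: 5 groups of 10 scores (the lecture's 50-score layout);
# the location and scale values are invented for illustration
groups = [rng.normal(loc=m, scale=3, size=10) for m in (9, 10, 12, 11, 13)]

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()

# Total SS: squared deviations of every score from the grand mean
ss_total = ((all_scores - grand_mean) ** 2).sum()

# Between groups SS: squared deviations of the group means from the
# grand mean, weighted by group size
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)

# Within groups SS: squared deviations of scores from their own group mean
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

# The partition: SS total = SS between + SS within
assert np.isclose(ss_total, ss_between + ss_within)
```

The identity holds exactly for any data set; only the relative sizes of the two components change from experiment to experiment.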
The F ratio
Calculating MS within In the equal-n case, we can simply take the mean of the five cell variance estimates: MS within = (sum of the five cell variances)/5 = 48.36/5 = 9.67.
Rule for obtaining the df
Degrees of freedom The degrees of freedom df of a sum of squares is the number of independent values (scores, means) minus the number of parameters estimated. The SS between is calculated from 5 group means, but ONE parameter (the grand mean) has been estimated. Therefore df between = 5 – 1 = 4.
Degrees of freedom … The SS within is calculated from the scores of the 50 participants in the experiment; but the group mean is subtracted from each score to produce a deviation score. There are 5 group means. The df within = 50 – 5 = 45.
Finding MS between
The statistic F is calculated by dividing the between groups MS by the within groups MS, thus: F = MS between / MS within.
What F is measuring If there are differences among the population means, the numerator will be inflated and F will increase. If there are no differences, F will be close to 1. The numerator reflects error plus any real differences; the denominator reflects error only.
The ANOVA summary table F is large: about nine times unity, which is the value expected under the null hypothesis, and well over the critical value. The p-value (Sig.) is less than .01, so F is significant beyond the .01 level. Write this result as follows: with an alpha-level of .05, F is significant: F(4, 45) = 9.09; p < .01. Do NOT write the p-value as .000! Notice that SS total = SS between groups + SS within groups.
Finding the exact p-value Double-click on the table in the SPSS Viewer. A shaded border will appear. Select the entry under Sig. (the p-value).
Cell properties Right-click the mouse to get Cell Properties… Reset the number of decimal places displayed to 7. Click the OK button.
More places of decimals The number expresses the p-value to 7 places of decimals. Had we specified 8 places, a further decimal digit would have been shown. This is for your information only: REPORT THE p-VALUE to TWO places of decimals, thus: p < .01.
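There is no need to rely on SPSS for the exact p-value: it can be recovered from F and its two degrees of freedom. A sketch using SciPy (assuming the scipy package is available):

```python
from scipy import stats

# The reported result: F(4, 45) = 9.09
f_value, df_between, df_within = 9.09, 4, 45

# The survival function gives P(F >= observed value) under the
# null hypothesis, i.e. the p-value of the test
p = stats.f.sf(f_value, df_between, df_within)

print(f"p = {p:.7f}")  # well below .01, so report p < .01
```

Whatever the number of decimal places displayed, the reported value stays p < .01.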
Effect size in the t-test We obtained a difference between the Caffeine and Placebo means of (11.90 – 9.25) = 2.75 score points. If we take the spread of the scores to be the average of the Caffeine and Placebo SDs, we have an average SD of about 3.25 score points. So the means of the Caffeine and Placebo groups differ by about .8 SD.
Measuring effect size: Cohen's d statistic In our example, the value of Cohen's d is 2.75/3.25 ≈ .8. Is this a large difference?
Levels of effect size On the basis of scrutiny of a large number of studies, Jacob Cohen proposed that we regard a d of .2 as a SMALL effect size, a d of .5 as a MEDIUM effect size and a d of .8 as a LARGE effect size. So our experimental result is a large effect. When you report the results of a statistical test, you are now expected to provide a measure of the size of the effect you are reporting.
Effect size in ANOVA The greater the differences among the means, the greater will be the proportion of the total variability that is explained or accounted for by SS between. This is the basis of the oldest measure of effect size in ANOVA, which is known as ETA SQUARED (η 2 ).
Eta squared Eta squared (also known as the CORRELATION RATIO) is defined as the ratio of the between groups sum of squares to the total sum of squares. Its theoretical range of variation is from zero (no differences among the means) to unity (no variance in the scores of any group, but different values in different groups). In our example, η² = .447.
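A handy check: in the one-way case, eta squared can be recovered from the reported F ratio and its degrees of freedom, because η² = SS between/SS total = F·df between/(F·df between + df within). Applied to F(4, 45) = 9.09:

```python
# Eta squared from F and its degrees of freedom:
# eta^2 = (F * df_between) / (F * df_between + df_within)
f_value, df_between, df_within = 9.09, 4, 45

eta_squared = (f_value * df_between) / (f_value * df_between + df_within)
print(round(eta_squared, 3))  # 0.447, matching the value in the lecture
```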
Comparison of eta squared with Cohen's d
Positive bias of eta squared The correlation ratio (eta squared) is positively biased as an estimator. Imagine you were to have unthinkably huge numbers of participants in all the groups and calculate eta squared. This is the population value, which we shall term ρ 2 (rho squared). Imagine your own experiment (with the same numbers of participants) were to be repeated many times and you were to calculate all the values of eta squared. The mean value of eta squared would be higher than that of rho squared.
Removing the bias: omega squared The measure known as OMEGA SQUARED corrects the bias in eta squared. Omega squared achieves this by incorporating degrees of freedom terms.
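For the equal-n one-way case, an algebraically equivalent form of omega squared uses only F, its degrees of freedom, and the total sample size N: ω² = df between·(F − 1)/(df between·(F − 1) + N). A sketch with the values from our example:

```python
# Omega squared from F, its df, and the total sample size N
# (equal-n one-way case):
# omega^2 = df_between*(F - 1) / (df_between*(F - 1) + N)
f_value, df_between, n_total = 9.09, 4, 50

omega_squared = (df_between * (f_value - 1)) / (
    df_between * (f_value - 1) + n_total
)
print(round(omega_squared, 3))  # 0.393 -- smaller than eta squared (.447)
```

The correction pulls the estimate below eta squared, as expected from the positive bias just described.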
Factorial experiments An experiment with two or more factors is known as a FACTORIAL experiment. A drug is known to enhance the performance of tired people on a driving simulator. It is suspected, however, that this drug may have a different effect upon the performance of people who have had a good night's rest.
A two-factor, between subjects factorial experiment Four groups of participants are tested under the conditions shown left. There are TWO factors in this experiment. There is Drug Condition, with 2 levels (Placebo and Dose). There is Body State, also with 2 levels (Fresh and Tired).
The scientific hypotheses If the investigator is right, Group 4 should outperform Group 3, that is, the drug should help tired participants. With fresh participants, however, the drug may actually have an adverse effect upon performance: Group 1 may outperform Group 2.
The results The raw scores are shown at upper left. They are summarised in the table of statistics at lower left. The four central squares in the lower table are known as CELLS. The shaded MARGINS contain the means of the rows and columns.
What happened? Each set of marginal means shows the effect of one factor, while ignoring the other. The row means show only the effect of the factor of Body State, averaged over the two Drug conditions. The column means show only the effect of the drug, averaged over the two body states.
The marginal means Unsurprisingly, the rested participants outperformed the sleep-deprived participants. More interestingly, the drug and placebo participants performed at similar levels.
Main effects In a factorial ANOVA, a factor is said to have a MAIN EFFECT when the means (averaged over the other factors) are not the same at all its levels. The factor of Body State appears to have a main effect.
No main effect of Drug The column means show no sign of a main effect of the Drug factor.
What happened? It's the CELL MEANS that hold the answer to our research question. They show that a dose of the drug IMPROVED the performance of the tired participants, whereas a dose of the drug actually IMPAIRED the performance of the fresh participants.
A clustered bar chart
Clustered bar chart … The Category Variable is Body State. The Cluster Variable is Drug. The Cluster Variable obviously has opposite effects when participants are in different body states.
Interaction An INTERACTION between two factors is said to occur when the effects of one factor are not the same at all levels of the other. The CELL MEANS show that the Drug factor has opposite effects on fresh and tired participants. This pattern suggests an INTERACTION between the factors of Drug and Body State. An interaction is often denoted by a multiplication sign: we have been looking at a Body State × Drug interaction.
Line graph (profile plot) A LINE GRAPH (or PROFILE PLOT) is a useful way of depicting an interaction. Each line shows the performance profile of one of the Drug groups across the two Body State conditions. When the lines CROSS, CONVERGE or DIVERGE, an interaction is indicated. When the lines are PARALLEL, there is no interaction.
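The idea can be illustrated numerically. The cell means below are hypothetical (the lecture's own values are in its data table), but they show how non-parallel profile lines correspond to unequal SIMPLE EFFECTS of one factor at the levels of the other:

```python
# Hypothetical cell means for the 2x2 design (Body State x Drug);
# the values are invented to mimic the pattern described in the lecture
means = {
    ("fresh", "placebo"): 12.0,
    ("fresh", "dose"): 9.0,    # drug impairs fresh participants
    ("tired", "placebo"): 6.0,
    ("tired", "dose"): 10.0,   # drug helps tired participants
}

# Simple effect of the drug at each level of Body State
effect_fresh = means[("fresh", "dose")] - means[("fresh", "placebo")]
effect_tired = means[("tired", "dose")] - means[("tired", "placebo")]

# If the simple effects differ, the lines of the profile plot are not
# parallel and an interaction is indicated
interaction_contrast = effect_tired - effect_fresh
print(effect_fresh, effect_tired, interaction_contrast)  # -3.0 4.0 7.0
```

A zero interaction contrast corresponds to parallel lines; here the opposite-signed simple effects produce crossing lines.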
A more complex example
The two-way ANOVA In the two-way ANOVA, there are THREE F tests: 1.There is a test for a main effect of the Drug factor. 2.There is a test for a main effect of the Body State factor. 3.There is a test for the presence of an interaction.
The three between groups mean squares In the one-way ANOVA, there is just ONE between groups mean square; but in the two-way ANOVA, there are THREE: 1. A between groups mean square for the Body State factor is calculated from the marginal ROW means. 2. A between groups mean square for the Drug factor is calculated from the marginal COLUMN means. 3. The interaction mean square is calculated from the CELL MEANS, from which the main effects have been removed.
The error term The denominator of an F-ratio is known as the ERROR TERM. The error mean square ( MS within groups or MS within ) is calculated by averaging the variances of the scores within the cells of the table. In the two-way ANOVA, the three F-tests are made by dividing the main effect and interaction mean squares by the same error term.
The df of MS within As in the one-way ANOVA, the degrees of freedom of MS within is the sum of the degrees of freedom of the separate variance estimates that are being averaged. In this example, four variance estimates are being averaged, each with df = 3. The MS within therefore has df = 12.
Testing for a main effect of Body State
Testing for a main effect of Drug
Testing for an interaction
The parameters of the three F distributions The distribution of F has TWO parameters: 1.The degrees of freedom of the MS of the factor (or interaction) being tested. 2.The degrees of freedom of the MS within. In this particular example, all three F tests refer to the same F distribution but, more usually, the three between groups mean squares will have different degrees of freedom.
Rules for df For main effect sources, df = (number of conditions – 1). For interactions, df = (degrees of freedom of Factor 1) × (degrees of freedom of Factor 2): just multiply the degrees of freedom of the two factor sources together. In the present example, all three between groups mean squares have ONE degree of freedom.
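The rules above can be sketched in a few lines. The cell size of 4 is taken from the worked example, where each cell variance has df = 3:

```python
# Rules for df in the 2x2 example: df = (number of levels - 1) for a
# main effect; multiply the factor dfs together for an interaction
levels_body_state, levels_drug = 2, 2

df_body_state = levels_body_state - 1      # 1
df_drug = levels_drug - 1                  # 1
df_interaction = df_body_state * df_drug   # 1

# Error df: each of the 4 cells contributes (cell size - 1) = 3
n_cells, cell_size = 4, 4
df_within = n_cells * (cell_size - 1)      # 12
print(df_body_state, df_drug, df_interaction, df_within)  # 1 1 1 12
```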
The two-way ANOVA summary table
Observations There is no main effect of Drug. There is a significant main effect of Body State. There is a significant interaction between Body State and Drug.
Observations … Each F ratio is obtained by dividing the MS for that row by MS within: e.g. for the Body State factor, F = MS State/MS within. The df within = 12, the sum of the df for the four cell variances, the mean of which is MS within.
Observations … Notice that SS total = SS State + SS Drug + SS interaction + SS within, and likewise df total = df State + df Drug + df interaction + df within.
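The full two-way partition can be verified numerically for a balanced design. The sketch below uses invented scores in the lecture's 2 × 2 layout with 4 scores per cell:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic balanced 2x2 data: cells[i][j] holds 4 scores
# (i = Body State level, j = Drug level); scores are invented
cells = [[rng.normal(10, 2, 4) for _ in range(2)] for _ in range(2)]

scores = np.array(cells)  # shape (2, 2, 4)
gm = scores.mean()
n = scores.shape[2]       # scores per cell

# Main-effect SS from the marginal means (each margin averages 2*n scores)
row_means = scores.mean(axis=(1, 2))
col_means = scores.mean(axis=(0, 2))
ss_state = 2 * n * ((row_means - gm) ** 2).sum()
ss_drug = 2 * n * ((col_means - gm) ** 2).sum()

# Interaction SS from the cell means with the main effects removed
cell_means = scores.mean(axis=2)
interaction_dev = cell_means - row_means[:, None] - col_means[None, :] + gm
ss_interaction = n * (interaction_dev ** 2).sum()

# Within-cells (error) SS and the total SS
ss_within = ((scores - cell_means[:, :, None]) ** 2).sum()
ss_total = ((scores - gm) ** 2).sum()

# SS total = SS State + SS Drug + SS interaction + SS within
assert np.isclose(ss_total, ss_state + ss_drug + ss_interaction + ss_within)
```

Each of the three between groups sums of squares, divided by its df and then by MS within, yields one of the three F ratios in the summary table.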
A more complex example
Two-way ANOVA with SPSS Give your factors informative labels. Assign clear Value Labels. You will need TWO grouping variables. Avoid clutter in Data View by resetting the Decimals to zero.
Appearance of Data View
Choosing the two-way ANOVA First, General Linear Model. Choose Univariate, because there is just one dependent variable. Multivariate tests are used when there are two or more dependent variables.
Fixed and random factors A FIXED factor is one whose levels are chosen systematically, rather than at random. A RANDOM factor is one whose levels are chosen at random. Random factors are rare in experimental psychology. But they occur in applied areas, such as health psychology.
The Univariate dialog box
The Options dialog As well as the results of the F tests, we shall require statistics such as the cell and marginal means to show us what happened in the experiment. Activate the Descriptive Statistics button, select all the variables in the left panel and transfer them to the Display Means for panel on the right.
The two-way ANOVA summary table The terms Intercept, Corrected Model and Total refer to the regression method used to implement the ANOVA. The Corrected Total (766) is the total that we want. In the Output window, edit out the superfluous information to produce a more readable table.
The simplified table The F-tests confirm our expectations. The Drug factor has no main effect. Body State has a main effect. There is indeed an interaction.
Reporting the findings The patterns apparent in Table xxxx are confirmed by formal statistical tests. With an alpha-level of .05, there was no significant effect of Drug: F(1, 12) = .01; p = .91. There was a significant effect of Body State: F(1, 12) = 8.95; p = .01. There was a significant interaction: F(1, 12) = 22.92; p < .01.