Multiple-choice question

Solution A. No, the F distribution has TWO parameters. B. The mean and variance are NOT the parameters of the F distribution. C. Yes, these are the right parameters. D. Reject: we have a good candidate in C. We go for C.

Multiple-choice example

Solution

Lecture 3 BETWEEN SUBJECTS FACTORIAL EXPERIMENTS

Analysis of variance (ANOVA) Analysis of variance is a statistical technique for comparing the mean scores of the conditions making up complex experiments with three or more treatment conditions.

Factors and levels In analysis of variance (ANOVA), a FACTOR is a set of related treatments, categories or conditions, which are termed the LEVELS of the factor. A factor is an independent variable. Last week, we used ANOVA to analyse the results of an experiment with ONE treatment factor. There were five levels: four different drug conditions and a comparison or control condition. The dependent variable was performance on a skilled task.

ANOVA models The making of any statistical test presupposes the correctness of an interpretation of the data (usually in the form of an equation) known as a MODEL. There are different ANOVA models for different types of experimental design. Last week, I described the simplest kind of ANOVA, namely, the ONE-WAY ANOVA. The one-way ANOVA is appropriate for BETWEEN SUBJECTS experiments with one treatment factor.

Accounting for variability The building block for any variance estimate is a DEVIATION of some sort. The TOTAL DEVIATION of any score from the grand mean (GM) can be divided into 2 components: 1. a BETWEEN GROUPS deviation (group mean minus grand mean); 2. a WITHIN GROUPS deviation (score minus group mean).

Breakdown (partition) of the total sum of squares If you sum the squares of the deviations over all 50 scores, you obtain an expression which breaks down the total variability in the scores into between groups and within groups components.
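The partition can be verified numerically. A minimal sketch with numpy, using hypothetical data (five groups of ten scores, matching the lecture's 50-participant design; the values themselves are invented):

```python
# Sum-of-squares partition: SS_total = SS_between + SS_within,
# checked on hypothetical data (five groups of ten scores).
import numpy as np

rng = np.random.default_rng(0)
groups = [rng.normal(loc=m, scale=3.0, size=10) for m in (8, 9, 10, 11, 12)]
scores = np.concatenate(groups)
grand_mean = scores.mean()

# Total variability: squared deviations of all 50 scores from the grand mean.
ss_total = ((scores - grand_mean) ** 2).sum()
# Between groups: squared deviations of the group means from the grand mean,
# each weighted by the group size.
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
# Within groups: squared deviations of each score from its own group mean.
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

print(bool(np.isclose(ss_total, ss_between + ss_within)))  # True
```

The identity holds for any data set, not just this one: it is an algebraic consequence of how the two deviation components are defined.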

The F ratio

Calculating MSwithin In the equal-n case, we can simply take the mean of the cell variance estimates. MSwithin = (3.33 + 4.54 + 6.22 + 20.27 + 14.00)/5 = 48.36/5 = 9.67
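The same pooling in one line of Python (the five cell variances are those quoted on the slide):

```python
# Equal-n pooling: MS_within is the mean of the cell variance estimates.
from statistics import mean

cell_variances = [3.33, 4.54, 6.22, 20.27, 14.00]
ms_within = mean(cell_variances)
print(round(ms_within, 2))  # 9.67
```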

Rule for obtaining the df

Degrees of freedom The degrees of freedom df of a sum of squares is the number of independent values (scores, means) minus the number of parameters estimated. The SSbetween is calculated from 5 group means, but ONE parameter (the grand mean) has been estimated. Therefore dfbetween = 5 – 1 = 4.

Degrees of freedom … The SSwithin is calculated from the scores of the 50 participants in the experiment; but the group mean is subtracted from each score to produce a deviation score. There are 5 group means. The dfwithin = 50 – 5 = 45.

Finding MSbetween

The statistic F is calculated by dividing the between groups MS by the within groups MS thus: F = MSbetween / MSwithin.

What F is measuring F estimates the ratio (error + real differences) / (error only). If there are differences among the population means, the numerator will be inflated and F will increase. If there are no differences, F will be close to 1.

The ANOVA summary table F is large: about nine times unity, the value expected under the null hypothesis, and well over the critical value 2.58. The p-value (Sig.) is < .01, so F is significant beyond the .01 level. Write this result as follows: ‘with an alpha-level of .05, F is significant: F(4, 45) = 9.09; p < .01’. Do NOT write the p-value as ‘.000’! Notice that SStotal = SSbetween groups + SSwithin groups
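These figures can be cross-checked against the F(4, 45) distribution. A sketch using SciPy, outside the SPSS workflow the lecture follows:

```python
# Critical value and tail probability for the one-way example,
# F(4, 45) = 9.09.  Requires SciPy.
from scipy.stats import f

critical = f.ppf(0.95, 4, 45)  # 5% critical value of F(4, 45)
p_value = f.sf(9.09, 4, 45)    # P(F >= 9.09) under the null hypothesis

print(round(critical, 2))  # 2.58
print(p_value < .01)       # True: significant beyond the .01 level
```

The tail probability computed here corresponds to the ‘exact p-value’ that SPSS displays when the decimal places are increased.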

Finding the exact p-value Double-click on the table in the SPSS Viewer. A shaded border will appear. Select the entry under ‘Sig.’ (the p-value).

Cell properties Right-click the mouse to get Cell Properties… Reset the number of decimal places displayed to 7. Click the OK button.

More places of decimals The number .0000182 expresses the p-value to 7 places of decimals. If we had specified 8 places, we should have obtained the value .00001815 . This is for your information only: REPORT THE p-VALUE to TWO places of decimals thus: ‘p < .01’.

Effect size in the t-test We obtained a difference between the Caffeine and Placebo means of (11.90 – 9.25) = 2.65 score points. If we take the spread of the scores to be the average of the Caffeine and Placebo SDs, we have an average SD of about 3.25 score points. So the means of the Caffeine and Placebo groups differ by about .8 SD.

Measuring effect size: Cohen’s d statistic In our example, the value of Cohen’s d is 2.65/3.25 = .82, or about .8 . Is this a ‘large’ difference?
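A minimal check of the calculation, starting from the group means quoted earlier rather than the rounded difference:

```python
# Cohen's d: difference between the Caffeine and Placebo means,
# divided by the average of the two groups' SDs (values from the slides).
caffeine_mean, placebo_mean = 11.90, 9.25
average_sd = 3.25

d = (caffeine_mean - placebo_mean) / average_sd
print(round(d, 2))  # 0.82 -- 'large' on Cohen's benchmarks
```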

Levels of effect size On the basis of scrutiny of a large number of studies, Jacob Cohen proposed that we regard a d of .2 as a SMALL effect size, a d of .5 as a MEDIUM effect size and a d of .8 as a LARGE effect size. So our experimental result is a ‘large’ effect. When you report the results of a statistical test, you are now expected to provide a measure of the size of the effect you are reporting.

Effect size in ANOVA The greater the differences among the means, the greater will be the proportion of the total variability that is ‘explained’ or accounted for by SSbetween. This is the basis of the oldest measure of effect size in ANOVA, which is known as ETA SQUARED (η2).

Eta squared Eta squared (also known as the CORRELATION RATIO) is defined as the ratio of the between groups sum of squares to the total sum of squares: SSbetween/SStotal. Its theoretical range of variation is from zero (no differences among the means) to unity (no variance in the scores of any group, but different values in different groups). In our example, η2 = .447
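Eta squared can be recovered from an ANOVA table's F and degrees of freedom alone: dividing SSbetween and SSwithin through by MSwithin gives η² = (F·dfbetween)/(F·dfbetween + dfwithin). Applied to the one-way example:

```python
# Recovering eta squared from F(4, 45) = 9.09:
# eta^2 = (F * df_between) / (F * df_between + df_within).
F, df_between, df_within = 9.09, 4, 45

eta_squared = (F * df_between) / (F * df_between + df_within)
print(round(eta_squared, 3))  # 0.447, the value quoted in the slides
```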

Comparison of eta squared with Cohen’s d

Positive bias of eta squared The correlation ratio (eta squared) is positively biased as an estimator. Imagine you were to have unthinkably huge numbers of participants in all the groups and calculate eta squared. This is the population value, which we shall term ρ2 (rho squared). Imagine your own experiment (with the same numbers of participants) were to be repeated many times and you were to calculate eta squared each time. The mean of these values of eta squared would be higher than rho squared.

Removing the bias: omega squared The measure known as OMEGA SQUARED corrects the bias in eta squared. Omega squared achieves this by incorporating degrees of freedom terms.
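For the one-way design, the usual textbook form of the estimator (symbols as in the preceding slides) is:

```latex
% Omega squared for the one-way ANOVA (standard textbook form):
\hat{\omega}^{2} =
  \frac{SS_{\text{between}} - df_{\text{between}}\, MS_{\text{within}}}
       {SS_{\text{total}} + MS_{\text{within}}}
```

Subtracting dfbetween·MSwithin from the numerator, and adding MSwithin to the denominator, is what removes the positive bias described on the previous slide.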

Factorial experiments An experiment with two or more factors is known as a FACTORIAL experiment. A drug is known to enhance the performance of tired people on a driving simulator. It is suspected, however, that this drug may have a different effect upon the performance of people who have had a good night’s rest.

A two-factor, between subjects factorial experiment Four groups of participants are tested under the conditions shown left. There are TWO factors in this experiment. There is Drug Condition, with 2 levels (Placebo and Dose). There is Body State, also with 2 levels (Fresh and Tired).

The scientific hypotheses If the investigator is right, Group 4 should outperform Group 3, that is, the drug should help tired participants. With fresh participants, however, the drug may actually have an adverse effect upon performance: Group 1 may outperform Group 2.

The results The raw scores are shown at upper left. They are summarised in the table of statistics at lower left. The four central squares in the lower table are known as CELLS. The shaded MARGINS contain the means of the rows and columns.

What happened? Each set of marginal means shows the effect of one factor, while ignoring the other. The row means show only the effect of the factor of Body State, averaged over the two Drug conditions. The column means show only the effect of the drug, averaged over the two body states.

The marginal means Unsurprisingly, the rested participants outperformed the sleep-deprived participants. More interestingly, the drug and placebo participants performed at similar levels.

Main effects In a factorial ANOVA, a factor is said to have a MAIN EFFECT when the means (averaged over the other factors) are not the same at all its levels. The factor of Body State appears to have a main effect.

No main effect of Drug The column means show no sign of a main effect of the Drug factor.

What the cell means show It’s the CELL MEANS that hold the answer to our research question. They show that a dose of the drug IMPROVED the performance of the tired participants; whereas a dose of the drug actually IMPAIRED the performance of the fresh participants.

A clustered bar chart

Clustered bar chart … The Category Variable is Body State. The Cluster Variable is Drug. The Cluster Variable obviously has opposite effects when participants are in different body states.

Interaction An INTERACTION between two factors is said to occur when the effects of a factor are not the same at all levels of the other. The CELL MEANS show that the Drug factor has opposite effects on fresh and tired participants. This pattern suggests an INTERACTION between the factors of Drug and Bodily State. An interaction is often denoted by the use of a multiplication sign. We have been looking at a Body State × Drug interaction.

Line graph (profile plot) A LINE GRAPH (or PROFILE PLOT) is a useful way of depicting an interaction. Each line shows the performance profile of one of the Drug groups across the two Body State conditions. When the lines CROSS, CONVERGE or DIVERGE, an interaction is indicated. When the lines are PARALLEL, there is no interaction.
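In a 2 × 2 design, the pattern the profile plot shows can be reduced to a single number: the drug effect for fresh participants minus the drug effect for tired participants. Zero means parallel lines (no interaction); a non-zero value means the lines cross, converge or diverge. A sketch with invented cell means (not the experiment's actual values):

```python
# 2x2 interaction contrast on hypothetical cell means.
# Values are invented for illustration only.
cell_means = {
    ("Fresh", "Placebo"): 24.0, ("Fresh", "Dose"): 18.0,
    ("Tired", "Placebo"): 12.0, ("Tired", "Dose"): 20.0,
}

drug_effect_fresh = cell_means[("Fresh", "Dose")] - cell_means[("Fresh", "Placebo")]
drug_effect_tired = cell_means[("Tired", "Dose")] - cell_means[("Tired", "Placebo")]

# Non-zero contrast: the drug effect differs across body states.
interaction_contrast = drug_effect_fresh - drug_effect_tired
print(interaction_contrast)  # -14.0: opposite drug effects, an interaction
```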

A more complex example

The two-way ANOVA In the two-way ANOVA, there are THREE F tests: There is a test for a main effect of the Drug factor. There is a test for a main effect of the Body State factor. There is a test for the presence of an interaction.

The three between groups mean squares In the one-way ANOVA, there is just ONE between groups mean square; but in the two-way ANOVA, there are THREE: A between groups mean square for the Body State factor is calculated from the marginal ROW means. A between groups mean square for the Drug factor is calculated from the marginal COLUMN means. The interaction mean square is calculated from the values of the CELL MEANS, from which the main effects have been removed.

The error term The denominator of an F-ratio is known as the ERROR TERM. The error mean square (MSwithin groups or MSwithin ) is calculated by averaging the variances of the scores within the cells of the table. In the two-way ANOVA, the three F-tests are made by dividing the main effect and interaction mean squares by the same error term.

The df of MSwithin As in the one-way ANOVA, the degrees of freedom of MSwithin is the sum of the degrees of freedom of the separate variance estimates that are being averaged. In this example, four variance estimates are being averaged, each with df = 3. The MSwithin therefore has df = 12.

Testing for a main effect of Body State

Testing for a main effect of Drug

Testing for an interaction

The parameters of the three F distributions The distribution of F has TWO parameters: The degrees of freedom of the MS of the factor (or interaction) being tested. The degrees of freedom of the MSwithin . In this particular example, all three F tests refer to the same F distribution but, more usually, the three between groups mean squares will have different degrees of freedom.

Rules for df For main effect sources, df = (number of conditions – 1). For interactions, df = (degrees of freedom of Factor 1) × (degrees of freedom of Factor 2): just multiply the degrees of freedom of the two factor sources together. In the present example, all three between groups mean squares have ONE degree of freedom.

The two-way ANOVA summary table

Observations There is no main effect of Drug. There is a significant main effect of Body State. There is a significant interaction between Body State and Drug.

Observations … Each F ratio is obtained by dividing the MS for that row by MSwithin : e.g. for the State factor, F = 156.25/17.4583 = 8.95. The df(within) = 12, the sum of the df for the four cell variances, the mean of which is MSwithin .
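These figures can be checked numerically. A SciPy sketch, using the mean squares quoted above:

```python
# Two-way table, Body State test: F = MS_State / MS_within,
# with p from the F(1, 12) distribution.  Requires SciPy.
from scipy.stats import f

ms_state, ms_within = 156.25, 17.4583
F = ms_state / ms_within

print(round(F, 2))  # 8.95
p = f.sf(F, 1, 12)  # P(F >= 8.95) under the null hypothesis
print(0.01 < p < 0.02)  # True: reported as p = .01
```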

Observations … Notice that SStotal = SSState + SSDrug + SSinteraction + SSwithin dftotal = dfState + dfDrug + dfinteraction + dfwithin

A more complex example

Two-way ANOVA with SPSS Give your factors informative labels. Assign clear Value Labels. You will need TWO grouping variables. Avoid clutter in Data View by resetting the Decimals to zero.

Appearance of Data View

Choosing the two-way ANOVA First, General Linear Model. Choose Univariate, because there is just one dependent variable. Multivariate tests are used when there are two or more dependent variables.

Fixed and random factors A FIXED factor is one whose levels are chosen systematically, rather than at random. A RANDOM factor is one whose levels are chosen at random. Random factors are rare in experimental psychology. But they occur in applied areas, such as health psychology.

The Univariate dialog box

The Options dialog As well as the results of the F tests, we shall require statistics such as the cell and marginal means to show us what happened in the experiment. Activate the Descriptive Statistics button, select all the variables in the left panel and transfer them to the Display Means for panel on the right.

The two-way ANOVA summary table The terms Intercept, Corrected Model and Total refer to the regression method used to implement the ANOVA. The ‘Corrected Total’ (766) is the total that we want. In the Output window, edit out the superfluous information to produce a more readable table.

The simplified table The F-tests confirm our expectations. The Drug factor has no main effect. Body State has a main effect. There is indeed an interaction.

Reporting the findings “The patterns apparent in Table xxxx are confirmed by formal statistical tests.” With an alpha-level of .05, there was no significant effect of Drug: F(1, 12) = .01; p = .91. There was a significant effect of Body State: F(1, 12) = 8.95; p = .01. There was a significant interaction: F(1, 12) = 22.92; p < .01.

A multiple-choice example

Third example