Overview of Lecture Partitioning Evaluating the Null Hypothesis ANOVA


Overview of Lecture: Partitioning; Evaluating the Null Hypothesis; ANOVA; Basic Ratios; Sums of Squares; Mean Squares; Degrees of Freedom; F-Ratio; Evaluating the F-Ratio; Analytical Comparisons

Partitioning the Deviations [Figure: the total deviation of a score AS from the grand mean T is partitioned into a between-subjects deviation (A - T) and a within-subjects deviation (AS - A)]

Evaluating the Null Hypothesis If we consider the ratio of the between-groups variability to the within-groups variability, then we have: (treatment effects + experimental error) / (experimental error)

Evaluating the Null Hypothesis If the null hypothesis is true then the treatment effect is equal to zero, and the ratio reduces to (experimental error) / (experimental error), i.e. approximately 1. If the null hypothesis is false then the treatment effect is greater than zero, and the ratio is greater than 1.

ANOVA Analysis of variance uses the ratio of two sources of variability to test the null hypothesis. Between-groups variability estimates both experimental error and treatment effects. Within-groups variability estimates experimental error alone.

From Deviations to Variances In order to evaluate the null hypothesis, it is necessary to transform the between-group and within-group deviations into more useful quantities, namely, variances. For this reason the statistical analysis involving the comparison of variances reflecting different sources of variability is called the analysis of variance. For the purposes of analysis of variance a variance is defined as a sum of squares divided by its degrees of freedom: variance = SS / df

The Sums of Squares From the basic deviations: (AS - T) = (A - T) + (AS - A). A similar relationship holds for the sums of squares. In other words: SS Total = SS Between + SS Within

Example Data. IV: Lecturer; DV: Exam Score. [The data table did not survive transcription.]

Basic Ratios: A Basic Ratio is defined as a sum of squared quantities divided by the number of scores contributing to each squared quantity. The numerator term for any basic ratio involves two steps: (1) the initial squaring of a set of quantities; (2) summing the squared quantities if more than one is present. The denominator term for each ratio is the number of scores that contribute to each sum or score. Basic Ratios make the calculation of the sums of squares relatively simple.

The Notation for Basic Ratios To emphasise the critical nature of these basic ratios, we will use a distinctive symbol, square brackets, to designate basic ratios and to distinguish among them: [AS], [A], and [T].

Calculating the Basic Ratios Starting with the basic score, AS, and substituting into the formula for basic ratios: [AS] = sum of the squared individual scores. The second basic ratio involves the sums of the treatment conditions: [A] = (sum of the squared treatment sums) / n, where n is the number of scores per condition. The final basic ratio we require involves the grand sum: [T] = T squared / (a x n), where a is the number of conditions.

Calculating the Sums of Squares The sums of squares can be calculated by combining these basic ratios. Total Sum of Squares: SS Total = [AS] - [T]. Between-Group Sum of Squares: SS A = [A] - [T]. Within-Group Sum of Squares: SS S/A = [AS] - [A].
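The basic-ratio arithmetic above can be sketched in a few lines of Python. Since the lecture's own data table did not survive transcription, the exam scores below are hypothetical: three lecturer-style conditions (a = 3) with n = 5 students each.

```python
# Hypothetical data: three lecturer styles, five exam scores per condition.
groups = {
    "A1": [9, 8, 7, 9, 7],   # lectures with worksheets
    "A2": [5, 6, 4, 7, 8],   # lectures only
    "A3": [4, 3, 5, 4, 4],   # worksheets only
}
a = len(groups)                        # number of treatment conditions
n = len(groups["A1"])                  # scores per condition

all_scores = [y for ys in groups.values() for y in ys]
T = sum(all_scores)                    # grand sum

# Basic ratios: square, sum, then divide by the number of scores
# contributing to each squared quantity.
AS = sum(y ** 2 for y in all_scores)                   # [AS]
A = sum(sum(ys) ** 2 for ys in groups.values()) / n    # [A]
Tr = T ** 2 / (a * n)                                  # [T]

# Sums of squares as combinations of basic ratios.
SS_total = AS - Tr
SS_between = A - Tr
SS_within = AS - A
print(SS_total, SS_between, SS_within)
```

With these hypothetical scores the three sums of squares come out to 56, 40, and 16, and SS_between + SS_within reproduces SS_total, as the partitioning requires.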

Variance Estimates: Mean Squares The ratio we are interested in is the ratio of the between-groups variability to the within-groups variability. In this context, the variability is defined by the equation MS = SS / df, where SS refers to the component sum of squares and df refers to the degrees of freedom associated with that SS.

Degrees of Freedom The degrees of freedom associated with a sum of squares correspond to the number of scores carrying independent information that enter into the calculation of the sum of squares. Degrees of freedom are the number of observations that are free to vary when we know something about those observations.

Degrees of Freedom The Between-Group Degrees of Freedom: df A = a - 1. The Within-Group Degrees of Freedom: df S/A = a(n - 1). The Total Degrees of Freedom: df Total = an - 1.

Mean Squares The Between-Group Mean Square: MS A = SS A / df A. The Within-Group Mean Square: MS S/A = SS S/A / df S/A.

The F-Ratio The F-Ratio is defined as: F = MS A / MS S/A.
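The step from sums of squares to the F-ratio can be sketched as follows. The sums of squares here are hypothetical (the lecture's data table was not preserved), assuming a design with a = 3 groups and n = 5 scores per group.

```python
# Hypothetical design and sums of squares.
a, n = 3, 5
SS_between, SS_within = 40.0, 16.0

# Degrees of freedom for a one-way between-groups design.
df_between = a - 1           # df A = a - 1
df_within = a * (n - 1)      # df S/A = a(n - 1)

# Mean squares: each SS divided by its degrees of freedom.
MS_between = SS_between / df_between
MS_within = SS_within / df_within

# The F-ratio: between-groups variability over within-groups variability.
F = MS_between / MS_within
print(df_between, df_within, MS_between, round(MS_within, 2), round(F, 2))
```

For these hypothetical values the mean squares are 20 and 1.33, giving F = 15 on 2 and 12 degrees of freedom.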

The Anova Summary Table The results of the Anova are usually displayed by computer programs in a summary table, with columns for Source, SS, df, MS, and F.

Testing the Null Hypothesis In order to decide whether or not the null hypothesis is rejected we need to find out what value of F is necessary to reject the null hypothesis. There is a simple rule for this. Reject H0 when Fobserved > Fcritical; otherwise do not reject H0. We obtain a value for Fcritical by looking it up in the F tables.

The Critical F-Value To find the critical value: Take the degrees of freedom for the effect (A) and look along the horizontal axis of the F table. Take the degrees of freedom for the error term (S/A) and look down the vertical axis of the F table. The place where the column for the degrees of freedom of the effect A meets the row for the degrees of freedom of the error (S/A) is the critical value of F. For these data Fcritical = 3.89, so we can reject the null hypothesis.
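The table lookup can also be reproduced programmatically; this sketch assumes SciPy is available. For a design with 3 groups of 5 scores, the effect has 2 df and the error term 12 df, which is consistent with the tabled Fcritical of 3.89 quoted above.

```python
# Programmatic equivalent of the F-table lookup (assumes SciPy).
from scipy.stats import f

# alpha = .05, so we want the 95th percentile of F(2, 12).
f_critical = f.ppf(0.95, dfn=2, dfd=12)
print(round(f_critical, 2))   # matches the tabled 3.89
```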

The Omnibus F The F-ratio includes information about all the levels of the independent variable that we have manipulated. The F-ratio for an overall difference between the means as reported in the ANOVA summary table is known as the Omnibus F-ratio. The best the Omnibus F-ratio can tell us is that there are differences between the means. It cannot tell us what those differences are.

Analytical Comparisons With a nonsignificant omnibus F we are prepared to assert that there are no observed differences among the means. We can stop the analysis there. A significant omnibus F demands further analysis of the data. Which differences between the means are real and which are not?

Analytical Comparisons There are two basic approaches to solving the problem. Before we set out to collect the data, we could have made specific predictions about the direction of the effects In this case we can use a technique known as planned (a priori) comparisons. We might have designed an experiment where we couldn't be precise enough to say what the differences would be. In this case we use post hoc (after the event) comparisons.

Planned (A Priori) Comparisons For example, assume the three levels of the independent variable are lecturer style: A1: Lectures with worksheets. A2: Lectures only. A3: No lectures, only worksheets. From previous research, we anticipate that A1 > A2 and A1 > A3 (we are making no predictions about A2 vs A3). Do the data support this?

Differences as the sum of weighted means Let us adopt the symbol psi to represent the difference we are interested in: psi = mean(A1) - mean(A2). We can rewrite this as: psi = (+1)mean(A1) + (-1)mean(A2). Including all the means in the experiment: psi = (+1)mean(A1) + (-1)mean(A2) + (0)mean(A3).
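A contrast as a weighted sum of means can be sketched directly. The group means below are hypothetical, since the lecture's data table was not preserved.

```python
# Hypothetical group means for the three lecturer styles.
means = {"A1": 8.0, "A2": 6.0, "A3": 4.0}

# Contrast weights: psi = mean(A1) - mean(A2); A3 is ignored.
weights = {"A1": 1, "A2": -1, "A3": 0}

# The contrast is the weighted sum of all the means.
psi = sum(weights[g] * means[g] for g in means)
print(psi)
```

Note that the weights sum to zero, which is what makes this a contrast: only the differences among the means contribute, not their overall level.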

Sum of Squares for a planned comparison Planned comparisons are based on the calculation of an F-ratio. This requires us to calculate the variability inherent in the comparison: a sum of squares associated with the comparison. The sum of squares of the comparison is given by: SS psi = n x psi squared / (sum of the squared weights), where n is the number of scores per group.

Testing the planned comparison An F-ratio is calculated to test the comparison. For this a mean square is required. The mean square for the comparison is calculated by: MS psi = SS psi / 1, since all planned comparisons have 1 degree of freedom. The F-ratio is calculated by: F = MS psi / MS S/A.

Evaluating the planned comparison’s null hypothesis Critical F's for comparisons use the degrees of freedom for the numerator and the denominator of the F-ratio. There are 1 and 12 degrees of freedom for this comparison. Fcritical(1, 12) for p ≤ 0.05 = 4.75. Given that Fobserved = 14.29, we can reject the null hypothesis and conclude that A1 (lectures with worksheets) leads to better scores than A2 (lectures only).
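The full comparison calculation can be sketched end to end. The numbers below are hypothetical (the lecture's own data, which are not preserved in the transcript, gave Fobserved = 14.29; the sketch data give a different value, but the steps are the same).

```python
# Hypothetical design values.
n = 5                            # scores per group
means = [8.0, 6.0, 4.0]          # hypothetical means for A1, A2, A3
c = [1, -1, 0]                   # contrast weights: A1 vs A2
MS_error = 16.0 / 12             # hypothetical within-group mean square

# The contrast value: weighted sum of the means.
psi = sum(ci * mi for ci, mi in zip(c, means))

# Sum of squares for the comparison: n * psi^2 / sum of squared weights.
SS_psi = n * psi ** 2 / sum(ci ** 2 for ci in c)

# Comparisons have 1 df, so MS_psi = SS_psi; F uses the omnibus error term.
F_psi = (SS_psi / 1) / MS_error
print(SS_psi, round(F_psi, 2))
```

The resulting F is then compared against Fcritical(1, 12) = 4.75 exactly as the slide describes.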

Next Week Post hoc comparisons Testing the assumptions that underlie ANOVA Two computer programs for analysing a one-way between groups analysis of variance