1 Session 19: Analysis of Variance (ANOVA), Part 1. Course: A0064 / Statistik Ekonomi (Economic Statistics). Year: 2005. Version: 1/1.



2 Learning Outcomes
By the end of this session, students are expected to be able to relate and compare two or more variances.

3 Outline
Hypothesis testing using ANOVA
Theory and computations of ANOVA

COMPLETE BUSINESS STATISTICS, 5th edition, Aczel/Sounderpandian, McGraw-Hill/Irwin © The McGraw-Hill Companies, Inc.

Chapter 9: Analysis of Variance
Using Statistics
The Hypothesis Test of Analysis of Variance
The Theory and Computations of ANOVA
The ANOVA Table and Examples
Further Analysis
Models, Factors, and Designs
Two-Way Analysis of Variance
Blocking Designs
Summary and Review of Terms

9-1 ANOVA: Using Statistics
ANOVA (ANalysis Of VAriance) is a statistical method for determining the existence of differences among several population means. ANOVA is designed to detect differences among means from populations subject to different treatments. ANOVA is a joint test: the equality of several population means is tested simultaneously, or jointly. ANOVA tests for the equality of several population means by looking at two estimators of the population variance (hence, analysis of variance).

9-2 The Hypothesis Test of Analysis of Variance
In an analysis of variance, we have r independent random samples, each one corresponding to a population subject to a different treatment. We have:
n = n1 + n2 + ... + nr total observations.
r sample means: x̄1, x̄2, x̄3, ..., x̄r. These r sample means can be used to calculate an estimator of the population variance. If the population means are equal, we expect the variance among the sample means to be small.
r sample variances: s1², s2², s3², ..., sr². These sample variances can be used to find a pooled estimator of the population variance.
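The two estimators described above can be sketched in plain Python. This is an illustrative sketch, not the textbook's code; the three groups below are hypothetical values chosen so their means match those shown later in Table 9-1 (6, 11.5, and 2).

```python
from statistics import mean, variance

# Hypothetical data: r = 3 treatment groups, n = 11 observations in total.
groups = [[4, 5, 7, 8], [10, 11, 12, 13], [1, 2, 3]]

# One sample mean and one unbiased sample variance per treatment.
sample_means = [mean(g) for g in groups]
sample_vars = [variance(g) for g in groups]

# Pooled (within-sample) estimator of the common population variance:
# weight each sample variance by its degrees of freedom, divide by (n - r).
n = sum(len(g) for g in groups)
r = len(groups)
pooled_var = sum((len(g) - 1) * v for g, v in zip(groups, sample_vars)) / (n - r)
```

The pooled estimator is the "estimate of variance based on all sample observations" that appears in the denominator of the F statistic on the following slides.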

9-2 The Hypothesis Test of Analysis of Variance (continued): Assumptions
We assume independent random sampling from each of the r populations. We assume that the r populations under study are normally distributed, with means μi that may or may not be equal, but with equal variances σ². (The slide's figure illustrates Populations 1, 2, and 3 with means μ1, μ2, μ3 and a common spread.)

9-2 The Hypothesis Test of Analysis of Variance (continued)
The hypothesis test of analysis of variance:
H0: μ1 = μ2 = μ3 = ... = μr
H1: Not all μi (i = 1, ..., r) are equal
The test statistic of analysis of variance:
F(r-1, n-r) = (estimate of variance based on means from r samples) / (estimate of variance based on all sample observations)
That is, the test statistic in an analysis of variance is based on the ratio of two estimators of a population variance, and is therefore based on the F distribution, with (r-1) degrees of freedom in the numerator and (n-r) degrees of freedom in the denominator.

When the Null Hypothesis Is True
When the null hypothesis is true, we would expect the sample means to be nearly equal, and we would expect the variation among the sample means (between-sample) to be small relative to the variation found around the individual sample means (within-sample). If the null hypothesis is true, the numerator in the test statistic
F(r-1, n-r) = (estimate of variance based on means from r samples) / (estimate of variance based on all sample observations)
is expected to be small relative to the denominator.

When the Null Hypothesis Is False
The null hypothesis is false when, for example, μ1 is equal to μ2 but not to μ3, μ1 is equal to μ3 but not to μ2, or μ1, μ2, and μ3 are all unequal. In any of these situations, we would not expect the sample means to all be nearly equal. We would expect the variation among the sample means (between-sample) to be large, relative to the variation around the individual sample means (within-sample). If the null hypothesis is false, the numerator in the test statistic
F(r-1, n-r) = (estimate of variance based on means from r samples) / (estimate of variance based on all sample observations)
is expected to be large relative to the denominator.

The ANOVA Test Statistic for r = 4 Populations and n = 54 Total Sample Observations
Suppose we have 4 populations, from each of which we draw an independent random sample, with n1 + n2 + n3 + n4 = 54. Then our test statistic is:
F(4-1, 54-4) = F(3,50) = (estimate of variance based on means from 4 samples) / (estimate of variance based on all 54 sample observations)
The 5% critical value of the F distribution with 3 and 50 degrees of freedom is 2.79. The nonrejection region (for α = 0.05) in this instance is F ≤ 2.79, and the rejection region is F > 2.79. If the test statistic is less than 2.79 we would not reject the null hypothesis, and we would conclude the 4 population means are equal. If the test statistic is greater than 2.79, we would reject the null hypothesis and conclude that the four population means are not equal.
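The decision rule above can be written as a small helper. The function name is hypothetical; 2.79 is the 5% critical value of F(3,50) quoted on the slide.

```python
def anova_decision(f_statistic, critical_value=2.79):
    """One-way ANOVA decision rule: reject H0 when F exceeds the
    upper-tail critical value for the chosen significance level."""
    return "reject H0" if f_statistic > critical_value else "do not reject H0"

# With r = 4 groups and n = 54 observations, any F above 2.79 at
# alpha = 0.05 leads to rejection of equal population means.
print(anova_decision(3.50))   # falls in the rejection region
print(anova_decision(1.20))   # falls in the nonrejection region
```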

Example 9-1
Randomly chosen groups of customers were served different types of coffee and asked to rate the coffee on a scale of 0 to 100: 21 were served pure Brazilian coffee, 20 were served pure Colombian coffee, and 22 were served pure African-grown coffee. The resulting test statistic was F = 2.02. With r = 3 groups and n = 63 total observations, the reference distribution is F with 2 and 60 degrees of freedom, whose 5% critical value is F(2,60) = 3.15. Since the test statistic 2.02 does not exceed 3.15, the null hypothesis is not rejected at α = 0.05.

9-3 The Theory and the Computations of ANOVA: The Grand Mean
The grand mean, x̿, is the mean of all n = n1 + n2 + ... + nr observations in all r samples:
x̿ = (Σi Σj xij) / n

Using the Grand Mean: Table 9-1
If the r population means are different (that is, at least two of the population means are not equal), then it is likely that the variation of the data points about their respective sample means (within-sample variation) will be small relative to the variation of the r sample means about the grand mean (between-sample variation).

Treatment (i)    Sample point (j)     Value (xij)
i=1 Triangle     1                    4
                 ...                  ...
                 Mean of Triangles    6
i=2 Square       1                    10
                 2                    11
                 3                    12
                 4                    13
                 Mean of Squares      11.5
i=3 Circle       1                    1
                 ...                  ...
                 Mean of Circles      2
Grand mean of all data points: 6.909

(The accompanying figure plots, for each point, the distance from the data point to its sample mean and the distance from the sample mean to the grand mean, with x̄1 = 6, x̄2 = 11.5, x̄3 = 2, and x̿ = 6.909.)

The Theory and Computations of ANOVA: Error Deviation and Treatment Deviation
We define an error deviation as the difference between a data point and its sample mean. Errors are denoted by e, and we have:
eij = xij - x̄i
We define a treatment deviation as the deviation of a sample mean from the grand mean. Treatment deviations, ti, are given by:
ti = x̄i - x̿
The ANOVA principle says: when the population means are not equal, the "average" error (within-sample) deviation is relatively small compared with the "average" treatment (between-sample) deviation.

The Theory and Computations of ANOVA: The Total Deviation
The total deviation (Totij) is the difference between a data point (xij) and the grand mean (x̿): Totij = xij - x̿. For any data point xij: Tot = t + e. That is: Total Deviation = Treatment Deviation + Error Deviation.
Consider data point x24 = 13 from Table 9-1. The mean of sample 2 is 11.5, and the grand mean is 6.909, so:
Error deviation: e24 = x24 - x̄2 = 13 - 11.5 = 1.5
Treatment deviation: t2 = x̄2 - x̿ = 11.5 - 6.909 = 4.591
Total deviation: Tot24 = x24 - x̿ = 13 - 6.909 = 6.091 = 4.591 + 1.5
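The decomposition for data point x24 = 13 can be checked numerically, using the sample mean 11.5 and grand mean 6.909 given in Table 9-1:

```python
# Deviation decomposition for x_24 from Table 9-1:
# total deviation = treatment deviation + error deviation.
x_24, sample_mean_2, grand_mean = 13, 11.5, 6.909

error_dev = x_24 - sample_mean_2        # e_24  = 1.5
treatment_dev = sample_mean_2 - grand_mean  # t_2  = 4.591
total_dev = x_24 - grand_mean           # Tot_24 = 6.091

# The identity holds exactly (up to floating-point rounding).
assert abs(total_dev - (treatment_dev + error_dev)) < 1e-12
```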

The Theory and Computations of ANOVA: Squared Deviations
Squaring each kind of deviation and summing over all data points gives the sums of squares used below:
SST = Σi Σj (xij - x̿)²
SSTR = Σi ni (x̄i - x̿)²
SSE = Σi Σj (xij - x̄i)²

The Theory and Computations of ANOVA: The Sum of Squares Principle
The Sum of Squares Principle: the total sum of squares (SST) is the sum of two terms, the sum of squares for treatment (SSTR) and the sum of squares for error (SSE):
SST = SSTR + SSE
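The sum-of-squares identity can be verified numerically. The groups below are hypothetical values, chosen so the group means match those of Table 9-1; this is a sketch, not the textbook's computation.

```python
from statistics import mean

# Hypothetical groups with means 6, 11.5, and 2 (as in Table 9-1).
groups = [[4, 5, 7, 8], [10, 11, 12, 13], [1, 2, 3]]
all_points = [x for g in groups for x in g]
grand = mean(all_points)

# SST: variation of every data point about the grand mean.
sst = sum((x - grand) ** 2 for x in all_points)
# SSTR: variation of the sample means about the grand mean, weighted by n_i.
sstr = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
# SSE: variation of the data points about their own sample means.
sse = sum((x - mean(g)) ** 2 for g in groups for x in g)

# The sum of squares principle: SST = SSTR + SSE.
assert abs(sst - (sstr + sse)) < 1e-6
```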

The Theory and Computations of ANOVA: Picturing the Sum of Squares Principle
SST = SSTR + SSE. SST measures the total variation in the data set, the variation of all individual data points from the grand mean. SSTR measures the explained variation, the variation of individual sample means from the grand mean; it is that part of the variation that is possibly expected, or explained, because the data points are drawn from different populations. It is the variation between groups of data points. SSE measures the unexplained variation, the variation within each group that cannot be explained by possible differences between the groups.

The Theory and Computations of ANOVA: Degrees of Freedom
The number of degrees of freedom associated with SST is (n - 1): n total observations in all r groups, less one degree of freedom lost with the calculation of the grand mean.
The number of degrees of freedom associated with SSTR is (r - 1): r sample means, less one degree of freedom lost with the calculation of the grand mean.
The number of degrees of freedom associated with SSE is (n - r): n total observations in all groups, less one degree of freedom lost with the calculation of the sample mean from each of the r groups.
The degrees of freedom are additive in the same way as are the sums of squares:
df(total) = df(treatment) + df(error)
(n - 1) = (r - 1) + (n - r)

The Theory and Computations of ANOVA: The Mean Squares
Recall that the calculation of the sample variance involves the division of the sum of squared deviations from the sample mean by the number of degrees of freedom. This principle is applied as well to find the mean squared deviations within the analysis of variance:
Mean square treatment: MSTR = SSTR / (r - 1)
Mean square error: MSE = SSE / (n - r)
Mean square total: MST = SST / (n - 1)
(Note that the additive property of sums of squares does not extend to the mean squares: in general, MST ≠ MSTR + MSE.)
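A minimal sketch of the mean squares, assuming the sums of squares and sizes have already been computed (the numeric inputs below are illustrative, roughly matching the hypothetical r = 3, n = 11 example used earlier):

```python
def mean_squares(sstr, sse, sst, r, n):
    """Divide each sum of squares by its degrees of freedom."""
    mstr = sstr / (r - 1)  # mean square treatment
    mse = sse / (n - r)    # mean square error
    mst = sst / (n - 1)    # mean square total
    return mstr, mse, mst

# Illustrative values only; note that MST != MSTR + MSE in general.
mstr, mse, mst = mean_squares(sstr=160.0, sse=17.0, sst=177.0, r=3, n=11)
```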

The Theory and Computations of ANOVA: The Expected Mean Squares
E(MSE) = σ²
E(MSTR) = σ² + Σi ni(μi - μ̄)² / (r - 1)
where μi is the mean of population i and μ̄ is the combined mean of all r populations. Thus E(MSTR) = σ² when the null hypothesis is true, and E(MSTR) > σ² when the null hypothesis is false.
That is, the expected mean square error (MSE) is simply the common population variance (remember the assumption of equal population variances), but the expected mean square treatment (MSTR) is the common population variance plus a term related to the variation of the individual population means around the grand population mean. If the null hypothesis is true, so that the population means are all equal, the second term in the E(MSTR) formula is zero, and E(MSTR) is equal to the common population variance.

Expected Mean Squares and the ANOVA Principle
When the null hypothesis of ANOVA is true and all r population means are equal, MSTR and MSE are two independent, unbiased estimators of the common population variance σ². On the other hand, when the null hypothesis is false, MSTR will tend to be larger than MSE. So the ratio of MSTR to MSE can be used as an indicator of the equality or inequality of the r population means. This ratio (MSTR/MSE) will tend to be near 1 if the null hypothesis is true, and greater than 1 if the null hypothesis is false. The ANOVA test, finally, is a test of whether (MSTR/MSE) is equal to, or greater than, 1.

The Theory and Computations of ANOVA: The F Statistic
Under the assumptions of ANOVA, the ratio MSTR/MSE possesses an F distribution with (r - 1) degrees of freedom for the numerator and (n - r) degrees of freedom for the denominator when the null hypothesis is true. The test statistic in analysis of variance:
F(r-1, n-r) = MSTR / MSE
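Putting the pieces together, the whole computation can be sketched as a single function in plain Python. This is an illustrative sketch under the same hypothetical data as before, not the textbook's code.

```python
from statistics import mean

def f_statistic(groups):
    """One-way ANOVA F statistic F = MSTR / MSE, with its
    (numerator, denominator) degrees of freedom (r - 1, n - r)."""
    n = sum(len(g) for g in groups)
    r = len(groups)
    grand = mean(x for g in groups for x in g)
    sstr = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)  # treatment SS
    sse = sum((x - mean(g)) ** 2 for g in groups for x in g)     # error SS
    mstr = sstr / (r - 1)
    mse = sse / (n - r)
    return mstr / mse, (r - 1, n - r)

# Hypothetical groups with clearly different means: F comes out
# far above 1, pointing toward rejection of equal population means.
f, df = f_statistic([[4, 5, 7, 8], [10, 11, 12, 13], [1, 2, 3]])
```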

25 Closing
Discussion of the material continues with Topic 20 (ANOVA, Part 2).