2^k r Factorial Designs with Replications

2 2^k r Factorial Designs with Replications
r replications of 2^k experiments
– 2^k r observations
– Allows estimation of experimental errors
Model: y = q0 + qA xA + qB xB + qAB xA xB + e
e = experimental error

3 Computation of Effects
Simply use the means of the r measurements in place of single observations.
Effects: q0 = 41, qA = 21.5, qB = 9.5, qAB = 5
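The effect computation is just the 2^2 sign-table method applied to the cell means. A minimal sketch in Python; the cell means 15, 48, 24, 77 are assumed values chosen to be consistent with the quoted effects, not given on this slide:

```python
import numpy as np

# Sign table for a 2^2 design: columns I, A, B, AB
# Rows in standard order: (A, B) = (-1,-1), (+1,-1), (-1,+1), (+1,+1)
sign = np.array([
    [1, -1, -1,  1],
    [1,  1, -1, -1],
    [1, -1,  1, -1],
    [1,  1,  1,  1],
])

# Means of the r replications in each cell (assumed for illustration;
# consistent with q0 = 41, qA = 21.5, qB = 9.5, qAB = 5)
ybar = np.array([15.0, 48.0, 24.0, 77.0])

# Each effect is the signed column total divided by 2^2 = 4
q0, qA, qB, qAB = sign.T @ ybar / 4
print(q0, qA, qB, qAB)   # 41.0 21.5 9.5 5.0
```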

4 Estimation of Experimental Errors
Estimated response: ŷi = q0 + qA xAi + qB xBi + qAB xAi xBi
Experimental error = measured − estimated:
e_ij = y_ij − ŷi = y_ij − q0 − qA xAi − qB xBi − qAB xAi xBi
Sum of squared errors: SSE = Σi Σj e_ij²

5 Experimental Errors: Example
Estimated response: ŷ1 = q0 − qA − qB + qAB = 41 − 21.5 − 9.5 + 5 = 15
Experimental errors: e11 = y11 − ŷ1 = 15 − 15 = 0, and similarly for the remaining estimated/measured pairs.
SSE = 0² + 3² + (−3)² + … = 102
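The same arithmetic carried through all r = 3 replications. The individual measurements below are hypothetical, chosen to reproduce the slide's cell means and SSE = 102:

```python
import numpy as np

# Hypothetical r = 3 measurements per factor combination, consistent
# with the slide's cell means (15, 48, 24, 77) and SSE = 102
y = np.array([
    [15.0, 18.0, 12.0],   # (A, B) = (-1, -1)
    [45.0, 48.0, 51.0],   # (+1, -1)
    [25.0, 28.0, 19.0],   # (-1, +1)
    [75.0, 75.0, 81.0],   # (+1, +1)
])

yhat = y.mean(axis=1)       # estimated response = cell mean
e = y - yhat[:, None]       # residuals e_ij = y_ij - yhat_i
print(e)
print((e ** 2).sum())       # SSE = 102.0
```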

6 Allocation of Variation
Total variation or total sum of squares:
SST = Σi Σj (y_ij − ȳ..)²
SST = SSA + SSB + SSAB + SSE

7 Derivation
Model: y_ij = q0 + qA xAi + qB xBi + qAB xAi xBi + e_ij
Since the x's, their products, and all errors add to zero, averaging over all observations gives ȳ.. = q0.

8 Derivation (cont'd)
Mean response: ȳ.. = q0
Squaring both sides of the model, summing over all observations, and ignoring cross-product terms (which add to zero):
Σ y_ij² = 2²r q0² + 2²r qA² + 2²r qB² + 2²r qAB² + Σ e_ij²
SSY = SS0 + SSA + SSB + SSAB + SSE

9 Derivation (cont'd)
Total variation: SST = Σi Σj (y_ij − ȳ..)² = SSY − SS0 = SSA + SSB + SSAB + SSE
One way to compute SSE: SSE = SSY − SS0 − SSA − SSB − SSAB

10 Example: Memory-Cache Study

11 Example: Memory-Cache Study (cont'd)
SST = SSA + SSB + SSAB + SSE = 5547 + 1083 + 300 + 102 = 7032
Factor A explains 5547/7032, or 78.88%, of the variation
Factor B explains 15.40%
Interaction AB explains 4.27%
1.45% is unexplained and is attributed to errors
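A short check of the allocation of variation, using SSA = 2²r·qA² and so on (a sketch; r = 3 and SSE = 102 come from the earlier slides):

```python
r = 3
q0, qA, qB, qAB = 41.0, 21.5, 9.5, 5.0
sse = 102.0

ssa  = 2**2 * r * qA**2    # 12 * 21.5^2 = 5547
ssb  = 2**2 * r * qB**2    # 12 *  9.5^2 = 1083
ssab = 2**2 * r * qAB**2   # 12 *  5.0^2 =  300
sst  = ssa + ssb + ssab + sse          # 7032

for name, ss in [("A", ssa), ("B", ssb), ("AB", ssab), ("error", sse)]:
    print(f"{name}: {100 * ss / sst:.2f}%")   # 78.88, 15.40, 4.27, 1.45
```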

12 Review: Confidence Interval for the Mean
Problem: How do we get a single estimate of the population mean from k sample estimates?
Answer: Get probabilistic bounds, e.g., two bounds C1 and C2 such that there is a high probability, 1 − α, that the mean lies in the interval (C1, C2):
Pr{C1 ≤ μ ≤ C2} = 1 − α
(C1, C2) is the confidence interval; α is the significance level; 100(1 − α) is the confidence level; 1 − α is the confidence coefficient.

13 Confidence Interval for the Mean (cont'd)
Note: The confidence level is traditionally expressed as a percentage (near 100%), whereas the significance level α is expressed as a fraction and is typically near zero, e.g., 0.05 or 0.01.

14 Confidence Interval for the Mean (cont'd)
Example: Given a sample with mean x̄ = 3.90, standard deviation s = 0.95, and n = 32:
A 90% CI for the mean = 3.90 ± (1.645)(0.95)/√32 = (3.62, 4.17), using the central limit theorem.
Note: A 90% CI means we can state with 90% confidence that the population mean is between 3.62 and 4.17. The chance of error in this statement is 10%.
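The same interval computed directly (a sketch of the slide's z-based CI):

```python
import math

xbar, s, n = 3.90, 0.95, 32
z = 1.645                         # 95th percentile of N(0,1) -> 90% CI
half = z * s / math.sqrt(n)
print(xbar - half, xbar + half)   # ~(3.62, 4.18); the slide rounds to (3.62, 4.17)
```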

15 Testing for a Zero Mean
The difference in processor times of two different implementations of the same algorithm was measured on 7 similar workloads. The differences are: {1.5, 2.6, −1.8, 1.3, −0.5, 1.7, 2.4}
Can we say with 99% confidence that one implementation is superior to the other?

16 Testing for a Zero Mean (cont'd)
Sample size n = 7, mean x̄ = 1.03, sample variance s² = 2.57, sample standard deviation s = 1.60
CI = 1.03 ± t · 1.60/√7 = 1.03 ± 0.605t
100(1 − α) = 99, α = 0.01, 1 − α/2 = 0.995
From the table, the t value at six degrees of freedom is t[0.995; 6] = 3.707, so the 99% CI = (−1.21, 3.27).
Since the CI includes zero, we cannot say with 99% confidence that the mean difference is significantly different from zero.
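The whole test in a few lines, assuming SciPy is available for the t quantile:

```python
import math
import statistics
from scipy import stats

d = [1.5, 2.6, -1.8, 1.3, -0.5, 1.7, 2.4]
n = len(d)
xbar, s = statistics.mean(d), statistics.stdev(d)   # ~1.03, ~1.60
t = stats.t.ppf(0.995, df=n - 1)                    # t[0.995; 6] = 3.707
half = t * s / math.sqrt(n)
print(xbar - half, xbar + half)   # ~(-1.22, 3.28): includes 0, so not significant
```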

17 Type I & Type II Errors
In testing a null hypothesis, the level of significance is the probability of rejecting a true hypothesis.

Decision    | Hypothesis actually true | Hypothesis actually false
To accept   | Correct                  | β error (Type II)
To reject   | α error (Type I)         | Correct

Note: The letters α and β denote the probabilities related to these errors.

18 Confidence Intervals for Effects
Effects are random variables.
Errors ~ N(0, σe) => y ~ N(ȳ.., σe)
Since q0 is a linear combination of normal variables, q0 is normal with variance σe²/(2²r).
Variance of errors: estimated by s_e² = SSE/(2²(r − 1))

19 Confidence Intervals for Effects (cont'd)
The denominator 2²(r − 1) is the number of independent terms in SSE => SSE has 2²(r − 1) degrees of freedom.
Estimated variance of q0: s_q0² = s_e²/(2²r)
Similarly, s_qA = s_qB = s_qAB = s_e/√(2²r)
Confidence intervals (CI) for the effects: q_i ± t[1−α/2; 2²(r−1)] · s_qi
If a CI does not include zero, the effect is significant.

20 Example
For the memory-cache study:
Standard deviation of errors: s_e = √(SSE/(2²(r − 1))) = √(102/8) = 3.57
Standard deviation of effects: s_qi = s_e/√(2²r) = 3.57/√12 = 1.03
For 90% confidence: t[0.95; 8] = 1.86, so each CI is q_i ± (1.86)(1.03) = q_i ± 1.92

21 Example (cont'd)
Confidence intervals:
q0: 41 ± 1.92 = (39.08, 42.92)
qA: 21.5 ± 1.92 = (19.58, 23.42)
qB: 9.5 ± 1.92 = (7.58, 11.42)
qAB: 5 ± 1.92 = (3.08, 6.92)
No zero crossing => all effects are significant.
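The effect CIs end-to-end, again assuming SciPy for the t quantile:

```python
import math
from scipy import stats

k, r, sse = 2, 3, 102.0
effects = {"q0": 41.0, "qA": 21.5, "qB": 9.5, "qAB": 5.0}

df = 2**k * (r - 1)               # 8 degrees of freedom
se = math.sqrt(sse / df)          # s_e = 3.57
sq = se / math.sqrt(2**k * r)     # s_qi = 1.03
t = stats.t.ppf(0.95, df)         # 1.86 for a 90% CI

for name, q in effects.items():
    lo, hi = q - t * sq, q + t * sq
    print(f"{name}: ({lo:.2f}, {hi:.2f}) significant={lo > 0 or hi < 0}")
```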

22 Confidence Intervals for Contrasts
Contrast ≡ a linear combination whose coefficients sum to zero: Σ hi = 0
Variance of u = Σ hi qi: s_u² = (Σ hi²) s_e²/(2²r)
For a 100(1 − α)% confidence interval, use u ± t[1−α/2; 2²(r−1)] · s_u

23 Example: Memory-Cache Study
u = qA + qB − 2qAB
Coefficients are 0, 1, 1, and −2; they sum to zero => u is a contrast.
Mean u = 21.5 + 9.5 − 2 × 5 = 21
Variance: s_u² = (1² + 1² + (−2)²) s_e²/(2²r) = 6 × 12.75/12 = 6.375
Standard deviation: s_u = 2.52
With t[0.95; 8] = 1.86, the 90% confidence interval for u is 21 ± (1.86)(2.52) = (16.31, 25.69).
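The contrast interval follows the same pattern (a sketch, SciPy assumed):

```python
import math
from scipy import stats

h = [0, 1, 1, -2]                 # coefficients of u = qA + qB - 2*qAB
q = [41.0, 21.5, 9.5, 5.0]        # q0, qA, qB, qAB
se2 = 102.0 / 8                   # s_e^2 = SSE / (2^2 (r-1)) = 12.75

u = sum(hi * qi for hi, qi in zip(h, q))            # 21.0
su = math.sqrt(sum(hi**2 for hi in h) * se2 / 12)   # 2.52
t = stats.t.ppf(0.95, 8)                            # 1.86
print(u - t * su, u + t * su)                       # ~(16.3, 25.7)
```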

24 CI for Predicted Response
Mean response: ŷ = q0 + qA xA + qB xB + qAB xA xB
The standard deviation of the mean of m future responses:
s_ŷm = s_e √(1/n_eff + 1/m)
n_eff = effective degrees of freedom = (total number of runs)/(1 + sum of DFs of parameters used in ŷ)

25 CI for Predicted Response (cont'd)
100(1 − α)% confidence interval: ŷ ± t[1−α/2; 2²(r−1)] · s_ŷm
A single run (m = 1): s_ŷ1 = s_e √(1/n_eff + 1)
Population mean (m = ∞): s_ŷ∞ = s_e √(1/n_eff)

26 Example: Memory-Cache Study
For xA = −1 and xB = −1, with n_eff = 12/(1 + 4) = 2.4:
A single confirmation experiment: ŷ1 = q0 − qA − qB + qAB = 41 − 21.5 − 9.5 + 5 = 15
Standard deviation of the prediction: s_ŷ1 = 3.57 √(1/2.4 + 1) = 4.25
Using t[0.95; 8] = 1.86, the 90% confidence interval is 15 ± (1.86)(4.25) = (7.10, 22.90).

27 Example: Memory-Cache Study (cont'd)
Mean response for 5 experiments in the future: s_ŷ5 = 3.57 √(1/2.4 + 1/5) = 2.80
The 90% confidence interval is 15 ± (1.86)(2.80) = (9.79, 20.21).
Mean response for a large number of experiments in the future: s_ŷ∞ = 3.57 √(1/2.4) = 2.30
The 90% confidence interval is 15 ± (1.86)(2.30) = (10.72, 19.28).
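All three prediction intervals from one loop (a sketch, SciPy assumed; m = inf gives the population-mean case):

```python
import math
from scipy import stats

se = math.sqrt(102.0 / 8)     # s_e = 3.57
n_eff = 12 / (1 + 4)          # 12 runs, 4 one-DF parameters -> 2.4
t = stats.t.ppf(0.95, 8)      # 1.86
yhat = 15.0                   # prediction at xA = xB = -1

for m in (1, 5, math.inf):
    s = se * math.sqrt(1 / n_eff + (0.0 if math.isinf(m) else 1 / m))
    print(m, (round(yhat - t * s, 2), round(yhat + t * s, 2)))
```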

28 Example: Memory-Cache Study (cont'd)
Current mean response (not for the future; use the formula for contrasts):
ŷ1 = q0 − qA − qB + qAB has coefficients (1, −1, −1, 1), so
s_ŷ1² = (1 + 1 + 1 + 1) s_e²/(2²r) = 4 × 12.75/12 = 4.25, giving s_ŷ1 = 2.06
90% confidence interval: 15 ± (1.86)(2.06) = (11.17, 18.83)
Notice: The confidence intervals become progressively narrower.

29 Assumptions
1. Errors are statistically independent.
2. Errors are additive.
3. Errors are normally distributed.
4. Errors have a constant standard deviation σe.
5. Effects of factors are additive.
=> Observations are independent and normally distributed with constant variance.