
1 LISA Short Course Series Basics of Design of Experiments Ana Maria Ortega-Villa Fall 2014 LISA: DOE Fall 2014

2 About me Home country: Colombia. 5th year PhD student in Statistics. M.S. in Statistics, Virginia Tech. M.S. in Operations Research, Universidad de los Andes, Colombia. Instructor: STAT 4705 Probability and Statistics for Engineers. Contact:

3 Laboratory for Interdisciplinary Statistical Analysis
Collaboration: Visit our website to request personalized statistical advice and assistance with: Designing Experiments, Analyzing Data, Interpreting Results, Grant Proposals, Software (R, SAS, JMP, Minitab...).
LISA statistical collaborators aim to explain concepts in ways useful for your research. Great advice right now: meet with LISA before collecting your data. All services are FREE for VT researchers. We assist with research, not class projects or homework. LISA helps VT researchers benefit from the use of statistics.
LISA also offers:
Educational Short Courses: designed to help graduate students apply statistics in their research.
Walk-In Consulting: available Monday-Friday from 1-3 PM in the Old Security Building (OSB) for questions under 30 minutes. See our website for additional times and locations.

4 1. Introduction to Design of Experiments 2. DOE main principles: Randomization, Replication, Local control of error 3. Completely Randomized Design 4. Randomized Complete Block Design 5. Introduction to Factorial Designs What are we doing?

5 Introduction to Design of Experiments LISA: DOEFall 2014

6 An experiment can be thought of as a test or series of tests in which we make controlled changes to the input variables of a process or system in order to determine how they change the output of interest. What is an Experiment?

7 MAXIMIZE: Probability of having a successful experiment. Information gain: the results and conclusions drawn depend on the way the data were collected. MINIMIZE: Unwanted effects from other sources of variation. Cost of the experiment when resources are limited. Why do we design experiments?

8 Observational study: The researcher has little to no control over sources of variation and simply observes what is happening. The researcher can only determine how the inputs are related to the outputs; causation cannot be established. Examples: surveys, weather patterns, stock market prices, etc. What would be an alternative?

9 The researcher identifies and controls sources of variation that significantly impact the measured response. The researcher can gather evidence for causation. Designed experiment LISA: DOEFall 2014 Correlation ≠ Causation

10 Sources of variation are anything that could cause an observation to be different from another observation. Two main types: Those that can be controlled and are of interest are called treatments or treatment factors. Those that can influence the experimental response but in which we are not directly interested are called nuisance factors. But what are sources of variation? LISA: DOEFall 2014

11 List all major and minor sources of variation before collecting the data, classifying them as either a treatment or a nuisance factor. We want our design to minimize the impact of minor sources of variation and to be able to separate the effects of nuisance factors from those of treatment factors. We want the majority of the variability in the data to be explained by the treatment factors. Rule of Thumb

12 Example: Impact of Exercise Intensity on Resting Heart Rate. Suppose a researcher surveys a sample of individuals to obtain information about their intensity of exercise each week and their resting heart rate. (Table columns: Subject, Reported Intensity of Exercise each week, Resting Heart Rate.) What type of study is this? Observational Study

13 How could we make it a designed experiment? The researcher finds a sample of individuals, enrolls groups in exercise programs of different intensity levels, and then measures their resting heart rate. (Table columns: Subject, Intensity level of exercise each week, Resting Heart Rate.)

14 What are our sources of variation? Treatment (major): Exercise intensity. Nuisance factors (major and minor): Medication use, air temperature & humidity, location of measurement, body size, body position.

15 Minimum considerations: Response: Resting heart rate (beats per minute) Treatment: Exercise Program o Low intensity o Moderate intensity o High intensity Designing the experiment LISA: DOEFall 2014

16 Basic Design: 36 participants (18 male, 18 female) under the conditions listed previously. Every person is assigned to one of the three 8-week exercise programs. Resting heart rate is measured at the beginning and end of the 8 weeks. Designing the experiment

17 An experimental unit (EU) is the “material” to which treatment factors are assigned. o For the resting heart rate example, the participants are the EU. o We want EUs to be as similar as possible, but that isn’t always realistic. A block is a group of EUs similar to each other, and different from other groups. o In the resting heart rate example, women are physiologically similar to each other and different from men. A blocking factor is the characteristic used to create the blocks. o In the resting heart rate example, gender is a blocking factor. Fundamentals of Design of Experiments LISA: DOEFall 2014

18 Three Basic Principles of Design of Experiments LISA: DOEFall 2014 Randomization

19 Randomization consists of randomly assigning: the experimental treatments to experimental units. the order in which the independent runs will be performed (when applicable). Purpose: Often we assume an independent, random distribution of observations and errors – randomization validates this assumption. Averages out the effects of extraneous/lurking variables. Reduces bias and accusations of bias. Randomization LISA: DOEFall 2014

20 The way you randomize depends on your experiment; what is important here is to remember that there are two levels of randomization: 1. Assignment of treatments to experimental units. 2. Order of the runs (when applicable). Randomization

21 1. Assignment of treatments to experimental units (Participant 1: High, 2: High, 3: Low, 4: Intermediate, 5: Low, 6: High, ...). 2. Order of the runs: not applicable in this case, since all participants are doing the experiment at the same time. Randomization RHR Example
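A treatment-to-participant assignment like the one above can be generated with a short script. This is a hypothetical sketch: the participant IDs, the two-per-program balance, and the fixed seed are assumptions for illustration, not part of the course materials.

```python
import random

# Assign three exercise programs to six participants completely at random,
# with two participants per program so the design stays balanced.
participants = [1, 2, 3, 4, 5, 6]
treatments = ["High", "High", "Low", "Low", "Intermediate", "Intermediate"]

random.seed(42)             # fixed seed so the assignment is reproducible
random.shuffle(treatments)  # the random treatment-to-participant assignment

assignment = dict(zip(participants, treatments))
for p, t in assignment.items():
    print(f"Participant {p}: {t}")
```

Shuffling a pre-balanced treatment list (rather than drawing each treatment independently) guarantees equal replication while still randomizing which participant gets which program.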

22 Three Basic Principles of Design of Experiments LISA: DOEFall 2014 Replication

23 Replication consists of independently repeating runs of each treatment. Purpose: Improves precision of effect estimation. Decreases variance. Allows for estimation of experimental error; this error later becomes the yardstick for determining whether observed differences are statistically significant. Note: Try to have the same number of replicates for each treatment assignment. # Replicates = # EUs / # Treatments. Replication
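The replicate-count formula can be checked with one line of arithmetic for the resting-heart-rate design (36 participants, 3 programs):

```python
# Balanced replication: replicates per treatment = # EUs / # treatments.
n_eus = 36        # participants in the RHR example
n_treatments = 3  # low, moderate, high intensity programs

replicates = n_eus // n_treatments
print(replicates)  # 12 replicates per exercise program
```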

24 Participants 1, 2 and 6 can be considered replicates of the high-intensity exercise treatment. (Participant 1: High, 2: High, 3: Low, 4: Intermediate, 5: Low, 6: High, ...) Replication in RHR Example

25 What is pseudoreplication? Occurs when there is more than one observation per EU and they are treated as replicates. In our RHR example it would be like taking measurements in different locations (wrist, side of the neck and foot) of the same person and treating them as separate observations. Pseudoreplication LISA: DOEFall 2014

26 A way to deal with multiple measurements per EU is to average them and work with the new value. Consequences of treating pseudoreplicates as true replicates: Underestimation of error. Potentially exaggerated apparent treatment differences. Pseudoreplication

27 Three Basic Principles of Design of Experiments LISA: DOEFall 2014 Local Control of Error

28 Local control of error means taking any measure within the design that improves the accuracy of measuring treatment effects. Purpose: Removes or minimizes nuisance sources of variation. Improves the precision with which comparisons among factors are made. Note: There are several ways of doing this. One could control as much as possible all the previously listed sources of variation. Often this is done through blocking or through more advanced approaches such as ANCOVA. Local control of error

29 We will monitor the participants' exercise programs throughout the study (not relying on self-reporting). We will only consider participants who are not taking any medication that might alter their heart rate. We will take all measurements at the same location on the body: the wrist. We will take all measurements with the participant in the same position: standing. We will only accept participants with a body mass index within the normal range. We will measure all participants on the same day at the beginning and the end of the study. RHR Local control of error

30 Common Designs: LISA: DOEFall 2014 Completely Randomized Design (CRD)

31 The CRD is the simplest design. It assumes all EUs are similar and that the only major sources of variation are the treatments. In this design, all treatment-EU assignments are randomized for the specified number of treatment replications. If you are equally interested in comparisons of all treatments, get as close as possible to equally replicating the treatments (a balanced design). Completely Randomized Design (CRD)

32 Etching is a process in which unwanted material is removed from circuit wafers in order to obtain circuit patterns, electrical interconnects and areas in which diffusions or metal depositions are to be made. * Example from Montgomery (2009) CRD Example: Plasma Etching Experiment LISA: DOEFall 2014

33 Energy is supplied by a generator. A chemical gas mixture is shot at a sample. Plasma is generated in the gap between the electrodes. CRD Example: Etching Process simplified

34 An engineer is interested in investigating the relationship between the generator power setting and the etch rate for the tool. Response: Etch rate Treatment: Generator power setting (4 levels to consider) Experimental Unit: Circuit Wafer Possible sources of variation: Generator power setting Chemical mixture gas (the gases affect the plasma behavior) Size of the gap between the electrodes. CRD Example: Study LISA: DOEFall 2014

35 Replication: We will consider 5 EUs for each treatment level (generator power setting). Randomization: Since all EUs are considered to be identical, we will randomize the running order. Local control of error: In order to minimize variability we will use the same chemical gas mixture (C2F6) and the same gap size (0.8 cm) for all runs of the experiment. CRD Example: Principles of DOE

36 CRD Example: Randomization Scheme. (Run / Treatment table.) This run order was obtained using a random number generator.
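A run order like this can be produced with any random number generator. The sketch below assumes four hypothetical power levels in watts; the slide does not list the actual settings, so treat the values as placeholders.

```python
import random

# 4 generator power settings x 5 replicate wafers = 20 runs,
# with the running order drawn at random.
power_settings = [160, 180, 200, 220]  # assumed levels (W), for illustration
runs = [p for p in power_settings for _ in range(5)]  # 20 runs total

random.seed(1)
random.shuffle(runs)  # randomized run order
for i, p in enumerate(runs, start=1):
    print(f"Run {i:2d}: power {p} W")
```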

37 We are interested in testing the equality of the treatment means: H0: μ1 = μ2 = μ3 = μ4 versus H1: at least one μi differs. If we reject the null hypothesis, then there is a difference between at least two of the means, which translates to a significant difference between the treatments. CRD Example: What is the question?

38 We want to enter the data such that each response has its own row, with the corresponding treatment type. We then choose Analyze -> Fit Y by X. CRD Example: Analysis

39 We will choose Rate as the Y response and Treatment as the X factor. CRD Example: Analysis LISA: DOEFall 2014

40 From the red triangle: Display Options -> Boxplot. Remarks: These box plots show that the etch rate increases as the power setting increases. From this graphical analysis we suspect: 1. The generator power setting affects the etch rate. 2. Higher power settings result in an increased etch rate. CRD Example: Visual Analysis

41 From the red triangle select Means and ANOVA. ANOVA partitions the total variability into separate independent pieces: MSTrt: variability due to treatment differences. MSE: variability due to experimental error. If MSTrt is large relative to MSE, the treatments likely have different effects. CRD Example: ANOVA Table
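The MSTrt/MSE comparison can be computed by hand. The sketch below uses illustrative etch-rate-style numbers (not necessarily the data analyzed in the course) and only the Python standard library.

```python
from statistics import mean

# One-way ANOVA by hand: 4 treatments, 5 replicates each (invented data).
data = {
    "T1": [575, 542, 530, 539, 570],
    "T2": [565, 593, 590, 579, 610],
    "T3": [600, 651, 610, 637, 629],
    "T4": [725, 700, 715, 685, 710],
}

grand = mean(y for ys in data.values() for y in ys)
n = sum(len(ys) for ys in data.values())   # total observations
k = len(data)                              # number of treatments

# Between-treatment and within-treatment (error) sums of squares.
ss_trt = sum(len(ys) * (mean(ys) - grand) ** 2 for ys in data.values())
ss_err = sum((y - mean(ys)) ** 2 for ys in data.values() for y in ys)

ms_trt = ss_trt / (k - 1)  # variability due to treatment differences
ms_err = ss_err / (n - k)  # variability due to experimental error
f_stat = ms_trt / ms_err   # a large F suggests real treatment effects

print(f"F = {f_stat:.2f} on {k - 1} and {n - k} df")
```

JMP's ANOVA table reports exactly these quantities (sums of squares, mean squares, and the F ratio) plus the corresponding p-value.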

42 Red triangle: Compare Means -> Tukey HSD. At least two treatments are different; which ones? CRD Example: Contrasts

43 CRD has one overall randomization. Try to equally replicate all the treatments. Plot your data in a meaningful way to help visualize analysis. Use ANOVA to test for an overall difference. Look at specific contrasts of interest to better understand the relationship between treatments. CRD: Summary LISA: DOEFall 2014

44 Common Designs: LISA: DOEFall 2014 Randomized Complete Block Design (RCBD)

45 The RCBD is a design in which there are one or more nuisance factors that are known and controllable. This design systematically eliminates the effect of these nuisance factors on the statistical comparisons among treatments. The block size equals the number of treatments. Basic Idea: Compare treatments within blocks to account for this source of variation. Randomized Complete Block Design (RCBD)
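The within-block randomization that defines an RCBD can be sketched as follows; the block count and the placeholder treatment labels anticipate the vascular graft example (6 resin batches, 4 extrusion pressures) and are otherwise illustrative.

```python
import random

# RCBD randomization: every block receives all treatments once,
# shuffled independently within each block.
pressures = ["P1", "P2", "P3", "P4"]  # placeholder labels for the 4 levels
random.seed(7)

layout = {}
for batch in range(1, 7):             # 6 blocks (resin batches)
    order = pressures.copy()
    random.shuffle(order)             # separate randomization inside each block
    layout[batch] = order

for batch, order in layout.items():
    print(f"Batch {batch}: {order}")
```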

46 Vascular grafts (artificial veins) are produced by extruding billets of polytetrafluoroethylene (PTFE) resin combined with a lubricant into tubes. Sometimes these tubes contain defects known as flicks, which are cause for rejection of the unit. The product developer suspects that the extrusion pressure affects the occurrence of flicks. An engineer suspects that there may be significant batch-to-batch variation in the resin. * Example from Montgomery (2009) RCBD Example: Vascular Graft Experiment

47 Response: Percentage of tubes that did not contain any flick. Treatment: Extrusion Pressure (4 levels) Block: Batch of resin (6 batches). RCBD Example: Study LISA: DOEFall 2014

48 Replication: Each treatment (extrusion pressure) is replicated once in each block. Randomization: The treatments (extrusion pressures) are randomized within each block. Local control of error: In order to minimize variability we will use blocking and keep all other controllable nuisance factors fixed. RCBD Example: Principles of DOE

49 We are interested in testing the equality of the treatment means: H0: μ1 = μ2 = μ3 = μ4 versus H1: at least one μi differs. If we reject the null hypothesis, then there is a difference between at least two of the means. RCBD Example: What is the question?


51 Analysis: Follow the same procedure. Analyze->Fit Y by X. RCBD Example: Analysis JMP LISA: DOEFall 2014

52 RCBD Example: Visual Analysis. Boxplot: From this graphical analysis we suspect: 1. Extrusion pressure affects the response. 2. Higher pressure settings seem to result in a lower percentage of flick-free tubes. 3. These results can potentially be affected by the resin batch.

53 According to this analysis, we reject the null hypothesis. This means that there is a significant effect of the treatments. Software will give you a p-value for Block, but use it only to gauge how much experimental error was reduced; do not formally test the blocks with this p-value. RCBD Example: ANOVA Table

54 Significant differences between treatments 1 and 4, and 2 and 4. RCBD Example: Contrasts LISA: DOEFall 2014

55 Common Designs: LISA: DOEFall 2014 Introduction to Factorial Designs

56 In this type of design we want to study the effect of two or more factors. In each complete trial or replication of the experiment, all possible combinations of the levels of the factors are investigated. Basic idea: Treatments are combinations of multiple factors at different levels (i.e. settings). Factorial Designs

57 The effect of a factor is defined as the change in the response produced by a change in the level of that factor (main effect). Interaction between factors is present when the difference in response between the levels of one factor is not the same at all levels of the other factors (i.e. the effect of factor A depends on the level chosen for factor B). Factorial Designs: Main Concepts

58 An engineer is designing a battery that will be used in a device that will be subject to extreme variations in temperature. She is interested in examining three different materials for this battery at three different temperatures (15, 70 and 125 °F) in order to determine how battery life is affected by these conditions. * Example from Montgomery (2009) Factorial Designs Example: Battery Design LISA: DOEFall 2014

59 Response: Battery life. Treatment: All combinations of the factors: Material: 3 levels (1, 2 and 3). Temperature: 3 levels (15, 70 and 125 °F). Factorial Design Example: Study

60 Replication Each treatment (combination of levels of factors) is replicated 4 times. Local control of error In order to minimize variability we will keep everything else in the testing lab constant throughout the experiment. Factorial Design Example: Principles of DOE LISA: DOEFall 2014

61 Factorial Design Example: Randomization. (Material / Temperature / Run order table.)

62 Factorial Design Example: Randomization LISA: DOEFall 2014 You can create your own design in JMP: DOE->Custom Design

63 Factorial Design Example: Analysis LISA: DOEFall 2014 Analyze->Fit Model

64 Red Triangle: Factor Profiling -> Interaction Plots Factorial Design Example: Interaction LISA: DOEFall 2014

65 Factorial Design Example: ANOVA Theory. Here the ANOVA table is partitioned: SST = SSModel + SSError, and SSModel is further partitioned: SSModel = SSTemp + SSMat + SSInt. SSTemp: compares temperature level means to the overall mean. SSMat: compares material level means to the overall mean. SSInt: looks at whether the effect of temperature changes depending on the material.
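The partition can be verified numerically on a toy data set. Everything below is illustrative: a 2x2 layout with invented numbers rather than the 3x3 battery design from the example, so the arithmetic stays small.

```python
from statistics import mean
from itertools import product

# Two-factor layout: 2 materials x 2 temperatures, 2 replicates per cell.
y = {
    ("M1", "T1"): [130, 155], ("M1", "T2"): [34, 40],
    ("M2", "T1"): [150, 188], ("M2", "T2"): [126, 122],
}
mats, temps = ["M1", "M2"], ["T1", "T2"]
r = 2                                         # replicates per cell
grand = mean(v for vs in y.values() for v in vs)

mat_mean = {m: mean(v for t in temps for v in y[(m, t)]) for m in mats}
temp_mean = {t: mean(v for m in mats for v in y[(m, t)]) for t in temps}
cell_mean = {c: mean(vs) for c, vs in y.items()}

# Main-effect, interaction, and error sums of squares (balanced design).
ss_mat = r * len(temps) * sum((mat_mean[m] - grand) ** 2 for m in mats)
ss_temp = r * len(mats) * sum((temp_mean[t] - grand) ** 2 for t in temps)
ss_int = r * sum((cell_mean[(m, t)] - mat_mean[m] - temp_mean[t] + grand) ** 2
                 for m, t in product(mats, temps))
ss_err = sum((v - cell_mean[c]) ** 2 for c, vs in y.items() for v in vs)
ss_tot = sum((v - grand) ** 2 for vs in y.values() for v in vs)

# The partition SST = SSMat + SSTemp + SSInt + SSError holds exactly.
print(round(ss_tot, 6), round(ss_mat + ss_temp + ss_int + ss_err, 6))
```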

66 Factorial Design Example: ANOVA LISA: DOEFall 2014

67 Model adequacy checking LISA: DOEFall 2014

68 Model Adequacy checking. It is recommended to check the adequacy of the model by examining the residuals (the differences between the observed values and those predicted by the model). These residuals should be structureless, meaning they should not contain an obvious pattern. To save the residuals from Fit Model (not Fit Y by X): Red triangle: Save Columns -> Residuals
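For a simple one-way model the residuals are just observed values minus the treatment means. A minimal sketch with invented resting-heart-rate-style numbers:

```python
from statistics import mean

# Residual = observed value - model-fitted value.  For a one-way model
# the fitted value for each observation is its treatment mean.
groups = {"Low": [62, 66, 64], "High": [55, 53, 57]}

residuals = [y - mean(ys) for ys in groups.values() for y in ys]
print(residuals)  # should look "structureless": no pattern by group or order
```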

69 Model Adequacy checking: Assumptions. Residuals should be normally distributed; inspect with a normal probability plot: Analyze -> Distribution, then red triangle: Normal Quantile Plot. Plot residuals vs. fitted values and check for patterns: in the effect analysis window, red triangle: Row Diagnostics. Plot residuals by treatment; this can be done with the saved residuals using the Graph Builder.

70 Model Adequacy checking: Battery LISA: DOEFall 2014

71 Model Adequacy checking: Plasma Etching LISA: DOEFall 2014

72 Model Adequacy checking: Vascular Graft LISA: DOEFall 2014

73 Exercise LISA: DOEFall 2014

74 A soft drink bottler is interested in obtaining more uniform fill heights in the bottles produced by his manufacturing process. The process engineer can control three variables during the filling process: percent carbonation, operating pressure in the filler, and line speed. The engineer can control carbonation at three levels (10, 12 and 14%), pressure at two levels (25 and 30 psi) and line speed at two levels (200 and 250 bpm). She decides to run two replicates of a factorial design in these factors, with all runs taken in random order. The response variable is the average deviation from the target fill height observed in a production run of bottles at each set of conditions. How many factors do we have? How many runs would we need to perform? * Example from Montgomery (2009) Exercise
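One way to count the runs, using only the levels stated in the exercise (this sketch answers the run-count question directly, so treat it as a worked check):

```python
from itertools import product

# Full factorial: 3 carbonation levels x 2 pressures x 2 line speeds,
# with 2 replicates of every combination.
carbonation = [10, 12, 14]   # percent
pressure = [25, 30]          # psi
speed = [200, 250]           # bottles per minute
replicates = 2

combinations = list(product(carbonation, pressure, speed))
print(len(combinations))                # 12 treatment combinations
print(len(combinations) * replicates)   # 24 runs in total
```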

75 Suppose you obtain this interaction plot, what would you interpret? Exercise: Question 1 LISA: DOEFall 2014

76 Conduct the factorial analysis in JMP, what can you conclude? Exercise: Analysis LISA: DOEFall 2014

77 What can you say about the residuals? Exercise: Analysis LISA: DOEFall 2014

78 Summary. Remember to randomize! – Randomize run order and treatments. Remember to replicate! – Use multiple EUs for each treatment; it will help you estimate your effects more accurately. Remember to block! – When you suspect some inherent quality of your experimental units may be causing variation in your response, arrange your experimental units into groups based on similarity in that quality. Remember to contact LISA! – For short questions, attend our Walk-in Consulting hours; for research, come before you collect your data for design help.

79 Reference: Montgomery, Douglas C. Design and Analysis of Experiments. John Wiley & Sons, 2008.

80 Please don't forget to fill in the sign-in sheet and to complete the survey that will be sent to you by email. Thank you!

