
1
**Design of Experiments Lecture I**

2
**Topics Today**

Review of error analysis Theory & experimentation in engineering Some considerations in planning experiments Review of statistical formulas and theory Begin statistical design of experiments ("DOE" or "DOX")

3
**Part 1: Review of Error Analysis**

Uncertainty or "random error" is inherent in all measurements Statistical basis Unavoidable: seek to estimate and take into account Can minimize with better instruments, measurement techniques, etc.

4
**Review of Error Analysis**

Systematic errors (or "method errors") are mistakes in assumptions, techniques, etc. that lead to non-random bias Careful experimental planning and execution can minimize them Difficult to characterize; can only look at evidence after the fact, troubleshoot the process to find the source and eliminate it

5
**Graphical Description of Random and Systematic Error**

6
**Why do we need to estimate uncertainty and include in stated experimental values?**

Probability of being wrong will influence process and/or financial decisions: what is the cost/benefit of accepting a result as "fact"? What would be the effect downstream as the uncertainty propagates through the process? When comparing two values and determining if they are different: do the uncertainties overlap? What is the probability that the difference is significant?

7
**Stating Results +/- Uncertainty**

Rule for stating uncertainties: experimental uncertainties should almost always be rounded to one significant figure. Rule for stating answers: the last significant figure in any stated answer should usually be of the same order of magnitude (in the same decimal position) as the uncertainty. Express uncertainty as error bars for graphical data and as confidence intervals for curve fits (regressions).
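The two stating rules above can be automated. A minimal sketch (the function name and example numbers are illustrative, not from the slides): round the uncertainty to one significant figure, then round the value to that same decimal position.

```python
import math

def round_uncertainty(value, uncertainty):
    """Round uncertainty to one significant figure, then round the
    value to the same decimal position (the stating rules above)."""
    # Decimal position of the leading digit of the uncertainty
    exponent = math.floor(math.log10(abs(uncertainty)))
    u = round(uncertainty, -exponent)
    v = round(value, -exponent)
    return v, u

# e.g. g measured as 9.8216 with uncertainty 0.02385 -> 9.82 +/- 0.02
best, err = round_uncertainty(9.8216, 0.02385)
```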

8
**Determining Propagated Error: Non-statistical Method**

Compute from total differential

9
Propagated error, OR: can do a sensitivity analysis in a spreadsheet or other software program Compute the possible uncertainty in a calculated result by varying the values of the inputs according to the uncertainty of each input Example: use the "Solver" optimization tool in Excel to find the maximum and minimum values of a computed cell by varying the value of each input cell Set constraints so that the input values lie within the range of uncertainty of each value
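Both approaches on these two slides can be sketched in code: estimate the partial derivatives of the total differential numerically, then combine them either as a worst-case sum or in quadrature (for independent random errors). The function, the density example, and its numbers are illustrative assumptions, not from the slides.

```python
import math

def propagated_error(f, values, uncertainties, h=1e-6):
    """Propagate uncertainty through y = f(x1, ..., xn) via the total
    differential. Returns (worst_case, quadrature):
      worst_case = sum(|df/dxi| * dxi)
      quadrature = sqrt(sum((df/dxi * dxi)**2))  # independent random errors
    """
    partials = []
    for i, x in enumerate(values):
        up = list(values); up[i] = x + h
        dn = list(values); dn[i] = x - h
        partials.append((f(*up) - f(*dn)) / (2 * h))  # central difference
    worst = sum(abs(p) * dx for p, dx in zip(partials, uncertainties))
    quad = math.sqrt(sum((p * dx) ** 2
                         for p, dx in zip(partials, uncertainties)))
    return worst, quad

# Example: density rho = m/V with m = 12.0 +/- 0.1 g, V = 4.0 +/- 0.05 cm^3
worst, quad = propagated_error(lambda m, V: m / V, [12.0, 4.0], [0.1, 0.05])
```

Varying each input over its uncertainty range, as the Solver example describes, gives the same worst-case bound for monotonic functions.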

10
Or: use repeat measurements to estimate the uncertainty in a result using probability and statistics for random errors: mean standard deviation of each measurement standard deviation of the mean of the measurements confidence intervals on the dependent variable confidence intervals on regression parameters 1) The mean is familiar to all: the average or "best" value. 2) The standard deviation is a good measure of the variation in repeat measurements: it tells how far apart individual measurements are from each other. 3) The standard deviation of the mean (SDOM) estimates the uncertainty in the mean value itself. 4) One standard deviation defines a ~68% confidence interval: if you take more measurements, about 68% should lie within one standard deviation.
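The mean, standard deviation, and SDOM described above come straight from the standard library; a minimal sketch (the measurement values are made up for illustration):

```python
import statistics

measurements = [5.21, 5.28, 5.19, 5.25, 5.23, 5.30]  # hypothetical repeats

mean = statistics.mean(measurements)      # best value
sd = statistics.stdev(measurements)       # sample std dev (n - 1 divisor)
sdom = sd / len(measurements) ** 0.5      # std dev of the mean
# ~68% confidence interval on the mean is mean +/- sdom
result = f"{mean:.3f} +/- {sdom:.3f}"
```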

11
**Statistical Formulas from chapter 4 of Taylor**

12
**Relationship of standard deviation to confidence intervals**

13
**Confidence intervals on non-linear regression coefficients**

Can be complex; use software, but understand the theory of how they are calculated for the linear case

14
**Error bars that represent uncertainty in the dependent variable**

15
**How measurements at a given x,y would be distributed for multiple measurements**

16
**Determining Slope and Intercept In Linear Regression**

17
**Confidence intervals (SD) on slope B and Intercept A**

18
**Regression Output in Excel**

Simple ANOVA (we will be looking at more complex cases in DOE) Slope and intercept Confidence limits (+/-) on slope & intercept
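The slope, intercept, and their standard deviations that Excel reports can be computed directly from the standard least-squares formulas for y = A + Bx. A minimal sketch (function name and test data are illustrative):

```python
import math

def linfit(x, y):
    """Least-squares fit y = A + B*x, returning A, B and their
    standard deviations sigma_A, sigma_B."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    delta = n * sxx - sx * sx
    B = (n * sxy - sx * sy) / delta          # slope
    A = (sxx * sy - sx * sxy) / delta        # intercept
    # Scatter of the points about the fitted line (n - 2 dof)
    sigma_y = math.sqrt(sum((yi - A - B * xi) ** 2
                            for xi, yi in zip(x, y)) / (n - 2))
    sigma_A = sigma_y * math.sqrt(sxx / delta)
    sigma_B = sigma_y * math.sqrt(n / delta)
    return A, B, sigma_A, sigma_B

A, B, sA, sB = linfit([0, 1, 2, 3], [1, 3, 5, 7])  # a perfect line
```

Multiplying sigma_A and sigma_B by the appropriate t-value gives the +/- confidence limits in the Excel output.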

19
**Confidence Intervals in TableCurve**

20
**Confidence Intervals in TableCurve**

21
**Regression in Polymath**

22
**Statistical Process Control**

Very widely used Used for quality control and, in conjunction with DOE, for process improvement Control charts provide statistical evidence that a process is behaving normally or that something is wrong Serve as data output (dependent variable) from the process in designed statistical experiments

23
**Variation from expected behavior in control charts- similar to regression and point statistics**

The center line is the mean of a well-behaved process output (i.e., product) Upper and lower control limits represent confidence limits on the mean of "well behaved" process output Expect random deviations from the mean, just like in regression
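The center line and control limits described above can be sketched as follows. The slides do not fix the width of the limits, so the customary 3-sigma Shewhart convention is assumed here, and the baseline data are made up for illustration:

```python
import statistics

def control_limits(samples, k=3):
    """Center line (mean of well-behaved output) and +/- k-sigma
    control limits; k = 3 is the common Shewhart convention."""
    center = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return center - k * sigma, center, center + k * sigma

baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]  # hypothetical
lcl, cl, ucl = control_limits(baseline)
# Points outside the limits are evidence the process is misbehaving
out_of_control = [x for x in baseline if not lcl <= x <= ucl]
```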

24
**Part 2: Theory and Experimentation in Engineering**

25
**Theory and Experimentation in Engineering**

Two fundamental approaches to problem solving in the discovery of knowledge: theoretical (physical/mathematical modeling) and experimental measurement (most often a combination is used). Each application is different, but in general: math models are ALWAYS only an approximation of the "real world" (good? fair? poor?). Engineering, as opposed to pure science, faces more money and time constraints. Theoretical results apply to a whole class of problems; experimental results are more specific to the application at hand. (Note the example of correlations based on theoretical development, i.e., data correlated using dimensionless groups, etc.)

26
**Example of combination of theory and experimentation to get semi-empirical correlation**

Here, the researchers have correlated the data of several experimenters: 1) They used similarity principles / dimensional analysis (the Buckingham Pi method) to reduce the number of variables: theory and reasoning determine the variables the phenomenon or process depends on, and dimensionless "Pi" variables are constructed from them. 2) This process is somewhat complex: heat transfer in a fluidized bed (recall the fluids analogue: model the bed as many straight tubes and find a type of friction factor from the model = Ergun equation). The theoretical model then needed to be curve-fit to data to get the coefficients: regression analysis of the log-transformed data finds the exponents on the dimensionless groups.
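Step 2 above, fitting exponents on dimensionless groups by regressing log-transformed data, can be sketched for a single group. The power-law form Nu = C * Re^a and all data values here are hypothetical, chosen only to illustrate the log-log technique:

```python
import math

# Hypothetical correlation data: fit Nu = C * Re^a
Re = [100, 300, 1000, 3000, 10000]
Nu = [2.1, 4.0, 8.3, 15.8, 32.0]

# Log-transform: ln(Nu) = ln(C) + a * ln(Re) is linear in ln(Re)
x = [math.log(r) for r in Re]
y = [math.log(n) for n in Nu]
n = len(x)
sx, sy = sum(x), sum(y)
sxx = sum(xi * xi for xi in x)
sxy = sum(xi * yi for xi, yi in zip(x, y))
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # exponent on Re
C = math.exp((sy - a * sx) / n)                # pre-factor
```

With more than one dimensionless group, the same idea becomes a multiple linear regression on the logs of all the groups.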

27
**Features of alternative methods**

Theoretical models: simplifying assumptions needed; general results; fewer facilities usually needed; can start the study immediately. Experimental approach: study the "real world" with no simplifying assumptions needed; results specific to the apparatus studied; high-accuracy measurements need complex instruments; extensive lab facilities may be needed; time delays from building and debugging the apparatus.

28
**Functional Types of Engineering Experiments**

Determine material properties Determine component or system performance indices Evaluate/improve theoretical models Product/process improvement by testing Exploratory experimentation Acceptance testing Teaching/learning through experimentation Notes: material properties, e.g., mixing (a ChemE example) or polymer properties; performance indices are product specifications, e.g., for a stereo: signal/noise ratio, frequency response, total harmonic distortion, etc. Evaluating theoretical models is VERY IMPORTANT and most applicable to what research faculty and grad students do. Acceptance testing: for large or expensive pieces of equipment, test before you pay (do your tests meet the performance indices in #2?). Teaching/learning is what we do in this class.

29
**Some important classes of Experiments**

Estimation of a parameter's mean value Estimation of a parameter's variability Comparison of mean values Comparison of variability Modeling the dependence of a dependent variable on several quantitative and/or qualitative variables
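"Comparison of mean values" is commonly done with a two-sample t statistic; a minimal pooled-variance sketch (the data, representing old and new process runs, are hypothetical):

```python
import math
import statistics

def t_statistic(a, b):
    """Two-sample t statistic with pooled variance, for testing
    whether two mean values differ."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (statistics.mean(a) - statistics.mean(b)) / \
        math.sqrt(sp2 * (1 / na + 1 / nb))

old = [5.1, 5.3, 5.0, 5.2, 5.4]   # hypothetical baseline runs
new = [5.6, 5.8, 5.5, 5.7, 5.9]   # hypothetical runs after a change
t = t_statistic(new, old)
# Compare |t| against a t-table value for na + nb - 2 degrees of freedom
```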

30
**Practical Experimental Planning**

Experimental design: Consider goals Consider what data can be collected: difficulty of obtaining data, what data is most important, what measurements can be ignored, and the type of data (categorical? quantitative?) Test to make sure that measurements/apparatus are reliable Collect data carefully and document fully in ink using bound notebooks; make copies and keep them separately PLANNING SAVES YOU TIME!!!

31
**Preview of Uses for DOE**

Lab experiments for research Industrial process experiments The first is similar to what we have been doing; the second is likely more relevant to what you may do in the future.

32
**Four engineering problem classes to which DOE is applied in manufacturing**

1. Comparison 2. Screening/characterization 3. Modeling 4. Optimization We will talk briefly about each and try to present examples: 1) Comparison: are two means the same for two different processes? 2) Screening: which factors are important? Identify the significant few from the many insignificant (complex manufacturing, etc.). 3) Modeling: predict the output (response = dependent variable) given values of the significant few inputs. 4) Optimization: make the process give the best product possible, or the least expensive process, or some combination of the two, etc.

33
**Comparison**

Compares to see if a change in a single "factor" (variable) has resulted in a process change (ideally an improvement) Ties in with what we discussed about SPC: often SPC would be used as output data from a process to tell us whether the product or intermediate is better, etc.

34
**Screening/Characterization**

Used when you want to see the effect of a whole range of factors, so as to know which one(s) are most important Two-level factorial experiments are usually used Recall when we talked about SPC we said that it was important to "understand your process"
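A two-level factorial design simply runs every combination of each factor at a low (-1) and high (+1) coded level; a minimal sketch (the factor names are hypothetical):

```python
from itertools import product

# 2^k full factorial design in coded levels: -1 = low, +1 = high
factors = ["temperature", "pressure", "catalyst"]  # hypothetical factors
runs = list(product([-1, +1], repeat=len(factors)))

for run in runs:
    settings = dict(zip(factors, run))
    # ... perform the experiment at these settings, record the response
```

With k factors this gives 2**k runs, which is why screening designs focus on identifying the significant few factors before modeling.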

35
**Modeling**

Used when you want to construct a mathematical model that will predict the effect on a process of manipulating one or multiple variables An example would be our kLa experiment These are continuous variables, whereas in many process applications we are often talking about discrete or even binary factors

36
**Optimization**

Used when you want to determine the optimal settings for all factors to give an optimal process response For example, in an industrial process the optimal response might be the best product spec, the least cost to manufacture, or some optimal combination of those two responses, etc.
