Yaomin Jin, 01-03-2002. Design of Experiments: Morris Method.



Outline of the presentation
- Introduction to the screening technique
- Morris method
- Examples
- Conclusions

Screening technique
Large-scale models require considerable computer time for each run and depend on a large number of input variables.

Input factors x1, x2, ..., xk  ->  Model  ->  Output y = f(x1, x2, ..., xk)

Which factors are important?

Morris method (1991)
OAT (one factor at a time): the baseline changes at each step, so the design wanders through the input-factor space.
Estimate the main effect of a factor by computing r local measures at points x1, x2, ..., xr in the input space and then taking their average.

Elementary effects
Reduce the dependence on the specific point that a purely local experiment has.
Determine which factors have:
- negligible effects
- linear and additive effects
- non-linear or interaction effects

Elementary effects
The k-dimensional factor vector x for the simulation model has components xi that take p values in the set {0, 1/(p-1), 2/(p-1), ..., 1}. The region of experimentation Ω is therefore a k-dimensional p-level grid. In practical applications, the values sampled in Ω are subsequently rescaled to generate the actual values of the simulation factors. The perturbation step Δ is a predetermined multiple of 1/(p-1).

Elementary effect of the i-th factor at a given point x:

d_i(x) = [ y(x1, ..., x_{i-1}, x_i + Δ, x_{i+1}, ..., xk) - y(x) ] / Δ

where x is any value in Ω selected such that the perturbed point x + Δe_i is still in Ω. A finite distribution F_i of elementary effects for the i-th input factor is obtained by sampling x from Ω. The number of elements of each F_i is p^(k-1) [ p - Δ(p-1) ].
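The definition above can be sketched directly in code. A minimal illustration; the model f below is a hypothetical toy function, not one from the slides:

```python
import numpy as np

def elementary_effect(f, x, i, delta):
    """Elementary effect of factor i at point x:
    d_i(x) = (f(x + delta*e_i) - f(x)) / delta.
    Assumes the perturbed point x + delta*e_i stays inside the grid."""
    x = np.asarray(x, dtype=float)
    x_pert = x.copy()
    x_pert[i] += delta
    return (f(x_pert) - f(x)) / delta

# Hypothetical model for illustration: y = x0 + 2*x1 + x2**2
f = lambda x: x[0] + 2 * x[1] + x[2] ** 2

# Factor 1 enters linearly with coefficient 2, so its elementary
# effect is 2 at every grid point.
d1 = elementary_effect(f, [0.0, 1/3, 1/3], 1, 2/3)
```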

Economy of the Morris method
In its simplest form, the total computational effort required for a random sample of r values from each distribution F_i is n = 2rk runs, since each elementary effect requires two evaluations of y. The economy of this simplest design is therefore rk/(2rk) = 1/2 elementary effects per run.

Economical design
Based on the construction of a matrix B* whose rows represent input vectors x, for which the corresponding experiment provides k elementary effects (one for each input factor) from only k+1 runs. The economy of the design is k/(k+1).
Assume that p is even and Δ = p/(2(p-1)); then each of the elementary effects for the i-th input factor has equal probability of being selected.
The key idea is: a base value x* is randomly chosen, each component xi being sampled from the set {0, 1/(p-1), ..., 1-Δ}. One or more of the k components of x* are increased by Δ to give a vector x(1), chosen such that x(1) is still in Ω.

Economical design (continued)
The estimated elementary effect of the i-th factor, if the i-th component of x(1) is changed by Δ to give x(2), is

d_i = [ y(x(2)) - y(x(1)) ] / Δ   if the i-th component was increased by Δ;
d_i = [ y(x(1)) - y(x(2)) ] / Δ   if it was decreased by Δ.

Then select a third vector x(3) that differs from x(2) in only one component j.

Repeat this step to obtain the k+1 input vectors x(1), x(2), ..., x(k+1), in which each component i of x* is selected at least once to be changed by Δ. This yields an estimate of one elementary effect for each factor.

Economical design (continued)
The rows of the orientation matrix B* are the vectors described above; B* provides a single elementary effect per factor. To build B*, attention is restricted as follows:
a. p is even;
b. a sampling matrix B, of dimension (k+1) x k, is first selected with elements 0 or 1, such that for every column i there are two rows of B that differ only in their i-th entry.

Economical design (continued)
In particular, B may be chosen to be a strictly lower triangular matrix of 1s. Consider B' given by

B' = J_{k+1,1} x* + Δ B

Economical design (continued)
Here J_{k+1,1} is a (k+1) x 1 matrix of 1s, and x* is the randomly chosen base value of x, each component of which is randomly assigned a value from {0, 1/(p-1), ..., 1-Δ} with equal probability.
B' could be used as a design matrix (i.e., each row gives a value for x), since it would provide k elementary effects, one for each input factor, at a computational cost of k+1 runs. The problem, however, is that the k elementary effects B' produces would not be randomly selected.

A randomised version of the design matrix is given by

B* = ( J_{k+1,1} x* + (Δ/2) [ (2B - J_{k+1,k}) D* + J_{k+1,k} ] ) P*

where D* is a k x k diagonal matrix in which each diagonal element is either +1 or -1, and P* is a k x k random permutation matrix, in which each column contains one element equal to 1 and all the others equal to 0, and no two columns have 1s in the same position. B* provides one randomly selected elementary effect per factor.

Example
Suppose that p = 4, k = 4 and Δ = 2/3; that is, four factors that may take values in the set {0, 1/3, 2/3, 1}. Then the 5 x 4 matrix B is given by

B = [ 0 0 0 0
      1 0 0 0
      1 1 0 0
      1 1 1 0
      1 1 1 1 ]
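The construction of B* for this example can be sketched in NumPy. A minimal sketch assuming the randomised formula B* = (J x* + (Δ/2)[(2B - J)D* + J])P*; the random draws here are illustrative, not the values from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

k, p = 4, 4
delta = p / (2 * (p - 1))                      # 2/3 for p = 4

# Strictly lower triangular sampling matrix B, shape (k+1, k)
B = np.tril(np.ones((k + 1, k)), -1)
J = np.ones((k + 1, k))

# Base point x*: each component drawn from {0, 1/(p-1), ..., 1 - delta}
levels = np.arange(p // 2) / (p - 1)           # {0, 1/3} here
x_star = rng.choice(levels, size=k)

D = np.diag(rng.choice([-1.0, 1.0], size=k))   # random +/-1 diagonal D*
P = np.eye(k)[rng.permutation(k)]              # random permutation matrix P*

# Orientation matrix: consecutive rows differ in exactly one
# component, by +delta or -delta.
B_star = (x_star + (delta / 2) * ((2 * B - J) @ D + J)) @ P
```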

Example (continued)
The randomly generated x*, D* and P* happen to be:

Elementary effects
To estimate the mean and variance of the distribution F_i (i = 1, ..., k), take a random sample of r elements; that is, sample r mutually independent orientation matrices. Since each orientation matrix provides one elementary effect for every factor, the r matrices together provide k samples of size r, one for each F_i (i = 1, ..., k). The classic estimators are then used for each factor's mean and standard deviation.
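Putting the pieces together, the r-sample estimation might look like the sketch below, assuming the randomised B* construction described above. The function name and the toy model are hypothetical, not from the slides:

```python
import numpy as np

def morris_screen(f, k, p=4, r=4, seed=0):
    """Sample r random orientation matrices and estimate, for each factor,
    the mean and standard deviation of its elementary-effect distribution F_i.
    A sketch assuming factors on the unit-hypercube p-level grid."""
    rng = np.random.default_rng(seed)
    delta = p / (2 * (p - 1))
    B = np.tril(np.ones((k + 1, k)), -1)
    J = np.ones((k + 1, k))
    levels = np.arange(p // 2) / (p - 1)       # usable base values {0, ..., 1 - delta}
    effects = np.empty((r, k))
    for t in range(r):
        x_star = rng.choice(levels, size=k)
        D = np.diag(rng.choice([-1.0, 1.0], size=k))
        P = np.eye(k)[rng.permutation(k)]
        Bs = (x_star + (delta / 2) * ((2 * B - J) @ D + J)) @ P
        y = np.array([f(row) for row in Bs])
        for j in range(k):                     # each step changes exactly one factor
            step = Bs[j + 1] - Bs[j]
            i = int(np.argmax(np.abs(step)))
            effects[t, i] = (y[j + 1] - y[j]) / step[i]
    return effects.mean(axis=0), effects.std(axis=0, ddof=1)
```

For a purely linear model the mean elementary effect of each factor equals its coefficient and the standard deviation is zero, which makes the sketch easy to sanity-check.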

Characterising the distribution F_i through its mean and standard deviation gives useful information about the factor's influence on the output: a high mean indicates a factor with an important overall influence on the output; a high standard deviation indicates either a factor interacting with other factors or a factor whose effect is non-linear.

Standard of importance
The lines constituting the wedge are described by μ = ± 2σ/√r, where σ/√r is the standard error of the mean elementary effect. If a parameter's coordinates lie below the wedge, i.e. |μ_i| > 2σ_i/√r, this is a strong indication that the mean elementary effect of the parameter is non-zero. A location of the parameter's coordinates above the wedge indicates that interaction effects with other parameters, or non-linear effects, are dominant.
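The wedge test reduces to a one-line comparison per factor. A minimal sketch; the function name and the numbers below are illustrative:

```python
import numpy as np

def important_factors(mu, sigma, r):
    """Wedge criterion mu = +/- 2*sigma/sqrt(r): a factor whose
    |mean elementary effect| exceeds twice the standard error of the
    mean is flagged as having a likely non-zero mean effect."""
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    return np.abs(mu) > 2.0 * sigma / np.sqrt(r)

# Illustrative values: factor 0 lies below the wedge, factor 1 does not.
flags = important_factors([1.2, 0.05], [0.3, 0.4], r=4)
```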

Example 1

Result of example 2 (p = 4, Δ = 2/3 and r = 4)

Example 2
Morris's test model, a polynomial in transformed inputs w_i:

y = β_0 + Σ_i β_i w_i + Σ_{i<j} β_{ij} w_i w_j + Σ_{i<j<l} β_{ijl} w_i w_j w_l + Σ_{i<j<l<s} β_{ijls} w_i w_j w_l w_s

where w_i = 2(x_i - 0.5), except for i = 3, for which w_i = 2(1.1 x_i / (x_i + 0.1) - 0.5). Coefficients of relatively large value were assigned to the terms involving the first few factors; the remaining coefficients are small or zero.
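The input transformation defined above can be written directly. A minimal sketch of the w transform only; the model's coefficient values are not reproduced here:

```python
import numpy as np

def w(x):
    """w_i = 2*(x_i - 0.5) for every factor, except the third factor
    (i = 3 in the slides' 1-based indexing), where
    w_i = 2*(1.1*x_i/(x_i + 0.1) - 0.5)."""
    x = np.asarray(x, dtype=float)
    out = 2.0 * (x - 0.5)
    out[2] = 2.0 * (1.1 * x[2] / (x[2] + 0.1) - 0.5)  # i = 3, 1-based
    return out

# At the grid midpoint all w_i vanish except the non-linear third factor.
vals = w([0.5, 0.5, 0.5, 0.5])
```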

Result of example 2 (from the graph)
Inputs 1-5 are clearly separated from the cluster of remaining inputs, which have means and standard deviations close to 0. In particular, inputs 4 and 5 have mean elementary effects that are substantially different from 0 while having small standard deviations. Considering means and standard deviations together, we conclude that the first 5 inputs are important, and that of these the first three appear to have effects involving either curvature or interactions. This coincides with the model.

Conclusions
- Economical for models with a large number of parameters.
- Does not depend on any assumptions about the relationship between parameters and outputs.
- Results are easily interpreted in a graph.
- Drawback: it does not separate out the dependencies between parameters (such as interactions).