# A Sequential Methodology for Integrated Physical and Simulation Experiments


A Sequential Methodology for Integrated Physical and Simulation Experiments. Daniele Romano, Dept. of Mechanical Engineering, University of Cagliari, Piazza d’Armi, Cagliari, Italy; e-mail: romano@dimeca.unica.it. Joint work with Alessandra Giovagnoli. DEMA 2008 Workshop, Isaac Newton Institute for Mathematical Sciences, Cambridge, 11-15 August 2008.

## The problem
Design or improve a physical system by the combined use of physical and simulation experiments.

Schematisation: a two-treatment sequential experiment with
- T0: physical experiment
- T1: simulation experiment
- T1 → T0 → T1 → T1 → … → T0 → Stop

The stopping rule is an essential part of the approach. An additional task is the design of the experiment at each stage (as with the choice of doses in clinical trials).

Assumptions:
- physical observations are more reliable (closer to reality)
- simulation runs cost less

## Questions
1. Is this problem relevant to applications?
2. Has it already been investigated? Partially, in the "calibration of computer models" literature, but there the objective is different and sometimes the field data are not designed (Kennedy and O’Hagan, 2001; Bayarri et al., 2007). Calibration could be part of the method.

## Analogies with other statistical problems
1. George Box’s sequential experimentation (Box and Wilson, 1951). However, there are no simulation experiments in that methodology, and experiments are decided mainly on the basis of expert judgment.
2. Sample surveys by questionnaires. Information can be obtained directly or by proxy, and a main question is how many resources to allocate to direct observations and how many to proxy ones. We are not aware, however, of a sequential approach.
3. Computer models with different levels of accuracy (Qian et al., 2004).
4. Sequential experiments in clinical trials.

## Two motivating applications
- Design of a robotic device (Manuello et al., 2003)
- Improvement of a manufacturing process (Masala et al., 2008)

In both cases the sequence of experiments was based on judgement.

## Climbing robot
21 factors investigated; simulation model developed in Working Model.

[Chart: 88% / 12% allotment of experimental effort across the stages: extensive exploration, feasibility check on the prototype, exploration of the feasible region, optimization, confirmation, computer model modification.] Note the efficient allotment of experimental effort.

## Benefits
- The robot can climb steadily, with a speed seven times higher than in the initial design configuration, and virtually on any post surface (robustness) → better design.
- We built just one physical prototype, instead of tens, to investigate 21 factors → cost saving.
- Computer exploration revealed that the robot can descend passively, using gravity → innovation.
- The comparison of physical vs numerical results gave the designer the idea of how to modify the code, improving the model → simulation model improved.

[Plot: operating modes of the robot (fall; no move; fall in control; climb and then fall; climb steadily), shown as elongation [mm] vs time [s].]

## Improvement of the flocking process
A car component is covered by flock fabric (flock yarns deposited on a thread). Two simulation models were developed: one for the electric field inside the chamber (FEMLAB) and one for the motion of the flock (MATLAB). 9 factors investigated.

[Diagram: the 13-stage experimental sequence, alternating expert reasoning, simulation experiments (electric field simulator, later combined with the flock motion simulator; 9 and 144 runs) and physical experiments (lab, pilot plant, production line; 11, 22, 35 and 22 runs). Totals: 153 simulation runs (63%), 90 physical runs (37%).]

## Benefits
- Operating conditions considered potentially unsafe were tried on the simulator, yielding golden information for improving the process; these conditions would never have been tried in the field → process efficiency increases.
- The increased process efficiency can be exploited to raise productivity (by up to 50%) → process improvement, or to produce yarns with new characteristics → product innovation.
- Results from physical and simulation experiments were used for tuning some unknown parameters of the simulator → computer model calibration.
- A mechanistic model of the whole process was developed by combining the simulation models with the results of a physical experiment (determining the rate of lifted flock) → new process design tool.

## Response models
Reality, physical trials and simulation are taken as response surfaces and are estimated by polynomial regression over the region of interest:
- reality: y(x), x ∈ D (a hyper-rectangle)
- physical trials: y_P(x) = y(x) + ε, with independent errors ε ~ N(0, σ²)
- simulation: y_S(x) = y(x) + b(x), where b(x) is the bias function, estimated from the fitted simulation and physical surfaces

Objective: locate a sufficiently high value of the true response over the domain D, using simulation as much as possible, provided that simulation is found reliable enough.
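As a concrete illustration, the two fitted surfaces and the estimated bias can be sketched in one dimension. This is a minimal sketch assuming NumPy; the test functions, designs, noise level and quadratic basis are all illustrative choices, not the ones used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def quad_features(x):
    # 1-D quadratic polynomial basis: [1, x, x^2]
    return np.stack([np.ones_like(x), x, x**2], axis=1)

def fit(x, y):
    # least-squares polynomial regression over the region of interest
    beta, *_ = np.linalg.lstsq(quad_features(x), y, rcond=None)
    return beta

reality = lambda x: np.sin(3 * x)            # illustrative "true" response y(x)
simulator = lambda x: reality(x) + 0.3 * x   # simulator with bias b(x) = 0.3 x

x_P = np.linspace(0.0, 1.0, 10)              # few, expensive physical runs
x_S = np.linspace(0.0, 1.0, 40)              # many cheap simulation runs
beta_P = fit(x_P, reality(x_P) + 0.05 * rng.standard_normal(10))
beta_S = fit(x_S, simulator(x_S))

# estimated bias: difference of the fitted simulation and physical surfaces
def b_hat(x):
    return quad_features(x) @ (beta_S - beta_P)
```

Because the (linear) bias lies inside the polynomial basis, the difference of the two fitted surfaces recovers it up to fitting and noise error.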

## Sequential procedure
At each step k of the procedure we must decide on:
- the type of the experiment: δ_k = 0 means a physical experiment (P_k), δ_k = 1 a numerical one (S_k)
- the region where the experiment is run, R_k
- the run size, n_k
- the design, ξ_k

We make simple choices on the type of the region and the design throughout:
- R_k is a hypercube of fixed size (centre C_k)
- ξ_k is a Latin Hypercube Design
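The design choice ξ_k can be sketched as a small Latin Hypercube sampler over the hypercube region R_k. A minimal sketch assuming NumPy; the function name and signature are illustrative.

```python
import numpy as np

def latin_hypercube(n, dims, centre, edge, seed=None):
    """n-run Latin Hypercube Design in the hypercube region R_k of the
    given centre and edge length: one stratum per run on each axis."""
    rng = np.random.default_rng(seed)
    # one uniform draw per stratum [i/n, (i+1)/n) on each axis
    u = (rng.random((n, dims)) + np.arange(n)[:, None]) / n
    for j in range(dims):
        u[:, j] = rng.permutation(u[:, j])   # decouple the axes
    lo = np.asarray(centre, dtype=float) - edge / 2.0
    return lo + u * edge                      # map onto R_k

X = latin_hypercube(8, 2, centre=[0.5, 0.5], edge=1.0, seed=0)
```

Each axis is split into n equal strata and every stratum receives exactly one run, which is what makes the design a Latin hypercube.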

## Rationale of the procedure
We want to use a physical experiment only in two particular situations:
1. A satisfactory response level has been found by simulation and it is worth checking it.
2. The measure of the unreliability of simulation needs updating, in order to check whether it is worth going on or stopping.

We want to stop the procedure in two particular situations:
1. A satisfactory response level has been found by simulation and it has been confirmed in the physical set-up → SUCCESS.
2. The simulator is found too unreliable after a check by a physical experiment.

In all other circumstances we use simulation experiments. A physical experiment is always run in the region of the preceding simulation experiment (δ_k = 0 ⇒ R_k = R_{k−1}).

[Diagram: allowed transitions. START with S_1 (followed by P_2); from S_k: to S_{k+1} or P_{k+1}; from P_k: to S_{k+1} or Stop.]

## Performance measures at stage k
- Satisfaction: Φ_SAT(k)
- Increment in satisfaction with respect to the last experiment of the same kind: ΔSAT(k) = Φ_SAT(k) − Φ_SAT(k − l*)
- Expected improvement in the next simulation experiment: Φ_IMPR(k), based on the gradient of the fitted response at the frontier of R_k
- Total cost: c(k)

## Unreliability of the simulation
The unreliability measure Φ_UNREL(k) is computed both after a physical experiment P_k and after a simulation experiment S_k, in terms of:
- the error variance estimated at step k
- m_k, the number of regions where both kinds of experiment were done up to step k
- d, the length of the hypercube edge

## How are transitions ruled?
Four conditions drive the transitions:
- r1: Φ_SAT(k) > s_C (satisfaction, after a physical experiment, is high)
- r2: Φ_UNREL(k) > u_C (simulation is too unreliable)
- r3: too many stages without any actual improvement
- r4: allowable cost exceeded

After a physical experiment P_k the procedure stops if any of (r1, r2, r3, r4) = 1; otherwise it continues with a simulation experiment S_{k+1}.
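The decision taken after a physical experiment P_k can be sketched as a simple predicate over the four conditions. All threshold values below (s_C, u_C, the stagnation limit and the budget) are placeholder assumptions, not values from the paper.

```python
def next_move(sat, unrel, stages_no_improve, cost,
              s_C=0.9, u_C=0.5, max_stale=4, budget=100.0):
    """Decide the move after P_k: Stop if any rule r1-r4 fires,
    otherwise continue with a simulation experiment ("S")."""
    r1 = sat > s_C                        # satisfaction is high (SUCCESS)
    r2 = unrel > u_C                      # simulation too unreliable
    r3 = stages_no_improve > max_stale    # no actual improvement for too long
    r4 = cost > budget                    # allowable cost exceeded
    return "Stop" if (r1 or r2 or r3 or r4) else "S"
```

Note that r1 and r2 stop the procedure for opposite reasons: r1 is the success exit, r2 the failure exit.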

## High-level flow diagram of the procedure
[Flow chart: k = 1, run S_1; then loop: check S → P; if no, k = k + 1 and run S_k; if yes, k = k + 1, run P_k and check the stopping rule: if it fires, STOP, otherwise continue.] Only the high-level decision is made explicit here.
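The flow diagram can be sketched as a loop. The two decision functions are illustrative stubs, and forcing a physical experiment to be followed by a simulation one (rather than another P) follows the rationale slide; both are assumptions of this sketch.

```python
def run_procedure(switch_to_physical, check_stop, max_k=50):
    """High-level loop: start with S_1; after each simulation experiment
    decide whether to switch to a physical one; after each physical
    experiment check the stopping rule."""
    k, kind, trace = 1, "S", []
    while k <= max_k:
        trace.append(f"{kind}{k}")
        if kind == "P" and check_stop(k):
            trace.append("Stop")
            break
        # after S_k: maybe switch to P; after P_k: back to simulation
        kind = "P" if kind == "S" and switch_to_physical(k) else "S"
        k += 1
    return trace

# stub decisions reproducing the sequence of demonstrative case 1:
trace = run_procedure(switch_to_physical=lambda k: k in (1, 4),
                      check_stop=lambda k: k == 5)
```

With these stubs the loop yields S_1 → P_2 → S_3 → S_4 → P_5 → Stop.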

## Block S_k (or P_k)
- Select R_k, n_k, ξ_k
- Run the design ξ_k
- Estimate ŷ(x)
- Compute the performance measures Φ_SAT(k), ΔSAT(k), Φ_UNREL(k), Φ_IMPR(k), c(k)

## Low-level decisions: select region, run size and design
Region:
- δ_k = 0 (physical): R_k = R_{k−1}
- δ_k = 1 (simulation): R_k ≠ R_{k−1}; if Φ_IMPR(k) > 0, compute C_k with R_k adjacent to R_{k−1}; otherwise draw C_k at random.

[Figure: an example sequence of regions in the (x_1, x_2) sample space, R_1 = R_2, then R_3, R_4, R_5, R_6, with centres C_1 = C_2, C_3, C_4, C_5, C_6; points A, B and D are marked.]

## Run size
- δ_k = 0 (P_k): the run size of each physical experiment is such that it costs as much as the simulation experiment preceding it: n_k = γ n_{k−1}, with γ = c_S / c_P, 0 < γ < 1.
- δ_k = 1 (S_k): the run size is proportional to the expected increase of the response (if any) at the centre C_k of the next region. The parameter h can be tuned by setting the willingness to pay for obtaining an improvement Δy. When region R_k is drawn at random, we put n_k = n_1.
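The cost-matching rule for physical run sizes can be written as a one-line sketch. Rounding down to an integer run size (and to at least one run) is an assumption; the slide only states n_k = γ n_{k−1}.

```python
import math

def physical_run_size(n_prev, c_S, c_P):
    """Cost-matching rule for P_k: the physical experiment should cost as
    much as the preceding simulation experiment, n_k = gamma * n_{k-1}
    with gamma = c_S / c_P (0 < gamma < 1, physical runs being dearer)."""
    gamma = c_S / c_P
    return max(1, math.floor(gamma * n_prev))
```

For example, 40 simulation runs at unit cost with physical runs eight times dearer give a 5-run physical experiment.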

[Figure: the same sequence of regions, now with the design points (+) of each experiment shown in the (x_1, x_2) sample space.]

## Demonstrative case
A computer code implementing the procedure has been developed in Matlab. [Plots: the test "simulation" and "reality" response surfaces.]

## Case 1
[Plot: reality, simulation-experiment predictions (*) and physical-experiment predictions (*) over regions R_1 = R_2, R_3, R_4 = R_5.]
Sequence of experiments: S_1 → P_2 → S_3 → S_4 → P_5 → Stop. Active stopping rule: Φ_UNREL(k) > u_C.

## Case 2
[Plot: reality, simulation-experiment predictions (*) and physical-experiment predictions (*) over regions R_1 = R_2, R_3 = R_4.]
Sequence of experiments: S_1 → P_2 → S_3 → P_4 → Stop. Active stopping rule: Φ_SAT(4) > s_C.

## Case 3 (simulation = reality)
[Plot: reality, simulation-experiment predictions (*) and physical-experiment predictions (*) over regions R_1 = R_2, R_3, R_4 = R_5.]
Sequence of experiments: S_1 → P_2 → S_3 → S_4 → P_5 → Stop. Active stopping rule: Φ_SAT(5) > s_C.

## Case 4 (simulation = reality)
[Plot: reality, simulation-experiment predictions (*) and physical-experiment predictions (*) over regions R_1 = R_2, R_3, R_4, R_5, R_6 = R_7.]
Sequence of experiments: S_1 → P_2 → S_3 → S_4 → S_5 → S_6 → P_7 → Stop. Active stopping rule: Φ_SAT(7) > s_C.

## Conclusions
- The scope of the approach is wide: in general it can deal with any situation where the response can be measured by two (or more) instruments realizing different quality-cost trade-offs.
- The method is aimed at performance optimisation (maximisation of a distance measure in the output space), but the basic sequential mechanism can be applied to different goals.
- Testing and validation in real applications is needed.

George Box, commenting on the sequential experimentation method: “The reader should notice the degree to which informed human judgement decides the final outcome” (Box, G.E.P., Hunter, W.G., Hunter, J.S. (1978): Statistics for Experimenters, p. 537).

Human judgement or automation? Shall we ask Newton?

## References
- Box, G.E.P., Wilson, K.B.: On the Experimental Attainment of Optimum Conditions. Journal of the Royal Statistical Society: Series B, 13, 1-45 (1951)
- Kennedy, M.C., O’Hagan, A.: Bayesian Calibration of Computer Models. Journal of the Royal Statistical Society: Series B, 63(3), 425-464 (2001)
- Manuello, A., Romano, D., Ruggiu, M.: Development of a Pneumatic Climbing Robot by Computer Experiments. 12th Int. Workshop on Robotics in Alpe-Adria-Danube Region, Cassino, Italy. Ceccarelli, M. (Ed.), available on CD-ROM (2003)
- Qian, Z., Seepersad, C.C., Joseph, V.R., Allen, J.K., Wu, C.F.J.: Building Surrogate Models Based on Detailed and Approximate Simulations. ASME 30th Conf. of Design Automation, Salt Lake City, USA. Chen, W. (Ed.), ASME Paper no. DETC2004/DAC-57486 (2004)
- Bayarri, M.J., Berger, J.O., Paulo, R., Sacks, J., Cafeo, J.A., Cavendish, J., Lin, C.-H., Tu, J.: A Framework for Validation of Computer Models. Technometrics, 49(2), 138-154 (2007)
- Masala, S., Pedone, P., Sandigliano, M., Romano, D.: Improvement of a Manufacturing Process by Integrated Physical and Simulation Experiments: A Case Study in the Textile Industry. Quality and Reliability Engineering International, to appear
