Chapter 10: Verification and Validation of Simulation Models. Banks, Carson, Nelson & Nicol, Discrete-Event System Simulation.

2 Purpose & Overview
The goal of the validation process is:
 To produce a model that represents true system behavior closely enough for decision-making purposes.
 To increase the model's credibility to an acceptable level.
Validation is an integral part of model development:
 Verification: building the model correctly (correctly implemented, with good input and structure).
 Validation: building the correct model (an accurate representation of the real system).
Most validation methods are informal subjective comparisons; a few are formal statistical procedures.

3 Model Building, Verification & Validation

4 Verification
Purpose: ensure that the conceptual model is reflected accurately in the computerized representation.
Many common-sense suggestions, for example:
 Have someone else check the model.
 Make a flow diagram that includes each logically possible action the system can take when an event occurs.
 Closely examine the model output for reasonableness under a variety of input parameter settings (often overlooked!).
 Print the input parameters at the end of the simulation and make sure they have not been changed inadvertently.
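The last suggestion can be automated with a simple guard; a minimal Python sketch, where `run_simulation` and its parameter names are hypothetical placeholders for a real model:

```python
import copy

def run_simulation(params):
    """Hypothetical model body; a real model would advance its own state
    and should never mutate the input-parameter dictionary."""
    return {"avg_delay": 2.5}  # placeholder output

# Snapshot the inputs before the run, then verify at the end that the
# model code did not change them inadvertently.
params = {"arrival_rate": 45.0, "mean_service": 1.1, "servers": 1}
snapshot = copy.deepcopy(params)
results = run_simulation(params)
assert params == snapshot, "input parameters were modified during the run!"
```

The same echo-and-compare idea works in any language: record the inputs before the run and print or assert them afterward.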

5 Examination of Model Output for Reasonableness [Verification]
Example: a model of a complex network of queues consisting of many service centers.
 Response time is the primary interest; however, it is important to collect and print many statistics in addition to response time.
Two statistics that give a quick indication of model reasonableness are current contents and total counts, for example:
 If the current contents grow more or less linearly as the simulation run time increases, it is likely that a queue is unstable.
 If the total count for some subsystem is zero, no items entered that subsystem, a highly suspect occurrence.
 If the total and current counts are both equal to one, an entity may have captured a resource but never freed it.
Compute certain long-run measures of performance, e.g., compute the long-run server utilization and compare it to the simulation results.
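The "linear growth of current contents" check can be mechanized by fitting a least-squares slope to the sampled queue contents; a Python sketch with illustrative sample data:

```python
def contents_slope(times, contents):
    """Least-squares slope of 'current contents' versus run time.
    A clearly positive slope over a long run suggests an unstable queue;
    a slope near zero suggests the queue is hovering around a level."""
    n = len(times)
    mt = sum(times) / n
    mc = sum(contents) / n
    num = sum((t - mt) * (c - mc) for t, c in zip(times, contents))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

# Illustrative samples: a stable queue hovers, an unstable one keeps growing.
stable = contents_slope([1, 2, 3, 4, 5], [3, 2, 4, 3, 3])      # slope ~0.1
unstable = contents_slope([1, 2, 3, 4, 5], [2, 5, 7, 10, 12])  # slope ~2.5
```

A companion check is to compare the simulated long-run server utilization against the analytic value (e.g., rho = lambda/mu for a single server) when one is available.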

6 Other Important Tools [Verification]
Documentation:
 A means of clarifying the logic of a model and verifying its completeness.
Use of a trace:
 A detailed printout of the state of the simulation model over time.

7 Calibration and Validation Validation: the overall process of comparing the model and its behavior to the real system. Calibration: the iterative process of comparing the model to the real system and making adjustments.

8 Calibration and Validation
No model is ever a perfect representation of the system.
 The modeler must weigh the possible, but not guaranteed, increase in model accuracy versus the cost of the increased validation effort.
Three-step approach:
 Build a model that has high face validity.
 Validate the model assumptions.
 Compare the model input-output transformations with the real system's data.

9 High Face Validity [Calibration & Validation]
Ensure a high degree of realism:
 Potential users should be involved in model construction, from its conceptualization to its implementation.
 Sensitivity analysis can also be used to check a model's face validity.
Example: in most queueing systems, if the arrival rate of customers were to increase, it would be expected that server utilization, queue length, and delays would tend to increase.
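A direction-of-change sensitivity check of this kind can be sketched as follows, using the standard M/M/1 steady-state formulas as a stand-in for repeated model runs (the arrival rates and service rate here are illustrative, not from the example):

```python
def mm1_measures(lam, mu):
    """M/M/1 steady-state utilization and mean delay in queue (lam < mu)."""
    rho = lam / mu              # server utilization
    wq = rho / (mu - lam)       # mean delay in queue
    return rho, wq

# Face-validity check: as the arrival rate rises toward the service rate,
# utilization and delay should both rise monotonically.
results = [mm1_measures(lam, mu=60.0) for lam in (30.0, 40.0, 50.0)]
utils = [r[0] for r in results]
delays = [r[1] for r in results]
assert utils == sorted(utils) and delays == sorted(delays)
```

With a real model, each `mm1_measures` call would be replaced by a simulation run at the corresponding input setting, and the same monotonicity check applied to the outputs.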

10 Validate Model Assumptions [Calibration & Validation]
General classes of model assumptions:
 Structural assumptions: how the system operates.
 Data assumptions: reliability of the data and of its statistical analysis.
Bank example: customer queueing and service facility in a bank.
 Structural assumptions, e.g., customers waiting in one line versus many lines, served FCFS versus by priority.
 Data assumptions, e.g., interarrival times of customers, service times for commercial accounts.
 Verify data reliability with the bank managers.
 Test correlation and goodness of fit for the data (see Chapter 9 for more details).

12 Validate the Input-Output Transformation [Calibration & Validation]
Goal: validate the model's ability to predict future behavior.
 This is the only objective test of the model.
 The structure of the model should be accurate enough to make good predictions for the range of input data sets of interest.
One possible approach: use historical data that have been reserved for validation purposes only.
Criteria: use the main responses of interest.

13 Bank Example [Validate I-O Transformation]
Example: one drive-in window serviced by one teller; only one or two transactions per customer are allowed.
 Data collection: 90 customers during 11 am to 1 pm.
 Observed service times {S_i, i = 1, 2, …, 90}.
 Observed interarrival times {A_i, i = 1, 2, …, 90}.
 Data analysis led to the conclusion that:
 Interarrival times are exponentially distributed with rate λ = 45 per hour.
 Service times are approximately N(1.1, 0.2²) minutes.

14 The Black Box [Bank Example: Validate I-O Transformation]
A model was developed in close consultation with bank management and employees, and the model assumptions were validated. The resulting model is now viewed as a "black box" f(X, D) = Y:
Uncontrolled input variables, X:
 Poisson arrivals at rate 45/hr: X_11, X_12, …
 Service times, N(D_2, 0.2²): X_21, X_22, …
Controlled decision variables, D:
 D_1 = 1 (one teller)
 D_2 = 1.1 min (mean service time)
 D_3 = 1 (one line)
Model output variables, Y:
 Primary interest: Y_1 = teller's utilization; Y_2 = average delay; Y_3 = maximum line length.
 Secondary interest: Y_4 = observed arrival rate; Y_5 = average service time; Y_6 = sample std. dev. of service times; Y_7 = average length of time.
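A minimal Python sketch of such a black box, assuming a single-server FIFO queue and using Lindley's recurrence for customer delay (illustrative code under those assumptions, not the authors' implementation):

```python
import random

def bank_model(mean_service, n_customers, seed=12345):
    """Sketch of the bank 'black box' f(X, D) = Y for D1 = 1 teller and
    D3 = 1 line. Poisson arrivals at 45/hr give exponential interarrival
    times with mean 60/45 minutes; service times are N(mean_service, 0.2^2)
    minutes, truncated below at a small positive value."""
    rng = random.Random(seed)
    delay, prev_service, total_delay = 0.0, 0.0, 0.0
    for _ in range(n_customers):
        a = rng.expovariate(45.0 / 60.0)             # interarrival, minutes
        s = max(0.01, rng.gauss(mean_service, 0.2))  # service time, minutes
        delay = max(0.0, delay + prev_service - a)   # Lindley recurrence
        total_delay += delay
        prev_service = s
    return total_delay / n_customers                 # Y2: average delay

y2 = bank_model(1.1, 5000)  # one replication's estimate of Y2, in minutes
```

Changing the decision variables (e.g., `mean_service` for D_2) and re-running gives the input-output transformations that validation compares against the real system.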

15 Comparison with Real System Data [Bank Example: Validate I-O Transformation]
Real system data are necessary for validation.
 System responses should have been collected during the same time period (from 11 am to 1 pm on the same Friday).
Compare the average delay from the model, Y_2, with the actual delay, Z_2:
 Average delay observed: Z_2 = 4.3 minutes; consider this to be the true mean value, μ_0 = 4.3.
 When the model is run with the generated random variates X_1n and X_2n, Y_2 should be close to Z_2.
 Six statistically independent replications of the model, each of 2-hour duration, are run.

16 Hypothesis Testing [Bank Example: Validate I-O Transformation]
Compare the average delay from the model, Y_2, with the actual delay, Z_2 (continued):
 Null hypothesis test: evaluate whether the simulation and the real system are the same (with respect to the output measures):
 H_0: E(Y_2) = 4.3 minutes versus H_1: E(Y_2) ≠ 4.3 minutes
 If H_0 is not rejected, there is no reason to consider the model invalid.
 If H_0 is rejected, the current version of the model is rejected, and the modeler needs to improve it.

17 Hypothesis Testing [Bank Example: Validate I-O Transformation]
Conduct the t test:
 Choose a level of significance (α = 0.05) and a sample size (n = 6); see the result in Table
 Compute the sample mean and sample standard deviation over the n replications:
 Ȳ_2 = (1/n) Σ_i Y_2i,  S = √( Σ_i (Y_2i − Ȳ_2)² / (n − 1) )
 Compute the test statistic t_0 = (Ȳ_2 − μ_0) / (S/√n) and reject H_0 if |t_0| > t_(α/2, n−1).
 Hence, reject H_0. Conclude that the current model is inadequate.
 Check the assumptions justifying a t test: that the observations Y_2i are normally and independently distributed.
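The computation can be sketched in a few lines of Python, using summary statistics (mean 2.51 min, S = 0.82 min, n = 6) consistent with the 95% confidence interval [1.65, 3.37] quoted later in this example:

```python
import math

# One-sample, two-sided t test of H0: E(Y2) = 4.3 minutes.
mu0 = 4.3                  # observed real-system mean delay (minutes)
ybar, s, n = 2.51, 0.82, 6 # sample mean, sample std. dev., replications
t0 = (ybar - mu0) / (s / math.sqrt(n))
t_crit = 2.571             # t_{0.025, 5} for alpha = 0.05, n - 1 = 5 d.f.

reject = abs(t0) > t_crit  # True -> model judged inadequate
```

Here |t_0| is about 5.35, well above 2.571, so H_0 is rejected, matching the slide's conclusion.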

18 Hypothesis Testing [Bank Example: Validate I-O Transformation]
Similarly, compare the model output with the observed output for the other measures: Y_4 versus Z_4, Y_5 versus Z_5, and Y_6 versus Z_6.

19 Type II Error [Validate I-O Transformation]
For validation, the power of the test is:
 Probability[detecting an invalid model] = 1 − β
 β = P(Type II error) = P(failing to reject H_0 | H_1 is true)
Since failure to reject H_0 is considered a strong conclusion, the modeler would want β to be small.
The value of β depends on:
 The sample size, n.
 The true difference, δ, between E(Y) and μ_0: δ = |E(Y) − μ_0| / σ
In general, the best approach to controlling the β error is:
 Specify the critical difference, δ.
 Choose a sample size, n, by making use of the operating characteristic (OC) curves.
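The OC-curve lookup can be approximated in code. A sketch using the standard normal approximation for a two-sided test, β ≈ Φ(z_(α/2) − δ√n) − Φ(−z_(α/2) − δ√n) (an approximation to, not a replacement for, the textbook's OC curves, which are based on the noncentral t distribution):

```python
from statistics import NormalDist

def beta_approx(delta, n, alpha=0.05):
    """Normal-approximation to the Type II error probability for a
    two-sided test with standardized difference delta = |E(Y) - mu0| / sigma."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2)
    shift = delta * n ** 0.5
    return nd.cdf(z - shift) - nd.cdf(-z - shift)

# For a fixed critical difference, beta shrinks (power grows) with n.
betas = [beta_approx(0.5, n) for n in (6, 12, 24)]
```

Scanning `n` until `beta_approx(delta, n)` drops below the target β gives the required sample size for a specified critical difference.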

20 Type I and Type II Errors [Validate I-O Transformation]
Type I error (α):
 The error of rejecting a valid model.
 Controlled by specifying a small level of significance α.
Type II error (β):
 The error of accepting a model as valid when it is invalid.
 Controlled by specifying the critical difference and finding the corresponding sample size n.
For a fixed sample size n, increasing α will decrease β.

21 Confidence Interval Testing [Validate I-O Transformation]
Confidence-interval testing: evaluate whether the simulation and the real system are close enough.
If Y is the simulation output and μ = E(Y), the confidence interval (C.I.) for μ is Ȳ ± t_(α/2, n−1) S/√n.
Validating the model, with ε the largest acceptable difference between μ and μ_0:
 Suppose the C.I. does not contain μ_0:
 If the best-case error is > ε, the model needs to be refined.
 If the worst-case error is ≤ ε, accept the model.
 If only the best-case error is ≤ ε, additional replications are necessary.
 Suppose the C.I. contains μ_0:
 If either the best-case or the worst-case error is > ε, additional replications are necessary.
 If the worst-case error is ≤ ε, accept the model.

22 Confidence Interval Testing [Validate I-O Transformation]
Bank example: μ_0 = 4.3 minutes, and "close enough" is ε = 1 minute of expected customer delay.
 A 95% confidence interval based on the 6 replications is [1.65, 3.37], from Ȳ_2 ± t_(0.025, 5) S/√n = 2.51 ± 2.571(0.82/√6).
 μ_0 = 4.3 falls outside the confidence interval; the best-case error is |3.37 − 4.3| = 0.93 ≤ 1, so additional replications are needed to reach a decision.
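The decision rules from the previous slide can be written as a small function and applied to the example's numbers (illustrative code; `ci_decision` is not from the text):

```python
def ci_decision(lo, hi, mu0, eps):
    """Apply the confidence-interval validation rules to C.I. [lo, hi],
    true mean mu0, and largest acceptable difference eps."""
    if lo <= mu0 <= hi:
        # C.I. contains mu0: best-case error is 0; worst case is the far end.
        worst = max(abs(lo - mu0), abs(hi - mu0))
        return "accept model" if worst <= eps else "more replications"
    best = min(abs(lo - mu0), abs(hi - mu0))
    worst = max(abs(lo - mu0), abs(hi - mu0))
    if best > eps:
        return "refine model"
    if worst <= eps:
        return "accept model"
    return "more replications"

decision = ci_decision(1.65, 3.37, 4.3, 1.0)  # the bank example's numbers
```

For the bank example the best-case error (0.93) is within ε = 1 but the worst-case error (2.65) is not, so the function returns "more replications", matching the slide's conclusion.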

23 Using Historical Output Data [Validate I-O Transformation]
An alternative to generating input data:
 Use the actual historical record.
 Drive the simulation model with the historical record and then compare the model output to the system data.
 In the bank example, use the recorded interarrival and service times for the customers, {A_n, S_n, n = 1, 2, …}.
The procedure and validation process are similar to the approach used with generated input data.

24 Using a Turing Test [Validate I-O Transformation]
Use in addition to statistical tests, or when no statistical test is readily applicable. Utilize people's knowledge about the system. For example:
 Present 10 system performance reports to a manager of the system. Five of them are from the real system and the rest are "fake" reports based on simulation output data.
 If the manager identifies a substantial number of the fake reports, interview him or her to get information for model improvement.
 If the manager cannot consistently distinguish between fake and real reports, conclude that the test gives no evidence of model inadequacy.

25 Summary
Model validation is essential:
 Model verification
 Calibration and validation
 Conceptual validation
It is best to compare system data to model data, and to make the comparison using a wide variety of techniques. Some techniques covered here, in increasing order of cost-to-value ratio:
 Ensure high face validity by consulting knowledgeable persons.
 Conduct simple statistical tests on assumed distributional forms.
 Conduct a Turing test.
 Compare model output to system output by statistical tests.