Modeling and Simulation: Exploring Dynamic System Behaviour


Modeling and Simulation: Exploring Dynamic System Behaviour Chapter 6 Experimentation and Output Analysis

Generation of Output Data
PSOV: time variables and sample variables.
DSOV: a scalar derived from the PSOVs.
Output variables are random variables, so a single simulation run provides only one sample value. The objective is to estimate a mean (the sample mean) for the random variable; in practice we want a confidence interval for the mean (how close it is to the real mean).
SSOV (Simple Scalar Output Variable): output that can be defined directly, e.g. the number of balking customers or the number of passengers that left the bus stop.

Output Data from Multiple Runs

Bounded Horizon Study
Characteristics: the observation interval is well defined (implicitly or explicitly); transients in the stochastic processes are common; the simulation is run n times to produce n independent values for the DSOV.
Point Estimate: the mean of the DSOV output values.
Interval Estimate: a confidence interval for the point estimate, or a margin of error for the point estimate.

Point Estimate
Select a value n that defines the number of simulation runs; this produces n values for the DSOV of interest.
Collect the n observed values y1, y2, …, yn.
Compute the point estimate (an estimate of the mean E[Y] of the output variable) as ȳ(n) = (y1 + y2 + … + yn) / n.
The point estimate is itself a random variable. The Central Limit Theorem states that the point estimate is approximately normally distributed provided that the collected values are independent and identically distributed, no matter what the distribution of Y is. This holds provided that the simulation runs are independent (different seeds for the random number generators set at the start of each simulation run).
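As a minimal sketch (not from the book), the point estimate is simply the sample mean of the DSOV over n independent replications. The run_simulation function below is a hypothetical stand-in for one simulation run driven by a given seed.

```python
import numpy as np

def point_estimate(run_simulation, n=30):
    """Collect the DSOV from n independent replications and return ybar(n)."""
    y = np.array([run_simulation(seed) for seed in range(n)])  # y1, y2, ..., yn
    return y.mean(), y                                         # point estimate and raw values
```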

Interval Estimate
Question: how good is the point estimate? We can compute a confidence interval
  ȳ(n) ± ζ, with half-length ζ = t(n-1, a) · s(n) / √n
where t(n-1, a) is obtained from the t-distribution (see Annex 1), s(n) is the sample standard deviation of the observed values, n is the number of simulation runs, and a is related to the level of confidence C by a = (1-C)/2, with C between 0 and 1.
The level of confidence, 100C%, is the probability that the real mean µ falls within the interval.
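A sketch of the interval estimate, using scipy's t-distribution in place of the Annex 1 table. It assumes the observations are IID so that the Central Limit Theorem applies.

```python
import numpy as np
from scipy import stats

def confidence_interval(y, C=0.95):
    """Return the point estimate ybar(n) and the half-length zeta of the 100C% CI."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    ybar = y.mean()
    s = y.std(ddof=1)                               # sample standard deviation s(n)
    t_crit = stats.t.ppf(1 - (1 - C) / 2.0, n - 1)  # t-distribution critical value
    zeta = t_crit * s / np.sqrt(n)                  # interval half-length
    return ybar, zeta
```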

Interval estimate viewed as a quality criterion
With confidence 100C%, we want |ȳ(n) − µ| ≤ ζ*; thus we want the interval half-length ζ to be no larger than ζ*.
For example, select ζ* as a fraction of the point estimate, ζ* = r·ȳ(n), where r has a value in the range (0, 1).
Thus we want to find n such that ζ ≤ r·ȳ(n).
The number of runs is increased until the quality criterion r is met.

Applying the quality criterion
Select values for r, C, and n (no less than 30).
Collect the n observed values y1, y2, …, yn.
Compute ȳ(n) and the half-length ζ.
If ζ ≤ r·ȳ(n), accept the point estimate. Otherwise, increase n and start again.
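A hedged sketch of this procedure. The run_simulation(seed) function is hypothetical (one DSOV value per independent run), and the n_max safety cap is an added assumption not mentioned on the slide.

```python
import numpy as np
from scipy import stats

def estimate_to_precision(run_simulation, r=0.1, C=0.95, n0=30, n_max=500):
    """Add replications until the CI half-length satisfies zeta <= r * ybar(n)."""
    y = [run_simulation(seed) for seed in range(n0)]
    while True:
        n = len(y)
        ybar = np.mean(y)
        zeta = stats.t.ppf(1 - (1 - C) / 2.0, n - 1) * np.std(y, ddof=1) / np.sqrt(n)
        if zeta <= r * abs(ybar) or n >= n_max:
            return ybar, zeta, n         # accept the point estimate (or give up at n_max)
        y.append(run_simulation(n))      # one more independent run with a new seed
```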

Example: Kojo’s Kitchen

Steady-State Studies
Characteristics: the stochastic processes of interest are stationary; the right boundary of the observation interval is not fixed; the run length can be used to generate the required output data and meet the quality criterion; the process must first reach steady state (the warm-up period).
Defining experiments: the replication-deletion method or the batch means method.

Warm-up Period
Define a period after which the output stochastic process of interest has reached steady state. Welch's method is used to define such a period.

Reaching Steady-State

Welch’s Moving Average Method Use a small number of replications, say 5 to 10. Select observation interval long enough to reach steady state. Divide the time scale into time cells Time cells should be large enough data points to compute an average In each time cell for each run, compute an average , where i is the index of the time cell and j the index of the simulation run. Compute an overall average for each time cell.

Welch’s Method - continued Define a window w to compute a moving average Smoothes out choppy values

Warm-up Period for the Port Model
Use 10 replications with an observation interval of 15 weeks.
There is no apparent transient for the berth group size output, due to the small size of the group.
A window of w = 5 was required to smooth out the graph of tanker waiting times.

Port Berth Group Size:

Port Berth Group Size:

Port Tanker Waiting Time:

Port Tanker Waiting Time:

Port Tanker Waiting Time:

Replication-Deletion Method
Define a right-hand boundary for the observation interval so that sufficient data is collected to provide a meaningful DSOV (data from the warm-up period is deleted).
Apply the experimentation and output analysis used for the Bounded Horizon Study: compute a point estimate and an interval estimate.
Note: increasing the run length reduces the confidence interval.

Applying the quality criterion (modified)
Select values for r, C, n (no less than 30), and an initial value for tf (the right-hand boundary of the observation interval).
Collect the n observed values y1, y2, …, yn.
Compute ȳ(n) and the half-length ζ.
If ζ ≤ r·ȳ(n), accept the point estimate. Otherwise, increase n or increase the run length and start again.

Batch Means Method
Use one single long run.
Divide the long run into batches (time cells) sufficiently long that the values of the output variable are approximately IID.
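A sketch of the batch-means computation, assuming observations is the output series from one long run with the warm-up data already removed, and that the chosen batch size makes the batch means approximately IID.

```python
import numpy as np
from scipy import stats

def batch_means_ci(observations, n_batches=30, C=0.95):
    """Point estimate and CI half-length built from the batch means of one long run."""
    obs = np.asarray(observations, dtype=float)
    batch_size = len(obs) // n_batches
    obs = obs[: batch_size * n_batches]                   # drop the ragged tail
    means = obs.reshape(n_batches, batch_size).mean(axis=1)
    ybar = means.mean()
    s = means.std(ddof=1)
    zeta = stats.t.ppf(1 - (1 - C) / 2.0, n_batches - 1) * s / np.sqrt(n_batches)
    return ybar, zeta
```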

Port Model Experimentation and Output Analysis (3 berths)

Port Model Experimentation and Output Analysis (4 berths)

Comparing Two Alternatives
The basic approach is to define the difference between the output values of the two alternatives and construct a confidence interval for that difference.

Interpreting the Results
Let CI(n) be the confidence interval for the difference between the outputs of case 2 and case 1.
If CI(n) lies entirely to the right of zero (all positive), the result of case 2 exceeds the result of case 1 with a level of confidence of 100C%.
If CI(n) lies entirely to the left of zero (all negative), the result of case 1 exceeds the result of case 2 with a level of confidence of 100C%.
If CI(n) includes zero, then at the level of confidence 100C% there is no meaningful difference between the two cases.
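A sketch of the comparison, assuming y1 and y2 hold the DSOV values from case 1 and case 2 paired run by run (e.g. via common random numbers, next slide). The endpoints of the resulting interval are read against zero as described above.

```python
import numpy as np
from scipy import stats

def difference_ci(y1, y2, C=0.95):
    """Confidence interval CI(n) for the per-run difference (case 2 minus case 1)."""
    d = np.asarray(y2, dtype=float) - np.asarray(y1, dtype=float)
    n = len(d)
    dbar = d.mean()
    half = stats.t.ppf(1 - (1 - C) / 2.0, n - 1) * d.std(ddof=1) / np.sqrt(n)
    return dbar - half, dbar + half      # compare the endpoints against zero
```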

Common Random Numbers
Seeds for the random number generators are varied from run to run to keep the output data IID. If different seeds are used for the runs of the two alternatives, the output results will lack symmetry.
Re-use the same seeds with the random number generators for each alternative: use separate random number generators for separate purposes, assign random service-time values to attributes when consumer entities arrive in the model, and repeat the seeds used in the simulation runs of the first alternative for the simulation runs of the second alternative.
The effect is to reduce the variance of the difference between the alternatives. See Table 6.6 and Table 6.7 for the comparison of the two alternatives of the port project. A small sketch is given below.
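A small usage sketch of common random numbers: the same seed list drives both alternatives so the runs are paired. simulate_case1 and simulate_case2 are hypothetical run functions; difference_ci is the helper sketched above.

```python
seeds = range(30)
y1 = [simulate_case1(seed) for seed in seeds]   # first alternative
y2 = [simulate_case2(seed) for seed in seeds]   # second alternative, same seeds
low, high = difference_ci(y1, y2, C=0.95)       # paired CI on the difference
```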

Comparing Multiple Alternatives
With multiple alternatives, one can compare each alternative to a base alternative only, or compare all alternatives with each other; the number of comparisons must be kept small.
The overall confidence level C for K comparisons (each with confidence CK) is directly related to the confidence levels selected for the individual comparisons (based on the Bonferroni inequality). For the case where all comparison confidence levels are equal,
  C = 1 − K(1 − CK), i.e. CK = 1 − (1 − C)/K.

Examples: Multiple Alternatives
Given 5 alternatives and a desired overall confidence of 90% (C = 0.90), the CK used for each individual confidence interval becomes CK = 1 − (1 − C)/K = 1 − 0.10/5 = 0.98 (98%).
The same reasoning applies to multiple performance measures. In the port problem, K = 4, since there are two alternatives and two performance measures (group size and tanker waiting time). With CK = 0.95 (95%), the overall confidence is C = 1 − K(1 − CK) = 0.80 (80%).
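A sketch of the Bonferroni arithmetic used on this slide; the two helpers simply restate the relations CK = 1 − (1 − C)/K and C = 1 − K(1 − CK).

```python
def individual_confidence(C_overall, K):
    """Confidence CK needed per comparison for overall confidence C (equal levels)."""
    return 1 - (1 - C_overall) / K        # e.g. K=5, C=0.90  ->  CK = 0.98

def overall_confidence(CK, K):
    """Overall confidence implied by K comparisons, each at confidence CK."""
    return 1 - K * (1 - CK)               # e.g. K=4, CK=0.95 ->  C = 0.80
```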

Kojo’s Kitchen Example of Multiple Alternatives Choose 3 comparisons, alternate cases to the base case. Select CK=0.968 which gives a value of C=.904 (i.e. an overall confidence level of 90%) Consider

Results for Multiple Scheduling Alternatives