Quantitative System Evaluation with Java Modelling Tools
Giuliano Casale (Imperial College London)
Giuseppe Serazzi (Politecnico di Milano, Dip. Elettronica e Informazione, Milan, Italy)
Tutorial – ICPE 2011

tutorial outline
- overview of Java Modelling Tools (JMT)
- case study 1 (CS1): bottleneck identification, performance evaluation, optimal load
- case study 2 (CS2): model with multiple exit paths
- case study 3 (CS3): resource contention
- case study 4 (CS4): multi-tier applications, web services

Java Modelling Tools (JMT)
[overview diagram of the JMT suite, with the four case studies CS1–CS4 mapped onto its tools]

architecture
[diagram of the JMT framework as a Model-View-Controller design: XML files as the "Model"; the Java tools JSIMwiz, JSIMgraph, JWAT, and JMVA as the "Views"; XSLT transformations and status updates as the "Controller"; jSIMengine as the simulation core]

software development
- JMT is open source: Java code and ANT build scripts
- size: ~4,000 classes; 21 MB code; 174,805 lines
- subversion: svn co jmt
- source tree:
  trunk (root; also help, examples, license information, ...)
    src
      jmt
        analytical (jMVA algorithms)
        commandline (command line wrappers)
        common (shared utilities)
        engine (main algorithms & data structures)
        framework (misc utilities)
        gui (graphical user interfaces)
        jmarkov (JMCH)
        test (application testing)

core algorithms – jMVA
Mean Value Analysis (MVA) algorithm (e.g., [Lazowska et al., 1984]); a minimal sketch follows below
- fast solution of product-form queueing networks
- open models: efficient solution in all cases
- closed models: efficient for models with up to 4-5 classes
Product-form queueing networks solvable by MVA:
- PS/FCFS/LCFS/IS scheduling
- identical mean service times for multiclass FCFS
- mixed models (open + closed), load-dependent service
- service at a queue does not depend on the state of other queues
- no blocking, finite buffers, or priorities
- some theoretical extensions exist, but are not implemented in jMVA
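To make the recursion concrete, here is a minimal sketch of exact single-class MVA; this is the textbook algorithm rather than jMVA's multiclass implementation, and the demands, think time, and population are hypothetical. It repeatedly applies the arrival theorem, R_k(n) = D_k(1 + Q_k(n-1)), and Little's law.

```java
// Minimal sketch (not jMVA code): exact single-class MVA for a closed
// product-form network; D holds hypothetical service demands in seconds.
final class MvaSketch {
    public static void main(String[] args) {
        double[] D = {0.010, 0.047};       // hypothetical service demands (s)
        double Z = 0.0;                    // think time
        int N = 100;                       // customer population
        double[] Q = new double[D.length]; // mean queue lengths (start empty)
        double X = 0.0;
        for (int n = 1; n <= N; n++) {
            double R = 0.0;
            double[] Rk = new double[D.length];
            for (int k = 0; k < D.length; k++) {
                Rk[k] = D[k] * (1.0 + Q[k]); // arrival theorem
                R += Rk[k];
            }
            X = n / (R + Z);                 // system throughput at population n
            for (int k = 0; k < D.length; k++)
                Q[k] = X * Rk[k];            // Little's law per station
        }
        System.out.printf("X=%.4f job/s, R=%.4f s%n", X, N / X - Z);
    }
}
```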

core algorithms – jSIMengine: simulation
- discrete-event simulation engine
- components in the simulation are defined by 3 sections
[diagram: external arrivals (open class) flow into a queueing station component whose sections admit, serve, and route completed requests]

core algorithms – jSIMengine: statistical analysis
- transient filtering flowchart: detect and discard the initial transient, keeping only steady-state samples
[Heidelberger & Welch, CACM, 1981] [Pawlikowski, CSUR, 1990] [Spratt, M.S. Thesis, 1998]

core algorithms – jSIMengine: simulation stop
- simulation stops automatically
- traditional control parameters: confidence level and maximum relative error (see the sketch below)
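A minimal sketch of such a stopping rule, assuming a batch-means estimator and a normal 95% confidence interval; the actual jSIMengine uses the spectral method cited on the previous slide, so this is only an illustration of the idea.

```java
// Minimal sketch (not jSIMengine code): stop when the confidence-interval
// half-width falls below maxRelErr * |mean|, given a set of batch means.
final class StopRuleSketch {
    static boolean shouldStop(double[] batchMeans, double maxRelErr) {
        int n = batchMeans.length;
        double mean = 0.0;
        for (double b : batchMeans) mean += b;
        mean /= n;
        double var = 0.0;
        for (double b : batchMeans) var += (b - mean) * (b - mean);
        var /= (n - 1);                               // sample variance
        double halfWidth = 1.96 * Math.sqrt(var / n); // normal 95% quantile
        return halfWidth <= maxRelErr * Math.abs(mean);
    }
}
```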

CASE STUDY 1: bottleneck identification, performance evaluation, optimal load
closed model, multiclass workload
JABA + JMVA

outline
- objectives
- system topology
- bottleneck detection and common saturation sectors
- performance evaluation
- optimal loading

characteristics of the system
- e-business services: a variety of activities; information retrieval and display, and data processing and updating (mainly data intensive) are the most important ones
- two classes of requests with different resource loads and performance requirements
- presentation tier: light load (less demanding than the other two tiers)
- application tier: business logic computations
- data tier: store and fetch DB data (search, upload, download)
- to reduce the number of parameters (and to simplify obtaining their values) we have chosen to parameterize the model in terms of global loads L_i, i.e., service demands D_i

topology of a 3-tier enterprise system
[diagram of the three-tier system topology]

workload parameters
- resource loadings matrix: service demands for i resources and r classes, D_ir = V_ir * S_ir (a small sketch follows below)
- global number of customers: N = 100
- system population: N = {N_1, N_2}, varied from {1, 99} to {99, 1}
- population mix: β = {β_1, β_2}, fraction of jobs per class
- β variable: study of the optimal load (optimal mix)
- asymptotic behavior: β constant, N increasing
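A minimal sketch of how these parameters combine; the visits and service times below are hypothetical. The aggregate demand a resource sees under a mix β is the β-weighted sum of its per-class demands.

```java
// Minimal sketch: build D[i][r] = V[i][r] * S[i][r] and the per-resource
// demand under a given population mix beta; all values are hypothetical.
final class DemandSketch {
    public static void main(String[] args) {
        double[][] V = {{10, 2}, {3, 12}, {5, 5}};     // visits, i resources x r classes
        double[][] S = {{0.002, 0.004}, {0.006, 0.001}, {0.003, 0.003}}; // service times (s)
        double[] beta = {0.3, 0.7};                    // population mix, sums to 1
        int I = V.length, R = V[0].length;
        double[][] D = new double[I][R];
        double[] dMix = new double[I];                 // aggregate demand under beta
        for (int i = 0; i < I; i++)
            for (int r = 0; r < R; r++) {
                D[i][r] = V[i][r] * S[i][r];           // D_ir = V_ir * S_ir
                dMix[i] += beta[r] * D[i][r];
            }
        for (int i = 0; i < I; i++)
            System.out.printf("resource %d: D1=%.4f D2=%.4f Dmix=%.4f%n",
                    i + 1, D[i][0], D[i][1], dMix[i]);
    }
}
```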

service demands (resource loadings)
[jMVA screenshot; callouts: name of the model; natural bottleneck of class 1 (Storage 2); natural bottleneck of class 2 (Storage 1); Storage 3: potential system bottleneck]

what-if analysis (JMVA with multiple executions)
[screenshot; callouts: the parameter that changes among executions is the fraction of class 1 requests; number of models requested (not all may be executed)]

bottleneck switching (JABA asymptotic analysis; a sketch of the idea follows below)
[JABA plot; callouts: global loadings of class 1 and of class 2; bottlenecks; the fraction of class 2 jobs for which two resources saturate concurrently (common saturation sector)]
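The switching behavior can be reproduced with a small sketch: sweep the mix β1 and report the resource with the largest mix-weighted demand. The demand values below are hypothetical, chosen only to mimic the pattern on this slide (class 1's natural bottleneck at Storage 2, class 2's at Storage 1, Storage 3 dominating intermediate mixes); where two resources tie, both saturate, giving a common saturation sector.

```java
// Minimal sketch (not JABA code): find the bottleneck resource for each
// population mix beta1; demand values are hypothetical.
final class BottleneckSweep {
    public static void main(String[] args) {
        double[][] D = {{0.04, 0.12}, {0.13, 0.03}, {0.09, 0.09}}; // rows: Storage 1..3
        String[] name = {"Storage 1", "Storage 2", "Storage 3"};
        for (int s = 0; s <= 20; s++) {
            double b1 = s / 20.0;                     // fraction of class 1 jobs
            int bott = 0;
            double dMax = 0.0;
            for (int i = 0; i < D.length; i++) {
                double d = b1 * D[i][0] + (1 - b1) * D[i][1]; // mix-weighted demand
                if (d > dMax) { dMax = d; bott = i; }
            }
            System.out.printf("beta1=%.2f -> bottleneck %s (D=%.3f)%n",
                    b1, name[bott], dMax);
        }
    }
}
```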

throughput and response time, N = {1,99} → {99,1} (JMVA)
[plots of throughput X (req/ms) and of response times (ms) for class 1, class 2, and the system; the common saturation sector and the equiload point are marked]

utilizations and power, N = {1,99} → {99,1}
[plots of the utilizations of Storage 1, Storage 2, Storage 3 and of the power (X/R) for class 1, class 2, and the system; the common saturation sector lies between the mixes giving the best QoS to class 1 and the best QoS to class 2]

optimized load: service demands and bottlenecks
[plot of the per-class service demands; the equi-utilization line and the region with multiple bottlenecks are marked]

optimized load: U and X
[plots at the equi-utilization mix: utilizations of Storage 1, Storage 2, Storage 3; throughput X for class 1, class 2, and the system, with the value 0.48 req/ms highlighted]

optimized load: response times and residence times
[plots of the system, class 1, and class 2 response times (the common saturation sector and the value 4.78 ms are highlighted) and of the residence times at Storage 1, Storage 2, Storage 3]

CASE STUDY 2: model with multiple exit paths
open model, single class workload, different routing policies
JSIMgraph

outline
- objectives
- system topology
- what-if analysis
- performance with "probabilistic" routing
- performance with "least utilization" routing
- performance with "Join the Shortest Queue" routing

objectives
- fallacies in using the system response time index, also in single class models
- open model with multiple exit paths (sinks), e.g., drops, alternative processing, multi-core, load balancing, clouds, ...
- differences between the response time per sink and the system response time
- impact of different routing policies on performance

system topology
[JSIMgraph model: a source of requests (λ = 1 req/s) feeds a station where the routing policy is selected; path 1 and path 2 lead to queues with exponentially distributed service times (S = 0.3 s, S = 1 s, S = 0.2 s); under probabilistic routing each path is chosen with probability 0.5; callouts mark the path utilizations]

what-if analysis settings
[screenshot; callouts: enable the what-if analysis; control parameter; initial arrival rate; final arrival rate; number of models requested]

number of customers N in the two paths (probabilistic routing)
[plots: mean N = 9.13 jobs and mean N = 0.37 jobs in the two paths]

utilizations (per path) with probabilistic routing
[plots: U = 0.89 and U = 0.27 in the two paths]

system response time (probabilistic routing)
[screenshot: mean R = 5.51 s; callouts: performance indices collected; requested precision not reached; number of models executed in this run (what-if)]

response time per path (probabilistic routing)
[plots: mean R = 0.72 s for one path; mean R for the other path (value missing); system response time R = 5.5 s]

utilizations with "least utilization" routing
[plots: U ≈ 0.41; utilizations well balanced across the two paths]

response times with "least utilization" routing
[plots: R = 3.55 s and R = 0.88 s in the two paths; system response time R = 1.5 s]

utilizations with "Join the Shortest Queue" routing
[plots: U = 0.61 and U = 0.35 in the two paths]

number of customers with JSQ routing
[plots: N = 0.88 and N = 0.47 in the two paths]

response times with JSQ routing
[plots: R = 1.72 s and R = 0.70 s in the two paths; system response time R = 1.05 s]

CASE STUDY 3: resource contention (use of Finite Capacity Regions – FCR)
contention for components:
- hardware: I/O devices, memory, servers, ..., bandwidth
- software: threads, locks, semaphores, ...
open model, single class workload
JSIMgraph

modeling contention
- fixed number of hw/sw components (threads, DB locks, semaphores, ...)
- clients compete for the available components
- request execution time: wait time for the next free component + wait time for the hardware resources (CPU, I/O, ...) + execution time
- request interarrival times exponentially distributed
- payloads of different sizes (exponentially distributed)
- evaluate the execution time of requests when the number of clients ranges from 1 to 20 and the number of components ranges from 1 to 10 (∞); evaluate the drop rate and the wait time in queue for the next available component
- implement several models with different levels of completeness

threads (hw/sw resource) contention (simple model; a rough analytical check follows below)
[model sketch: clients generate requests at λ = 1÷20 req/s; a thread-request queue inside the server holds requests waiting for one of threads = 1÷∞; the server contains CPU (D_CPU = 0.010 s) and I/O (D_I/O = 0.047 s); completed requests leave to a sink]
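As a rough back-of-the-envelope check (not the JMT model itself), the thread pool can be approximated as an M/M/c queue whose service rate lumps the CPU and I/O demands together, μ = 1/(D_CPU + D_I/O); the Erlang-C formula then gives the probability that a request waits for a free thread. This ignores the internal CPU/I/O contention, so it is only an approximation.

```java
// Rough sketch: Erlang-C wait probability for a pool of c threads,
// treating the whole server as a single M/M/c queue (an approximation).
final class ThreadPoolSketch {
    static double erlangC(int c, double lambda, double mu) {
        double a = lambda / mu;                  // offered load (Erlangs)
        if (a >= c) return 1.0;                  // unstable: always waits
        double sum = 0.0, term = 1.0;            // term tracks a^k / k!
        for (int k = 0; k < c; k++) {
            sum += term;
            term *= a / (k + 1);
        }
        double last = term * c / (c - a);        // (a^c/c!) * c/(c-a)
        return last / (sum + last);
    }
    public static void main(String[] args) {
        double mu = 1.0 / (0.010 + 0.047);       // thread holds CPU+I/O demand
        for (int c : new int[]{5, 10, 15}) {
            double pWait = erlangC(c, 20.0, mu);
            double wq = pWait / (c * mu - 20.0); // mean wait for a thread (s)
            System.out.printf("c=%d: P(wait)=%.3f, Wq=%.4f s%n", c, pWait, wq);
        }
    }
}
```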

model definition (unlimited threads and queue size)
[JSIMgraph screenshot: source of requests (λ = 1÷20 req/s), queue resource, sink; callouts: name of the model; selection of performance indices; simulation results; fraction of capacity used; fraction of the number of requests]

input parameters (service demands)
[screenshots of the service time settings; mean service times as in the model sketch above: 0.010 s (CPU) and 0.047 s (I/O)]

system response time (λ = 20 req/s)
[screenshot; callouts: selected performance indexes; confidence interval; transient duration; default parameter values vs. actual simulation parameters; the number of samples analyzed is greater than the maximum defined here]

λ = 1÷20 req/s, unlimited threads & queue size (JSIMgraph)
[what-if plots vs. λ: utilization of I/O, U_I/O = λ · D_I/O = 20 · 0.047 = 0.94 (exact); throughput X, same as λ since there are no limitations; system response time R, simulated vs. exact; system power X/R]
A sketch of the exact formulas follows below.
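A minimal sketch of the exact open-model formulas used here: U_k = λ·D_k per station, residence time R_k = D_k/(1 − U_k) for PS/FCFS queues, and N = X·R by Little's law (X = λ since nothing is dropped). The demands are the ones on the model sketch above.

```java
// Minimal sketch of the exact open-model analysis: U = lambda*D,
// R = sum of D/(1-U) per station, N = X*R (Little's law).
final class OpenModelSketch {
    public static void main(String[] args) {
        double lambda = 20.0;                    // req/s
        double[] D = {0.010, 0.047};             // D_CPU, D_I/O (seconds)
        String[] name = {"CPU", "I/O"};
        double R = 0.0;
        for (int k = 0; k < D.length; k++) {
            double U = lambda * D[k];            // e.g. U_I/O = 20 * 0.047 = 0.94
            R += D[k] / (1.0 - U);               // per-station residence time
            System.out.printf("%s: U=%.2f%n", name[k], U);
        }
        double N = lambda * R;                   // X = lambda, no losses
        System.out.printf("R=%.3f s, N=%.2f req%n", R, N);
    }
}
```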

number of requests (unlimited threads & queue size)
[plots of the number of requests per station (one value is 0.25 req, the other is missing) and of the system total: N (simulated) vs. N = X·R (exact, by Little's law)]

setting up a Finite Capacity Region (FCR)
- step 1 – select the components of the FCR
- step 2 – set the FCR region with a constrained number of customers
[screenshots; callout: drop queue]

FCR parameters
[screenshot; callouts: global capacity of the FCR; max number of requests per class in the FCR; drop the requests when the region capacity is reached (for both constraints)]

system number of requests (limited number of threads, drop)
[plots for 5, 10, and 15 threads and for the unlimited case]

utilization of the I/O server (limited number of threads, drop)
[plots for 5, 10, and 15 threads and for the unlimited case]

system response time (limited number of threads, drop)
[plots for 5, 10, and 15 threads and for the unlimited case]

external finite queue for limited threads
[model sketch: clients send requests at λ = 20 req/s to a finite-capacity queue for threads outside the server (drop policy); the server (D_server = 0.047 s) has threads = 5 and uses the Blocking After Service policy]
- the queue for threads is limited (e.g., to limit the number of connections in case of a denial of service attack, or to guarantee a negotiated response time for the accepted requests, ...)
- requests arriving when the queue is full are rejected (drop policy)
- the number of threads is limited and requests are queued in a resource different from the server (load balancer, firewall, ...)
- evaluate the combination of different admission policies

set the Block After Service (BAS) blocking policy
[screenshot; callouts: station with finite capacity; max number of requests in the station; selection of the BAS policy]
- BAS policy: requests are blocked in the sender station when the max capacity of the receiver is reached

different admission policies for Queue and Server (λ = 20 req/s)
[table reporting N, R, U, X, and Drop at the Queue and Server stations for four configurations]
- Queue: size = ∞        Server: 5 threads, queue
- Queue: size = ∞        Server: 5 threads, BAS
- Queue: size = 5, drop  Server: 5 threads, BAS
- Queue: size = ∞        Server: 5 threads, drop

CASE STUDY 4: multi-tier applications and web services (worker threads, workflows, logging, distributions)
closed models, single class and multiclass workloads, fork-join
JSIMgraph + JWAT

performance evaluation of a multi-tier application
- a multi-tier application serves a transactional workload which requires processing by an application server (AS) and by a database (DB)
- the AS serves requests using a fixed set of worker threads
- requests waiting for a worker thread are queued by the admission control system
- utilization measurements available for the AS and for the DB
  - the average service time S is known for both the AS and the DB
  - e.g., from the linear regression estimate U = S·X + Y, with U = utilization, X = throughput, Y = noise (a sketch of this fit follows below)
- evaluate the response time for an increasing number of worker threads
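A minimal sketch of that regression: an ordinary least-squares fit of U = S·X + Y over measured (throughput, utilization) pairs, where the slope estimates the mean service time and the intercept absorbs noise. The class and method names here are ours.

```java
// Minimal sketch: least-squares fit of the utilization law U = S*X + Y.
// Returns {S, Y}; X holds throughput samples, U the matching utilizations.
final class UtilizationLawFit {
    static double[] fit(double[] X, double[] U) {
        int n = X.length;
        double sx = 0, su = 0, sxx = 0, sxu = 0;
        for (int i = 0; i < n; i++) {
            sx += X[i]; su += U[i];
            sxx += X[i] * X[i]; sxu += X[i] * U[i];
        }
        double S = (n * sxu - sx * su) / (n * sxx - sx * sx); // slope = service time
        double Y = (su - S * sx) / n;                         // intercept = noise
        return new double[] {S, Y};
    }
}
```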

transaction lifecycle
[timeline diagram: a request leaves the client side, crosses network latency (1), waits in the admission-control queue (queueing time) and for a worker thread (admission time); the worker thread then alternates application-server CPU service times (1)-(3) with DB query times (1)-(2) (load context in memory, data access), holding the thread and the DB server at the same time (simultaneous resource possession); the server response time plus network latency (2) gives the request response time seen by the client]

modelling abstraction (easier to define and study)
[same timeline with the application server steps (CPU) and the DB server steps (CPU + I/O) abstracted into server-side service times interleaved with DB query times; the worker thread is held for the whole server response time]

modelling multi-tier applications
[JSIMgraph model: N = 300 application users; exponential distributions with S_cpu = 0.072 s, S_db = 0.032 s, Z_load = 0.015 s; a Finite Capacity Region (admission policy, capacity) models the worker threads around 4 servers (cores) with PS scheduling; the FCR admission queue is hidden; the model can be sent to jMVA or simulated]

simulation vs. jMVA model
[plots comparing simulation and jMVA results; the FCR is not included in the product-form model]

SAP Business Suite [Li, Casale, Ellahi; ICPE 2010]
[response time plots for a quad-core server with N = 300 users: MVA model (M) vs. simulation (S) vs. real system (R)]

what-if analysis – adding a web service class
- some requests now access the service composition engine of the multi-tier application to create a business travel plan composed on the fly
- services are composed on the fly from external providers (travel agencies, flight booking service) according to a workflow
- the worker thread remains busy for the entire duration of the web service workflow
- evaluate the end-to-end response time for each class

business trip planning (BTP) web service
[JSIMgraph model: FCR with class-based admission; N = 300 application users and N_btp = 50 BTP users; p_BTP = 1.0; S_btp = ? (distribution, e.g. exponential, still to be determined)]

BTP web service sub-model
[JSIMgraph sub-model with a Logger station; Z_sce = 0.025 s, exponential; N = 1 WS instance; service times S0, S1, S2 and their distributions (exponential?) still unknown]

jWAT – Workload Analysis Tool
[screenshot: load a column-oriented log file, specify the data format, or use the data format templates]

jWAT – data filtering
[screenshot: e.g., ignore negative samples]

jWAT – descriptive statistics
[screenshots: histograms and scatter plots; the coefficient of variation c = std. dev. / mean suggests a hyper-exponential fit when c > 1 (a small sketch follows below)]
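A minimal sketch of that rule of thumb, computing c from a sample; the class and method names are ours, not the jWAT API.

```java
// Minimal sketch: coefficient of variation c = stddev/mean for a sample.
// c > 1 suggests hyper-exponential, c = 1 exponential, c < 1 hypo-exponential.
final class CvSketch {
    static double coeffOfVariation(double[] x) {
        double mean = 0.0;
        for (double v : x) mean += v;
        mean /= x.length;
        double var = 0.0;
        for (double v : x) var += (v - mean) * (v - mean);
        var /= (x.length - 1);           // sample variance
        return Math.sqrt(var) / mean;
    }
}
```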

jWAT – scatter plot
[screenshot: scatter plot used to spot possible outliers]

BTP web service sub-model (parameters fitted from the logged inter-arrival times; a generation sketch follows below)
[Z_sce = 0.025 s, exponential; N = 1 WS instance; S0 = 0.967 s, hyper-exponential, c = 3.1434; S1 = 2.151 s, hyper-exponential, c = 1.689; S2 = 0.911 s, hyper-exponential (c value missing)]
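For completeness, a common way to generate such hyper-exponential service times is the balanced-means two-phase fit, which matches a mean m and a coefficient of variation c > 1. This is a textbook moment-matching recipe, not the jWAT fitting code.

```java
import java.util.Random;

// Minimal sketch: two-phase hyper-exponential with balanced means,
// matched to mean m and coefficient of variation c > 1.
final class HyperExpSketch {
    static double sample(double m, double c, Random rnd) {
        double p = 0.5 * (1.0 + Math.sqrt((c * c - 1.0) / (c * c + 1.0)));
        double rate = (rnd.nextDouble() < p) ? 2.0 * p / m
                                             : 2.0 * (1.0 - p) / m;
        return -Math.log(1.0 - rnd.nextDouble()) / rate; // exponential draw
    }
    public static void main(String[] args) {
        Random rnd = new Random(42);
        // e.g. the S1 step above: mean 2.151 s, c = 1.689
        for (int i = 0; i < 5; i++)
            System.out.printf("%.3f%n", sample(2.151, 1.689, rnd));
    }
}
```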

BTP response times
[histogram of the BTP response times after a logarithmic transformation; candidate fits: e.g., Weibull, Lognormal, Gamma]

response time distribution – logger components
[model with logger components writing timestamp, class id, job id (kept the same throughout the simulation), and logger id to global.csv; fitted S_btp = 3.611 s, Gamma, c = 1.44]

response time distribution analysis
[cumulative distribution function of the BTP response time (cdf computed in MATLAB); the 95th percentile (in seconds) is marked; a sketch of the same computation follows below]
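A minimal sketch of the same analysis outside MATLAB, assuming the log layout from the previous slide (timestamp, class id, job id, logger id per row of global.csv; the exact column order is our assumption): pair each job's first and last logged timestamps to get a response time, then take the empirical 95th percentile.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch: empirical 95th percentile of per-job response times
// reconstructed from logger records in global.csv (assumed layout).
final class PercentileSketch {
    public static void main(String[] args) throws Exception {
        Map<String, double[]> span = new HashMap<>(); // first/last timestamp per job
        try (BufferedReader in = new BufferedReader(new FileReader("global.csv"))) {
            String line;
            while ((line = in.readLine()) != null) {
                String[] f = line.split(","); // timestamp, class id, job id, logger id
                double t = Double.parseDouble(f[0].trim());
                double[] s = span.computeIfAbsent(f[2].trim(),
                        k -> new double[]{t, t});
                s[0] = Math.min(s[0], t);
                s[1] = Math.max(s[1], t);
            }
        }
        List<Double> rt = new ArrayList<>();
        for (double[] s : span.values()) rt.add(s[1] - s[0]);
        Collections.sort(rt);
        double p95 = rt.get((int) Math.ceil(0.95 * rt.size()) - 1);
        System.out.printf("95th percentile response time: %.3f s%n", p95);
    }
}
```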

CONCLUSION

Final remarks
- analysis with Java Modelling Tools (JMT):
  - queueing network simulation
  - bottleneck identification
  - workload analysis
  - mean value analysis
  - ...
- JMT-based examples and exercises
- topics not covered by this tutorial:
  - JMCH
  - burstiness analysis
  - trace-driven simulation
  - ...
- JMT discussion forum

References
- G. Casale, G. Serazzi. Quantitative System Evaluation with Java Modelling Tools (Tutorial). In Proc. of ACM/SPEC ICPE 2011 (companion paper).
- M. Bertoli, G. Casale, G. Serazzi. User-Friendly Approach to Capacity Planning Studies with Java Modelling Tools. In Proc. of SIMUTOOLS.
- M. Bertoli, G. Casale, G. Serazzi. JMT – Performance Engineering Tools for System Modeling. ACM Perf. Eval. Rev., 36(4), 2009.
- M. Bertoli, G. Casale, G. Serazzi. The JMT Simulator for Performance Evaluation of Non Product-Form Queueing Networks. In Proc. of SCS Annual Simulation Symposium 2007, 3-10, Norfolk, VA, Mar. 2007.
- M. Bertoli, G. Casale, G. Serazzi. Java Modelling Tools: an Open Source Suite for Queueing Network Modelling and Workload Analysis. In Proc. of QEST 2006, Sep. 2006.
- E. Lazowska, J. Zahorjan, G. S. Graham, K. C. Sevcik. Quantitative System Performance: Computer System Analysis Using Queueing Network Models. Prentice-Hall, 1984.
- K. Pawlikowski. Steady-State Simulation of Queuing Processes: A Survey of Problems and Solutions. ACM Comput. Surv., 22(2), 1990.
- P. Heidelberger, P. D. Welch. A Spectral Method for Confidence Interval Generation and Run Length Control in Simulations. Comm. ACM, 24, 1981.
- S. C. Spratt. Heuristics for the Startup Problem. M.S. Thesis, Department of Systems Engineering, University of Virginia, 1998.

Contact us!
Politecnico di Milano – Dip. Elettronica e Informazione, Milan, Italy