
1 Chapter 1: The Queueing Paradigm The behavior of complex electronic systems → statistical prediction. Paradigms are fundamental models that abstract out the essential features of the system being studied. Types of prediction: 1. How many terminals can one connect to a time-sharing computer and still maintain a reasonable response time? 2. What percentage of calls will be blocked on the outgoing lines of a small business's telephone system? What improvement will result if extra lines are added?

2 3. What improvement in efficiency can one expect from adding a second processor to a computer system? Would it be better to spend the money on a second hard disk? 4. What is the best architecture for a space division packet switch? 5. Others? (e-mail servers, Web servers, …)
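The kind of statistical prediction these questions call for often comes from simple closed-form results. As a minimal illustration (not from the chapter itself), the M/M/1 mean response time formula T = 1/(μ − λ) shows how response time grows sharply as offered load approaches capacity:

```python
def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean time in system for an M/M/1 queue; requires arrival_rate < service_rate."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

# Response time explodes as utilization approaches 1:
for lam in (0.5, 0.8, 0.9, 0.95):
    print(lam, mm1_response_time(lam, 1.0))
```

With a service rate of 1, raising the arrival rate from 0.5 to 0.95 multiplies the mean response time tenfold, which is why "how many terminals can we add?" is not a linear question.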

3 1.2 Queueing Theory Queueing theory has its roots in the early twentieth century, in the studies of the Danish mathematician A. K. Erlang on telephone networks and in the creation of Markov models by the Russian mathematician A. A. Markov.
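Erlang's original problem, the fraction of calls blocked on a group of telephone lines, has a classic closed-form answer, the Erlang B formula. A small illustrative sketch (the function name is mine; the recursion itself is the standard numerically stable one):

```python
def erlang_b(offered_load: float, lines: int) -> float:
    """Blocking probability for an M/M/c/c loss system (Erlang B),
    computed with the stable recursion B(c) = a*B(c-1) / (c + a*B(c-1))."""
    b = 1.0                                   # B(0) = 1: with zero lines every call is blocked
    for c in range(1, lines + 1):
        b = offered_load * b / (c + offered_load * b)
    return b

# With 2 Erlangs of offered traffic on 4 lines, about 9.5% of calls are blocked:
print(erlang_b(2.0, 4))
```

This also answers the "what if extra lines are added?" question directly: re-evaluate the formula with a larger `lines` argument and compare.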

4 1.3 Queueing Model Customers: calls, jobs, events, connections, requests, tasks, packets

5 Queue: Buffering, Server: Service

6 There are three servers in parallel.

7 M/M/3 The characters in the first two positions indicate the arrival and service statistics; the third gives the number of servers. M: Markov statistics, D: Deterministic timing, G: General (arbitrary) statistics, Geom: Geometric statistics
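For a multi-server system such as M/M/3, the probability that an arriving customer must wait is given by the Erlang C formula. A hedged sketch (illustrative only), computed from the standard Erlang B recursion:

```python
def erlang_c(offered_load: float, servers: int) -> float:
    """P(arriving customer must wait) in an M/M/c queue (Erlang C).
    Requires offered_load < servers for stability."""
    if offered_load >= servers:
        raise ValueError("unstable: offered load must be below the number of servers")
    b = 1.0                                   # Erlang B recursion
    for c in range(1, servers + 1):
        b = offered_load * b / (c + offered_load * b)
    # Convert Erlang B to Erlang C:
    return servers * b / (servers - offered_load * (1.0 - b))

# An M/M/3 queue offered 2 Erlangs: about 44% of arrivals must wait.
print(erlang_c(2.0, 3))
```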

8 Network of Queues An “open” queueing network accepts and loses customers from/to the outside world. Thus the total number of customers in an open network varies with time. A “closed” network does not connect with the outside world and has a constant number of customers circulating through it.
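The defining property of a closed network, a constant customer population, is easy to see in a toy simulation. This two-queue example is purely illustrative (names and routing are made up): customers only move between queues, so the total never changes.

```python
import random

def simulate_closed_network(n_customers=5, steps=1000, seed=0):
    """Toy two-queue closed network: each step, a random queue completes a
    service (if it has a customer) and routes that customer to the other queue."""
    rng = random.Random(seed)
    queues = [n_customers, 0]          # all customers start at queue 0
    for _ in range(steps):
        src = rng.randrange(2)         # pick a queue to attempt a service completion
        if queues[src] > 0:
            queues[src] -= 1
            queues[1 - src] += 1       # customer is routed to the other queue
    return queues

final = simulate_closed_network()
print(final, sum(final))               # population is still n_customers
```

In an open-network version, the arrival step would instead inject a new customer from outside and departures would leave the system, so `sum(queues)` would drift over time.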

9

10 A customer that circulates in this queueing network represents the control of a computer terminal.

11 Service Discipline First In First Out (FIFO) Last In First Out (LIFO) LIFO service discipline with pre-emptive resume (LIFOPR) Processor sharing (PS) discipline
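The difference between the first two disciplines can be shown in a few lines of Python (a toy illustration; the job names are hypothetical):

```python
from collections import deque

# Three hypothetical jobs arriving in order:
arrivals = ["job1", "job2", "job3"]

fifo = deque(arrivals)
fifo_order = [fifo.popleft() for _ in arrivals]   # First In First Out: serve oldest first

lifo = list(arrivals)
lifo_order = [lifo.pop() for _ in arrivals]       # Last In First Out: serve newest first

print(fifo_order)   # ['job1', 'job2', 'job3']
print(lifo_order)   # ['job3', 'job2', 'job1']
```

LIFOPR additionally lets a newly arriving job pre-empt the job in service and later resume it where it left off, and PS serves all queued jobs simultaneously, each at an equal fraction of the server's rate.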

12 State Description The state of a queueing network is a vector indicating the total number of customers in each queue at a particular time instant. Several classes of customers → states? “Design and Implementation of the VAX Distributed File Service” by William G. Nichols and Joel S. Emer in the Digital Technical Journal No. 9, June 1989

13 1.4 Case Study I: Performance Model of a Distributed File Service Important operational characteristics + system parameter estimation → model → analysis and simulation of system performance. The server resources being modeled are: 1. The network interface 2. The CPU 3. The disk subsystem at the server.

14 The response time breakdown of a Web service

15

16 Design Alternatives Design alternatives → models (adjustable parameters) → analysis, simulation, measurement (artificial workload) → performance comparison. The model distinguishes two types of requests: control operations and data access operations.

17 Design Alternatives 1. Control operations are operations such as open and close file, which have a high computational component. 2. Data access operations are simple reads and writes. Data access operations usually have low computational requirements but require larger data transfers.

18 Design Alternatives To keep the model tractable, assume the service time at each service center is exponentially distributed. Product-form solution (for a single class) Approximate solution (multiclass mean value analysis technique)
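For a single-class closed product-form network, exact Mean Value Analysis (MVA) computes throughput and mean queue lengths by recursion on the customer population. A minimal sketch (the service demands here are illustrative placeholders, not the paper's measured parameters):

```python
def mva(demands, n_customers):
    """Exact single-class MVA for a closed network of single-server queueing
    stations. demands[i] = visit ratio * mean service time at station i.
    Returns (system throughput, mean queue length per station)."""
    q = [0.0] * len(demands)           # mean queue lengths with 0 customers
    x = 0.0
    for n in range(1, n_customers + 1):
        # Residence time: an arrival sees the queue of an (n-1)-customer network.
        r = [d * (1.0 + qi) for d, qi in zip(demands, q)]
        x = n / sum(r)                 # throughput by Little's law on the whole network
        q = [x * ri for ri in r]       # Little's law applied per station
    return x, q

# Two identical stations, each with demand 1.0, and 2 circulating customers:
throughput, queue_lengths = mva([1.0, 1.0], 2)
print(throughput, queue_lengths)
```

This is the single-class exact algorithm; the multiclass case mentioned on the slide is typically handled with an approximate MVA iteration because the exact multiclass recursion grows combinatorially.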

19 1.5 Case Study II: Single-bus Multiprocessor Modeling “Modeling and Performance Analysis of Single-Bus Tightly-Coupled Multiprocessors” by B. L. Bodnar and A. C. Liu in the IEEE Transactions on Computers, Vol. 38, No. 3, March 1989.

20 Description of the Multiprocessor Model A tightly-coupled multiprocessor (TCMP) is defined as a distributed computer system where all the processors communicate through a single (global) shared memory. A typical physical layout for such a computer structure is shown in Fig. 1.6. Figure 1.7 illustrates the queueing model for this single-bus tightly-coupled multiprocessor (SBTCMP) architecture.

21

22 Description of the Multiprocessor Model PE: Processing element BIU: Bus interface unit Each CPU and BIU has mean service rate u(i,1) and u(i,2), respectively. We also assume that the CPU and BIU operate independently of each other, and that all the BIUs in the multiprocessor can be lumped together into a single “equivalent BIU”.

23 Description of the Multiprocessor Model The branching probabilities are p(i,1), p(i,2) and p(i,3). The branching probability p(i,3) is interpreted as the probability that a task associated with PE i will join the CPU queue at PE i after using the BIU. 1 − p(i,3) is then the probability that a task associated with PE i will wait for an interrupt acknowledgment at PE i.

24

25 Queueing Model of Multiprocessor Interrupt mechanism: intertask communication interrupt. Task states: sleep, ready, execution, … Tasks sleep in the task pool. If a task requests bus usage, it will enter the BIU queue. If the bus is free, the task will begin using the bus; otherwise, it will wait until the bus is released.

26 Queueing Model of Multiprocessor We assume that only one of the preceding events can occur at a given moment. That is, CPU and interrupt processes cannot occur concurrently. We also assume that a task queued for bus access will not be using the CPU. Tasks waiting for an interrupt or undergoing interrupt servicing will be referred to as “interrupt-driven tasks”.

27 Queueing Model of Multiprocessor Tasks waiting for interrupts are modeled by a queue feeding a “lock”. The lock is drawn using a triangle to signify that it is enabled via an external stimulus. The purpose of the lock is to allow only one task to pass by it in response to an interrupt.

28 Queueing Model of Multiprocessor If the task that was forced to release the CPU was an interrupt-driven task, then this pre-empted task will become the first entry on a last-come-first-served queue (i.e., a stack). If the pre-empted task was not an interrupt-driven task, then it will become the first entry on the ready list.

29 Queueing Model of Multiprocessor The model not only considers the bus contention caused by multiple processors attempting to access the shared memory, but also attempts to realistically include the local behavior of the various tasks running on these same processors.

30 1.6 Case Study III: TeraNet, A Lightwave Network

31 1.7 Case Study IV: Performance Model of a Shared Medium Packet Switch

32 PlaNET Switching Configuration

33

34 Model of Switch

