
1 © 2006, Monash University, Australia CSE4884 Network Design and Management Lecturer: Dr Carlo Kopp, MIEEE, MAIAA, PEng Lecture 6 Impact of Queueing Behaviour on Switched and Packet Networks

2 © 2006, Monash University, Australia Reading and References
Recommended reading on queueing:
1. L. Kleinrock, Queueing Systems, Volumes I & II, Wiley, New York, 1976.
2. Thomas G. Robertazzi, Computer Networks and Systems, Springer-Verlag, 1990.
3. Lazowska E.D. et al., Quantitative System Performance, http://www.cs.washington.edu/homes/lazowska/qsp/
4. Robert B. Cooper, Introduction to Queueing Theory (2nd edition), 1981, http://www.cse.fau.edu/~bob/publications/IntroToQueueingTheory_Cooper.pdf
5. Ivo Adan and Jacques Resing, Queueing Theory, 2001, http://www.cs.duke.edu/~fishhai/misc/queue.pdf
6. Myron Hlynka's Queueing Theory Page, http://www2.uwindsor.ca/~hlynka/queue.html

3 © 2006, Monash University, Australia Overview
1. Networks of queueing systems
2. Performance estimation and modelling
3. Queued system failures: congestion collapse
4. Well designed vs poorly designed queueing systems
5. Risks in analysis and modelling

4 © 2006, Monash University, Australia Networks vs Single Queues If we consider a router or telephone switch as a queue, a complex network interconnecting many routers or switches amounts to a network of queueing systems. We now understand that individual queueing systems have some interesting properties and behaviours. If we connect many queueing systems together, we also observe some interesting behaviours. The output of a queue is a random process which becomes the input to the next queue in the network; by Burke's theorem, the output of an M/M/1 queue in equilibrium is itself a Poisson process. We are interested in the total time to traverse multiple interconnected queues. We are especially interested in the impact on total network behaviour of congestion loads in individual queues.
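This is what makes tandem chains tractable: if each stage can be treated as an independent M/M/1 queue carrying the same Poisson stream (a Jackson-style simplification assumed here, not stated on the slide), the mean end-to-end delay is simply the sum of the per-stage delays:

```latex
% Mean end-to-end delay across K tandem M/M/1 stages,
% each with service rate \mu_i and common Poisson arrival rate \lambda:
T = \sum_{i=1}^{K} \frac{1}{\mu_i - \lambda}
```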

5 © 2006, Monash University, Australia Cct Switched System as a Network of Queues

6 © 2006, Monash University, Australia Cct Switched System as a Network of Queues

7 © 2006, Monash University, Australia Tandem Queues / Parallel Queues

8 © 2006, Monash University, Australia Parallel Queues (Adan & Resing)
For an M/M/c queue (c parallel servers, arrival rate λ, per-server service rate μ, utilisation ρ = λ/(cμ)):
Mean length of queue: E[Lq] = ΠW · ρ / (1 − ρ)
Mean waiting time in queue: E[W] = ΠW · (1 / (1 − ρ)) · (1 / (cμ))
where c is the number of servers in the queue and ΠW is the probability that an arriving job has to wait (the Erlang C formula).
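A minimal Python sketch of these quantities, assuming the standard M/M/c model; the function names and example traffic figures are illustrative, not from the slides:

```python
import math

def erlang_c(lam, mu, c):
    """Erlang C: probability an arriving job must wait in an M/M/c queue."""
    a = lam / mu                      # offered load in Erlangs
    rho = a / c                       # per-server utilisation (needs rho < 1)
    top = (a ** c / math.factorial(c)) / (1 - rho)
    bottom = sum(a ** n / math.factorial(n) for n in range(c)) + top
    return top / bottom

def mmc_metrics(lam, mu, c):
    """Mean queue length E[Lq] and mean wait E[W], Adan & Resing forms."""
    pw = erlang_c(lam, mu, c)
    rho = lam / (c * mu)
    e_lq = pw * rho / (1 - rho)       # mean number of jobs waiting
    e_w = pw / ((1 - rho) * c * mu)   # mean time spent waiting
    return e_lq, e_w

# Example: 4 servers at 1 job/s each, arrivals at 3.2/s (rho = 0.8)
print(mmc_metrics(3.2, 1.0, 4))       # -> roughly (2.39, 0.75)
```

As a sanity check, the two outputs satisfy Little's law: E[Lq] = λ · E[W].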

9 © 2006, Monash University, Australia Performance Estimation and Modelling The starting point for any model intended to describe a network is understanding the behaviour of the network and its topology. The first question to ask is what kind of network and queueing behaviour we can expect: is it a packet switched or circuit switched network? The answer determines the behaviour of the network and, usually, the statistical properties of the traffic. The second question to ask is what kind of topology the network has. The answer determines how the queues are interconnected. We are then in a position to model or simulate the performance of the network.

10 © 2006, Monash University, Australia Topology vs Queueing In most practical networks you will design, you will encounter mostly 'tree-like' topologies. A router or switch will have multiple interfaces to networks one level down in the tree, and will effectively 'aggregate' traffic flowing up and down across that node in the tree. In effect, you end up with a combination of parallel queues (in the router or switch) and tandem queues (as you traverse the tree). The problem of tandem queues is important from the performance perspective: if we have a chain of N tandem queues and one of them is congested, it will impact performance across the whole chain.

11 © 2006, Monash University, Australia Topology vs Queueing

12 © 2006, Monash University, Australia Effects of Congestion As we observed in the previous lecture, as the utilisation ρ of a queue approaches 1, the waiting time increases, as does the number of jobs in the queue. If a network requires a long chain of queues to be traversed between two points, congestion in any single queue impacts the whole chain. The congested queue becomes a 'bottleneck' in the network. If the bottleneck is severe enough it can render the network unusable. This is why sizing network performance is so important to a designer and manager. If even one device is congested due to poor design choices, the whole network is compromised.
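A quick numerical illustration of that blow-up for a single M/M/1 queue, where the mean time in system is 1/(μ − λ); the service rate used here is an assumed, illustrative figure:

```python
# Mean M/M/1 time in system T = 1/(mu - lambda) explodes as rho -> 1.
mu = 1000.0                       # service rate in packets/s (illustrative)
for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
    lam = rho * mu                # arrival rate at this utilisation
    t_ms = 1e3 / (mu - lam)       # mean time in system, in milliseconds
    print(f"rho = {rho:4.2f}  mean delay = {t_ms:7.2f} ms")
```

Going from 50% to 99% utilisation multiplies the mean delay fiftyfold, from 2 ms to 100 ms.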

13 © 2006, Monash University, Australia Response Times, Propagation Delays A simple way of looking at the performance problem is to look at the response times seen by users. Assume a user clicks a mouse button and a computer at the other end of the network responds with a flurry of packets to render a webpage or database entry on the user's screen. Response time is the time elapsed between the user's keystroke or mouse click and the point when the last packet is received and the screen fully rendered. We can reasonably assume that the time to render on the user's desktop is negligible compared to network delays.

14 © 2006, Monash University, Australia Response Times, Propagation Delays To understand the performance problem, we have to trace the path between nodes in the network to the remote server the user is connected to, and the path back to the user’s machine. Usually they are one and the same. How do we determine the total delay from user to server, and server to user? We have to count each and every delay incurred along the path to the server, and back. Strictly, we also need to count the delay in the server host. The waiting times in the internal queues of each and every router along the path are determined by the performance of the routers and how congested they are.

15 © 2006, Monash University, Australia Response Times, Propagation Delays If we assume a trivial system and make the simplifying assumption that all traffic is Poisson, then we can look at the length of each queue, and from it calculate the time spent waiting. We then sum up the delays across each and every queue along the path. How do we determine each delay? We look at the performance of the router (queue) in terms of how much traffic it can carry (i.e. μ), and how much traffic it is carrying. We also must understand the type of traffic 'source' and how it behaves. How would a VoIP source differ from an MPEG source?
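A minimal sketch of that summation, assuming every router on the path behaves as an independent M/M/1 queue carrying the same Poisson stream; the rates are invented for illustration:

```python
def path_delay(service_rates, lam):
    """Sum the mean M/M/1 sojourn times 1/(mu - lam) over every hop."""
    return sum(1.0 / (mu - lam) for mu in service_rates)

# Five routers; the 400 pkt/s hop is the near-saturated bottleneck.
service_rates = [1000.0, 800.0, 400.0, 900.0, 1000.0]   # mu_i in packets/s
lam = 350.0                                             # offered Poisson load
print(f"one-way mean delay: {path_delay(service_rates, lam) * 1e3:.1f} ms")
```

Note how the single congested hop contributes roughly 20 ms of the 27 ms total: the bottleneck effect described on the previous slides in numbers.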

16 © 2006, Monash University, Australia Congestion Collapse Events The most severe category of problem which can be observed is a 'congestion collapse' in a network. Such events occur when a network becomes heavily loaded, and routers (or switches) start discarding traffic as they saturate. Discarding of traffic might mean rejecting phone calls with busy tones, or dropping packets once their 'time to live' has expired. Traffic sources usually retry when they detect discarded traffic. If a traffic source is aggressive in its retry policy, the congested network is hit with additional traffic, which causes further discards and further retries. The result is a congestion collapse.
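A deliberately crude sketch of this feedback loop, not a validated model: dropped traffic is retried, which inflates the offered load, which in turn raises the drop rate. All names and numbers are illustrative:

```python
def offered_load_with_retries(fresh, capacity, retry_fraction, iters=200):
    """Fixed-point sketch of retry amplification. Drop probability is crudely
    modelled as the relative excess of offered load over capacity."""
    lam = fresh
    for _ in range(iters):
        p_drop = max(0.0, 1.0 - capacity / lam)
        lam = fresh + retry_fraction * lam * p_drop   # fresh + retried traffic
    return lam, p_drop

# Fresh load only 12% over capacity, but 90% of dropped work is retried:
lam, p_drop = offered_load_with_retries(fresh=4.5, capacity=4.0, retry_fraction=0.9)
print(f"effective offered load: {lam:.1f}/s, drop probability: {p_drop:.0%}")
```

The offered load settles at twice the fresh load with over half of all attempts discarded: retries have turned a mild overload into a severe one.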

17 © 2006, Monash University, Australia Congestion Collapse Events Congestion collapses were observed in the early days of the Internet, and resulted in changes to quite a few protocols to alter retry policies (eg 'beat down'). Telephone networks have also experienced such problems ('911 situations') where large numbers of callers saturate an area. While many modern protocol and equipment designs have built-in mechanisms to avoid congestion collapses, a good designer should always consider the impact of worst case traffic loads. Recovery from a severe congestion collapse may require 'rebooting' large portions of the network.

18 © 2006, Monash University, Australia Well vs Poorly Designed Networks Well designed networks are usually taken for granted. Traffic experiences small delays, which progressively increase as the traffic load increases. Even at very heavy traffic loads, the network has evenly distributed delays across nodes. As the load abates, delays progressively decline. A poorly designed network will have one or more bottleneck nodes. As traffic load increases, these bottleneck nodes saturate with traffic and experience rapid increases in local delays. The result is poor performance across the whole network.

19 © 2006, Monash University, Australia Risks in Analysis and Modelling A large and complex network, with diverse traffic loads, can be difficult to accurately analyse, model or simulate. Often there is considerable uncertainty about future traffic loads and the composition of the traffic load. Often the network will be used differently to how it was intended to be used when designed. An example might be a network heavily used for VoIP traffic, but originally designed to carry HTTP traffic from web servers. Discussion? What choices would designers have made? A safe design strategy is to consider a range of traffic loads and how they will impact performance.

20 © 2006, Monash University, Australia Tutorial Discussion and Q&A Case Studies

21 © 2006, Monash University, Australia Example 1 - Conveyor and Hopper Model
Transaction Arrivals
Storage Hopper: used to buffer the irregular arrivals into the metering device
Metering Device: allows one item to pass each revolution of the roller; if no items are present, nothing is passed on that cycle
Conveyor Belt Output Rate: on average will not exceed the slower of (a) the arrival rate, or (b) the metering rate
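A toy simulation of this model, using the figures from Example 2 on the next slide. Arrivals are Bernoulli per cycle as a crude stand-in for 'irregular' traffic, and all names are illustrative:

```python
import random

def simulate_hopper(arrival_rate, metering_rate, cycles=100_000, seed=1):
    """Toy conveyor/hopper model. At most one item arrives per roller cycle
    (with probability arrival_rate/metering_rate); the metering device
    passes at most one item per cycle."""
    rng = random.Random(seed)
    p = arrival_rate / metering_rate  # expected arrivals per cycle (< 1 here)
    hopper = passed = idle = 0
    for _ in range(cycles):
        if rng.random() < p:          # an item lands in the hopper
            hopper += 1
        if hopper:                    # metering device releases one item
            hopper -= 1
            passed += 1
        else:                         # nothing to pass this cycle
            idle += 1
    return passed * metering_rate / cycles, idle / cycles

# Example 2 figures: 2 arrivals/s into a 4 cycle/s metering device
out_rate, idle_frac = simulate_hopper(2.0, 4.0)
print(f"output: {out_rate:.1f}/s, hopper empty {idle_frac:.0%} of cycles")
```

The simulation reproduces the next slide's figures: output averages 2 per second and the hopper is empty about 50% of the time. Burstier arrivals would build a backlog in the hopper, which this simple sketch deliberately avoids.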

22 © 2006, Monash University, Australia Example 2 - Short Term Overloads
Normal operation - average arrival rate is less than the metering rate:
Irregular transaction arrivals: averaging 2 per second
Storage hopper: empty 50% of the time
Metering device: four per second (when items are available)
Conveyor belt output: average two per second

23 © 2006, Monash University, Australia Example 3 - Sustained (or Severe) Overload
Overload operation - average arrival rate exceeds the metering rate:
Transaction arrivals: averaging 5 per second
Storage hopper: gradually fills (at an average rate of 1 per second) and eventually overflows
Metering device: 4 per second
Conveyor belt output: 4 per second

24 © 2006, Monash University, Australia Example 4 - Effect of Zero Size Hopper
Transaction arrivals: averaging 5 per second
No storage hopper: no capacity other than the item being processed
Metering device: 4 per second
Conveyor belt output: 4 per second
Rejected or lost arrivals: with a zero size hopper, this system becomes a 'loss system', in that any traffic not handled immediately is lost
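For random (Poisson) traffic, the loss fraction of such a system is given by the Erlang B formula; a minimal sketch, reusing the slide's figures for illustration. Note that randomness makes the loss considerably worse than the 1-in-5 the slide's deterministic averages suggest:

```python
def erlang_b(offered_load, servers):
    """Erlang B blocking probability for a loss system with no waiting room,
    computed with the standard stable recursion."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# Single 4/s server offered 5 arrivals/s: offered load = 5/4 Erlangs
loss = erlang_b(5.0 / 4.0, 1)
print(f"fraction of arrivals lost: {loss:.0%}")   # ~56% for Poisson traffic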

25 © 2006, Monash University, Australia Example 5 - Handling Overloads
Excess traffic may be:
- Queued in some (more or less) orderly manner (including ordered, priority and random queueing). Known as a Queueing or Delay System, eg computer systems.
- Rejected and re-tried automatically within a short time. Known as a Bidding or Contention System, eg CSMA/CD, Ethernet.
- Rejected and lost from the system (assumed to be lost, but usually tried again some considerable time later). Known as a Loss System, eg older type (electro-mechanical) telephone exchanges.
These are theoretical, perfect case situations. In practice most systems are primarily one or another type, but with variations, and they default to Loss Systems under extreme situations.

26 © 2006, Monash University, Australia Packet Switching Concepts
Records are broken into packets at the PAD before transmission.
On receipt of a packet, each node checks for errors etc.
Nodes switch traffic packets by interpreting the packet header.
Each packet is processed and switched independently.
At the destination PAD, data is re-assembled into records.
[Diagram: packets 1-7 travelling from PAD A through Node 1 ... Node n to PAD B]
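Because each node forwards packets independently, transmission pipelines across hops. A toy calculation of the resulting delivery time, assuming equal-length packets, one transmission slot per hop, and no propagation, processing or queueing delay (the same idealisation as the spreadsheet on the next slide):

```python
def delivery_slots(num_packets, num_hops):
    """Pipelined store-and-forward: the first packet needs num_hops slots,
    and each subsequent packet finishes one slot after its predecessor."""
    return num_hops + num_packets - 1

# Nine packets over five hops, as in the spreadsheet that follows:
print(delivery_slots(num_packets=9, num_hops=5))   # -> 13 slots
```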

27 © 2006, Monash University, Australia Packets and Hops Spreadsheet
[Spreadsheet table: packets P1-P9 progressing from the sending site's transmission queue across five hops (H1-H5) to the receiving site, over time steps 0-14, one hop per step, illustrating pipelined store-and-forward delivery]

28 © 2006, Monash University, Australia Packets and Hops (animated)
[Animated version of the previous spreadsheet: packets P1-P9 flowing hop by hop from the sending site to the receiving site]

