© 2006, Monash University, Australia CSE4884 Network Design and Management Lecturer: Dr Carlo Kopp, MIEEE, MAIAA, PEng Lectures 7-8 Switched Network Topology, Design, Modelling and Performance Concepts
© 2006, Monash University, Australia Tools Westbay Free Traffic Calculators: http://www.erlang.com/calculator/ Westbay White Papers: http://www.erlang.com/support.html Erlang: Traffic and Queuing Software: http://members.iinet.com.au/~clark/ Erlang for Excel™: http://www.voip-calculator.com/excel.html
© 2006, Monash University, Australia Why Circuit Switched System Design? Historically, sizing of circuit switched systems was mostly used in the design of telephone networks. With the advent of cell based ATM networks, and digital multiplexing over fibre, Telco focus is away from circuit switching. VoIP however still needs load sizing. Continuing uses for this technique: 1. Sizing of conventional PABX designs, VoIP sizing. 2. Sizing of operator call centres and resulting loads. 3. Expansion/resizing of legacy circuit switched systems where cost of full replacement is prohibitive. 4. Understanding load behaviour on large database and multiuser host systems.
© 2006, Monash University, Australia Topology vs Point-to-Point Links Basic building blocks Eg the Lego blocks of networking
© 2006, Monash University, Australia Point to Point links Simplest concept - one communications circuit joining each pair of node points Each link is independent of the others If the nodes can switch traffic, then we have a switched network (Diagram: independent point-to-point links A-B and B-C. NOTE: Do not assume Node B is a switch. Node A - Node B - Node C)
© 2006, Monash University, Australia Dedicated Point-to-point Examples Dedicated means “used exclusively” - Examples: 1. Direct connected printer on desktop computer 2. ‘Leased line’ from Melbourne to Sydney (passes through many telephone exchanges, but is direct-patched at each - therefore “point-to-point”) 3. Broadcast is from one point to many on same link eg Radio and TV, old ‘newswire services’ 4. A “bus” is many points all connected on one line - eg Computer motherboard bus, Multi-drop, Polled network, Ethernet 5. Telephone line from exchange to home is a point-to-point circuit, used mainly for voice traffic 6. Queueing Theory can be applied to these links when links are dedicated and used for data (and digitised voice)
© 2006, Monash University, Australia Non-Dedicated Point-to-Point Links Sometimes apparent point-to-point links may involve shared bandwidth, or statistical multiplexing eg: 1. VPN carried over the internet - from each end to local node is possibly dedicated bandwidth, but transfers between nodes are multiplexed in a packet switched network (statistical multiplexing) 2. Some Telco services are statistically multiplexed (eg those with variable bandwidth charging) 3. Queueing Theory does apply to Time Division Multiplexed (TDM) and Frequency Division Multiplexed (FDM) links which give constant link bandwidth.
© 2006, Monash University, Australia Dedicated Point-to-Point Links Traffic may be addressed or unaddressed items 1. Addressed traffic is switchable - eg packetised data 2. Unaddressed is not, of itself, switchable eg analogue voice Record formats, lengths and characteristics not constrained. Line speed (“bandwidth”) is determined by capability of the circuits and termination equipment. Frequently used as the main trunk routes between nodes in switched networks 1. Message switched 2. Packet Switched 3. Circuit switched
© 2006, Monash University, Australia Multiple ‘Parallel’ Point-to-Point Multiple circuits connecting the same two nodes Logically parallel (but may be physically different) Examine each case on its merits eg reliability, cost, availability of circuits and bandwidth (Diagram 1: Node A and Node B joined by three physically different lines - Line 1: primary link, landline; Line 2: secondary, eg radio; Line 3: emergency dial up. Diagram 2: Node A and Node B joined by 3 logically parallel links; dashed lines indicate a difference)
© 2006, Monash University, Australia Extending Simple Point-to-Point “Broadcast” - to disperse widely
© 2006, Monash University, Australia Broadcast Links Data is transmitted by one node and can be read by any connected node If A broadcasts a transmission, it is read by B, C and D (Diagram: one dedicated link A-B-C-D shared by all four nodes. NOTE: Do not assume any node is a switch)
© 2006, Monash University, Australia Broadcast Common Implementations Radio and TV broadcasting (‘A’ is radio transmitter or cable signal source - B, C, D are receive only) Broadcast news services eg ‘Newswire services’ (A is the service provider - B, C, D are receive only) Polled networks (A is the master controller - B, C, D are ‘terminals’ that only transmit in response to polling interrogation from the master controller) Multi-drop networks and Bus networks (A is the master computer - B, C, D are fundamentally receive only) Bidding-Contention networks eg Ethernet (All notionally equal, can be prioritised by using differing back-off times) (Diagram: Nodes A, B, C and D on a shared link)
© 2006, Monash University, Australia Broadcast Issues Security is an issue, Encryption needed for privacy ‘spoofing’ and impersonation easy - Terminals generally dumb, not able to implement good protocols For two way traffic: Used where utilisation of each individual connection is low, and individual line costs are relatively high eg: Automated Teller Machines are usually a polled multi drop network, POS Terminals etc are either dial up (low usage) or polled multi drop terminals (many transactions per day) Determination of which terminal gains line control or responds is moderately complex (eg polling software) Bidding Contention systems (eg Ethernet) avoid software complexity by simple back-off protocols if line is busy or collision detected - at cost of utilisation efficiency.
© 2006, Monash University, Australia Switched Networks Interaction between nodes and ‘data’ provides design flexibility. Traffic may be addressed or unaddressed items 1. Addressed traffic is switchable - eg packetised data 2. Unaddressed is not, of itself, switchable eg analogue voice
© 2006, Monash University, Australia Switched Networks To switch traffic requires two things which work together: Nodes with switching capability Traffic characteristics which trigger the node to switch traffic Switching nodes: Circuit switches eg Telephone exchanges, PABXs Packet switches eg Routers, ATM switches etc Message switches eg Email servers Traffic characteristics that trigger switching: Incoming circuit identification, date/time, data format etc (something not associated with the information content of the traffic items) Packet and message headers (“addressed traffic”) Telephone call dialling tones and pulses, or ‘pilot messages’ (circuit switches use this traffic preamble to establish the circuit, then the traffic itself switches “unaddressed”)
© 2006, Monash University, Australia Switching Techniques & OSI Layers Different switching techniques apply at different OSI layers Switching nodes increase in functionality and complexity as the switching is applied higher in the protocol stack - ie simplest for circuit switching, most complex for message switching (Diagram: OSI layers 1-7, with circuit switches (eg PABXs) at the bottom of the stack, packet switches (eg Routers etc) in the middle, and message switches (eg Email servers, gateways) at the top)
© 2006, Monash University, Australia Typical Call Centre Arrangement Call centre model – ACD is a PABX type system (Diagram: customers calling for service reach the “CALL CENTRE” via PSTN “trunks”; an Automatic Call Distributor (ACD) distributes the calls to agents, who are supported by a computer database)
© 2006, Monash University, Australia Call Centres The Call Centre has incoming calls from customers/clients a mechanism to accept the call from the customer, and distribute it to available operators (ACD or PABX) operator or ‘agent’ positions, fitted with computers or other resources to help satisfy customers needs (typically computers and network to database servers) Incoming calls are accepted and queued (as a single queue) until an operator or ‘agent’ is available Thus Call Centres are a single queue with multiple servers Can apply queueing theory to estimate sizing.
© 2006, Monash University, Australia Call Centre Parameters The significant variables that affect call centre dimensioning include: characteristics of input traffic variability or randomness of call arrivals how long the caller is prepared to wait before being served (often called “Target Wait Time”) characteristics of the service being provided time taken to service the call, variability or randomness of service time time taken between calls for ‘wrap up’ activity, operator rest break etc The concept of ‘occupied capacity’, applicable to both trunks and operators, is important.
© 2006, Monash University, Australia Shortcomings of Averages Previous lectures referred to ‘average’ delay times and queue sizes. Often we need to estimate the time within which 90% (or some other figure) of the traffic will get through the system. This is usually more meaningful than the ‘average’ or ‘mean’. We are interested in how many customers are queueing up for service and how long they must wait. Queueing models allow us to estimate what fraction or proportion of customers will be delayed by what amount of time.
© 2006, Monash University, Australia Probability of Delays (Graph: probability of delays for a ‘random’ system with average delay of 1 second; x-axis: delay in seconds, y-axis: percent of transactions) Cumulative - ‘delayed x or less’ eg 39% delayed half a second or less, 63% delayed one second or less Another way to show the same info is “100% less cumulative”, which leads to ‘delayed more than x’ eg 37% are delayed more than one second, 14% delayed more than two seconds, and 5% delayed more than three seconds
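These percentages follow from the exponential (Erlang-1) distribution, where P(delayed t or less) = 1 - e^(-t/mean). A quick sketch (Python; not part of the original slides, function names are mine) reproduces the figures above:

```python
import math

def frac_delayed_at_most(t, mean_delay=1.0):
    """Cumulative fraction delayed t seconds or less (exponential delays)."""
    return 1.0 - math.exp(-t / mean_delay)

def frac_delayed_more_than(t, mean_delay=1.0):
    """Complementary fraction: delayed more than t seconds."""
    return math.exp(-t / mean_delay)

# Mean delay of 1 second, as on the slide:
for t in (0.5, 1.0, 2.0, 3.0):
    print(t, round(100 * frac_delayed_at_most(t)), "% or less,",
          round(100 * frac_delayed_more_than(t)), "% more")
```

This prints 39%/61% at half a second, 63%/37% at one second, 86%/14% at two seconds and 95%/5% at three seconds, matching the slide.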
© 2006, Monash University, Australia Spread or Dispersion of Queueing Times The spread or dispersion of queuing times is estimated by considering the ‘incomplete gamma function’ The basic parameter required is ‘standard deviation’ of delays This is significantly affected by: Number of servers Variability of times between requests (inter-arrival times) Variability of Service times Queuing Discipline (dispatching discipline)
© 2006, Monash University, Australia Dispersion of Queueing Times (2) For the case of: Single server queue (this is important) Random inter-arrival times (exponential or Erlang-1) Random (exponential) service times First-in/first-out dispatching THEN Standard Deviation of Queuing time S(x) = Mean Service Time * Mean Queue Size = Mean Service Time * (1 + (p/(1-p))), where p is the utilisation ie: The Standard Deviation equals the mean response time. (This is true for ‘exponential’ or Poisson traffic only.) Standard deviation for other traffic characteristics is a little more involved...
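As a check (Python sketch; function names are mine, not from the slides): the standard deviation formula above is algebraically identical to the M/M/1 mean response time s/(1-p), so for this traffic model the two always agree.

```python
def mm1_response_time(service_time, util):
    """Mean response time (wait + service) of an M/M/1 queue: s / (1 - p)."""
    return service_time / (1.0 - util)

def mm1_response_stddev(service_time, util):
    """Std deviation of response time = mean service time * mean queue size,
    with queue size taken as 1 + p/(1-p) (ie including the item in service)."""
    return service_time * (1.0 + util / (1.0 - util))

# For exponential (Erlang-1) traffic the two are identical:
s, p = 2.0, 0.5
print(mm1_response_time(s, p), mm1_response_stddev(s, p))  # both 4.0
```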
© 2006, Monash University, Australia ERLANG Distributions (Diagram: a random - exponential, or Erlang-1 - arrival stream distributed in turn across output channels. With 2 output channels, each output stream is Erlang-2; with 5 channels, each output stream is Erlang-5.) As the number of channels increases, the inter-event time per output channel approaches a constant value ie Erlang-(“infinity”) = constant inter-event times
© 2006, Monash University, Australia The Erlang and Gamma Distributions: ‘Erlang-n’ distribution is a special case of the general GAMMA distribution. The Gamma distribution has a parameter R, calculated as follows: R = (E (x) /s (x) ) 2 where E (x) is the mean response time, and s (x) is the standard deviation of response times If : R is greater than 1 then activity is smoother than random (infinity if the inter-arrival times are constant), R=1 then the activity is random, and R is less than 1, the activity is rougher than random
© 2006, Monash University, Australia The Erlang and Gamma Distributions Erlang-n distributions are those cases when R is an integer (This is sometimes known as ‘Erlang’s Constant’) Use Gamma charts or tables for integer R (Erlang-n distributions) and non-integer R (General Gamma distributions) These charts and tables will be handed out during the lecture The Cumulative graph shown on the earlier slide is the Gamma curve for an Erlang-1 distribution (ie when R=1)
© 2006, Monash University, Australia Use of Gamma Tables/Graphs A very useful approximation of the probability of queues exceeding certain sizes, or response times being within certain constraints, is given by using Gamma graphs or tables Steps: a. Estimate the mean response time E(x) b. Estimate the standard deviation of queuing time s(x) c. Select a Gamma distribution with the parameter R = (E(x)/s(x))^2 d. Use tables or graphs of the Gamma distribution to solve the problem NOTE: This method is approximate only - but results are usually within 10% in practice. More accurate alternatives are: Mathematical calculations (very tedious, and probably inappropriate) Simulation (excellent)
© 2006, Monash University, Australia Example - Use of Gamma Graphs Example 1: We have measured the average response time of a real time communications system to be 3 seconds, and the standard deviation to be 1.5 seconds. What percentage of transactions will exceed: a. 6 seconds b. 9 seconds c. 12 seconds Basic information given in problem statement: E(x) = 3 (Mean response time), s(x) = 1.5 (Standard Deviation) From these, R = (E(x)/s(x))^2 = (3/1.5)^2 = 2^2 = 4 Therefore use the R=4 curve on the Gamma graph ( http://en.wikipedia.org/wiki/Gamma_distribution ), or on the next slide
© 2006, Monash University, Australia Gamma Graph for R=1 & R=4 (Figure: partial Gamma graph; x-axis: Time / Average delay (T/E(x)); y-axis: percent of transactions; curves shown for R=1 and R=4)
© 2006, Monash University, Australia Use of Gamma Graphs, Example 1 (Cont) For case a. T = 6, T/E(x) = 6/3 = 2 Using the R=4 curve on the Gamma graph, for T/E(x) = 2, the probability (t ≤ T) is about 0.96 (Tables give 0.9576) Therefore about 1.00 - 0.96 = 4% exceed 6 seconds.
© 2006, Monash University, Australia Case a expanded (Figure: the same Gamma graph expanded around T/E(x) = 2, curves for R=1 and R=4)
© 2006, Monash University, Australia Use of Gamma Graphs, Example 1 (Cont) For case b. T = 9, T/E(x) = 9/3 = 3 The probability (t ≤ T) is about 0.999 - just below 100% We could say virtually none exceed 9 seconds NOTE: Tables give 0.9977, therefore about 0.23% exceed 9 seconds. Such a figure could not be read from this graph. For case c. From studying these graphs, none could be said to exceed 12 seconds.
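For integer R the tail probability can also be computed directly instead of being read from graphs: an Erlang-R variable with mean E(x) has stage rate lambda = R/E(x), and P(t > T) = e^(-lambda*T) * Sum(k = 0 to R-1) (lambda*T)^k / k!. A sketch (Python; mine, not the handed-out tables) reproducing Example 1:

```python
import math

def erlang_tail(T, mean, R):
    """P(t > T) for an Erlang-R distribution with the given mean (integer R)."""
    lam = R / mean                      # stage rate, so that mean = R / lam
    x = lam * T
    return math.exp(-x) * sum(x**k / math.factorial(k) for k in range(R))

# Example 1: mean 3 s, std dev 1.5 s  ->  R = (3/1.5)^2 = 4
for T in (6, 9, 12):
    print(T, erlang_tail(T, mean=3.0, R=4))
```

This gives about 0.0424 at 6 s (the "4% exceed" figure, matching the table value 0.9576) and about 0.0023 at 9 s (matching 0.9977), exactly the numbers that could not be read off the graph.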
© 2006, Monash University, Australia Gamma Graphs and Tables It is very difficult to accurately read the graphs, particularly when the figure of interest is the difference between the readout and 100%. ie A small error in reading the graph gives a large percentage error in the final result. That is why it is better to use the tables. Consider case b above. If the graph had been read as 0.999, the estimate of t > T would have been over 50% in error.
© 2006, Monash University, Australia Use of Gamma Tables, Example 1 As a designer, you have to satisfy the following requirements: “The network shall have an average response time within 3 seconds, and 90% of responses within 4 seconds” This seems reasonable at first sight. Your task is to select an appropriate line speed. To do so requires that you determine the average delay that will be used in the line speed calculations. This is not easy - as the next slides show.. For a start, there are actually two requirements stated. average within 3 seconds; and 90% within 4 seconds
© 2006, Monash University, Australia Use of Gamma Tables, Example 1 Attempt 1 “A network is to have an average response time within 3 seconds, and 90% of responses within 4 seconds.” Try #1 - aim to meet both of these requirements exactly: 1. E(x) = 3, s(x) is unknown 2. R = (E(x)/s(x))^2 = (3/s(x))^2 3. For the 90% limit, T = 4, T/E(x) = 4/3 = 1.33 4. Using the 90% probability on the Gamma graph, for T/E(x) = 1.33, the approximate value of R = (E(x)/s(x))^2 required is between 15 and 20. 5. Therefore (3/s(x))^2 = 20 (using 20 as the R value), so s(x) = 0.67
© 2006, Monash University, Australia Use of Gamma Tables, Example 1 Attempt 1 Summary: This approach for an average response time of 3 seconds, requires a standard deviation of 0.67 seconds. This is most unusual, if it is even possible in practice.
© 2006, Monash University, Australia Gamma Tables, Example 1, Attempt 2 “A network is to have an average response time within 3 seconds, and 90% of responses within 4 seconds”. Note the emphasis on ‘within’ Try #2 - aim for a network with an average response time of something less than 3 seconds, and ensure that 90% are within 4 seconds. (Nobody will complain!) Let the standard deviation equal the mean response time, as we know that this is the pattern for truly random (exponential) distributions. Therefore E(x) = 3 or less, and s(x) = E(x), so R = (E(x)/s(x))^2 = (1)^2 = 1 For the 90% limit, T = 4, so T/E(x) = 4/E(x) Using the 90% probability on the Gamma graph, and the curve for R = 1, the approximate value of T/E(x) = 2.3 = 4/E(x) Therefore E(x) = s(x) = 4/2.3 = 1.74, say 1.75 seconds
© 2006, Monash University, Australia Gamma Tables, Example 1, Attempt 2 Summary: If we dimension the network for an average response time of 1.75 seconds, with a standard deviation of about 1.75 seconds, we can achieve 90% of transactions completed within 4 seconds, assuming all network response times are truly random (Poisson).
© 2006, Monash University, Australia Gamma Tables, Example 1, Attempt 3 Try #3 - Try #2 was workable, but in practice networks and their host computers are usually a little better than truly random. They frequently operate between R=1.2 and R=2 Let us explore this approach, and see what happens if we relax the average response time to be used in design work. Doing so may reduce costs. Note that we are making a judgement here that R not greater than 2 is OK. Let us assume R=2, and a 2 second average response time. What is the standard deviation required to meet the requirement of “90% within 4 seconds”? For this try, E(x) = 2 and s(x) is unknown Therefore R = (E(x)/s(x))^2 = (2/s(x))^2 For the 90% limit, T = 4, T/E(x) = 4/2 = 2 Using the 90% probability on the Gamma graph, for T/E(x) = 2 the approximate value of R needed is 2, as explained above Since R = (2/s(x))^2 = 2, therefore s(x) = 1.414 seconds
© 2006, Monash University, Australia Gamma Tables, Example 1, Attempt 3 Summary: If we dimension the network for a design average response time of 2 seconds, we have a high probability of meeting the ‘90% within four seconds’ requirement.
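The same Erlang tail calculation can verify Attempts 2 and 3 without the graphs (a Python sketch under the slides' assumptions; integer R only, function name is mine):

```python
import math

def erlang_within(T, mean, R):
    """P(t <= T) for an Erlang-R distribution with the given mean (integer R)."""
    x = R * T / mean
    return 1.0 - math.exp(-x) * sum(x**k / math.factorial(k) for k in range(R))

# Attempt 2: mean 1.75 s, R = 1 (std deviation = mean)
print(erlang_within(4.0, 1.75, 1))   # ~0.898, ie roughly 90% within 4 s
# Attempt 3: mean 2 s, R = 2 (std deviation ~1.41 s)
print(erlang_within(4.0, 2.0, 2))    # ~0.908, comfortably 90% within 4 s
```

Both designs land at or just above the 90%-within-4-seconds target, confirming the graph readings.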
© 2006, Monash University, Australia Summary of the 3 approaches In order to meet the requirements of 90% within 4 seconds, and an average time within 3 seconds, we have calculated three answers, and now include a fourth for comparison:

 Avg Time (s) | Std Deviation (s) | Erlang's constant R | Comments - is it achievable in practice?
 1.75         | 1.75              | 1                   | Yes, but perhaps too pessimistic
 2            | 1.41              | 2                   | Better and more realistic. BE CAREFUL
 3            | 0.67              | 20                  | Standard deviation is too tight. DO NOT USE for network design
 3            | 0                 | infinity            | Constant response time is unachievable. Networks do not operate this way
© 2006, Monash University, Australia Limitations of These Techniques The techniques used here are effective and adequately accurate for most cases. However: They are approximations only They assume random behaviour ie random arrivals and service times (peak loads may be different – eg call centre problem). They assume infinite memory to store queues (usually this runs out since real systems are finite). Use these techniques to gain coarse estimates of behaviour for the system, and how the system will behave under various loads
© 2006, Monash University, Australia Queueing Notation Queues are described by a notation of the form: X/Y/Z (refer lecture 6). where: X = Input Distribution M = Exponential or Random C = Constant G = General Y = Service Time Distribution M, C, G, etc as above Z = Number of Servers The commonest queue is one with random input distribution, random service times, and one server - this is an M/M/1 queue.
© 2006, Monash University, Australia Erlang C formula and Call Centres Call centres are widely used, but the queueing and occupancy issues are rarely appreciated. Most organisations do not understand the interaction between staffing levels and customer wait times. As managers or designers one of the problems you will have to confront is that clientele may have expectations which the mathematics of queueing do not support. The result of this mismatch between fact and expectation can often be interesting….
© 2006, Monash University, Australia Measures of Traffic Loading - “Erlangs” Key factor is the concept of OCCUPIED CAPACITY eg 2 x 10 minute calls are equivalent to 1 x 20 minute call, as far as ‘occupation’ is concerned. Traffic loading is measured in ERLANGS - abbreviation ‘E’ Erlangs of traffic (or Traffic Load) = # Calls per unit time * Avg Holding Time per call eg 20 calls in 60 minutes, average holding time = 3 minutes = (20/60) * 3 = 1 Erlang = 1E If all the traffic items, neatly stacked end-to-end, totally occupied the link for the period considered, then there is one Erlang of traffic.
© 2006, Monash University, Australia Erlang Calculation Examples 30 calls in 120 minutes, average holding time = 4 minutes = (30 / 120) * 4 = 1 E 20 calls in 60 minutes, average holding time = 9 minutes = (20 / 60) * 9 = 3 E 5 calls in 30 minutes, average holding time = 3 minutes = (5 / 30) * 3 = 0.5 E 4 Erlangs of Traffic is measured over a period of 3 hours. There were 120 calls during the time. What was the average holding time? (6 minutes)
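The Erlang arithmetic above can be wrapped as a pair of helpers (Python; function names are mine, not from the slides):

```python
def erlangs(calls, period_minutes, avg_holding_minutes):
    """Traffic load = calls per unit time * average holding time per call."""
    return (calls / period_minutes) * avg_holding_minutes

def avg_holding(erlang_load, period_minutes, calls):
    """Invert the formula: total occupied minutes / number of calls."""
    return erlang_load * period_minutes / calls

print(erlangs(30, 120, 4))        # 1 E
print(erlangs(20, 60, 9))         # 3 E
print(erlangs(5, 30, 3))          # 0.5 E
print(avg_holding(4, 180, 120))   # 6 minutes - the final example above
```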
© 2006, Monash University, Australia Erlangs and Utilisation Erlangs are analogous to Utilisation as used in delay systems queuing. The concept of utilisation (as discussed in ‘delay systems’) is related to Erlangs as follows: Utilisation = (load in Erlangs) / (number of servers) or Erlangs of load = Number of servers * Utilisation eg 30% utilisation is EQUAL to: 0.3 Erlangs of load on a single server queue, or 0.6 E on a two-server queue, or 1.2 E on a four-server queue, or 3.0 E on a 10 server queue
© 2006, Monash University, Australia Erlang C Formula A widely used formula for estimating the wait times and other dimensioning factors of this system is the Erlang C formula (shown here for background information only) Let A = Erlangs of Offered Traffic Let M = Number of Servers Then:

 Probability of Delay = [ (A^M / M!) * (M / (M - A)) ] / [ Sum(k = 0 to M-1) A^k / k! + (A^M / M!) * (M / (M - A)) ]
© 2006, Monash University, Australia Erlang C Formula (2)
© 2006, Monash University, Australia Erlang C Assumptions and Limitations Erlang C assumes: Random (Erlang-1) traffic arrivals Random (Erlang-1) service times First-in/first-out (FIFO) queue discipline All traffic waiting will continue to wait until serviced. Limitations and practicalities Above assumptions are somewhat conservative in practice, and lead to an over-capable installation (ie a call centre designed to the formula can usually handle traffic a little better than the formula indicates) Various proprietary formulas are used by specialist companies. These are usually derived from Erlang C, and take into account ‘baulking’ situations where customers discontinue their wait when it is too long
© 2006, Monash University, Australia Erlang C Implementations Call Centre Managers usually want to know Average wait times Service levels - percentage of calls answered within target wait times Agent or operator busy-ness (Agent occupancy - or “Utilisation”) Calculating using Erlang C formula is difficult and error prone There are many Erlang C calculators on the Internet. These usually give a single answer to a specific set of parameters - which is good, but does not teach the relationships between parameters Erlang-C spreadsheet ( available via the subject web page) plots graphs showing relationships over a range of parameter values
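A minimal Erlang C calculator in the spirit of those tools (a Python sketch of the standard formula, not the subject's spreadsheet; it requires offered traffic A < servers M, otherwise the queue is unstable):

```python
import math

def erlang_c(A, M):
    """Erlang C: probability that an arriving call must wait.
    A = offered traffic in Erlangs, M = number of servers (needs A < M)."""
    top = (A**M / math.factorial(M)) * (M / (M - A))
    bottom = sum(A**k / math.factorial(k) for k in range(M)) + top
    return top / bottom

def avg_wait(A, M, avg_service_time):
    """Mean wait over all calls = P(delay) * service time / (M - A)."""
    return erlang_c(A, M) * avg_service_time / (M - A)

# eg 10 E offered to 12 agents, 3-minute average call:
print(erlang_c(10, 12), avg_wait(10, 12, 3.0))
```

Varying M in a loop shows the relationship the slide mentions: each extra agent cuts both the delay probability and the average wait, with diminishing returns.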
© 2006, Monash University, Australia Queueing Theory in Practice
© 2006, Monash University, Australia Practical Queuing Theory (1) Remember basic ‘single server’ formulas, as single server queues are the most common case. Avg Queue Size = (Util / (1-Util)) items Avg Delay time = (1+qsize) * (service time) These formulas give conservative but reasonably accurate approximation for multiple servers provided that utilisation is less than 50% Generally don’t plan for more than 30 to 50 per cent utilisation for single server queues. Check the graphs to see where multiple server queues turn around the ‘knee’ and always stay below that limit. Drawing a line from the 50% utilisation of a single server to the 100 per cent mark on the X axis gives a conservative upper limit for the multiple server curves.
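The two single-server rules of thumb above, as code (a Python sketch; names are mine):

```python
def avg_queue_size(util):
    """Average number of queued items (single server): util / (1 - util)."""
    return util / (1.0 - util)

def avg_delay(util, service_time):
    """Average delay = (1 + queue size) * service time."""
    return (1.0 + avg_queue_size(util)) * service_time

# At the recommended 50% utilisation ceiling, one item queues on average
# and the delay is twice the service time:
print(avg_queue_size(0.5))      # 1.0 item
print(avg_delay(0.5, 2.0))      # 4.0 seconds for a 2 s service time
```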
© 2006, Monash University, Australia Practical Queuing Theory (2) For situations with extremely limited memory: For single server queues, a pragmatic estimate of the minimum size for the queue pool in a computer system is: (3 to 5 times average queue size) plus 1 element, rounded up to the next whole number. This gives some 93 to 98 per cent probability of the queue not exceeding the pool size. In all cases, provision must be made for the queue size overflowing the pool size. A common technique is to throttle or disable the input. If the pool size is too small, then frequent overflow conditions (eg throttling) will occur. If the pool is too large, then memory resources are wasted.
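The pool-sizing rule as a helper (Python sketch; the factor range 3-5 is the slide's pragmatic guideline, the function name is mine):

```python
import math

def queue_pool_size(avg_q, factor=4):
    """Pragmatic queue-pool size: (3 to 5 times the average queue size)
    plus 1 element, rounded up to the next whole number."""
    return math.ceil(factor * avg_q + 1)

# eg an average queue of 2 items:
print(queue_pool_size(2, 3), queue_pool_size(2, 5))   # 7 to 11 elements
```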
© 2006, Monash University, Australia Practical Queuing Theory (3) Call Centres are a case of ‘single queue with multiple servers’ Use the traditional queuing theory graphs when dealing with communications equipment or single server queues Use the Erlang C formula (spreadsheet and graphs) for a single queue with multiple servers Both sets of formulas can be useful in other domains - eg Bank queues, Supermarket queues, Ticket sales queues, Printing queues, ISP customer port estimates
© 2006, Monash University, Australia Practical Queuing Theory (4) Use these techniques in ‘trade off’ studies. There are some 30 graphs available in the James Martin texts, which cover many other aspects, such as probability of queues exceeding certain sizes. Remember that for all practical purposes, doubling the line speed of a busy line (eg 60% utilisation) reduces delays to about 1/3rd of the original, and costs about 30% more However, doubling the load on a busy line (anything above 35% to 40% utilisation) will probably cause system failure due to overload.
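The "double the speed, about a third the delay" claim can be checked with the single-server delay formula (Python sketch, mine): doubling the line speed halves both the service time and the utilisation, so the delay s/(1-U) drops sharply.

```python
def delay(service_time, util):
    """Single-server mean delay (wait + transmission): s / (1 - util)."""
    return service_time / (1.0 - util)

before = delay(1.0, 0.60)      # 60% utilised line: 2.5 time units
after = delay(0.5, 0.30)       # double the speed: ~0.71 time units
print(after / before)          # ~0.29, ie delays fall to roughly one third
```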
© 2006, Monash University, Australia Queues and End User Psychology Users expect perceived small tasks to be performed more quickly than perceived larger tasks Non linear response times cause problems, eg doubling the load of a lightly loaded line usually trebles the response time. Variability of response times upsets people Consistent or constant times (even if somewhat poor) are more acceptable than wide variations which are sometimes excellent and sometimes very poor. Bad response times or unusual delays frustrate users, who often aggravate the problem by making multiple requests for service, or by trying other commands etc. Important to consider this when sizing a system.
© 2006, Monash University, Australia Circuit Switched/ Queue Sizing Summary The mathematical work behind the graphs and spreadsheets is well proven for the defined situations (eg random traffic etc). Review this section, using the tools and toys available from the Internet and the subject web pages, to gain an intuitive feeling for network behaviour under load. This is important in understanding users’ reaction to a network.
© 2006, Monash University, Australia Circuit Switched/ Queue Sizing Summary The strict mathematical approach is not required in practice - using the graphs is accurate enough because: 1. In practice, the traffic characteristics and loads that are given as the basis for design work are often little better than guesswork. The actual load is often 50% to 100% more than the customer indicated as the design load 2. Traffic loads change (inevitably increase) faster than anticipated. 3. Simulation approaches are best for complex systems - and the price of simulation software is decreasing steadily.
© 2006, Monash University, Australia Tutorial Q&A, Examples. Other design issues – example air conditioning capacity constraints.
© 2006, Monash University, Australia Gamma Tables Class Exercise: A cluster controller has 10 terminals attached. It is connected to its host by a line with an effective line speed of 2000 bps. Each terminal inputs one message per minute. Messages have an average length of 1000 bytes. Question 1 Estimate: a. The average transmission time for each message (ie line time only) b. The average size of the queue c. The average time spent waiting for the message (waiting plus line time) d. The standard deviation of item c above e. The 95% cut-off time (time within which 95% of the traffic will be handled). Question 2 Rework question 1 with: a. Line speed doubled b. Same line speed but message sizes halved c. Doubled line speed and half message size. Present your answers as a table as shown in the next OHP
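One way to attack Question 1 (a sketch only, not the official solution; it assumes a single-server M/M/1 model with random arrivals, and takes the 95% point from the exponential tail, consistent with R = 1):

```python
import math

line_bps = 2000
msg_bytes = 1000
terminals, msgs_per_min = 10, 1

service = msg_bytes * 8 / line_bps               # a. transmit time: 4 s
arrival_rate = terminals * msgs_per_min / 60.0   # messages per second
util = arrival_rate * service                    # line utilisation ~0.67
qsize = util / (1 - util)                        # b. ~2 messages queued
wait = (1 + qsize) * service                     # c. ~12 s (waiting + line)
stddev = wait                                    # d. = mean, for M/M/1
cutoff95 = -wait * math.log(0.05)                # e. ~36 s (exponential tail)
print(service, qsize, wait, stddev, round(cutoff95, 1))
```

Rerunning with `line_bps` or `msg_bytes` changed covers Question 2 and fills in the table on the next OHP.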
© 2006, Monash University, Australia Gnuplot Gamma Function http://en.wikipedia.org/wiki/Image:Gamma_distribution_cdf.png - GPL
© 2006, Monash University, Australia Cluster Controller Exercise Summary

 Part / Case                  | Q1: 2000 bps, 1000 B | Q2a: 4000 bps, 1000 B | Q2b: 2000 bps, 500 B | Q2c: 4000 bps, 500 B
 a. Transmit Time (seconds)   |                      |                       |                      |
 b. Avg Q Size (messages)     |                      |                       |                      |
 c. Avg Wait (seconds)        |                      |                       |                      |
 d. Std Deviation (seconds)   |                      |                       |                      |
 e. 95% Cutoff (seconds)      |                      |                       |                      |
 R Value Used                 |                      |                       |                      |

What do we learn from this?
1 Copyright Ken Fletcher 2004 Australian Computer Security Pty Ltd Printed 30-Nov-15 18:55 Prepared for: Monash University Subj: CSE4884 Network Design.
NETWORKING HARDWARE. What is a Network? Network: A group of two or more computer systems that are linked together and share information.
Internet Applications: Performance Metrics and performance-related concepts E0397 – Lecture 2 10/8/2010.
1 Validation and Verification of Simulation Models.
Lecture 9 Chap 9-1 Chapter 2b Fundamentals of Hypothesis Testing: One-Sample Tests.
What's inside a router? We have yet to consider the switching function of a router - the actual transfer of datagrams from a router's incoming links to.
#11 QUEUEING THEORY Systems Fall 2000 Instructor: Peter M. Hahn
Rensselaer Polytechnic Institute © Shivkumar Kalvanaraman & © Biplab Sikdar1 ECSE-4730: Computer Communication Networks (CCN) Network Layer Performance.
1 Dr. Ali Amiri TCOM 5143 Lecture 8 Capacity Assignment in Centralized Networks.
Data and Computer Communications Circuit Switching and Packet Switching.
6-1 Comparing modem and other technologies. 6-2 Internetwork Processors Switch – makes connections between telecommunications circuits in a network Router.
جلسه دهم شبکه های کامپیوتری به نــــــــــــام خدا.
Data Communication Networks Lec 13 and 14. Network Core- Packet Switching.
Queueing Theory What is a queue? Examples of queues: Grocery store checkout Fast food (McDonalds – vs- Wendy’s) Hospital Emergency rooms Machines waiting.
Copyright ©2011 Pearson Education 8-1 Chapter 8 Confidence Interval Estimation Statistics for Managers using Microsoft Excel 6 th Global Edition.
Data and Computer Communications 8 th and 9 th Edition by William Stallings Chapter 10 – Circuit Switching and Packet Switching.
8-1 Copyright ©2011 Pearson Education, Inc. publishing as Prentice Hall Chapter 8 Confidence Interval Estimation Statistics for Managers using Microsoft.
Lecture 2 Process Concepts, Performance Measures and Evaluation Techniques.
© 2017 SlidePlayer.com Inc. All rights reserved.