Buffer Management in a Switch


Lecture 5: Buffer Management in a Switch

Filling and Draining Queues
Filling: Token Bucket Policy, Buffer-Sharing Policies, Random Early Discard
Draining: Leaky Bucket Mechanism, Round-Robin Mechanism, Weighted Round-Robin Mechanism

Filling: Token Bucket A token bucket allows a certain smoothed average number of bytes per second to pass for a given flow, while also allowing limited short-term bursts of traffic. The bucket is essentially a counter. The counter increments at a constant rate, up to some maximum value; any attempt to increment it beyond that point simply leaves it at its maximum. The count corresponds to the number of tokens available to pay for the transmission of data on the controlled flow.

Filling: Token Bucket Each token corresponds to a fixed number of bytes of transmission (e.g., 1 byte). Each time a frame arrives at the token bucket policing station, the tokens required to pay for that frame are taken from the bucket (counter). If there are sufficient tokens, the frame is accepted normally into the queue. If there are insufficient tokens, the frame is either dropped, put into a lower-priority queue, or marked in some way that prioritizes it for possible loss later. The average rate of appearance of tokens controls the average flow rate. The limit on the bucket controls the maximal credit the flow can accumulate; this maximal credit can be used to legitimize periodic bursts of traffic.
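A minimal sketch of such a policer in Python, assuming one token pays for one byte and that a non-conforming frame is simply reported back to the caller (which may then drop, demote, or mark it); the names here are illustrative, not from the slides:

```python
import time

class TokenBucket:
    """Token-bucket policer: tokens accrue at `rate` per second and
    saturate at `burst` (the maximal credit); one token pays for one
    byte in this sketch."""

    def __init__(self, rate, burst):
        self.rate = rate              # token arrival rate (tokens/second)
        self.burst = burst            # bucket limit (maximal credit)
        self.tokens = burst           # start with a full bucket
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        # Increment the counter for the elapsed time, capped at the maximum.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def conforms(self, frame_len):
        """Return True and deduct tokens if the frame can be paid for;
        otherwise leave the bucket unchanged so the caller can drop,
        demote, or mark the frame."""
        self._refill()
        if self.tokens >= frame_len:
            self.tokens -= frame_len
            return True
        return False
```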

Filling: Buffer-Sharing Policies Where buffers are shared between queues, more policies need to be considered. Each queue may have some reserved number of buffers available only to it, as well as shared access to some other pool of buffers. When a new frame arrives, any such sharing scheme must be interrogated to see if a buffer is available to that queue. Additionally, it is sometimes possible for higher-priority flows to steal buffers from lower-priority flows.
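A sketch of one possible admission check, assuming each queue draws first on its private reservation and then on the common pool; the class and policy are illustrative, not taken from the slides:

```python
class SharedBufferPool:
    """Per-queue reserved buffers plus a common shared pool.
    Illustrative policy: spend the private reservation first,
    then fall back to the shared pool."""

    def __init__(self, reserved, shared):
        self.reserved_left = dict(reserved)  # queue id -> private buffers left
        self.shared_left = shared            # buffers in the common pool

    def try_admit(self, queue_id):
        """Return True and charge one buffer if the frame may be enqueued."""
        if self.reserved_left[queue_id] > 0:
            self.reserved_left[queue_id] -= 1
            return True
        if self.shared_left > 0:
            self.shared_left -= 1
            return True
        # No buffer available: drop the frame (a priority scheme might
        # instead steal a buffer from a lower-priority flow here).
        return False
```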

Filling: Random Early Discard A simple-minded frame-drop policy is to have queues begin to drop frames only when there is no more space for them. However, where multiple TCP/IP flows compete for a common pool of buffers, all such flows then experience frame drop at the same time. When TCP/IP flows experience frame drop, they (may) back off; as a result, the network is starved of traffic as all active flows go idle at once. Random early discard (RED) remedies this synchronization problem. A set of queues sharing a common buffer pool and managed by RED has zero probability of dropping a new frame when the queues are nearly empty (or the free pool is sufficiently large).

Filling: Random Early Discard Once a threshold of fullness has been reached, however, RED begins to drop frames randomly. The probability of dropping rises monotonically until it becomes one when the queues are full. The advantage of RED is that it avoids synchronization of frame drop backoffs and thereby keeps the link full of traffic. Some flows have suffered frame drop prematurely, but the overall use of the expensive resource (the egress link) is maintained. The increasing probability of frame drop provides a gently increasing throttling of the overall flow, right up to the point at which the queue systems must drop all new frames. RED incurs no frame drop when the loads are light and the queues are nearly empty.
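A sketch of the RED drop decision, assuming a linear probability ramp between a minimum-fullness threshold and a full queue; real RED implementations work on a smoothed average queue length, whereas this sketch uses instantaneous occupancy for brevity:

```python
import random

def red_should_drop(occupancy, capacity, min_th=0.5):
    """Random Early Discard drop test (simplified).

    Below min_th fullness: never drop (zero probability when nearly empty).
    Between min_th and full: drop probability rises linearly toward one.
    At full: always drop, as any tail-drop queue must.
    """
    fullness = occupancy / capacity
    if fullness < min_th:
        return False
    if fullness >= 1.0:
        return True
    drop_prob = (fullness - min_th) / (1.0 - min_th)
    return random.random() < drop_prob
```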

Draining: Leaky Bucket Mechanism Smooths potentially bursty traffic flows, thereby bringing links and queues to a more efficient average condition. The bucket holds incoming frames. Of course, the frames are actually in a queue; the leaky bucket is a control mechanism for scheduling frames from that queue for egress. The bucket can emit only a fixed rate of flow (from a hole in the bottom of the bucket). Arriving frames flow into the bucket from the top. Should the input flow exceed the allowed egress flow, the bucket will tend to fill up. Should this continue, the bucket will fill, and frames will spill over the top of the bucket and be lost (i.e., frame drops). Should the ingress flow be sufficiently low, the bucket will tend to drain.

Draining: Leaky Bucket Mechanism When any accumulated frames are gone and the ingress flow is below the maximal egress flow, the actual egress flow will drop to the current ingress flow. This leaky bucket mechanism delays bursts, letting them flow downstream in the network only at some preset maximal rate. The logic of the leaky bucket implies that it is part of the scheduling of a frame from the head of a queue.
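A sketch of this draining behavior, assuming frames queue in a bounded bucket and a periodic tick emits them at a fixed maximal rate; the tick-based pacing and frame-granularity draining are simplifications for illustration:

```python
from collections import deque

class LeakyBucket:
    """Leaky-bucket shaper: frames enter a bounded queue and leave at a
    fixed rate, so bursts are delayed rather than forwarded at once."""

    def __init__(self, depth, drain_per_tick):
        self.queue = deque()
        self.depth = depth                    # bucket capacity in frames
        self.drain_per_tick = drain_per_tick  # preset maximal egress rate

    def arrive(self, frame):
        if len(self.queue) >= self.depth:
            return False  # bucket overflows: the frame spills and is lost
        self.queue.append(frame)
        return True

    def tick(self):
        """Emit up to drain_per_tick frames; when the bucket holds fewer,
        the egress flow drops to match the current ingress flow."""
        out = []
        for _ in range(min(self.drain_per_tick, len(self.queue))):
            out.append(self.queue.popleft())
        return out
```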

Draining: Round-Robin Mechanism A method to share egress link capacity evenly among multiple queues or flows competing for service at a mux that feeds the output link. Round robin is the simplest such sharing mechanism: each queue is permitted to emit one frame, should it have an available frame, in strict rotation with the other queues. In its pure form, round robin pays no attention to the length of the packets, so the vagaries of packet-length distributions may result in unfair use of link capacity, but over time such unfairness is generally assumed to even out.
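A sketch of strict-rotation service over a set of queues (pure round robin: one frame per turn, regardless of frame length):

```python
from collections import deque

def round_robin(queues):
    """Visit each queue in strict rotation, emitting one frame per turn;
    an empty queue simply loses its turn."""
    schedule = []
    while any(queues):
        for q in queues:
            if q:  # queue has an available frame
                schedule.append(q.popleft())
    return schedule

# Example: three flows with unequal backlogs.
flows = [deque(["a1", "a2", "a3"]), deque(["b1"]), deque(["c1", "c2"])]
print(round_robin(flows))  # ['a1', 'b1', 'c1', 'a2', 'c2', 'a3']
```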

Draining: Weighted Round-Robin Mechanism Allows competing flows or queues to have different access to the shared egress path. WRR assumes an average packet length, then computes a normalized, weighted number of packets to be emitted by each queue in turn, based on the weight assigned to each queue.
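A sketch of that computation, assuming the per-queue weights are normalized into per-round frame quotas using an assumed mean packet length; the normalization shown is one common choice, not necessarily the exact formula the slides intend:

```python
from collections import deque

def wrr_quotas(weights, mean_pkt_len, round_bytes):
    """Convert queue weights into per-round frame quotas: each queue's
    weighted share of the round's bytes, normalized by the assumed mean
    packet length (rounded, with a floor of one frame per round)."""
    total = sum(weights)
    return [max(1, round(w / total * round_bytes / mean_pkt_len))
            for w in weights]

def weighted_round_robin(queues, quotas):
    """Serve each queue up to its quota of frames per rotation."""
    schedule = []
    while any(queues):
        for q, quota in zip(queues, quotas):
            for _ in range(quota):
                if not q:
                    break
                schedule.append(q.popleft())
    return schedule

# Example: weights 3:1, 1500-byte mean packets, 6000-byte rounds -> quotas [3, 1].
quotas = wrr_quotas([3, 1], mean_pkt_len=1500, round_bytes=6000)
flows = [deque(range(6)), deque(range(3))]
print(weighted_round_robin(flows, quotas))  # [0, 1, 2, 0, 3, 4, 5, 1, 2]
```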