Buffer Management in a Switch

1 Lecture 5: Buffer Management in a Switch

2 Filling and Draining Queues
Filling: Token Bucket Policy; Buffer-Sharing Policies; Random Early Discard
Draining: Leaky Bucket Mechanism; Round-Robin Mechanism; Weighted Round-Robin Mechanism

3 Filling: Token Bucket A token bucket allows a certain smoothed average number of bytes per second to pass for a given flow, while also allowing limited short-term bursts of traffic. The bucket is essentially a counter. The counter increments at a constant rate, up to some maximum value; any attempt to increment it beyond that maximum simply leaves it at the maximum. The count corresponds to the number of tokens available to pay for the transmission of data on the controlled flow.

4 Filling: Token Bucket Each token corresponds to a fixed number of bytes of transmission (e.g., 1 byte). Each time a frame arrives at the token bucket policing station, the bucket (counter) is checked for the number of tokens required to pay for that frame. If there are sufficient tokens, they are removed and the frame is accepted normally into the queue. If there are insufficient tokens, the frame is either dropped, put into a lower-priority queue, or marked in some way that prioritizes it for possible loss later. The average rate of appearance of tokens controls the average flow rate. The limit on the bucket controls the maximal credit the flow can accumulate; this maximal credit can be used to legitimize periodic bursts of traffic.
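The policing step described above can be sketched in Python. This is a minimal illustration, not the lecture's implementation; the class name, the one-token-per-byte convention, and the parameter names (rate, capacity) are all assumptions chosen for clarity:

```python
import time

class TokenBucket:
    """Token-bucket policer sketch: the counter (tokens) accrues at a
    constant rate and saturates at a maximum (the bucket capacity)."""

    def __init__(self, rate_bytes_per_s, capacity_bytes):
        self.rate = rate_bytes_per_s    # controls the average flow rate
        self.capacity = capacity_bytes  # maximal burst credit
        self.tokens = capacity_bytes    # start with a full bucket
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        # Increment at a constant rate, saturating at the maximum.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now

    def admit(self, frame_len_bytes):
        """Return True if the frame is accepted into the queue; if False,
        the caller drops, demotes, or marks the frame instead."""
        self._refill()
        if self.tokens >= frame_len_bytes:  # here one token pays for one byte
            self.tokens -= frame_len_bytes
            return True
        return False
```

A queue fed by such a policer accepts bursts up to the accumulated credit, then throttles the flow back to the token rate.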

5 Filling: Buffer-Sharing Policies
Where buffers are shared between queues, more policies need to be considered. Each queue may have some reserved number of buffers available only to it, as well as shared access to some other pool of buffers. When a new frame arrives, any such sharing scheme must be consulted to see whether a buffer is available to that queue. Additionally, it is sometimes possible for higher-priority flows to steal buffers from lower-priority flows.
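The reserved-plus-shared allocation check might look like the following Python sketch (a hypothetical scheme; the names and the try-reserve-first ordering are assumptions, and priority-based stealing is omitted):

```python
class SharedBufferPool:
    """Each queue has a private reserve of buffers plus access to a
    common shared pool (illustrative policy, not from the lecture)."""

    def __init__(self, shared_size, reserved):
        self.shared_free = shared_size       # buffers any queue may claim
        self.reserved_free = dict(reserved)  # queue_id -> private buffers left

    def allocate(self, queue_id):
        """Consult the scheme for an arriving frame: try the queue's own
        reserve first, then fall back to the shared pool."""
        if self.reserved_free.get(queue_id, 0) > 0:
            self.reserved_free[queue_id] -= 1
            return True
        if self.shared_free > 0:
            self.shared_free -= 1
            return True
        return False  # no buffer available: the frame cannot be queued
```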

6 Filling: Random Early Discard
A simple-minded frame-drop policy is to have queues begin to drop packets only when there is no more space for frames. However, where multiple TCP/IP flows compete for a common pool of buffers, all such flows then experience frame drop at the same time. When TCP/IP flows experience frame drop, they (may) back off; as a result, the network is starved of traffic as all active flows go idle at once. Random early discard (RED) remedies this synchronization problem. A set of queues sharing a common buffer pool and managed by RED has zero probability of dropping a new frame when the queues are nearly empty (or the free pool is sufficiently large).

7 Filling: Random Early Discard
Once a threshold of fullness has been reached, however, RED begins to drop frames randomly. The probability of dropping rises monotonically until it reaches one when the queues are full. The advantage of RED is that it avoids synchronization of frame-drop backoffs and thereby keeps the link full of traffic. Some flows suffer frame drop prematurely, but overall use of the expensive resource (the egress link) is maintained. The increasing probability of frame drop provides a gently increasing throttling of the overall flow, right up to the point at which the queue system must drop all new frames. RED incurs no frame drop when loads are light and the queues are nearly empty.
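The drop decision described on these two slides can be sketched as a simple Python function. This is a simplified illustration with a linear ramp between two assumed thresholds (min_th, max_th); real RED variants typically operate on a moving average of queue length rather than the instantaneous value:

```python
import random

def red_drop(queue_len, min_th, max_th):
    """RED drop decision sketch: zero drop probability below min_th,
    rising monotonically (linearly here) to one at max_th and above."""
    if queue_len <= min_th:
        return False                 # light load: never drop
    if queue_len >= max_th:
        return True                  # queues full: drop all new frames
    # In between, drop with a probability proportional to fullness.
    p = (queue_len - min_th) / (max_th - min_th)
    return random.random() < p
```

Because each flow sees drops at a different, random moment, their backoffs desynchronize and the egress link stays busy.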

8 Draining: Leaky Bucket Mechanism
The leaky bucket smoothes potentially bursty traffic flows, thereby bringing links and queues to a more efficient average condition. The bucket holds incoming frames. Of course, the frames are actually in a queue; the leaky bucket is a control mechanism for scheduling frames from that queue for egress. The bucket can emit only a fixed rate of flow (from a hole in the bottom of the bucket). Arriving packets flow into the bucket from the top. Should the input flow exceed the allowed egress flow, the bucket will tend to fill up. Should this continue, the bucket will fill, and frames will spill over the top of the bucket and be lost (i.e., packet drops). Should the ingress flow be sufficiently low, the bucket will tend to drain.

9 Draining: Leaky Bucket Mechanism
When any accumulated frames are gone and the ingress flow is below the maximal egress flow, the actual egress flow will drop to the current ingress flow. This leaky bucket mechanism delays bursts, letting them flow downstream in the network only at some preset maximal rate. The logic of the leaky bucket implies that it is part of the scheduling of a packet from the head of a queue.
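A minimal Python sketch of the mechanism, with arrivals at the top and a fixed drain rate at the bottom (the class, its tick-based timing, and its parameter names are assumptions made for illustration):

```python
class LeakyBucket:
    """Leaky-bucket shaper sketch: frames queue in the bucket and drain
    at a fixed rate; arrivals beyond capacity spill over and are lost."""

    def __init__(self, capacity, drain_per_tick):
        self.capacity = capacity            # bucket depth, in frames
        self.drain_per_tick = drain_per_tick  # fixed egress rate
        self.queue = []                     # the frames "in the bucket"

    def arrive(self, frame):
        """Frame flows in from the top, or spills if the bucket is full."""
        if len(self.queue) < self.capacity:
            self.queue.append(frame)
            return True
        return False                        # overflow: frame is dropped

    def tick(self):
        """Emit at most drain_per_tick frames from the bottom."""
        emitted = self.queue[:self.drain_per_tick]
        del self.queue[:self.drain_per_tick]
        return emitted
```

A burst larger than the drain rate accumulates in the queue and trickles downstream at the preset maximal rate, exactly the delaying behavior the slide describes.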

10 Draining: Round-Robin Mechanism
A method to share egress link capacity evenly among multiple queues or flows competing for service at a mux that feeds the output link. Round robin is the simplest such sharing mechanism: each queue is permitted to emit one frame, should it have an available frame, in strict rotation with the other queues. In its pure form, round robin pays no attention to the length of the packets, so the vagaries of packet-length distributions may result in unfair use of link capacity, but over time such unfairness is generally assumed to even out.
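The strict rotation can be sketched in a few lines of Python (draining all queues in one call is an illustrative simplification; a real scheduler would emit one frame per service opportunity):

```python
from collections import deque

def round_robin(queues):
    """Strict round-robin sketch: on each rotation, every non-empty queue
    emits exactly one frame; packet length is ignored."""
    emitted = []
    while any(queues):          # loop until every queue is drained
        for q in queues:
            if q:               # queue has an available frame
                emitted.append(q.popleft())
    return emitted
```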

11 Weighted Round-Robin Mechanism
Allows competing flows or queues to have different access to the shared egress path. WRR assumes an average packet length, then computes a normalized, weighted number of packets to be emitted by each queue in turn, based on the weight assigned to each queue.
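Extending the previous sketch, WRR can be illustrated by letting each queue emit up to its weight in frames per rotation. The weights here are assumed to be the normalized per-queue frame counts the slide describes (i.e., the assigned bandwidth share already divided by the assumed average packet length):

```python
from collections import deque

def weighted_round_robin(queues, weights):
    """WRR sketch: queue i may emit up to weights[i] frames per rotation,
    where weights are frame counts normalized from the queue's assigned
    share using an assumed average packet length."""
    emitted = []
    while any(queues):
        for q, w in zip(queues, weights):
            for _ in range(w):      # up to w frames for this queue
                if not q:
                    break
                emitted.append(q.popleft())
    return emitted
```

With weights [2, 1], the first queue gets twice the service opportunities of the second, approximating a 2:1 split of the egress link.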

