2 References
- Panos C. Lekkas, Network Processors: Architectures, Protocols, and Platforms, McGraw-Hill.
- Tam-Anh Chu, "WAN Multiprotocol Traffic Management: Theory & Practice," Communications Design Conference, San Jose, Sept. 23-26, 2002.
3 Why do we need Traffic Management?
- Until recently, traffic was treated under a best-effort paradigm.
- The Internet Protocol has become the common network protocol for multi-service networks and applications.
- With the emergence of new applications, especially those provided by the new generation of wireless protocols, the situation will only get more complex in the future.
- These applications have different performance requirements.
- We have to use network resources efficiently to have a profitable network.
4 Traffic Management for Best Effort Traffic
- Best effort should not be interpreted as no effort!
- In reality, edge routers are frequently over-subscribed.
- In the best-effort paradigm we want to treat all users equally, but the connection to the core network is usually over-subscribed and users are distributed non-uniformly across the access ports.
- A simple round-robin scheduler treats all PORTS equally, while we want to treat all USERS equally.
- The situation gets more complex when:
  - The number of active users changes dynamically.
  - Each user has different applications, requirements, and service-level agreements.
[Figure: DSL modems feed DSLAMs 1..K and Ethernet Switches 1..30, which share an edge router's access ports toward the core network over OC-48.]
5 Traffic Management Objective
- To share the network resources (bandwidth and memory) unequally, in a controlled way, among the users and applications.
- Traffic flows should be identified and classified into multiple queues so that QoS can be controlled.
- Network protocols and architectures such as IntServ, DiffServ, and MPLS help us provide QoS in the network.
- QoS seeks to specify and control five fundamental network variables:
  - Bandwidth or throughput
  - Latency
  - Jitter
  - Packet loss
  - Link availability
6 Traffic Management vs. Traffic Engineering
- Traffic management is performed on the data plane, over the packets:
  - Resource allocation: scheduling, shaping
  - Congestion control
  - Packet discard
- Traffic engineering is performed on the control plane to set up the routes and paths:
  - Load balancing
  - Failure recovery
  - Link utilization control
7 Traffic Management Obstacles
- We have enough knowledge about the algorithms and their properties:
  - Bounds on delay and memory requirements
- Unresolved challenges:
  - Is there a systematic way to set the parameters? To some extent the answer is yes, but the theoretical bounds are very loose.
  - What if we set the parameters wrong? Is there a systematic way to pinpoint the problem? As far as I know, the answer is no.
8 Major Tasks and Algorithms
- Statistics gathering
- Traffic policing
- Traffic shaping
- Scheduling
- Queueing and buffer management
- Congestion avoidance and packet dropping
9 Statistics
- We need to gather statistics:
  - Number of packet arrivals for each flow
  - Number of discarded packets for each flow
  - Number of non-conforming packets
- On-chip counters are usually used to gather this information.
- Only the TM has information about the network congestion level, so packet marking should be done based on the congestion level.
10 Packet Marking
- It is important to make sure that packets conform to the SLA.
- In the DiffServ AF PHB, a marking algorithm such as the two-rate three-color marker (trTCM) or the single-rate three-color marker (srTCM) establishes the packet-discard precedence:
  - trTCM uses two rates and three packet colors; useful when the peak rate should be enforced.
  - srTCM uses one rate and three packet colors; useful when only the burst size matters.
- Green maps to AFx1, Yellow to AFx2, and Red to AFx3.
11 Two Rate TCM
- Parameters:
  - Peak Information Rate (PIR) and Peak Burst Size (PBS)
  - Committed Information Rate (CIR) and Committed Burst Size (CBS)
  - PIR > CIR
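The trTCM can be sketched as two token buckets, one refilled at the peak rate and one at the committed rate. Below is a minimal color-blind sketch in the spirit of RFC 2698; class and parameter names are illustrative, and rates are in bytes per second.

```python
# Sketch of a color-blind two-rate three-color marker (trTCM).
# A packet is red if it exceeds the peak bucket, yellow if it fits the
# peak bucket but not the committed bucket, and green otherwise.

class TrTCM:
    def __init__(self, cir, cbs, pir, pbs):
        self.cir, self.cbs = cir, cbs      # committed rate (B/s) and burst (B)
        self.pir, self.pbs = pir, pbs      # peak rate (B/s) and burst (B)
        self.tc, self.tp = cbs, pbs        # both token buckets start full
        self.last = 0.0                    # time of the last update

    def mark(self, size, now):
        # Refill both buckets for the elapsed time, capped at the burst sizes.
        dt = now - self.last
        self.last = now
        self.tc = min(self.cbs, self.tc + self.cir * dt)
        self.tp = min(self.pbs, self.tp + self.pir * dt)
        if size > self.tp:                 # exceeds even the peak allowance
            return "red"
        if size > self.tc:                 # within peak, above committed
            self.tp -= size
            return "yellow"
        self.tc -= size                    # fully conforming
        self.tp -= size
        return "green"
```

For example, with CIR=100, CBS=200, PIR=200, PBS=400, three back-to-back 150-byte packets at the same instant come out green, yellow, and red.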
12 Single Rate TCM
- Parameters:
  - Committed Information Rate (CIR)
  - Committed Burst Size (CBS)
  - Excess Burst Size (EBS)
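The srTCM uses a single token rate feeding a committed bucket, with overflow tokens spilling into an excess bucket, in the spirit of RFC 2697. A hedged color-blind sketch, with illustrative names:

```python
# Sketch of a color-blind single-rate three-color marker (srTCM).
# Green if the committed bucket covers the packet, yellow if only the
# excess bucket does, red otherwise.

class SrTCM:
    def __init__(self, cir, cbs, ebs):
        self.cir, self.cbs, self.ebs = cir, cbs, ebs
        self.tc, self.te = cbs, ebs        # both buckets start full
        self.last = 0.0

    def mark(self, size, now):
        # Tokens accrue at CIR; fill the committed bucket first,
        # then spill the remainder into the excess bucket.
        tokens = self.cir * (now - self.last)
        self.last = now
        room = self.cbs - self.tc
        self.tc += min(tokens, room)
        self.te = min(self.ebs, self.te + max(0.0, tokens - room))
        if size <= self.tc:
            self.tc -= size
            return "green"
        if size <= self.te:
            self.te -= size
            return "yellow"
        return "red"
```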
13 Traffic Shaping
- Traffic shaping is usually done in the egress line card to shape and smooth the outgoing traffic.
- The token rate regulates the transfer of packets: if sufficient tokens are available, packets enter the network without delay.
- The bucket depth B determines how much burstiness is allowed into the network.
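A token-bucket shaper with rate r and depth B can be sketched as follows: a packet that finds enough tokens leaves immediately, otherwise it is held until the bucket refills. This is a minimal single-queue sketch with illustrative names, not a line-card implementation.

```python
# Minimal token-bucket shaper sketch: computes the earliest time a
# packet may be released into the network.

class TokenBucketShaper:
    def __init__(self, rate, depth):
        self.rate, self.depth = rate, depth  # tokens/s, max tokens (bytes)
        self.tokens = depth                  # bucket starts full
        self.last = 0.0

    def release_time(self, size, now):
        # Refill for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.depth,
                          self.tokens + self.rate * (now - self.last))
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return now                       # enough tokens: no delay
        # Otherwise wait until the shortfall has been refilled.
        wait = (size - self.tokens) / self.rate
        self.tokens = 0.0
        self.last = now + wait
        return now + wait
```

With rate=100 and depth=100, a 100-byte packet at t=0 leaves at once; a 50-byte packet arriving immediately after must wait 0.5 s for tokens.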
14 Congestion Management
- We discard packets to avoid congestion.
- Simple tail dropping results in TCP global synchronization.
- RED starts to randomly drop packets when buffer occupancy exceeds Tmin.
- In WRED, different queues have different buffer-occupancy thresholds.
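RED's drop decision can be sketched as a linear ramp of drop probability between the two thresholds, applied to an exponentially weighted average of the queue length. A simplified sketch (the original algorithm also counts packets since the last drop, which is omitted here); parameter names are illustrative.

```python
# Sketch of RED's drop policy: no drops below Tmin, certain drop at or
# above Tmax, and a linearly increasing drop probability in between.
import random

def red_drop(avg, tmin, tmax, pmax):
    """Return True if the arriving packet should be dropped."""
    if avg < tmin:
        return False                       # below Tmin: never drop
    if avg >= tmax:
        return True                        # at or above Tmax: always drop
    p = pmax * (avg - tmin) / (tmax - tmin)  # linear ramp up to pmax
    return random.random() < p

def update_avg(avg, qlen, w=0.002):
    """EWMA of the instantaneous queue length (w is the averaging weight)."""
    return (1 - w) * avg + w * qlen
```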
15 Dropping Policy in RED
Floyd, S., and Jacobson, V., "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, V.1 N.4, August 1993, pp. 397-413.
16 Scheduling
- The scheduler decides which queue is served next.
- Round-Robin Scheduler: every queue is served in round-robin fashion.
- Weighted Round Robin (WRR): queue i is served Ni times per round-robin cycle.
- Priority Queueing: a lower-priority queue is served only when there is no higher-priority backlogged traffic.
- Weighted Fair Queuing (WFQ):
  - Provides minimum bandwidth guarantees (their fair shares) for the different queues.
  - Excess bandwidth (if any) is distributed among the backlogged flows in proportion to their weights.
  - Proven to provide delay bounds for well-behaved traffic flows.
- Deficit Round Robin (DRR): a good approximation of WFQ.
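The WRR rule above can be sketched in a few lines: one round visits each queue and serves up to its weight Ni packets. A toy sketch with illustrative names; queues hold packet identifiers.

```python
# Toy weighted round-robin: queue i is served up to weights[i] times
# in each round-robin cycle.
from collections import deque

def wrr_round(queues, weights):
    """Serve up to weights[i] packets from queue i; return the served packets."""
    served = []
    for q, w in zip(queues, weights):
        for _ in range(w):
            if q:                          # skip once the queue is empty
                served.append(q.popleft())
    return served
```

For instance, with weights [2, 1], a round over queues ["a1","a2","a3"] and ["b1"] serves a1, a2, b1.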
17 GPS and WFQ
- One problem with WRR is that it penalizes short packets; Generalized Processor Sharing (GPS) takes care of this problem.
- In GPS each flow i is assigned a weight φ_i.
- The service rate of any non-empty queue i on a link of capacity C is
  g_i = (φ_i / Σ_j φ_j) · C, where the sum runs over the backlogged flows j.
- Using GPS we can bound the delay of packets: if a flow is limited by a token-bucket specification, where B_i and R_i are the bucket size and token rate, and R_i ≤ g_i, then the delay is bounded by D_i ≤ B_i / g_i.
18 GPS and WFQ
- Implementing GPS explicitly is only possible if we can send and serve flows at bit granularity.
- GPS is said to be a fluid policy, because it needs to serve fractions of packets.
- WFQ is a packetized policy that tracks the output of GPS.
- The idea is to calculate the finishing time every packet would have if we were able to implement GPS; WFQ always serves the packet with the smallest finishing time.
- WFQ has a bounded delay too: a packet finishes at most L_max/C later than under GPS (L_max the maximum packet size, C the link capacity), so D_i ≤ B_i / g_i + L_max / C.
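The "serve the smallest finishing time" rule can be sketched for packets that all arrive at once: each packet is stamped with a virtual finish time F = F_prev + L/φ and served in increasing F order. This is a simplified sketch only; real WFQ also maintains a GPS virtual clock for packets arriving over time. Names are illustrative.

```python
# Simplified WFQ ordering sketch: stamp each packet with a virtual
# finish time and serve in increasing finish-time order.

def wfq_order(packets, weights):
    """packets: list of (flow, length) tuples, all arriving at time 0.
    weights: dict flow -> weight. Returns the service order."""
    last_finish = {flow: 0.0 for flow in weights}
    tagged = []
    for seq, (flow, length) in enumerate(packets):
        # Virtual finish time: previous finish of this flow plus L/weight.
        finish = last_finish[flow] + length / weights[flow]
        last_finish[flow] = finish
        tagged.append((finish, seq, flow, length))
    tagged.sort()                          # smallest finish time first
    return [(flow, length) for _, _, flow, length in tagged]
```

With equal 100-byte packets and weights {a: 1, b: 3}, flow b's packet gets the smaller finish time (100/3 vs. 100) and is served first.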
20 Arrival and Service Curves; Backlog Bound
Source: Patrick Maillé, "An Introduction to Network Calculus."
[Figures: arrival and service curves; backlog bound.]
21 DRR
- Each queue has a deficit counter.
- At the beginning of each round, the deficit counter of each backlogged queue is incremented by its quantum value.
- The quantum value determines how many bytes we want to schedule from that queue in each round.
- A round is one round-robin iteration over the backlogged queues.
- In a round, every queue whose head-of-line packet is no longer than its deficit counter is served.
- When a queue is served, its deficit counter is reduced by the packet length.
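The steps above can be sketched as one DRR round over byte-length queues. A minimal sketch with illustrative names; as in the usual formulation, a queue that empties forfeits its remaining deficit.

```python
# One Deficit Round Robin round: each backlogged queue's deficit grows
# by its quantum, and head-of-line packets are sent while they fit.
from collections import deque

def drr_round(queues, quanta, deficits):
    """queues: deques of packet lengths (bytes). Mutates deficits.
    Returns the (queue_index, packet_length) pairs sent this round."""
    sent = []
    for i, q in enumerate(queues):
        if not q:
            continue                       # only backlogged queues get quantum
        deficits[i] += quanta[i]
        while q and q[0] <= deficits[i]:   # head packet fits in the deficit
            size = q.popleft()
            deficits[i] -= size
            sent.append((i, size))
        if not q:
            deficits[i] = 0                # an emptied queue loses its deficit
    return sent
```

For example, with quanta of 500 bytes, a queue holding two 300-byte packets sends one per round (carrying a 200-byte deficit forward), while a queue with a single 100-byte packet drains immediately.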