
Fair queueing and congestion control Jim Roberts (France Telecom) Joint work with Jordan Augé Workshop on Congestion Control Hamilton Institute, Sept 2005.


1 Fair queueing and congestion control Jim Roberts (France Telecom) Joint work with Jordan Augé Workshop on Congestion Control Hamilton Institute, Sept 2005

2 Fairness and congestion control
- fair sharing: an objective as old as congestion control
  - cf. RFC 970, Nagle, 1985
  - non-reliance on user cooperation
  - painless introduction of new transport protocols
  - implicit service differentiation
- fair queueing is scalable and feasible
  - accounting for the stochastics of traffic
  - a small number of flows to be scheduled
  - independent of link speed
- performance evaluation of congestion control
  - must account for a realistic traffic mix
  - impact of buffer size, TCP version, scheduling algorithm

3 Flow level characterization of Internet traffic
- traffic is composed of flows
  - an instance of some application
  - (same identifier, minimum packet spacing)
- flows are "streaming" or "elastic"
  - streaming SLS = "conserve the signal"
  - elastic SLS = "transfer as fast as possible"
[figure: packet timelines of a streaming and an elastic flow from start to end; packets within a flow are spaced by less than a threshold T, and a silence longer than T ends the flow]

4 Characteristics of flows
- arrival process
  - Poisson session arrivals: a succession of flows and think times
- size/duration
  - heavy tailed, correlated
[figure: flow arrivals within a session, from start of session to end of session, separated by think times]

5 Characteristics of flows
- arrival process
  - Poisson session arrivals: a succession of flows and think times
- size/duration
  - heavy tailed, correlated
- flow peak rate
  - streaming: rate set by the codec
  - elastic: rate set by exogenous limits (access link, ...)
[figure: rate vs duration for streaming and elastic flows]
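The flow model on slides 4 and 5 can be sketched as a toy traffic generator. This is a minimal illustration, not the authors' simulator: the mean size and Pareto shape are illustrative assumptions, and both function names are invented here.

```python
import random

def pareto_flow_size(mean=30_000.0, shape=1.5):
    """Heavy-tailed flow size in bytes, drawn from a Pareto distribution.
    A shape <= 2 gives infinite variance, matching the slide's
    'heavy tailed' sizes; mean and shape values are illustrative."""
    scale = mean * (shape - 1) / shape          # chosen so E[size] = mean
    return scale * random.paretovariate(shape)  # Pareto with minimum 1, rescaled

def session_arrivals(rate, horizon):
    """Poisson session arrival times on [0, horizon): i.i.d. exponential
    gaps between successive session starts."""
    t, times = 0.0, []
    while True:
        t += random.expovariate(rate)
        if t >= horizon:
            return times
        times.append(t)
```

Each session would then contain a succession of flows (with sizes from `pareto_flow_size`) separated by think times; only the session starts are Poisson.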

6 Three link operating regimes
- "transparent": negligible loss and delay; FIFO is sufficient
- "elastic": scheduling needed; excellent for elastic flows, some streaming loss
- "congested": low throughput, significant loss; overload control needed
[figure: the overall rate of flows in progress relative to link capacity determines the regime]

7 Performance of fair sharing without rate limit (i.e., all flows bottlenecked)
- a fluid simulation:
  - Poisson flow arrivals
  - no exogenous peak rate limit, so flows are all bottlenecked
  - load = 0.5 (arrival rate x size / capacity)
[figure: incoming flows sharing the link rate over a 20-second window; the average per-flow rate alternates between high and low]
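The fluid simulation described above can be sketched in a few lines: Poisson flow arrivals, no peak-rate limit, and the link capacity split equally over flows in progress. This is an assumed reconstruction, not the authors' code; flow sizes are taken exponential here for simplicity (processor sharing is insensitive to the size distribution, as slide 13 notes).

```python
import random

def simulate_ps(load, capacity=1.0, mean_size=1.0, horizon=10_000.0, seed=1):
    """Event-driven fluid processor-sharing simulation.
    Returns the time-average number of flows in progress."""
    rng = random.Random(seed)
    lam = load * capacity / mean_size   # arrival rate giving the target load
    t, next_arrival = 0.0, rng.expovariate(lam)
    residual = []                       # remaining work of each flow in progress
    area = 0.0                          # integral of n(t) dt
    while t < horizon:
        n = len(residual)
        # time until the smallest flow finishes, each flow served at rate C/n
        t_complete = (min(residual) * n / capacity) if n else float("inf")
        t_arr = next_arrival - t
        dt = min(t_arr, t_complete)
        if t + dt > horizon:
            area += n * (horizon - t)
            break
        if n:
            drain = dt * capacity / n   # work drained from every flow
            residual = [r - drain for r in residual]
        area += n * dt
        t += dt
        if t_arr <= t_complete:         # next event was an arrival
            residual.append(rng.expovariate(1.0 / mean_size))
            next_arrival = t + rng.expovariate(lam)
        else:                           # next event was a completion
            residual = [r for r in residual if r > 1e-12]
    return area / horizon
```

At load 0.5 the time-average number of flows in progress comes out close to the theoretical value rho / (1 - rho) = 1.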

8 The process of flows in progress depends on link load
[plot: flows in progress (0-30) over time, load 0.5]

9 The process of flows in progress depends on link load
[plot: flows in progress (0-30) over time, load 0.6]

10 The process of flows in progress depends on link load
[plot: flows in progress (0-30) over time, load 0.7]

11 The process of flows in progress depends on link load
[plot: flows in progress (0-30) over time, load 0.8]

12 The process of flows in progress depends on link load
[plot: flows in progress (0-30) over time, load 0.9]

13 Insensitivity of processor sharing: a miracle of queueing theory!
- link sharing behaves like an M/M/1 queue
  - assuming only Poisson session arrivals
- if flows are bottlenecked, E[flows in progress] = ρ / (1 - ρ)
  - i.e., the average is about 9 for ρ = 0.9, but it tends to infinity as ρ tends to 1
- but, in practice, ρ < 0.5 and yet E[flows in progress] = O(10^4)!
[plot: E[flows in progress] (0-20) against load ρ (0 to 1)]
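The M/M/1-like formula on this slide is simple enough to check directly; a one-line helper (the function name is ours) makes the point that the mean stays modest even near saturation:

```python
def expected_flows_in_progress(rho):
    """E[flows in progress] under processor sharing at load rho,
    the M/M/1-like formula rho / (1 - rho), valid for rho < 1."""
    if not 0.0 <= rho < 1.0:
        raise ValueError("load must be in [0, 1)")
    return rho / (1.0 - rho)

print(expected_flows_in_progress(0.5))  # 1.0
print(expected_flows_in_progress(0.9))  # ~9, even at 90% load
```

The contrast the slide draws is that real links at load < 0.5 carry O(10^4) flows in progress: those flows are rate-limited elsewhere, not bottlenecked here, so the formula does not apply to them.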

14 Trace data
- an Abilene link (Indianapolis-Cleveland), from NLANR
  - OC 48, utilization 16%
  - flow rates ranging from 10 Kb/s to 10 Mb/s
  - ~7000 flows in progress at any time
[plot: a 10-second trace on the 2.5 Gb/s link, separating flows faster than 2.5 Mb/s from flows slower than 250 Kb/s]

15 Most flows are non-bottlenecked
- each flow emits packets rarely
  - e.g., with ~7000 flows on the 2.5 Gb/s link, a 15 Mb/s flow sends a packet every ~1 ms, against a ~5 µs packet transmission time
- little queueing at low loads
  - FIFO is adequate
  - performance like a modulated M/G/1 queue
- at higher loads, a mix of bottlenecked and non-bottlenecked flows...

16 Fair queueing is scalable and feasible
- fair queueing deals only with flows having packets in queue
  - <100 bottlenecked flows (at load < 90%)
  - O(100) packets from non-bottlenecked flows (at load < 90%)
- scalable, since this number does not increase with link rate
  - it depends only on the bottlenecked/non-bottlenecked mix
- feasible, since the maximum number is ~500 (at load < 90%)
  - demonstrated by trace simulations and analysis (Sigmetrics 2005)
- can use any FQ algorithm
  - DRR, self-clocked FQ, ...
  - or even just round robin?
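Of the algorithms named here, Deficit Round Robin (DRR) is the simplest to sketch. The class below is a minimal illustration, not a router implementation: per-flow state exists only while a flow has packets queued, which is exactly the property the slide relies on for scalability.

```python
from collections import deque

class DRRScheduler:
    """Minimal Deficit Round Robin sketch. Each backlogged flow gets a
    fixed byte quantum per round; unused credit carries over while the
    flow stays backlogged and is forfeited when its queue empties."""

    def __init__(self, quantum=1500):
        self.quantum = quantum      # bytes of credit added per visit
        self.queues = {}            # flow id -> deque of packet sizes
        self.deficit = {}           # flow id -> accumulated byte credit
        self.active = deque()       # round-robin order of backlogged flows

    def enqueue(self, flow, size):
        if flow not in self.queues:
            self.queues[flow] = deque()
            self.deficit[flow] = 0
            self.active.append(flow)
        self.queues[flow].append(size)

    def one_round(self):
        """Visit each backlogged flow once; return the packets sent."""
        sent = []
        for _ in range(len(self.active)):
            flow = self.active.popleft()
            self.deficit[flow] += self.quantum
            q = self.queues[flow]
            while q and q[0] <= self.deficit[flow]:
                size = q.popleft()
                self.deficit[flow] -= size
                sent.append((flow, size))
            if q:
                self.active.append(flow)   # still backlogged: stay in rotation
            else:
                del self.queues[flow]      # idle flows keep no state
                del self.deficit[flow]
        return sent
```

Because non-bottlenecked flows rarely have more than one packet queued, the `active` list stays short regardless of link speed.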

17 Typical flow mix
- many non-bottlenecked flows (~10^4)
  - rate limited by access links, etc.
- a small number of bottlenecked flows (0, 1, 2, ...)
  - Pr[≥ i flows] ≈ ρ^i, with ρ the relative load of bottlenecked flows
- example
  - 50% background traffic, i.e., E[flow arrival rate] x E[flow size] / capacity = 0.5
  - 0, 1, 2 or 4 bottlenecked TCP flows
  - e.g., at overall load = 0.6, Pr[≥ 5 flows] ≈ 0.003
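The geometric tail above is a one-liner; the helper name and the sample value of ρ below are illustrative, not taken from the slides:

```python
def prob_at_least(i, rho):
    """P[at least i bottlenecked flows in progress] ~ rho**i for a
    geometrically distributed number of flows (rho = relative load
    of bottlenecked traffic)."""
    return rho ** i

# With, say, rho = 0.3, five or more simultaneous bottlenecked
# flows are already a rare event:
print(prob_at_least(5, 0.3))  # ≈ 0.0024
```

This is why simulating 0, 1, 2 or 4 bottlenecked flows against a fixed background covers essentially all the probability mass.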

18 Simulation set up (ns2)
- one 50 Mbps bottleneck
  - RTT = 100 ms
- 25 Mbps background traffic
  - Poisson flows: 1 Mbps peak rate
  - or Poisson packets (for simplicity)
- 1, 2 or 4 permanent high rate flows
  - TCP Reno or HSTCP
- buffer size
  - 20, 100 or 625 packets (625 = bandwidth x RTT)
- scheduling
  - FIFO, drop tail
  - FQ, drop from front of the longest queue
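The 625-packet buffer is the bandwidth-delay product of this setup. A quick arithmetic check, assuming 1000-byte packets (the slides do not state the packet size, so that value is an assumption):

```python
# Bandwidth-delay product for the ns2 setup: 50 Mb/s bottleneck, 100 ms RTT.
link_rate_bps = 50_000_000   # 50 Mb/s
rtt_s = 0.100                # 100 ms
packet_bytes = 1000          # assumed packet size

bdp_bits = link_rate_bps * rtt_s             # 5,000,000 bits in flight
bdp_packets = bdp_bits / (8 * packet_bytes)  # -> 625 packets
print(int(bdp_packets))  # 625
```

The 20- and 100-packet cases thus correspond to roughly 3% and 16% of a bandwidth-delay product of buffering.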

19 Results: 1 bottlenecked flow, Poisson flow background

20 FIFO + Reno
[plot: cwnd (packets) and link utilization over 100 s, for 20-packet and 625-packet buffers]

21 FIFO + Reno
[plot: cwnd (packets) and link utilization over 100 s, for 20-packet and 100-packet buffers]
Severe throughput loss with the small buffer: only 40% of the available capacity is realized.

22 FIFO + 100 packet buffer
[plot: cwnd (packets) and link utilization over 100 s, for Reno and HSTCP]
HSTCP brings a gain in utilization, but higher loss for background flows.

23 Reno + 20 packet buffer
[plot: cwnd (packets) and link utilization over 100 s, for FIFO and FQ]
FQ avoids background flow loss, with little impact on the bottlenecked flow.

24 Results: 2 bottlenecked flows, Poisson packet background

25 FIFO + Reno + Reno
[plot: cwnd (packets) and link utilization over 100 s, for 20-packet and 625-packet buffers]
Approximate fairness with Reno.

26 FIFO + HSTCP + HSTCP
[plot: cwnd (packets) and link utilization over 100 s, for 20-packet and 625-packet buffers]

27 FIFO + HSTCP + Reno
[plot: cwnd (packets) and link utilization over 100 s, for 20-packet and 625-packet buffers]
HSTCP is very unfair.

28 Reno + HSTCP + 20 packet buffer
[plot: cwnd (packets) and link utilization over 100 s, for FIFO and FQ]

29 Reno + HSTCP + 625 packet buffer
[plot: cwnd (packets) and link utilization over 100 s, for FIFO and FQ]
Fair queueing is effective (though HSTCP gains more throughput).

30 Results: 4 bottlenecked flows, Poisson packet background

31 All Reno + 20 packet buffer
[plot: cwnd (packets) and link utilization over 100 s, for 1 flow and 4 flows]
Improved utilization with 4 bottlenecked flows; approximate fairness.

32 All Reno + 625 packet buffer
[plot: cwnd (packets) and link utilization over 100 s, for 1 flow and 4 flows]
Approximate fairness.

33 All HSTCP + 625 packet buffer
[plot: cwnd (packets) and link utilization over 100 s, for 1 flow and 4 flows]
Poor fairness, loss of throughput.

34 All HSTCP + 625 packet buffer
[plot: cwnd (packets) and link utilization over 100 s, for FIFO and FQ]
Fair queueing restores fairness and preserves throughput.

35 Conclusions
- there is a typical traffic mix
  - a small number of bottlenecked flows (0, 1, 2, ...)
  - a large number of non-bottlenecked flows
- fair queueing is feasible
  - O(100) flows to schedule at any link rate
- results for 1 bottlenecked flow + 50% background
  - severe throughput loss with a small buffer
  - FQ avoids loss and delay for background packets
- results for 2 or 4 bottlenecked flows + 50% background
  - Reno is approximately fair
  - HSTCP is very unfair, with a loss of utilization
  - FQ ensures fairness for any transport protocol
- alternative transport protocols?

