Removing Exponential Backoff from TCP


1 Removing Exponential Backoff from TCP
Amit Mondal, Aleksandar Kuzmanovic
EECS Department, Northwestern University

2 TCP Congestion Control
Slow-start phase
Double the sending rate each round-trip time
Reach high throughput quickly
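
The doubling-per-RTT growth can be sketched as follows (a simplified model for illustration; real stacks also cap growth at ssthresh and by the receiver window, both omitted here):

```python
def slow_start_cwnd(initial_cwnd, rtts):
    """Congestion window (in segments) after a number of RTTs of slow start.

    Doubling per RTT follows from growing cwnd by one segment per ACK:
    every segment ACKed in a round adds one segment for the next round.
    """
    cwnd = initial_cwnd
    for _ in range(rtts):
        cwnd *= 2  # one extra segment per ACK -> window doubles per RTT
    return cwnd
```

Starting from one segment, four RTTs of slow start already yield a 16-segment window, which is why slow start reaches high throughput quickly.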

3 TCP Congestion Control
Additive Increase – Multiplicative Decrease
Fairness among flows
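
One AIMD round can be sketched as below (the function name and the loss-signal parameter are illustrative; alpha=1 segment and beta=1/2 are the classical TCP values):

```python
def aimd_update(cwnd, loss_detected, alpha=1.0, beta=0.5):
    """One round-trip of AIMD congestion control.

    Without loss, grow the window additively by alpha segments per RTT;
    on loss, cut it multiplicatively by factor beta. This asymmetry is
    what drives competing flows toward a fair share of the bottleneck.
    """
    if loss_detected:
        return max(1.0, cwnd * beta)  # multiplicative decrease, floor of 1 segment
    return cwnd + alpha               # additive increase
```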

4 TCP Congestion Control
Exponential backoff
System stability
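
The classical retransmission-timer backoff doubles the RTO on every consecutive timeout, up to a cap (the 60 s cap used here is an assumption; common stacks cap at 60 s or 64×RTO):

```python
def backed_off_rto(base_rto, consecutive_timeouts, max_rto=60.0):
    """Classical exponential RTO backoff.

    Each consecutive timeout doubles the retransmission timeout,
    so a flow that keeps timing out quickly reaches multi-second
    (and eventually max_rto) silence periods.
    """
    return min(base_rto * (2 ** consecutive_timeouts), max_rto)
```

With a 0.2 s base RTO, three consecutive timeouts already push the timer to 1.6 s; ten push it to the cap.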

5 Our breakthrough Exponential backoff is fundamentally wrong!
In the rest of the presentation, I will argue (and hopefully convince you) why!

6 Contribution Untangle the retransmit timer backoff mechanism
Challenge the need for exponential backoff in TCP
Demonstrate that exponential backoff can be removed from TCP without causing congestion collapse
Incrementally deployable two-step task

7 Implications Dramatically improve performance of short-lived and interactive applications
Increase TCP's resiliency against low-rate (shrew attack) and high-rate (bandwidth flooding) DoS attacks
Other benefits from avoiding unnecessary backoffs

8 Background Origin of RTO backoff
Adopted from the classical Ethernet protocol
IP gateway similar to the 'ether' in a shared-medium Ethernet network
Exponential backoff considered essential for Internet stability:
"an unstable system (a network subject to random load shocks and prone to congestion collapse) can be stabilized by adding some exponential damping (exponential timer backoff) to its primary excitation (senders, traffic sources)" [Jacobson88]

9 Rationale behind revisions
No admission control in the Internet
No bound on the number of active flows
Stability results for the Ethernet protocol not applicable
IP gateway vs classical Ethernet:
Classical Ethernet: throughput drops to zero in overloaded scenarios
IP gateway: forwards packets at full capacity even in extremely congested scenarios
Dynamic network environment
Finite flow sizes and skewed traffic distribution
Increased bottleneck capacities

10 Implicit Packet Conservation Principle
RTO > RTT
The Karn-Partridge algorithm and Jacobson's algorithm ensure this
End-to-end performance cannot suffer if endpoints uphold the principle
Formal proof for the single-bottleneck case in the paper
Extensive evaluation with a network testbed:
Single bottleneck
Multiple bottlenecks
Complex topologies
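
A sketch of why the standard estimator keeps RTO > RTT (following the RTO computation standardized in RFC 6298: RTO = SRTT + 4·RTTVAR, clamped below by minRTO; the class name is illustrative):

```python
class RttEstimator:
    """Jacobson-style smoothed RTT estimation with variance term.

    Because RTO = SRTT + 4 * RTTVAR and both terms track the measured
    samples, the resulting timeout stays above the round-trip time.
    """

    def __init__(self, min_rto=0.2, init_rto=3.0):
        self.srtt = None       # smoothed RTT
        self.rttvar = None     # RTT variation
        self.min_rto = min_rto
        self.rto = init_rto    # RTO before any sample is taken

    def sample(self, rtt):
        if self.srtt is None:
            # First measurement: SRTT = R, RTTVAR = R/2
            self.srtt = rtt
            self.rttvar = rtt / 2
        else:
            # Update RTTVAR first, using the old SRTT (beta = 1/4)
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - rtt)
            # Then update SRTT (alpha = 1/8)
            self.srtt = 0.875 * self.srtt + 0.125 * rtt
        self.rto = max(self.min_rto, self.srtt + 4 * self.rttvar)
        return self.rto
```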

11 Experimental methodology
Testbed: Emulab, 64-bit Intel Xeon machines + FreeBSD 6.1
RTT in 10 ms – 200 ms
Bottleneck: 10 Mbps, TCP Sack + RED
Workload:
Trace-I: skewed towards shorter file sizes
Trace-II: synthetic HTTP traffic based on empirical distributions
Trace-III: skewed towards longer file sizes
NS2 simulations

12 Evaluation TCP*(n): sub-exponential backoff algorithms
No backoff for the first "n" consecutive timeouts
Impact of the RTO backoff mechanism on response time
Impact of minRTO and initRTO on end-to-end performance
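
The TCP*(n) family can be sketched as follows (function name and the 60 s cap are illustrative assumptions; the defining property, per this slide, is that the first n consecutive timeouts do not back off):

```python
def tcp_star_rto(base_rto, consecutive_timeouts, n, max_rto=60.0):
    """RTO under the TCP*(n) sub-exponential backoff family.

    The first n consecutive timeouts keep the base RTO; only after
    that does exponential doubling kick in. TCP*(inf) never backs off,
    and TCP*(0) degenerates to classical exponential backoff.
    """
    if consecutive_timeouts <= n:
        return base_rto
    return min(base_rto * 2 ** (consecutive_timeouts - n), max_rto)
```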

13 Sub-exponential backoff algorithms
End-to-end performance does not degrade after removing exponential backoff from TCP
(Figures: response-time distributions for Trace-I, Trace-II, and Trace-III)

14 Impact of (minRTO, initRTO) parameters
RFC 2988 recommendation: (1.0 s, 3.0 s)
Current practice: (0.2 s, 3.0 s)
Aggressive version: (0.2 s, 0.2 s)
Our key hypothesis is that setting these parameters more aggressively will not hurt end-to-end performance as long as the endpoints uphold the implicit packet conservation principle.
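
The role of the two parameters can be illustrated with a small self-contained sketch (per RFC 6298, the first RTT sample gives RTO = max(minRTO, R + 4·(R/2)) = max(minRTO, 3R); the function name is illustrative):

```python
def rto_for_pair(min_rto, init_rto, first_rtt=None):
    """RTO under a given (minRTO, initRTO) pair.

    Before any RTT sample, the timer is initRTO; after the first
    sample R, it becomes max(minRTO, 3 * R) under the standard
    estimator. So initRTO governs connection setup, while minRTO
    floors the timer for the rest of the connection.
    """
    if first_rtt is None:
        return init_rto
    return max(min_rto, 3.0 * first_rtt)
```

For a 50 ms RTT path, all three pairs above give 0.2 s or 1.0 s once a sample exists, but the (0.2 s, 0.2 s) pair is the only one that is also aggressive before the first sample.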

15 Impact of minRTO and initRTO
Poor performance of the (1.0 s, 3.0 s) RTO pair; its CCDF tail is heaviest
Improved performance for both the (0.2 s, 3.0 s) and (0.2 s, 0.2 s) pairs
Aggressive minRTO and initRTO parameters do not hurt e2e performance as long as endpoints uphold the implicit packet conservation principle
(Figures: response-time CCDFs for TCP, TCP*(3), and TCP*(∞))
Notes: The figures depict the CCDF of response-time profiles for TCP, TCP*(3), and TCP*(6) under different combinations of the (minRTO, initRTO) parameters. First observation: the poor performance of the (1.0 s, 3.0 s) RTO pair; its CCDF tail is heaviest. Second observation: all figures show improved performance for the (0.2 s, 3.0 s) and (0.2 s, 0.2 s) pairs over the (1 s, 3 s) pair, although the ordering is not uniform. For TCP, choosing more aggressive minRTO and initRTO while leaving the backoff untouched makes the aggressive choice worse: packet-loss probability increases under the aggressive parameters, so active connections can be pushed into long shutoffs, reducing their performance relative to the (0.2 s, 3.0 s) scenario.

16 Role of bottleneck capacity
TCP*(∞) outperforms classical TCP independent of bottleneck capacity

17 Dynamic environments ON-OFF flow arrival period
Inter-burst period: 50 ms – 10 s

18 Dynamic environments ON-OFF flow arrival period Inter-burst: 1 sec
Time series of active connections

19 TCP variants and Queuing disciplines
TCP Tahoe, TCP Reno, TCP Sack
DropTail, RED
The backoff-less TCP stacks outperform the regular stacks irrespective of TCP version and queuing discipline

20 Multiple bottlenecks Dead packets
Packets that exhaust network resources upstream, but are then dropped downstream
In a multiple-bottleneck scenario there is a chance that dead packets impact the performance of flows sharing the upstream bottleneck
We do modeling and extensive experiments to explore such scenarios
(Figure: topology with routers R1–R4, servers S0–S2, clients C0–C2, links L0–L2, loss rates p1, p2)
Notes: So far, we have shown that independent of the location of the bottleneck link (upstream or downstream), the backoff-less TCP stack can only improve end-to-end performance, irrespective of TCP version and queuing discipline. The concern now is that when the bottleneck is downstream, near the receiver, a more aggressive endpoint can generate a large number of dead packets that get dropped downstream. In a multiple-bottleneck scenario there is a chance that these dead packets impact the performance of flows sharing the upstream bottleneck.

21 Impact on network efficiency
Fraction of dead packets at the upstream bottleneck: very small
< 5% of flows experience multiple bottlenecks
α = for (1%, 5%)

22 Impact on end-to-end performance
What happens if the percentage of multiple-bottleneck flows increases dramatically?
What is the impact of the backoff-less TCP approach on end-to-end performance in such scenarios?
Emulab experiment: set L0/(L0+L1) = 0.25 >> current situation

23 Impact on end-to-end performance
Trace-I: improves response-time distributions of both sets of flows
Trace-II: multiple-bottlenecked flows improve response times, while upstream single-bottlenecked flows only marginally degrade response times
Trace-III: similar result to Trace-II
Multiple-bottlenecked flows improve their response times without catastrophic effects on other flows, even when their presence is significant
Notes: These figures show the response-time distributions of both two-bottleneck flows and flows sharing only the upstream bottleneck, for all three traces. The left figure shows that removing backoff altogether improves the response times of both sets of flows, because in an environment dominated by long pauses, exponential backoff only degrades overall response times. With Trace-II, the set of multiple-bottlenecked flows improves its aggregate response times, while flows with only the single upstream bottleneck only marginally degrade theirs. The result is similar with Trace-III.

24 Realistic network topologies
Orbis-scaled HOT topology
10 Gbps core links
100 Mbps server edge links
1 – 10 Mbps client-side links
10 ms link delay
Workload: HTTP, HTTP + P2P
Response-time distributions improve in the absence of P2P traffic; the improvement is more significant in the presence of P2P traffic

25 Incremental deployment
TCP's performance degrades non-negligibly when coexisting with TCP*(∞)
Two-step task:
Step 1: TCP to TCP*(3)
Step 2: TCP*(3) to TCP*(∞)

26 Summary Challenged the need of RTO backoff in TCP
End-to-end performance can only improve if endpoints uphold the implicit packet conservation principle
Extensive testbed evaluation for single-bottleneck and multiple-bottleneck scenarios, and with complex topologies
Incrementally deployable two-step task

27 Thank you

28 Impact of minRTO and initRTO
Aggressive minRTO and initRTO parameters do not hurt e2e performance as long as endpoints uphold the implicit packet conservation principle
(Figures: TCP, TCP*(3), TCP*(∞))

