Tuning and Evaluating TCP End-to-End Performance in LFN Networks P. Cimbál* Measurement was supported by Sven Ubik**




Motivation

- Today's network traffic consists of about 90% TCP.
- The majority of data transfers are unidirectional and elastic.
- Network paths allow high throughput and wide bandwidth.
- End hosts can produce and consume high data flows.

How to utilise all available bandwidth?

- Extend the TCP window to more than RTT * BW_b.
- Implement SACK for more flexible retransmission.
- Protect against sequence-number wrapping.

All of today's commodity systems can be tuned to use all of these extensions. But this tuning is not simple, and TCP can still show very low performance. So methods for low-level debugging, measurement and analysis are needed.

Long Fat Network

[Diagram: routers R1 ... RN along the path; BW_b = minimal bandwidth along the path, RTT = round-trip delay.]

Measured Parameters

The problem of bandwidth utilisation in LFN networks is always caused by a TCP stack that is unable to keep a sufficient sending window. The resulting window is a product of the sending application's writes, the sender's congestion mechanism, the network-path bottleneck BW_b, the round-trip delay RTT, additional queueing delays, advanced queueing policies on routers, the receiver's ACK policy, and the receiving application's reads.

Path Properties

- bottleneck bandwidth BW_b
- size of the wire pipe WP
- round-trip delay RTT
- available space QS in router queues along the path
- additional queueing delay (part of RTT)
- packet-loss ratio p and queue management

TCP Stack Properties

- RCV and SND windows, RFC 1323 scaling
- AIMD implementation, ACK delaying
- TX queues on interfaces (Linux-specific)
- internal limits and known oddities

Methods of Measurement

- offline on the host: patched application, debug outputs, debug calls
- system monitor: kernel access, WEB100, socket states
- inline: packet dumper on the local interface (adds overhead)
- offline on the path: packet sniffing from another host in promiscuous mode

Configuration for Measuring

Because of the many oddities in buffer settings observed in commodity systems, packet analysis is always necessary.
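The window-sizing rule above (TCP window > RTT * BW_b) can be checked with a quick bandwidth-delay-product calculation. This is a minimal sketch: the link rate and RTT below are rough, assumed values for the measured path, not exact figures from the slides.

```python
def pipe_size_bytes(bandwidth_bps, rtt_s):
    """Bandwidth-delay product: bytes that must be in flight to fill the path."""
    return bandwidth_bps / 8 * rtt_s

# Assumed: Gigabit Ethernet link rate, ~50 ms round-trip delay
wp = pipe_size_bytes(1e9, 0.05)
print(f"wire pipe ~ {wp / 2**20:.1f} MB")  # ~6 MB; the TCP window must exceed this
```

With these assumptions the pipe comes out at roughly 6 MB, which matches the ~6 MB pipe quoted for the measured path.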
To know the injected bandwidth, the RTT, and the success of segment sending, a packet dump must be captured at least on the sender side. To avoid wide gaps in the captured stream (the sending application has machine-time priority over the sniffer), mirroring on the sender side was chosen. The state of the sender's congestion mechanism was monitored through the WEB100 interface, for later use together with the offline packet analysis.

[Testbed diagram: sender lab1.cesnet.cz (2x P III / 850; netperf, iperf, ftp, scp client; local mirroring to lab2.cesnet.cz, which stores tcpdump logs analysed later with tcptrace and our own scripts) connected over OC-192, OC-48 and a Gigabit Ethernet NIC to receiver tcp4-ge.uninett.no (netserver, iperf, ftp, scp server); 12-hop path, ~50 ms RTT, >600 Mbit/s, ~6 MB pipe.]

Typical TCP Behavior

As the graph shows (a typical example; the throughput decrease for windows bigger than the pipe was common to all measurements), the available bandwidth of 600 Mbit/s was not utilised, regardless of a sufficient window size.

Throughput Decrease Details

The strength of the effect depends on the transmit queue of the local output NIC. But the decrease can be observed for a correctly sized queue too, and even for highly overestimated queues. It is caused by the AIMD mechanism on the sender side, which tends to fix a very low ssthresh. For smaller receive windows, AIMD is clamped by standard flow control; thus all queued congestion is limited to the difference between the advertised window and the wire-pipe size.

Packet Analysis Details

The effect was studied later using the packet dumps. The connection was reconstructed by specialised scripts and tools, confirming the above explanation of the throughput decrease.

Congestion Avoidance

This should be the expected behaviour of all stable TCP connections. Rate ~ 0.75 * BW_b.

Aggressive Congestion

This is the cause of the decrease. An initial aggressive burst starts heavy congestion, fixing a low ssthresh through packet drops.

Fixed Small Window Details

A detailed case of the previous effect, showing the unused advertised window due to slow congestion avoidance and a low ssthresh.
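A rough, illustrative model (not the measured Linux stack; all constants are assumptions loosely based on the path figures of ~6 MB pipe and ~50 ms RTT) shows why a low ssthresh cripples throughput: after a loss burst, congestion avoidance grows cwnd by only about one MSS per RTT, so reopening a multi-megabyte pipe takes thousands of RTTs.

```python
# Toy AIMD model -- illustrative assumptions, not measured values.
MSS = 1460        # bytes per segment (assumed Ethernet-sized)
BDP = 6 * 2**20   # ~6 MB wire pipe
RTT = 0.05        # seconds

def rtts_to_fill_pipe(ssthresh_bytes):
    """RTTs of linear congestion avoidance (+1 MSS per RTT) needed to
    grow cwnd from ssthresh back up to the bandwidth-delay product."""
    return (BDP - ssthresh_bytes) / MSS

for ssthresh in (64 * 1024, 512 * 1024, 3 * 2**20):
    n = rtts_to_fill_pipe(ssthresh)
    print(f"ssthresh {ssthresh // 1024:5d} KB -> {n:7.0f} RTTs (~{n * RTT:5.0f} s)")
```

With an ssthresh of 64 KB, the model needs over four thousand 50 ms RTTs, i.e. several minutes, to reopen the pipe, which is why the advertised window stays largely unused.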
It is an oscillation of the AIMD congestion mechanism.

Theory in Background

The stable state of a TCP connection can be described as a series of congestion-avoidance sawtooth cycles. For a packet-loss rate p, the maximum throughput is clamped by AIMD to

    BW <= MSS * sqrt(1.5 / p) / RTT

For a congestion-limited connection, the utilised bandwidth is, due to the CA cycles,

    BW <= 0.75 * BW_b

But for a flow-control-limited connection we can achieve

    BW <= BW_b

Note that these formulas do not consider multiple AIMD decreases.

Current Research and Proposed Solutions

TCP's AIMD behaviour in an LFN environment is one of the most important topics in today's network research. There are many proposed solutions for increasing the efficiency of the CA cycle, such as the HSTCP modification or Tom Kelly's stack. Such solutions perform a more aggressive slow start and/or a faster CA sawtooth slope; the result is a bigger average window for the same loss rate of the physical link. But our problem is different, because the packet drops are caused by congestion, not by physical errors on the link. The most important cause of the low-throughput effect is heavy congestion leading to bursty packet drops. Without this congestion, we can achieve 0.75 * BW_b rates with a standard TCP stack. Thus the mentioned TCP stacks with modified AIMD, designed for high-speed networks, will not solve this problem. There are some attempts, such as ETCP, to address AIMD's impact on congestion. Unfortunately, that solution relies on the Explicit Congestion Notification mechanism, which must be implemented on all routers along the path.

Conclusion and Further Research

The TCP stacks implemented in all of today's commodity systems are ready for LFN networks. There are many oddities (described in the TCP Performance Tuning Guide related to this project) which can influence the accuracy of tuning, but they can be avoided so that they have minimal performance impact. Unfortunately, the AIMD mechanism cannot cope well with LFN networks.
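The first bound above, often attributed to Mathis et al., can be evaluated numerically. This is a sketch only: the MSS, RTT and loss rates below are illustrative assumptions, not measurements from this work.

```python
import math

def mathis_bound(mss_bytes, rtt_s, p):
    """AIMD throughput upper bound in bit/s for loss rate p:
    BW <= MSS * sqrt(1.5 / p) / RTT (converted from bytes to bits)."""
    return 8 * mss_bytes * math.sqrt(1.5 / p) / rtt_s

# Assumed: 1460-byte MSS, 50 ms RTT, a range of loss rates
for p in (1e-3, 1e-5, 1e-7):
    bw = mathis_bound(1460, 0.05, p)
    print(f"p = {p:.0e}: BW <= {bw / 1e6:8.1f} Mbit/s")
```

Under these assumptions, sustaining 600 Mbit/s at 50 ms RTT requires a loss rate of roughly 2e-7, which illustrates how sensitive LFN throughput is to even sparse drops.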
In the case where the physical loss rate limits the transfer, there are new TCP stacks such as HSTCP or Tom Kelly's stack. But in the case of packet-loss bursts, which restart AIMD more than once, we must avoid heavy congestion. There are only a few solutions, and they rely on the not very popular ECN mechanism.

Aims of our Further Research

The detailed background of packet-loss bursts must be explored. After that, we will try to develop congestion-predictive AIMD clamping, solving the low performance in LFNs without ECN.

*) CTU, Faculty of Electrical Engineering, Dept. of Computer Science and Engineering, Karlovo náměstí 13, Praha 2
**) CESNET z. s. p. o., Zikova 4, Praha 6




