1 Performance Measurement on Large Bandwidth-Delay Product Networks. Masaki Hirabaru (masaki@nict.go.jp), NICT Koganei. 3rd e-VLBI Workshop, October 6, 2004, Makuhari, Japan.

2 An Example: How much speed can we get? [Diagrams a-1 and a-2: a GbE sender and receiver connected across a high-speed backbone through switches (L2/L3 SW), with a 100 Mbps bottleneck link; RTT 200 ms.]

3 Average TCP throughput: less than 20 Mbps, even when the sending rate is limited to 100 Mbps. This is TCP's fundamental behavior.
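To make this concrete, the sketch below (my addition, not part of the original talk) uses the well-known Mathis/Semke/Mahdavi approximation for Reno-style TCP, throughput ≈ (MSS/RTT) · 1.22/√p, with the MSS and 200 ms RTT from the example above; it shows how little packet loss such a path tolerates.

```python
# Minimal sketch (assumption: the standard Mathis et al. approximation for
# Reno-style TCP), illustrating why a 100 Mbps path with a 200 ms RTT rarely
# yields more than ~20 Mbps for a single TCP stream.
from math import sqrt

MSS_BITS = 1460 * 8      # typical MSS over a 1500-byte MTU, in bits
RTT = 0.2                # round-trip time (s), as in the example above

def reno_throughput(loss_rate):
    """Approximate steady-state Reno throughput (bits/s) for a given loss rate."""
    return (MSS_BITS / RTT) * 1.22 / sqrt(loss_rate)

def loss_needed(target_bps):
    """Largest loss rate that still allows the target throughput."""
    return (1.22 * MSS_BITS / (RTT * target_bps)) ** 2

for p in (1e-4, 1e-5, 1e-6):
    print(f"loss {p:.0e} -> ~{reno_throughput(p)/1e6:5.1f} Mbps")
print(f"sustaining 100 Mbps needs loss < {loss_needed(100e6):.1e}")
```

Even a 1-in-100,000 loss rate caps a single Reno stream near 22 Mbps on this path, which is why the observed average stays below 20 Mbps.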

4 An Example (2). [Diagram b: sender and receiver connected end to end over GbE across the high-speed backbone, RTT 200 ms; only 900 Mbps available.]

5 Purposes
Measure, analyze and improve end-to-end performance in high bandwidth-delay product, packet-switched networks
–to support networked science applications
–to help network operations find a bottleneck
–to evaluate advanced transport protocols (e.g. Tsunami, SABUL, HSTCP, FAST, XCP, [ours])
Improve TCP under easier conditions
–with a single TCP stream
–memory to memory
–a bottleneck but no cross traffic
Consume all the available bandwidth

6 TCP on a path with a bottleneck. [Diagram: the sender's traffic overflows the queue at the bottleneck, causing loss.] The sender may generate burst traffic. The sender recognizes the overflow only after a delay of more than RTT/2. The bottleneck may change over time.
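A small fluid-model sketch (my own illustration, with assumed numbers) shows why this feedback delay matters: the queue at the bottleneck can fill within a few milliseconds of bursting, while the loss signal needs at least RTT/2 to reach the sender.

```python
# Sketch: a drop-tail queue at the bottleneck, fed by GbE line-rate bursts.
LINE_RATE = 1e9          # sender bursts at GbE line rate (bits/s)
BOTTLENECK = 800e6       # bottleneck capacity (bits/s)
PKT = 1500 * 8           # packet size (bits)
BUFFER_PKTS = 100        # assumed drop-tail buffer at the bottleneck (packets)
RTT = 0.2                # round-trip time (s)

# While a burst lasts, the queue grows at the rate difference.
growth = (LINE_RATE - BOTTLENECK) / PKT          # packets per second
time_to_overflow = BUFFER_PKTS / growth          # seconds of back-to-back sending
pkts_before_feedback = LINE_RATE * (RTT / 2) / PKT

print(f"queue fills after ~{time_to_overflow*1e3:.1f} ms of bursting")
print(f"but the sender keeps sending for >= {RTT/2*1e3:.0f} ms "
      f"(~{pkts_before_feedback:.0f} packets) before it can react")
```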

7 Web100 (http://www.web100.org)
A kernel patch for monitoring/modifying TCP metrics in the Linux kernel. We need to know TCP behavior to identify a problem.
Iperf (http://dast.nlanr.net/Projects/Iperf/)
–TCP/UDP bandwidth measurement
bwctl (http://e2epi.internet2.edu/bwctl/)
–wrapper for iperf with authentication and scheduling
tcpplot
–visualizer for web100 data
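As an illustration only, a wrapper like the one below could drive an iperf TCP test from a script and pull out the reported rate. The host name, duration and window size are placeholders, and the parsing assumes the classic iperf 2 one-line "Mbits/sec" report; adjust both for your environment.

```python
# Hedged sketch: run an iperf (v2) TCP test and extract the reported throughput.
# Assumes "iperf -c ..." is installed locally and "iperf -s" is already running
# on the remote host; the output format may differ between iperf versions.
import re
import subprocess

def iperf_tcp_mbps(server, seconds=30, window="32M"):
    result = subprocess.run(
        ["iperf", "-c", server, "-t", str(seconds), "-w", window],
        capture_output=True, text=True, check=True)
    # iperf 2 reports e.g. "  0.0-30.0 sec  3.29 GBytes   941 Mbits/sec"
    rates = re.findall(r"([\d.]+)\s+Mbits/sec", result.stdout)
    return float(rates[-1]) if rates else None

if __name__ == "__main__":
    print(iperf_tcp_mbps("receiver.example.org"))   # hypothetical host name
```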

8 1st Step: Tuning a Host with UDP
Remove any bottlenecks on the host
–CPU, memory, bus, OS (driver), …
Dell PowerEdge 1650 (*not enough power)
–Intel Xeon 1.4GHz x1(2), Memory 1GB
–Intel Pro/1000 XT onboard, PCI-X (133MHz)
Dell PowerEdge 2650
–Intel Xeon 2.8GHz x1(2), Memory 1GB
–Intel Pro/1000 XT, PCI-X (133MHz)
Iperf UDP throughput 957 Mbps
–GbE wire rate; headers: UDP (8B) + IP (20B) + Ethernet II (38B)
–Linux 2.4.26 (RedHat 9) with web100
–PE1650: TxIntDelay=0
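The 957 Mbps figure follows directly from the per-frame overhead listed above; the short calculation below (mine, not the speaker's) reproduces it for a 1500-byte IP MTU, counting 38 bytes of Ethernet framing (preamble, header, FCS, inter-frame gap) per packet.

```python
# Sketch: maximum UDP goodput on GbE with a 1500-byte IP MTU.
LINE_RATE    = 1e9        # GbE (bits/s)
IP_MTU       = 1500       # bytes
ETH_OVERHEAD = 38         # preamble + header + FCS + inter-frame gap (bytes)
UDP_IP_HDRS  = 8 + 20     # UDP + IP headers (bytes)

payload = IP_MTU - UDP_IP_HDRS      # 1472 bytes of UDP payload per datagram
on_wire = IP_MTU + ETH_OVERHEAD     # 1538 bytes per frame on the wire
print(f"max UDP goodput: {LINE_RATE * payload / on_wire / 1e6:.0f} Mbps")  # ~957
```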

9 2nd Step: Tuning a Host with TCP
Maximum socket buffer size (TCP window size)
–net.core.wmem_max, net.core.rmem_max (64MB)
–net.ipv4.tcp_wmem, net.ipv4.tcp_rmem (64MB)
Driver descriptor length
–e1000: TxDescriptors=1024, RxDescriptors=256 (default)
Interface queue length
–txqueuelen=100 (default)
–net.core.netdev_max_backlog=300 (default)
Interface queue discipline
–fifo (default)
MTU
–mtu=1500 (IP MTU)
Iperf TCP throughput 941 Mbps
–GbE wire rate; headers: TCP (32B) + IP (20B) + Ethernet II (38B)
–Linux 2.4.26 (RedHat 9) with web100
Web100 (incl. HighSpeed TCP)
–net.ipv4.web100_no_metric_save=1 (do not store TCP metrics in the route cache)
–net.ipv4.WAD_IFQ=1 (do not send a congestion signal on buffer full)
–net.ipv4.web100_rbufmode=0, net.ipv4.web100_sbufmode=0 (disable auto tuning)
–net.ipv4.WAD_FloydAIMD=1 (HighSpeed TCP)
–net.ipv4.web100_default_wscale=7 (default)
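Likewise, the 941 Mbps TCP figure and the 64 MB socket buffers can be sanity-checked with a few lines (my own arithmetic, using the header sizes above and the 200 ms RTT from the earlier example):

```python
# Sketch: maximum TCP goodput on GbE, and the socket buffer a 200 ms path needs.
LINE_RATE    = 1e9        # GbE (bits/s)
IP_MTU       = 1500       # bytes
ETH_OVERHEAD = 38         # bytes of Ethernet framing per packet on the wire
TCP_IP_HDRS  = 32 + 20    # TCP header with timestamps + IP header (bytes)
RTT          = 0.2        # seconds

payload = IP_MTU - TCP_IP_HDRS      # 1448 bytes of TCP payload per segment
on_wire = IP_MTU + ETH_OVERHEAD     # 1538 bytes per frame on the wire
print(f"max TCP goodput: {LINE_RATE * payload / on_wire / 1e6:.0f} Mbps")   # ~941

bdp = LINE_RATE * RTT / 8           # bandwidth-delay product in bytes
print(f"BDP = {bdp/2**20:.0f} MiB; 64 MB buffers comfortably cover 2*BDP")  # ~24 MiB
```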

10 TransPAC/I2 Test: HighSpeed TCP (60 min), from Tokyo to Indianapolis.

11 Test in a Laboratory, with a Bottleneck. [Diagram: sender and receiver hosts (PE 2650 and PE 1650), connected over GbE/SX and GbE/T through an L2 switch (FES12GCF) and a network emulator set to 800 Mbps bandwidth, 88 ms delay, zero loss.] 2*BDP = 16 MB (BDP: bandwidth-delay product).
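A quick check (my arithmetic, treating the emulated 88 ms as the round-trip delay and MB as MiB) shows where the 16 MB figure comes from:

```python
# Sketch: bandwidth-delay product of the emulated 800 Mbps / 88 ms path.
CAPACITY = 800e6      # bottleneck bandwidth (bits/s)
RTT      = 0.088      # emulated delay, taken here as the round-trip time (s)

bdp_bytes = CAPACITY * RTT / 8
print(f"BDP   = {bdp_bytes/2**20:.1f} MiB")       # ~8.4 MiB
print(f"2*BDP = {2*bdp_bytes/2**20:.1f} MiB")     # ~16.8 MiB, the slide's 16 MB
```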

12 Laboratory Tests: 800 Mbps Bottleneck. [Throughput plots: TCP NewReno (Linux) and HighSpeed TCP (Web100).]

13 BIC TCP. [Throughput plots with bottleneck buffer sizes of 100 packets and 1000 packets.]

14 FAST TCP. [Throughput plots with bottleneck buffer sizes of 100 packets and 1000 packets.]

15 Identify the Bottleneck
Existing tools: pathchar, pathload, pathneck, etc.
–measure available bandwidth along the path
–but how large is the buffer at the bottleneck (router)?
pathbuff (under development)
–measures the buffer size at the bottleneck
–sends a packet train, then detects loss and delay

16 A Method of Measuring Buffer Size. [Diagram: the sender injects a packet train of n packets over an interval T into a network with a bottleneck of capacity C; the receiver observes the resulting losses and delays.]
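The slides do not spell out pathbuff's algorithm, so the following is only a sketch of one way such an estimate could work: if the train is sent at a rate above the bottleneck capacity, the queue grows by a known amount per packet, so the index of the first loss, or the extra delay seen just before it, bounds the buffer size.

```python
# Hedged sketch (assumption, not pathbuff's actual method): estimate the
# bottleneck buffer from a back-to-back packet train. Sending at rate R into a
# bottleneck of capacity C makes the queue grow by (1 - C/R) packets per packet
# sent, so the index k of the first dropped packet gives
#   buffer ~= k * (1 - C/R) packets,
# and the extra queueing delay observed just before the loss gives
#   buffer_bytes ~= C/8 * (delay_max - delay_base).

def buffer_from_first_loss(k_first_loss, send_rate_bps, capacity_bps):
    """Buffer size in packets, from the index of the first dropped packet."""
    return k_first_loss * (1.0 - capacity_bps / send_rate_bps)

def buffer_from_delay(delay_max_s, delay_base_s, capacity_bps):
    """Buffer size in bytes, from the extra queueing delay seen before the loss."""
    return capacity_bps / 8.0 * (delay_max_s - delay_base_s)

# Example: a GbE train into an 800 Mbps bottleneck, first loss at packet 512,
# and 1.5 ms of extra delay observed just before the loss.
print(buffer_from_first_loss(512, 1e9, 800e6))   # ~102 packets
print(buffer_from_delay(0.0015, 0.0, 800e6))     # ~150 kB, i.e. ~100 x 1500 B
```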

17 Typical Cases of Congestion Points. [Diagrams of two cases:]
–congestion point at a switch, with a small buffer (~100 packets): inexpensive, but poor TCP performance for a high bandwidth-delay path
–congestion point at a router, with a large buffer (>=1000 packets): better TCP performance for a high bandwidth-delay path
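For perspective on why ~100 packets counts as "small", the classic single-flow rule of thumb sizes the bottleneck buffer at about one bandwidth-delay product; the quick calculation below (my own, not from the slides) shows what that means for the paths discussed here.

```python
# Sketch: bandwidth-delay product expressed in 1500-byte packets.
PKT_BITS = 1500 * 8

def bdp_packets(capacity_bps, rtt_s):
    """Classic single-flow rule of thumb: buffer ~= one BDP."""
    return capacity_bps * rtt_s / PKT_BITS

print(f"{bdp_packets(800e6, 0.088):.0f} packets")  # lab path: ~5,900 packets
print(f"{bdp_packets(1e9, 0.2):.0f} packets")      # Tokyo-US GbE path: ~16,700 packets
```

A 100-packet switch buffer is two orders of magnitude below either figure, which is consistent with the poor results seen in the small-buffer tests above.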

18 Summary
–Performance measurement to get a reliable result and identify a bottleneck
–Bottleneck buffer size has an impact on the result
Future Work
–Performance measurement platform in cooperation with applications

19 Network Diagram for e-VLBI and Test Servers. [Map of the measurement paths: KOREN sites in Korea (Seoul XP, Daejon, Taegu, Busan, Kwangju) link over a 2.5G APII SONET circuit to the Genkai XP (Fukuoka/Kitakyushu); JGNII connects Fukuoka, the Tokyo XP, Koganei and Kashima at 1G (10G); TransPAC/JGN II spans ~9,000 km at 10G to Los Angeles; Abilene continues to Chicago, Indianapolis and Washington DC (2.4G x2) and on to MIT Haystack; GEANT/SWITCH is reached across ~7,000 km. bwctl, perf and e-VLBI test servers mark the measurement points (*).] Performance Measurement Point Directory: http://e2epi.internet2.edu/pipes/pmp/pmp-dir.html

