APAN 10Gbps End-to-End Performance Measurement
Masaki Hirabaru (NICT), Takatoshi Ikeda (KDDI/NICT), and Yasuichi Kitamura (NICT)
July 19, 2006 - Network Engineering Workshop, APAN 2006, Singapore
10G Capable Machines Ready in Japan

No | Host        | H/W                                                  | 10G NIC                | Bus                        | OS                         | TCP Stack     | iperf TCP
1  | nms5        | RAM3100: Opteron GHz x2, PC3200 LC2 512MB x2         | Chelsio T210 (TOE)     | PCI-X 1.0a (64bit, 133MHz) | Red Hat AS 4 up3 (Kernel ) | Chelsio TOE   | 7.5G
2  | nms6        | (same as nms5)                                       |                        |                            |                            |               |
3  | nms7        | IBM x366: Xeon MP 3.66GHz x2, ECC RDIMM Chipkill 8GB | Neterion Xframe II     | PCI-X 2.0 (64bit, 266MHz)  | Red Hat AS 4 up3 (Kernel ) | New Reno + BIC| 8.6G
4  | nms8        | (same as nms7)                                       |                        |                            |                            |               |
5  | x366a       | IBM x366: Xeon MP 3.66GHz x2, ECC RDIMM Chipkill 2GB | Neterion Xframe II     | PCI-X 2.0 (64bit, 266MHz)  | Red Hat WS 4 (Kernel )     | New Reno + BIC| 5.0G
6  | x366b       | (same as x366a)                                      |                        |                            |                            |               |
7  | genkai-ia64 | HP RX2620: Itanium2 1.6GHz x1, 1GB                   | Chelsio N210           | PCI-X 1.0a (64bit, 133MHz) | Red Hat AS 3 up4 (Kernel ) | New Reno      | 6.8G
8  | ia64a       | (same as genkai-ia64)                                |                        |                            |                            |               |
9  | opta        | HP DL145 G2: Opteron GHz x2, 1GB                     | Myricom 10G-PCIE-8A-R  | PCI-E x8 (250MB/s x8)      | Fedora Core 5 (Kernel )    | New Reno + BIC| 7.2G
10 | optb        | HP DL145 G2: Opteron GHz x1, 2GB                     |                        |                            |                            |               |
JGN2 Network Configuration for 10G Experiment

[Network diagram: the servers above (genkai-ia64, ia64a, nms5-nms8, x366a/b, opta, optb), equipped with Neterion Xframe-II, Chelsio N210/T210, and Myricom 10G PCI-E NICs, attach to L2 switches (Summit400, MS7 (AX7808S), MS6 (BI15k), NICT MG8) and L3 routers (NICT GS4k units at kfok/oym/osa/note, tpr4/tpr5, Procket, Juniper T320) at the GenkaiXP, Koganei, and TokyoXP sites. Sites are linked by 10G-LW and 10G-LR lines and L2 VLAN paths (VID 24, 38, 39, 40, 1036); TransPAC2 provides 10G x4 toward the US.]
TokyoXP Initial Result (under congestion)

[Path: genkai-ia64 - tpr4 - JGN2 (GenkaiXP - Koganei) - tpr5 - nms7/nms8 at TokyoXP. About 100 Mbps of APII commodity traffic flows in the background.]

Flow 1 (RTT: 20 ms, 60 MB window):

Server listening on TCP port 5001
TCP window size: 60.0 MByte (WARNING: requested 30.0 MByte)
[ 4] local port 5001 connected with port
[ 4] sec  587 MBytes  4.92 Gbits/sec
[ 4] sec  728 MBytes  6.10 Gbits/sec
[ 4] sec  721 MBytes  6.04 Gbits/sec
[ 4] sec  186 MBytes  1.56 Gbits/sec
[ 4] sec 74.2 MBytes   623 Mbits/sec
[ 4] sec 83.7 MBytes   702 Mbits/sec
[ 4] sec 90.2 MBytes   756 Mbits/sec
[ 4] sec 93.1 MBytes   781 Mbits/sec
[ 4] sec 93.8 MBytes   787 Mbits/sec
[ 4] sec 94.9 MBytes   796 Mbits/sec

Flow 2 (RTT: ~0 ms, 20 MB window):

Server listening on TCP port 5001
TCP window size: 20.0 MByte (WARNING: requested 10.0 MByte)
[ 4] local port 5001 connected with port
[ 4] sec  880 MBytes  7.38 Gbits/sec
[ 4] sec 1012 MBytes  8.49 Gbits/sec
[ 4] sec 1023 MBytes  8.58 Gbits/sec
[ 4] sec 1.00 GBytes  8.63 Gbits/sec
[ 4] sec 1.01 GBytes  8.64 Gbits/sec
[ 4] sec 1.00 GBytes  8.63 Gbits/sec
[ 4] sec 1.00 GBytes  8.62 Gbits/sec
[ 4] sec 1.00 GBytes  8.62 Gbits/sec
[ 4] sec 1.00 GBytes  8.62 Gbits/sec
[ 4] sec 1.00 GBytes  8.61 Gbits/sec

Congestion! Flow 1 ramps up to about 6 Gbps, then collapses to about 0.8 Gbps: its bandwidth is taken by the new flow, which settles at 8.6 Gbps. Total: about 9.4 Gbps.
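The window sizes above can be checked against the path's bandwidth-delay product (BDP), the amount of data a TCP window must hold to keep the pipe full. A minimal sketch of the arithmetic, assuming the values reported for the JGN2 path (10 Gbps capacity, 20 ms RTT):

```shell
# Bandwidth-delay product: the TCP window needed to fill the pipe.
RATE_BPS=10000000000                       # 10 Gbps path capacity
RTT_MS=20                                  # RTT on the GenkaiXP-TokyoXP path
BDP_BYTES=$(( RATE_BPS / 8 * RTT_MS / 1000 ))
echo "BDP = $BDP_BYTES bytes"              # 25000000 bytes, i.e. about 25 MB
```

Under these assumptions the 60 MB window of flow 1 overshoots the ~25 MB BDP by more than a factor of two, so the bottleneck queue overflows as soon as a competing flow appears, while flow 2's 20 MB window on its near-zero-RTT path stays below its own BDP limit and holds 8.6 Gbps.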
Experiment Plan with Large BWDP (bandwidth-delay product)

Performance measurement in JP with a 10G delay box
– Performance measurement using several 10G NICs
– Evaluation of several congestion avoidance algorithms on 10G Ethernet congested at a bottleneck (e.g. FAST, HighSpeed, Scalable, ...)

Performance measurement between JP and US
– Performance measurement between JP and US machines, if US colleagues have machines available. (A)
– Performance measurement over a path that goes around the Pacific Ocean and through the US domestic network. (B)

Where is the congestion point created?
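On the Linux hosts listed earlier, one evaluation run per algorithm could be scripted roughly as below. This is a sketch, not the authors' procedure: it assumes a kernel with pluggable congestion control (2.6.13 or later), and the receiver host name and window size are placeholders; the privileged commands are shown commented out.

```shell
#!/bin/sh
# Sketch of one congestion-avoidance evaluation run (hypothetical values).
ALGO=bic                                   # also: highspeed, scalable, ...
echo "evaluating algorithm: $ALGO"

# Illustrative commands; these need root and the real test hosts:
# sysctl -w net.ipv4.tcp_congestion_control=$ALGO
# iperf -s -w 20M                                    # on the receiver
# iperf -c receiver.example.org -w 20M -t 60 -i 10   # on the sender
```

Repeating the run for each algorithm under the same bottleneck load would give comparable iperf throughput traces like those on the previous slide.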