
1 ESLEA Bedfont Lakes Dec 04 Richard Hughes-Jones Network Measurement & Characterisation and the Challenge of SuperComputing SC200x

2 ESLEA Bedfont Lakes Dec 04 Richard Hughes-Jones
- The SC Network
- Working with S2io, Cisco & folks
- At the SLAC Booth, running the BW Challenge
- Bandwidth Lust at SC2003

3 ESLEA Bedfont Lakes Dec 04 Richard Hughes-Jones: The Bandwidth Challenge at SC2003
- The peak aggregate bandwidth from the 3 booths was 23.21 Gbit/s
- 1-way link utilisations of >90%
- 6.6 TBytes in 48 minutes (the implied average rate is worked out in the sketch below)
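A quick check of the average rate implied by the SC2003 figures above; the decimal reading of TBytes (10^12 bytes) is an assumption, as the slide does not state the convention.

```python
# Average aggregate rate implied by slide 3: 6.6 TBytes moved in 48 minutes.
total_bytes = 6.6e12        # assumes decimal TBytes (10**12 bytes)
duration_s = 48 * 60

avg_gbit_s = total_bytes * 8 / duration_s / 1e9
print(f"average aggregate throughput ~ {avg_gbit_s:.1f} Gbit/s")   # ~18.3 Gbit/s
```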

4 ESLEA Bedfont Lakes Dec 04 Richard Hughes-Jones: Multi-Gigabit flows at SC2003 BW Challenge
- Three server systems with 10 Gigabit Ethernet NICs
- Used the DataTAG altAIMD stack, 9000 byte MTU
- Sent mem-mem iperf TCP streams from the SLAC/FNAL booth in Phoenix to (window sizing is sketched below):
  - Palo Alto PAIX: rtt 17 ms, window 30 MB; shared with the Caltech booth; 4.37 Gbit HighSpeed TCP I=5%, then 2.87 Gbit I=16% (fall when 10 Gbit on the link); 3.3 Gbit Scalable TCP I=8%; tested 2 flows, sum 1.9 Gbit I=39%
  - Chicago Starlight: rtt 65 ms, window 60 MB; Phoenix CPU 2.2 GHz; 3.1 Gbit HighSpeed TCP I=1.6%
  - Amsterdam SARA: rtt 175 ms, window 200 MB; Phoenix CPU 2.2 GHz; 4.35 Gbit HighSpeed TCP I=6.9%, very stable
  - Both used Abilene to Chicago
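The quoted windows track the bandwidth-delay product of each path. A minimal sketch, assuming an illustrative per-flow target of 5 Gbit/s (the target rate is not a figure from the slide):

```python
# Minimum TCP window needed to keep a path full: window >= rate * RTT
# (the bandwidth-delay product). RTTs are the ones quoted on slide 4;
# the 5 Gbit/s per-flow target is an assumption for illustration.
def window_mbytes(rate_gbit_s, rtt_ms):
    """Window in MBytes needed to sustain rate_gbit_s over a path with RTT rtt_ms."""
    return rate_gbit_s * 1e9 / 8 * rtt_ms * 1e-3 / 1e6

for name, rtt_ms in [("Palo Alto PAIX", 17),
                     ("Chicago Starlight", 65),
                     ("Amsterdam SARA", 175)]:
    print(f"{name:18s}: ~{window_mbytes(5.0, rtt_ms):.0f} MB for 5 Gbit/s")
# ~11, ~41 and ~109 MB; the 30, 60 and 200 MB windows quoted leave headroom.
```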

5 ESLEA Bedfont Lakes Dec 04 Richard Hughes-Jones

6 UKLight at SC2004
- UK e-Science researchers from Manchester, UCL & ULCC involved in the Bandwidth Challenge
- Collaborated with scientists & engineers from Caltech, CERN, FERMI, SLAC, Starlight, UKERNA & U. of Florida
- Worked on:
  - 10 Gbit Ethernet link from SC2004 to the ESnet/QWest PoP in Sunnyvale
  - 10 Gbit Ethernet link from SC2004 to the CENIC/NLR/Level(3) PoP in Sunnyvale
  - 10 Gbit Ethernet link from SC2004 to Chicago and on to UKLight
- UKLight focused on disk-to-disk transfers between UK sites and Pittsburgh
- UK had generous support from Boston Ltd, who loaned the servers
- The BWC collaboration had support from: S2io NICs, Chelsio TOE, and Sun, who loaned servers
- Essential support from Cisco

7 ESLEA Bedfont Lakes Dec 04 Richard Hughes-Jones
- SCINet Collaboration at SC2004
- Setting up the BW Bunker
- The BW Challenge at the SLAC Booth
- Working with S2io, Sun, Chelsio

8 ESLEA Bedfont Lakes Dec 04 Richard Hughes-Jones: The Bandwidth Challenge – SC2004
- The peak aggregate bandwidth from the booths was 101.13 Gbit/s
- That is roughly 3 full-length DVDs per second (checked in the sketch below)
- Saturated TEN 10GE waves
- SLAC Booth: Sunnyvale to Pittsburgh, LA to Pittsburgh and Chicago to Pittsburgh (with UKLight)
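A rough check of the DVDs-per-second comparison; the 4.7 GB single-layer DVD capacity is an assumption, not stated on the slide.

```python
# How many single-layer DVDs per second fit into 101.13 Gbit/s.
peak_gbit_s = 101.13
dvd_bytes = 4.7e9            # assumed single-layer DVD capacity

bytes_per_s = peak_gbit_s * 1e9 / 8
print(f"~{bytes_per_s / dvd_bytes:.1f} DVDs per second")   # ~2.7, i.e. roughly 3
```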

9 ESLEA Bedfont Lakes Dec 04 Richard Hughes-Jones: SC2004 UKLIGHT – Focused on Disk-to-Disk
[Network diagram; labels: MB-NG 7600 OSR, Manchester, ULCC UKLight, UCL HEP, UCL network, K2, Ci, Chicago Starlight, Amsterdam, SC2004 Caltech Booth (UltraLight IP), SLAC Booth, Cisco 6509, UKLight 10G, four 1GE channels, two 1GE channels, SURFnet/EuroLink 10G, NLR Lambda NLR-PITT-STAR-10GE-16, CERN 7600]

10 ESLEA Bedfont Lakes Dec 04 Richard Hughes-Jones: Transatlantic Ethernet: TCP Throughput Tests
- Supermicro X5DPE-G2 PCs
- Dual 2.9 GHz Xeon CPU, FSB 533 MHz
- 1500 byte MTU
- 2.6.6 Linux kernel
- Memory-to-memory TCP throughput
- Standard TCP
- Wire-rate throughput of 940 Mbit/s (the 1500 byte MTU ceiling is sketched below)
- First 10 sec
- Work in progress to study: implementation detail, advanced stacks, packet loss, sharing
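A sketch of why standard TCP on Gigabit Ethernet with a 1500 byte MTU tops out near 940 Mbit/s; the header accounting (20 B IP, 32 B TCP with timestamps, 38 B of Ethernet framing plus inter-frame gap) is the usual convention and is assumed here.

```python
# Maximum TCP goodput on 1 Gbit/s Ethernet with a 1500 byte MTU.
line_rate_mbit = 1000
mtu = 1500
tcp_payload = mtu - 20 - 32     # 1448 B of user data per segment (IP + TCP w/ timestamps)
on_wire = mtu + 38              # 1538 B per segment incl. preamble, header, FCS, IFG

goodput = line_rate_mbit * tcp_payload / on_wire
print(f"max TCP goodput ~ {goodput:.0f} Mbit/s")   # ~941 Mbit/s
```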

11 ESLEA Bedfont Lakes Dec 04 Richard Hughes-Jones: Transatlantic Ethernet: Disk-to-Disk Tests
- Supermicro X5DPE-G2 PCs
- Dual 2.9 GHz Xeon CPU, FSB 533 MHz
- 1500 byte MTU
- 2.6.6 Linux kernel
- RAID0 (6 SATA disks)
- bbftp (disk-to-disk) throughput
- Standard TCP
- Throughput of 436 Mbit/s (compared with the memory-to-memory ceiling in the sketch below)
- First 10 sec
- Work in progress to study: throughput limitations, helping real users
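To give a feel for the gap between this disk-to-disk rate and the 940 Mbit/s memory-to-memory figure on the previous slide, here is the time to ship 1 TByte at each rate; the 1 TByte size is an illustrative assumption.

```python
# Time to move 1 TByte (10**12 bytes) at the two quoted rates.
tbyte_bits = 1e12 * 8
for label, rate_mbit in [("bbftp disk-to-disk", 436),
                         ("iperf memory-to-memory", 940)]:
    hours = tbyte_bits / (rate_mbit * 1e6) / 3600
    print(f"{label:22s}: ~{hours:.1f} h per TByte")
# ~5.1 h vs ~2.4 h: the gap suggests the end-system disk/RAID path, not the
# network, is what limits the disk-to-disk transfers.
```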

12 ESLEA Bedfont Lakes Dec 04 Richard Hughes-Jones: 10 Gigabit Ethernet: UDP Throughput Tests
- 1500 byte MTU gives ~2 Gbit/s (a simple per-packet cost model follows below)
- Used 16144 byte MTU, max user length 16080
- DataTAG Supermicro PCs: dual 2.2 GHz Xeon CPU, FSB 400 MHz; PCI-X mmrbc 512 bytes; wire-rate throughput of 2.9 Gbit/s
- CERN OpenLab HP Itanium PCs: dual 1.0 GHz 64-bit Itanium CPU, FSB 400 MHz; PCI-X mmrbc 512 bytes; wire rate of 5.7 Gbit/s
- SLAC Dell PCs: dual 3.0 GHz Xeon CPU, FSB 533 MHz; PCI-X mmrbc 4096 bytes; wire rate of 5.4 Gbit/s
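One way to read the jump from ~2 Gbit/s at 1500 B to ~5.7 Gbit/s at 16144 B is a fixed-cost-per-packet model, t(L) = c + b*L. Fitting it to those two quoted points is an illustration of why larger frames help, not the analysis behind the slide.

```python
# Fit a per-packet time model t(L) = c + b*L to the two quoted (size, rate) points.
# c captures fixed per-packet costs (interrupts, DMA setup, CSR accesses),
# b the per-byte cost along the PCI-X / memory path.
pts = [(1500, 2.0e9), (16144, 5.7e9)]                   # (frame bytes, throughput bit/s)
(L1, t1), (L2, t2) = [(L, L * 8 / r) for L, r in pts]   # seconds per packet
b = (t2 - t1) / (L2 - L1)                               # s per byte
c = t1 - b * L1                                         # fixed s per packet

print(f"fixed per-packet cost ~ {c * 1e6:.1f} us")          # ~4.3 us
print(f"asymptotic rate       ~ {8 / b / 1e9:.1f} Gbit/s")  # ~7 Gbit/s, cf. slide 13
```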

13 ESLEA Bedfont Lakes Dec 04 Richard Hughes-Jones: 10 Gigabit Ethernet: Tuning PCI-X
- 16080 byte packets every 200 µs
- Intel PRO/10GbE LR Adapter
- PCI-X bus occupancy vs mmrbc (the raw PCI-X budget is sketched below):
  - Measured times
  - Times based on PCI-X times from the logic analyser
  - Expected throughput ~7 Gbit/s
  - Measured 5.7 Gbit/s
[Plot labels: mmrbc 512, 1024, 2048 and 4096 bytes; 5.7 Gbit/s; CSR Access, PCI-X Sequence, Data Transfer, Interrupt & CSR Update]
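For context, the raw 64-bit / 133 MHz PCI-X bandwidth and a simple burst-overhead model of why mmrbc matters; the 8-cycle per-burst overhead is an assumed round figure for illustration, not a value from the logic analyser.

```python
# Raw PCI-X bandwidth and the effect of burst size (mmrbc) on usable bandwidth.
bus_width_bits = 64
bus_clock_hz = 133.3e6
raw_gbit_s = bus_width_bits * bus_clock_hz / 1e9
print(f"raw PCI-X (64 bit / 133 MHz) ~ {raw_gbit_s:.1f} Gbit/s")

overhead_cycles = 8                  # assumed per-burst arbitration/addressing cost
for mmrbc in (512, 1024, 2048, 4096):
    data_cycles = mmrbc / 8          # 8 bytes move per cycle on a 64 bit bus
    eff = data_cycles / (data_cycles + overhead_cycles)
    print(f"mmrbc {mmrbc:4d} B: ~{eff * raw_gbit_s:.1f} Gbit/s usable")
# Larger bursts amortise the per-burst overhead, which is why raising mmrbc
# matters when trying to feed a 10 GbE NIC over PCI-X.
```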

14 ESLEA Bedfont Lakes Dec 04 Richard Hughes-Jones: 10 Gigabit Ethernet: SC2004 TCP Tests
- Sun AMD Opteron compute servers (v20z)
- Chelsio TOE tests between Linux 2.6.6 hosts (the goodput accounting is sketched below):
  - 10 Gbit Ethernet link from SC2004 to the CENIC/NLR/Level(3) PoP in Sunnyvale
  - Two 2.4 GHz AMD 64-bit Opteron processors with 4 GB of RAM at SC2004; 1500 B MTU, all Linux 2.6.6
  - In one direction 9.43 Gbit/s on the wire, i.e. 9.07 Gbit/s goodput; in the reverse direction 5.65 Gbit/s, i.e. 5.44 Gbit/s goodput; total of 15+ Gbit/s on the wire
- 10 Gbit Ethernet link from SC2004 to the ESnet/QWest PoP in Sunnyvale
  - One 2.4 GHz AMD 64-bit Opteron at each end; 2 MByte window, 16 streams, 1500 B MTU, all Linux 2.6.6
  - In one direction 7.72 Gbit/s, i.e. 7.42 Gbit/s goodput, over 120 mins (6.6 Tbits shipped)
- S2io NICs with Solaris 10 in a 4 × 2.2 GHz Opteron CPU v40z to one or more S2io or Chelsio NICs with Linux 2.6.5 or 2.6.6 in 2 × 2.4 GHz v20zs
  - LAN 1, S2io NIC back to back: 7.46 Gbit/s
  - LAN 2, S2io in v40z to 2 v20z: each NIC ~6 Gbit/s, total 12.08 Gbit/s
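The wire-rate versus goodput pairs above (9.43 to 9.07, 7.72 to 7.42, 5.65 to 5.44 Gbit/s) are consistent with simply discounting per-packet header overhead at a 1500 byte MTU. A minimal check, assuming 1460 B of TCP payload per segment and 1518 B counted on the link; the exact accounting used for the challenge may differ.

```python
# Goodput predicted from the on-wire rate for a 1500 byte MTU.
payload, frame = 1460, 1518      # assumed TCP payload and counted frame size (bytes)
ratio = payload / frame
for wire in (9.43, 7.72, 5.65):
    print(f"{wire:.2f} Gbit/s on the wire -> ~{wire * ratio:.2f} Gbit/s goodput")
# 9.07, 7.43 and 5.43 Gbit/s: in line with the goodput figures quoted on the slide.
```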

15 ESLEA Bedfont Lakes Dec 04 Richard Hughes-Jones: UKLight and ESLEA
- Collaboration forming for SC2005: Caltech, CERN, FERMI, SLAC, Starlight, UKLight, …
- Current proposals include:
  - Bandwidth Challenge with even faster disk-to-disk transfers between UK sites and SC2005
  - Radio Astronomy demo at 512 Mbit or 1 Gbit user data: Japan, Haystack (MIT), Jodrell Bank, JIVE
  - High-bandwidth link-up between UK and US HPC systems
  - 10 Gig NLR wave to Seattle
- Set up a 10 Gigabit Ethernet test bench: experiments (CALICE) need to investigate >25 Gbit to the processor
- ESLEA/UKLight need resources to study:
  - New protocols and congestion / sharing
  - The interaction between protocol processing, applications and storage
  - Monitoring L1/L2 behaviour in hybrid networks

16 ESLEA Bedfont Lakes Dec 04 Richard Hughes-Jones

