ESLEA-FABRIC Technical Meeting, 1 Sep 2006, R. Hughes-Jones, Manchester

Slide 1: Multi-Gigabit Trials on GÉANT: Collaboration with Dante
Richard Hughes-Jones, The University of Manchester

Slide 2: Outline
- What is GÉANT2? Why is it interesting?
  - 10 Gigabit Ethernet UDP memory-to-memory flows
- Options using the GÉANT Development Network
  - 10 Gbit SDH network
- Options using the GÉANT Lightpath Service
  - PoP locations for network tests
- PCs and current 10 Gbit tests
  - PC servers
  - Some test results

Slide 3: GÉANT2 Topology

Slide 4: GÉANT2: The Convergence Solution
[Diagram: two GÉANT2 PoPs (A and B) joined by dark fibre carrying managed lambdas (1626 LM); at each PoP an existing IP router provides NREN access and a 1678 MCC supplies the L2 and TDM matrices; the EXPReS PCs connect by 10 GE.]

Slide 5: From PoS to Ethernet
- More economical architecture
- Highest overall network availability
- Flexibility (VLAN management)
- Highest network performance (latency)
[Diagram: router IP links carried as 1/10 Gigabit Ethernet VLANs over VC-4-nv channels through the 1678 MCC transport node (L2 matrix + TDM matrix).]

Slide 6: What do we want to do?
- Set up a 4 Gigabit lightpath between GÉANT PoPs
  - Collaboration with Dante
  - PCs in their PoPs with 10 Gigabit NICs
- VLBI tests: UDP performance
  - Throughput, jitter, packet loss, 1-way delay, stability
  - Continuous (days-long) data flows: VLBI_UDP and multi-Gigabit TCP performance with current kernels
  - Experience for FPGA Ethernet packet systems
- Dante interests: multi-Gigabit TCP performance
  - The effect of (Alcatel) buffer size on bursty TCP when using bandwidth-limited lightpaths
- Need a collaboration agreement
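
The UDP metrics listed above (throughput, packet loss, jitter) all come from receive-side bookkeeping over sequence-numbered packets. The following Python sketch shows the shape of that analysis; it is illustrative only, not the actual vlbi_udp or UDPmon code:

```python
# Receive-side analysis for a UDP test: each packet carries a sequence
# number, and the receiver records its arrival time. From those records
# we derive throughput, loss and inter-arrival jitter.
# Illustrative sketch only -- not the real vlbi_udp / UDPmon code.
from statistics import mean, pstdev

def analyse(records, payload_bytes):
    """records: list of (seq, arrival_time_s) in arrival order."""
    seqs = [s for s, _ in records]
    times = [t for _, t in records]
    expected = max(seqs) - min(seqs) + 1          # packets the sender emitted
    lost = expected - len(set(seqs))              # gaps in the sequence space
    gaps = [b - a for a, b in zip(times, times[1:])]
    duration = times[-1] - times[0]
    return {
        "throughput_bit_s": len(records) * payload_bytes * 8 / duration,
        "lost": lost,
        "loss_fraction": lost / expected,
        "jitter_s": pstdev(gaps),                 # spread of inter-arrival times
        "mean_gap_s": mean(gaps),
    }

# Example: 5 packets of 8000 bytes sent at 10 us spacing, seq 2 lost
recs = [(0, 0.0), (1, 10e-6), (3, 30e-6), (4, 40e-6)]
stats = analyse(recs, 8000)
```

A lost packet shows up both as a missing sequence number and as a doubled inter-arrival gap, which is why the jitter figure has to be read alongside the loss count.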

Slide 7: Options Using the GÉANT Development Network
- 10 Gigabit SDH backbone
- Alcatel 1678 MCC
- Node locations: London, Amsterdam, Paris, Prague, Frankfurt
- Traffic routing possible, so long-RTT paths can be made
- Available Dec 06 / Jan 07
- Less pressure for long-term tests

Slide 8: Options Using the GÉANT Lightpaths
- Set up a 4 Gigabit lightpath between GÉANT PoPs
  - Collaboration with Dante; PCs in Dante PoPs
- 10 Gigabit SDH backbone
- Alcatel 1678 MCC
- Node locations: Budapest, Geneva, Frankfurt, Milan, Paris, Poznan, Prague, Vienna
- Traffic routing possible, so long-RTT paths can be made
- Ideal: London to Copenhagen

Slide 9: 4 Gigabit GÉANT Lightpath
- Example of a 4 Gigabit lightpath between GÉANT PoPs
  - PCs in Dante PoPs
  - 26 × VC-4s: 4180 Mbit/s

Slide 10: PCs and Current Tests

Slide 11: Test PCs Have Arrived
- Boston/Supermicro X7DBE
- Two dual-core Intel Xeon (Woodcrest) CPUs; independent 1.33 GHz front-side buses
- 530 MHz fully buffered memory (serial)
- Chipset: Intel 5000P MCH (PCIe & memory); ESB2 (PCI-X, GE etc.)
- PCI: 3 × 8-lane PCIe buses; 3 × 133 MHz PCI-X
- 2 × Gigabit Ethernet
- SATA

Slide 12: Bandwidth Challenge Wins Hat Trick
- The maximum aggregate bandwidth was >151 Gbit/s
  - 130 DVD movies in a minute
  - Could serve 10,000 MPEG2 HDTV movies in real time
- 22 × 10 Gigabit Ethernet waves: Caltech & SLAC/Fermi booths
  - In 2 hours transferred TBytes; in 24 hours moved ~475 TBytes
- Showed real-time particle event analysis
- SLAC/Fermi/UK booth:
  - 1 × 10 Gbit Ethernet to UK over NLR & UKLight: transatlantic HEP disk-to-disk, VLBI streaming
  - 2 × 10 Gbit links to SLAC: rootd low-latency file access application for clusters; Fibre Channel StorCloud
  - 4 × 10 Gbit links to Fermi: dCache data transfers
[Plot: SC|05 aggregate traffic into and out of the booth, Gbit/s.]

Slide 13: SC|05 Seattle-SLAC 10 Gigabit Ethernet
- 2 lightpaths: one routed over ESnet, one Layer 2 over UltraScience Net
- 6 Sun V20Z systems per λ
- dCache remote disk data access
  - 100 processes per node; each node sends or receives one data stream of Mbit/s
- Used Neterion NICs & Chelsio TOE
- Data also sent to StorCloud using Fibre Channel links
- Traffic on the 10 GE link for 2 nodes: 3-4 Gbit/s per node, Gbit/s on the trunk

Slide 14: Lab Tests: 10 Gigabit Ethernet
- 10 Gigabit test lab being set up in Manchester
  - Cisco 7600; cross-campus λ, <1 ms
  - Server-quality PCs; Neterion NICs; Chelsio being purchased
- Back-to-back performance so far (SuperMicro X6DHE-G2)
  - Kernel (2.6.13) & driver dependent!
  - One iperf TCP data stream: 4 Gbit/s
  - Two bi-directional iperf TCP data streams: 3.8 & 2.2 Gbit/s
  - UDP disappointing
- Propose to install a Fedora Core 5 kernel on the new Intel dual-core PCs
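
The iperf figures above come from memory-to-memory TCP streams: one end pushes a fixed volume of data from RAM, the other times its arrival. The sketch below reproduces that measurement structure in Python over loopback; numbers on localhost say nothing about a real 10 GE path, but the shape of the test is the same:

```python
# Memory-to-memory TCP throughput probe in the spirit of an iperf run,
# here over loopback: the sender streams a fixed volume from a buffer,
# the receiver times the transfer and reports Gbit/s.
import socket, threading, time

def run_probe(total_bytes=50_000_000, chunk=65536):
    result = {}

    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))          # let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def receiver():
        conn, _ = srv.accept()
        got = 0
        t0 = time.perf_counter()
        while True:                      # read until the sender closes
            data = conn.recv(chunk)
            if not data:
                break
            got += len(data)
        result["seconds"] = time.perf_counter() - t0
        result["bytes"] = got
        conn.close()

    t = threading.Thread(target=receiver)
    t.start()

    snd = socket.create_connection(("127.0.0.1", port))
    buf = b"\x00" * chunk                # data straight from memory, no disk
    sent = 0
    while sent < total_bytes:
        snd.sendall(buf)
        sent += len(buf)
    snd.close()
    t.join()
    srv.close()
    result["gbit_s"] = result["bytes"] * 8 / result["seconds"] / 1e9
    return result

r = run_probe()
```

As on the slide, the result is sensitive to kernel and driver behaviour: buffer sizes, interrupt coalescence and CPU affinity all move the number, which is why the same hardware can report 4 Gbit/s one kernel and much less on another.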

Slide 15: 10 Gigabit Ethernet: TCP Data Transfer on PCI-X
- Sun V20Z, 1.8 GHz to 2.6 GHz dual Opterons
- Connected via a 6509
- Xframe II NIC
- PCI-X mmrbc 4096 bytes, 66 MHz
- Two 9000-byte packets back-to-back
- Average rate 2.87 Gbit/s
- Packets sent in bursts; gap between bursts 343 µs
- 2 interrupts per burst
[Plot: PCI-X trace showing CSR access and data transfer phases.]

Slide 16: 10 Gigabit Ethernet: UDP Data Transfer on PCI-X
- Sun V20Z, 1.8 GHz to 2.6 GHz dual Opterons
- Connected via a 6509
- Xframe II NIC
- PCI-X mmrbc 2048 bytes, 66 MHz
- One 8000-byte packet: 2.8 µs for CSRs, 24.2 µs data transfer; effective rate 2.6 Gbit/s
- 2000-byte packets, wait 0 µs: ~200 ms pauses
- 8000-byte packets, wait 0 µs: ~15 ms between data blocks
[Plot: PCI-X trace showing the 2.8 µs CSR access and the data transfer.]
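
The effective rate follows directly from the bus transfer time, so the 2.6 Gbit/s figure above is easy to check, and the same arithmetic shows how the fixed CSR cost per packet eats into it:

```python
# Back-of-envelope check of the PCI-X figures on this slide: an 8000-byte
# packet occupying the bus for 24.2 us corresponds to ~2.6 Gbit/s, and the
# 2.8 us of CSR accesses per packet lowers the achievable per-packet rate.
def rate_gbit_s(payload_bytes, bus_time_s):
    return payload_bytes * 8 / bus_time_s / 1e9

data_only = rate_gbit_s(8000, 24.2e-6)            # ~2.6 Gbit/s, as on the slide
with_csrs = rate_gbit_s(8000, 24.2e-6 + 2.8e-6)   # ~2.4 Gbit/s once CSR time is counted
```

The fixed per-packet CSR cost is also why the small-packet case is so much worse: 2.8 µs of overhead is negligible against a 24 µs transfer but dominant against the far shorter transfer of a 2000-byte packet.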

Slide 17: Any Questions?

Slide 18: More Information: Some URLs (1)
- UKLight web site
- MB-NG project web site
- DataTAG project web site
- UDPmon / TCPmon kit + write-up
- Motherboard and NIC tests: "Performance of 1 and 10 Gigabit Ethernet Cards with Server Quality Motherboards", FGCS special issue
- TCP tuning information
- TCP stack comparisons: "Evaluation of Advanced TCP Stacks on Fast Long-Distance Production Networks", Journal of Grid Computing, 2004
- PFLDnet
- Dante PERT

Slide 19: More Information: Some URLs (2)
- Lectures, tutorials etc. on TCP/IP
- Encyclopaedia
- TCP/IP resources
- Understanding IP addresses
- Configuring TCP (RFC 1122): ftp://nic.merit.edu/internet/documents/rfc/rfc1122.txt
- Assigned protocols, ports etc. (RFC 1010) & /etc/protocols

Slide 20: Backup Slides

Slide 21: Our Long-Term Vision
[Diagram: at each end an application (e.g. Grid) on Ethernet issues a bandwidth request to middleware (policy, network resource manager, bandwidth on demand), which configures the 1678 MCC path via UNI-C commands and GMPLS. Research activity.]

Slide 22: SuperComputing

Slide 23: 10 Gigabit Ethernet: UDP Throughput
- 1500-byte MTU gives ~2 Gbit/s
- Used the maximum user-length MTU
- DataTAG Supermicro PCs
  - Dual 2.2 GHz Xeon CPUs, FSB 400 MHz
  - PCI-X mmrbc 512 bytes
  - Wire-rate throughput of 2.9 Gbit/s
- CERN OpenLab HP Itanium PCs
  - Dual 1.0 GHz 64-bit Itanium CPUs, FSB 400 MHz
  - PCI-X mmrbc 4096 bytes
  - Wire rate of 5.7 Gbit/s
- SLAC Dell PCs
  - Dual 3.0 GHz Xeon CPUs, FSB 533 MHz
  - PCI-X mmrbc 4096 bytes
  - Wire rate of 5.4 Gbit/s
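
The reason a larger MTU raises the user-visible rate is that the fixed per-frame costs are amortised over more payload. A small calculation makes this concrete, assuming standard Ethernet framing (preamble, header, FCS, inter-frame gap) and IPv4 + UDP headers:

```python
# Per-frame Ethernet overhead: preamble 8 + header 14 + FCS 4 +
# inter-frame gap 12 = 38 bytes on the wire outside the MTU, plus
# 28 bytes of IPv4 + UDP headers inside it. Bigger MTUs amortise
# these fixed costs over more user payload.
ETH_OVERHEAD = 8 + 14 + 4 + 12   # wire bytes per frame, outside the MTU
IP_UDP = 20 + 8                  # header bytes inside the MTU

def user_rate(wire_rate_bit_s, mtu):
    payload = mtu - IP_UDP               # user data per frame
    frame_on_wire = mtu + ETH_OVERHEAD   # total wire bytes per frame
    return wire_rate_bit_s * payload / frame_on_wire

std = user_rate(10e9, 1500)     # standard frames: ~9.57 Gbit/s of user data
jumbo = user_rate(10e9, 9000)   # jumbo frames: ~9.93 Gbit/s of user data
```

This framing overhead is only a few percent, so the much larger gaps on the slide (2 Gbit/s at 1500-byte MTU vs ~2.9 Gbit/s at the larger MTU) are dominated by per-packet host costs such as interrupts and PCI-X transactions, not by bytes on the wire.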

Slide 24: 10 Gigabit Ethernet: Tuning PCI-X
- 16080-byte packets every 200 µs
- Intel PRO/10GbE LR adapter
- PCI-X bus occupancy vs mmrbc
  - Measured times, and times based on PCI-X timing from the logic analyser
  - Expected throughput ~7 Gbit/s; measured 5.7 Gbit/s
[Plots: PCI-X sequences (CSR access, data transfer, interrupt & CSR update) for mmrbc 512, 1024, 2048 and 4096 bytes; 5.7 Gbit/s reached at mmrbc 4096.]
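
The mmrbc dependence can be caricatured with a toy model: a packet DMA is split into bursts of at most mmrbc bytes, and each burst pays a fixed setup cost, so small mmrbc values spend a large fraction of bus time on overhead. The constants below are illustrative assumptions, not the values measured with the logic analyser:

```python
# Toy model of PCI-X occupancy vs mmrbc: a packet DMA is split into
# ceil(packet / mmrbc) bursts, each paying a fixed setup cost on top of
# the raw byte-transfer time. The bus rate and setup time here are
# assumed round numbers for illustration, not measured figures.
import math

BUS_BYTES_PER_S = 64 / 8 * 133e6   # 64-bit PCI-X at 133 MHz, ~1.06 GB/s peak
BURST_SETUP_S = 0.2e-6             # assumed per-burst arbitration/setup cost

def bus_time(packet_bytes, mmrbc):
    bursts = math.ceil(packet_bytes / mmrbc)
    return bursts * BURST_SETUP_S + packet_bytes / BUS_BYTES_PER_S

def throughput_gbit_s(packet_bytes, mmrbc):
    return packet_bytes * 8 / bus_time(packet_bytes, mmrbc) / 1e9

small = throughput_gbit_s(16080, 512)    # 32 bursts: setup cost dominates
large = throughput_gbit_s(16080, 4096)   # 4 bursts: close to the bus peak
```

Even this crude model reproduces the qualitative result on the slide: quadrupling mmrbc cuts the number of bursts, and the throughput climbs toward, but never reaches, the raw bus rate, which is one reason the measured 5.7 Gbit/s falls short of the ~7 Gbit/s expected.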