CALICE TDAQ Application Network Protocols 10 Gigabit Lab


CALICE TDAQ Application: Network Protocols, 10 Gigabit Lab
Richard Hughes-Jones, The University of Manchester
www.hep.man.ac.uk/~rich/ then “Talks”
Xmas Meeting, Manchester, Dec 2006

Collecting Data over the Network
[Diagram: planks → concentrators → Ethernet switch → output link with bottleneck queue → processing nodes, 1 burst per node]
- Suggested to trigger the planks to send all the data, through concentrators, to one processing node
- Emulate this using two racks from the Tier2 farm
- Classic bottleneck problem for the switch
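A back-of-envelope sketch of this bottleneck: when several senders burst simultaneously through the concentrators onto one output port, the port queue must absorb whatever the link cannot drain during the burst. The numbers below (1 Gbit/s output link, 1500-byte frames) are illustrative assumptions, not figures from the slide.

```python
# Rough model of the classic switch bottleneck: n senders each burst
# burst_packets at sender_rate bit/s into one output port.
# Assumed: 1 Gbit/s output link, full-size 1500-byte Ethernet frames.

LINK_RATE = 1e9          # output link rate, bit/s (assumption)
PACKET_BITS = 1500 * 8   # bits per packet (assumption)

def queue_needed(n_senders, burst_packets, sender_rate):
    """Bytes the output-port queue must absorb if the bursts
    from all senders arrive concurrently."""
    burst_bits = burst_packets * PACKET_BITS
    arrive_time = burst_bits / sender_rate   # duration of one sender's burst
    arrived = n_senders * burst_bits         # total bits offered to the port
    drained = LINK_RATE * arrive_time        # bits the link carries meanwhile
    return max(arrived - drained, 0) / 8     # backlog in bytes

# Two concurrent 50-packet bursts at 790 Mbit/s leave ~55 kB queued;
# four senders would need ~205 kB, more than a typical per-port buffer.
print(queue_needed(2, 50, 790e6))
print(queue_needed(4, 50, 790e6))
```

With one sender the link keeps up and no queue builds; the backlog grows linearly with the number of concurrent senders, which is why fanning many planks into one processing node stresses the switch.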

Measurements with Two Flows
- Constant-rate flow at 790 Mbit/s
- Second flow with bursts of 50 packets at 790 Mbit/s, every 4 ms or every 2 ms
- Measure the one-way delay of the constant-rate flow
[Plots: one-way delay with bursts of 50 packets 2 ms apart and 4 ms apart]
- With 2 ms burst spacing, packets are lost at the end of each burst
- With 4 ms burst spacing, the switch queue has time to empty
- ~128 kbyte of memory per port in the Dell PowerConnect 5324 switch
- Better to use a request-response protocol to transfer the data
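The 2 ms versus 4 ms behaviour can be checked with a simple queue model. Assuming a 1 Gbit/s output port and 1500-byte packets (assumptions; the slide gives only the flow rates and the ~128 kbyte port buffer), one burst leaves a backlog that takes roughly 2.1 ms to drain, so 2 ms spacing never lets the queue recover while 4 ms spacing does.

```python
# Model of the two-flow experiment: a constant 790 Mbit/s flow shares a
# 1 Gbit/s output port with periodic 50-packet bursts sent at 790 Mbit/s.
# Assumed: 1 Gbit/s line rate and 1500-byte packets (not stated on the slide).

LINE = 1000e6        # output link rate, bit/s (assumption)
CONST = 790e6        # constant background flow, bit/s
BURST_RATE = 790e6   # rate at which the burst sender transmits, bit/s
PKT = 1500 * 8       # bits per packet (assumption)

burst_bits = 50 * PKT                  # 600 000 bits per burst
t_burst = burst_bits / BURST_RATE      # ~0.76 ms to send one burst

# While the burst arrives, input exceeds output by (CONST + BURST_RATE - LINE):
backlog_bits = (CONST + BURST_RATE - LINE) * t_burst
backlog_bytes = backlog_bits / 8       # peak queue occupancy, ~55 kB

# After the burst only the constant flow remains, so the queue drains
# at the spare capacity (LINE - CONST) = 210 Mbit/s:
t_drain = backlog_bits / (LINE - CONST)

print(f"peak queue ~{backlog_bytes / 1e3:.0f} kB, drain time ~{t_drain * 1e3:.1f} ms")
```

Under these assumptions the drain time comes out just over 2 ms: with 2 ms burst spacing the residual backlog accumulates from burst to burst until the ~128 kbyte port buffer overflows and packets at the tail of each burst are dropped, whereas with 4 ms spacing the queue empties between bursts.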

10 Gigabit Ethernet Lab
- Cisco 7600 switch installed
- PCs: Boston/Supermicro X7DBE
- Two dual-core Intel Xeon Woodcrest 5130 CPUs at 2 GHz
- 8-lane PCI-e

10 Gigabit Ethernet Lab
[Photos: PC with 10 Gigabit Ethernet NIC; Myricom fibre 10 Gigabit Ethernet NIC; Myricom CX4 copper 10 Gigabit Ethernet NIC]

Any Questions?