1 High Performance Event Building over InfiniBand Networks
J. Adamczewski-Musch, H.G. Essel, N. Kurz, S. Linev
GSI Helmholtzzentrum für Schwerionenforschung GmbH, Experiment Electronics, Data Processing group
http://dabc.gsi.de - NSS 2008, Dresden, 20.10.08
Work supported by EU FP6 project JRA1 FutureDAQ RII3-CT-2004-506078
Outline: CBM data acquisition, event building network, InfiniBand & OFED, performance tests, Data Acquisition Backbone Core (DABC)

2 CBM DAQ features summary
Complex trigger algorithms need the full data:
- self-triggered front-end electronics
- time-stamped data channels
- transport of the full data into the filter farm
- data sorting over a switched network at the full data rate of ~1 TB/s
- sorting network: ~1000 nodes
Is that possible (in 2012)?

3 Data flow principle
Detector electronics deliver time-stamped data channels (partial data):
- merge channels
- sort over the switched network; the units are not events, but time-slice data!
- distribute complete data to the processor farms (event definition, filtering, archiving)
Up to 1000 lines, 1 GByte/s each. A sketch of the slice assignment follows below.
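To make the distribution rule concrete, here is a hypothetical sketch (names and numbers are illustrative, not from the talk): every source ships its fragment of time slice t to builder node t mod N, so each builder ends up with complete time intervals from all sources.

```cpp
// Hypothetical illustration of time-slice distribution: N_SOURCES
// front-ends each send their fragment of slice t to builder node
// (t % N_BUILDERS), so every builder assembles complete time slices.
#include <cstdio>

int main() {
    const int N_SOURCES  = 4;   // illustration only; CBM aims at ~1000 lines
    const int N_BUILDERS = 4;
    for (int slice = 0; slice < 8; ++slice) {
        int builder = slice % N_BUILDERS;   // round-robin slice assignment
        for (int src = 0; src < N_SOURCES; ++src)
            std::printf("source %d -> builder %d (slice %d)\n",
                        src, builder, slice);
    }
    return 0;
}
```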

4 Software developments
Software packages developed:
1. Simulation with SystemC (flow control, scheduling); meta data travels on the data network.
2. Real dataflow core (round robin, with/without synchronization); Linux, InfiniBand, GB Ethernet; simulated data sources.
3. Data Acquisition Backbone Core DABC (includes the dataflow core); controls, configuration, monitoring, GUI ...; real data sources; a general purpose DAQ framework.
Let's have a look at these; a minimal SystemC sketch for item 1 follows below.
Poster N30-98 (Wednesday, 10:30): DABC, a Data Acquisition Backbone Core Library.
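For flavor, a minimal SystemC sketch (hypothetical, not the actual GSI simulation code): a self-triggered source pushes buffer descriptors into a bounded FIFO whose blocking write models back-pressure, the kind of flow-control behavior such a simulation studies.

```cpp
// Minimal SystemC sketch (hypothetical, not the GSI simulation code):
// a self-triggered source emits buffer descriptors into a bounded FIFO;
// the blocking write models back-pressure from the network.
#include <systemc.h>

SC_MODULE(DataSource) {
    sc_fifo_out<int> out;                 // emits buffer sizes in bytes
    void run() {
        for (;;) {
            wait(10, SC_US);              // front-end readout interval
            out.write(4096);              // blocks when the FIFO is full
        }
    }
    SC_CTOR(DataSource) { SC_THREAD(run); }
};

SC_MODULE(NetworkSink) {
    sc_fifo_in<int> in;                   // drains buffers at link speed
    void run() {
        for (;;) {
            int bytes = in.read();
            wait(bytes, SC_NS);           // toy model: 1 byte per ns
        }
    }
    SC_CTOR(NetworkSink) { SC_THREAD(run); }
};

int sc_main(int, char*[]) {
    sc_fifo<int> link(8);                 // bounded queue between modules
    DataSource src("src");  src.out(link);
    NetworkSink snk("snk"); snk.in(link);
    sc_start(1, SC_MS);                   // simulate one millisecond
    return 0;
}
```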

5 SystemC simulations: 100x100 (1)
[Chart: occupancy of the network by the different kinds of data (data and meta data); without traffic shaping ~40% utilization, with traffic shaping ~95%.]
- Optimization of network utilization for variable-sized buffers: enhancement up to a factor of two.
- The same network is used for data and meta data.

6 Real implementation: InfiniBand (2)
High-performance interconnect technology:
- switched fabric architecture
- up to 20 Gbit/s bidirectional serial link
- Remote Direct Memory Access (RDMA)
- quality of service
- zero-copy data transfer
- low latency (a few microseconds)
- common in HPC
Software library: the OpenFabrics (OFED) package; see www.openfabrics.org for more information. A small verbs example follows below.
[Photo: InfiniBand switch of the GSI cluster]
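As a taste of the programming model, a hedged sketch using the libibverbs API from the OFED package; it only opens a device and registers memory, while queue-pair setup and the actual RDMA work requests are left out for brevity.

```cpp
// Hedged sketch: open the first InfiniBand HCA and register a buffer for
// RDMA with the OFED verbs API (libibverbs). Registration pins the memory
// so the adapter can DMA directly into and out of it, which is the basis
// of zero-copy transfers. Queue-pair setup and work requests are omitted.
#include <infiniband/verbs.h>
#include <cstdio>
#include <vector>

int main() {
    int num = 0;
    ibv_device** devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { std::fprintf(stderr, "no IB device\n"); return 1; }

    ibv_context* ctx = ibv_open_device(devs[0]);
    ibv_pd* pd = ibv_alloc_pd(ctx);        // protection domain

    std::vector<char> buf(1 << 20);        // 1 MB transfer buffer
    ibv_mr* mr = ibv_reg_mr(pd, buf.data(), buf.size(),
                            IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_WRITE);
    std::printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n",
                buf.size(), mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```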

7 OpenFabrics software stack*
[Diagram: the OFED stack from hardware-specific drivers in kernel space, through the mid-layer kernel and user level verbs/API with connection manager abstraction (CMA), up to upper layer protocols (IPoIB, SDP, SRP, iSER, RDS, NFS-RDMA, cluster file systems) and user APIs (UDAPL, MPI, sockets-based and IP-based access), covering both InfiniBand HCAs and iWARP R-NICs; user-level access bypasses the kernel.]
Key to the abbreviations:
- SA: Subnet Administrator (multicast)
- MAD: Management Datagram
- SMA: Subnet Manager Agent
- PMA: Performance Manager Agent
- IPoIB: IP over InfiniBand
- SDP: Sockets Direct Protocol
- SRP: SCSI RDMA Protocol (initiator)
- iSER: iSCSI RDMA Protocol (initiator)
- RDS: Reliable Datagram Service
- UDAPL: User Direct Access Programming Library
- HCA: Host Channel Adapter
- R-NIC: RDMA NIC
Subnet manager: configuration. Subnet administrator: multicast.
* slide from www.openfabrics.org

8 InfiniBand basic tests
Hardware for testing:
- GSI (November 2006): 4 nodes, single data rate (SDR)
- Forschungszentrum Karlsruhe* (March 2007): 23 nodes, double data rate (DDR)
- Uni Mainz** (August 2007): 110 nodes, DDR

Point-to-point tests:
                   SDR (GSI)    DDR (Mainz)
  Unidirectional   0.98 GB/s    1.65 GB/s
  Bidirectional    0.95 GB/s    1.3 GB/s

Multicast tests (2 KByte):
                      Rate per node
  GSI (4 nodes)       0.625 GB/s
  Mainz (110 nodes)   0.225 GB/s

* thanks to Frank Schmitz, Ivan Kondov and the Project CampusGrid at FZK
** thanks to Klaus Merle and Markus Tacke at the Zentrum für Datenverarbeitung, Uni Mainz

9 Scaling of non-synchronized round-robin traffic

10 Effect of synchronization and topology optimization
Synchronizing the round-robin senders improves throughput by about 25%. Topology optimization for 110 nodes would have required different cabling of the switches! A sketch contrasting the two schedules follows below.
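To illustrate what synchronization buys (an illustrative sketch, not the actual test code): unsynchronized senders can drift onto the same receiver, while a synchronized "barrel shift" schedule staggers the offsets so that each step is a conflict-free permutation.

```cpp
// Illustrative contrast (not the actual test code) between an
// uncoordinated round-robin schedule and a synchronized "barrel shift".
// Without synchronization, senders can drift onto the same receiver
// (worst case shown: all senders target one node per step); the
// barrel-shift schedule makes every step a conflict-free permutation.
#include <cstdio>

static void show(const char* name, int (*target)(int sender, int step), int N) {
    std::printf("%s:\n", name);
    for (int step = 0; step < N; ++step) {
        std::printf("  step %d:", step);
        for (int s = 0; s < N; ++s)
            std::printf("  %d->%d", s, target(s, step));
        std::printf("\n");
    }
}

int main() {
    const int N = 4;                                       // illustration only
    show("unsynchronized (worst case)",
         [](int, int step) { return step % 4; }, N);       // all senders collide
    show("synchronized barrel shift",
         [](int s, int step) { return (s + step) % 4; }, N); // permutation
    return 0;
}
```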

11 Motivation for DABC (3)
History:
- 1996: MBS and its future; ~50 installations at GSI, 50 external, http://daq.gsi.de
- 2004: EU FP6 project JRA1 FutureDAQ*
- 2004: CBM FutureDAQ for FAIR
* RII3-CT-2004-506078
Use cases: detector tests, FE equipment tests, data transport, time distribution, switched event building, software evaluation, MBS event builder, general purpose DAQ.
Requirements for the intermediate demonstrator, the Data Acquisition Backbone Core:
- build events over fast networks
- handle triggered or self-triggered front-ends
- process time-stamped data streams
- provide data flow control (to front-ends)
- connect (nearly) any front-ends
- provide interfaces to plug in application codes
- connect MBS readout or collector nodes
- be controllable by several controls frameworks
Poster N30-98 (Wednesday, 10:30): DABC, a Data Acquisition Backbone Core Library.

12 DABC development branches
From this, DABC evolved as a general purpose DAQ framework. Its main application areas are:
1. High speed event building over fast networks.
2. Front-end readout chain tests (CBM, September 2008).
3. DABC as MBS* event builder (ready).
4. DABC with mixed, triggered (MBS) and time-stamped, data channels. This needs: synchronization between both, insertion of the event number from the trigger into the time-stamped data stream, and readout of the time stamp from MBS via a VME module (to be built).
* MultiBranchSystem: the standard DAQ system at GSI

13 DABC design: global overview
[Diagram: front-ends (MBS, DataCombiner, others) with a readout scheduler feed DABC nodes (Linux PCs performing data input, sorting, tagging, filtering, and analysis) over InfiniBand (IB), Gigabit Ethernet (GE) with TCP, or PCIe; further PCs handle analysis, filtering, and archiving. Application codes are added through plug-ins.]
GE: Gigabit Ethernet; IB: InfiniBand.

14 DABC design: data flow engine
A module processes data of one or several data streams. Data streams propagate through ports, which are connected by transports and devices.
[Diagram: two DABC modules, each with ports and a process, linked across the network by a Transport/Device pair on each side; local connections pass buffers by reference via the object manager.]
Central data manager: memory pools, buffer queues, threads. A minimal sketch of the module/port concept follows below.
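A minimal sketch of the module/port idea under illustrative names (Buffer, Port, SorterModule are hypothetical, not the real DABC classes): buffers travel by reference through bounded queues, which also gives natural back-pressure.

```cpp
// Hypothetical sketch of the module/port data-flow concept; class names
// are illustrative, not the real DABC API. Buffers are passed by
// reference (shared_ptr) through bounded queues, giving zero-copy
// hand-off and back-pressure when a queue runs full.
#include <cstdio>
#include <memory>
#include <queue>
#include <vector>

using Buffer = std::shared_ptr<std::vector<char>>;

struct Port {                          // bounded buffer queue between modules
    std::queue<Buffer> q;
    size_t capacity = 8;
    bool send(Buffer b) {              // false signals back-pressure upstream
        if (q.size() >= capacity) return false;
        q.push(std::move(b));
        return true;
    }
    Buffer recv() {
        if (q.empty()) return nullptr;
        Buffer b = std::move(q.front());
        q.pop();
        return b;
    }
};

struct SorterModule {                  // processes one stream into another
    Port* in;
    Port* out;
    void process() {
        while (Buffer b = in->recv()) {
            // ... sort/tag the time-slice data in *b here ...
            out->send(std::move(b));   // forward by reference, no copy
        }
    }
};

int main() {
    Port input, output;
    input.send(std::make_shared<std::vector<char>>(4096)); // one 4 KB buffer
    SorterModule m{&input, &output};
    m.process();
    std::printf("buffers forwarded: %zu\n", output.q.size());
    return 0;
}
```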

15 DABC and InfiniBand
DABC fully supports InfiniBand as data transport between nodes:
- connection establishment
- memory management for zero-copy transfer
- back-pressure
- error handling
- thread sharing
[Chart: all-to-all test on the GSI InfiniBand cluster with four SDR nodes.]

16 Conclusion
- InfiniBand is a good candidate for a fast event building network.
- Up to 520 MB/s bidirectional data rate was achieved on 110 nodes without optimization.
- A mixture of point-to-point and multicast traffic is possible.
- Further investigation of scheduling mechanisms is required.
- Further investigation of network topology is required.
Thank you!

