
1 Alice High Level Trigger Data Transport Framework Software
Timm Morten Steinbeck, Computer Science/Computer Engineering Group, Kirchhoff Institute f. Physics, Ruprecht-Karls-University Heidelberg

2 High Level Trigger Farm
● HLT Linux-PC farm (500-1000 nodes) with fast network
● Nodes are arranged hierarchically
● First stage reads data from the detector and performs first-level processing
● Each stage sends its produced data to the next stage
● Each further stage performs processing or merging on its input data

3 High Level Trigger
The HLT needs a software framework to transport data through the PC farm.
● Requirements for the framework:
  – Efficiency:
    ● The framework should not use too many CPU cycles (these are needed for data analysis)
    ● The framework should transport the data as fast as possible
  – Flexibility:
    ● The framework should consist of components which can be plugged together in different configurations
    ● The framework should allow reconfiguration at runtime

4 The Data Flow Framework
● Components are single processes
● Communication is based on the publisher-subscriber principle:
  ● One publisher can serve multiple subscribers
  ● Subscribe once, receive all new events
  ● The subscriber informs the publisher when it is done working on an event
● An abstract object-oriented interface hides the communication (see the sketch below)
[Diagram: publisher-subscriber exchange - Subscribe, New Event, and Event Done messages between publisher and subscribers]
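
As an illustration of the publisher-subscriber principle and the abstract interface mentioned above, here is a minimal C++ sketch. The class and method names (Publisher, Subscriber, Subscribe, NewEvent, EventDone, AnnounceEvent) and all signatures are assumptions for illustration only, not the framework's actual API.

```cpp
// Minimal sketch of an abstract publisher-subscriber interface (hypothetical names).
#include <cstdint>

struct EventDescriptor;  // describes where an event's data blocks live (cf. slide 11)

// Interface a data consumer implements; the publisher calls it back.
class Subscriber {
public:
    virtual ~Subscriber() = default;
    // Called by the publisher for every new event delivered to this subscriber.
    virtual void NewEvent(const EventDescriptor& event) = 0;
};

// Interface a data producer offers to its subscribers.
class Publisher {
public:
    virtual ~Publisher() = default;
    // Subscribe once; afterwards all new events arrive via Subscriber::NewEvent().
    virtual void Subscribe(Subscriber& s) = 0;
    virtual void Unsubscribe(Subscriber& s) = 0;
    // A subscriber reports that it is done working on an event, so the
    // publisher may eventually free the event's buffers.
    virtual void EventDone(Subscriber& s, uint64_t eventId) = 0;
    // Used on the producing side to announce a new event to all subscribers.
    virtual void AnnounceEvent(const EventDescriptor& event) = 0;
};
```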

5 Framework Architecture
The framework consists of mutually dependent packages:
● Utility Classes
● Communication Classes
● Publisher & Subscriber Base Classes
● Data Source, Sink, & Processing Components
● Data Source, Sink, & Processing Templates
● Data Flow Components

6 Utility Classes
● General helper classes (e.g. timer, thread)
● Rewritten classes:
  – Thread-safe string class
  – Faster vector class

7 Communication Classes
● Two abstract base classes:
  – For small message transfers
  – For large data blocks
● Derived classes for each network technology (see the sketch below)
● Currently existing: message and block classes for TCP and SCI (Scalable Coherent Interface)
[Class diagram: Message Base Class with derived TCP Message Class and SCI Message Class; Block Base Class with derived TCP Block Class and SCI Block Class]
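
The two-level hierarchy (abstract base class plus one derived class per network technology) could be organized roughly as in the sketch below. The names and method signatures are assumptions; only the TCP message branch is spelled out, and the bodies are stubs.

```cpp
// Hypothetical sketch of the communication class hierarchy; not the real API.
#include <cstddef>
#include <string>

// Abstract base class for small message transfers.
class MessageCommunication {
public:
    virtual ~MessageCommunication() = default;
    virtual bool Connect(const std::string& address) = 0;
    virtual bool Send(const void* msg, std::size_t size) = 0;
    virtual void Disconnect() = 0;
};

// Abstract base class for large data block transfers.
class BlockCommunication {
public:
    virtual ~BlockCommunication() = default;
    virtual bool Connect(const std::string& address) = 0;
    virtual bool SendBlock(const void* block, std::size_t size) = 0;
    virtual void Disconnect() = 0;
};

// One derived class per network technology, e.g. TCP:
class TCPMessageCommunication : public MessageCommunication {
public:
    bool Connect(const std::string& address) override { /* open TCP socket */ return true; }
    bool Send(const void* msg, std::size_t size) override { /* write to socket */ return true; }
    void Disconnect() override { /* close socket */ }
};
// SCI message and block classes would derive from the same base classes.
```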

8 Communication Classes
● Implementations foreseen for the Atoll network (University of Mannheim) and the Scheduled Transfer Protocol
● API partly modelled after the socket API (Bind, Connect)
● Both implicit and explicit connection possible (see the usage sketch below):
  – Explicit connection: the user calls Connect, Send, and Disconnect; the system connects, transfers the data, and disconnects in the corresponding calls
  – Implicit connection: the user calls only Send; the system connects, transfers the data, and disconnects within that single call
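
A hedged usage sketch of the two connection modes, reusing the hypothetical MessageCommunication class from the previous sketch; the address string and call sequence are illustrative assumptions.

```cpp
// Explicit vs. implicit connection handling (illustration only).
#include <cstddef>

void SendWithExplicitConnection(MessageCommunication& com,
                                const void* msg, std::size_t size) {
    com.Connect("node05:4711");  // user explicitly establishes the connection
    com.Send(msg, size);         // transfer the data
    com.Disconnect();            // user explicitly tears the connection down
}

void SendWithImplicitConnection(MessageCommunication& com,
                                const void* msg, std::size_t size) {
    // With an implicit connection, the single Send() call internally connects,
    // transfers the data, and disconnects again (the target address would
    // have been set beforehand, e.g. via Bind).
    com.Send(msg, size);
}
```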

9 Publisher-Subscriber Classes
● Implement the publisher-subscriber interface
● Abstract interface for the communication mechanism between processes/modules
● Currently named pipes or shared memory available
● Multi-threaded implementation
[Diagram: publisher-subscriber exchange - Subscribe, New Event, and Event Done messages between publisher and subscribers]

10 Publisher-Subscriber Classes
● Efficiency: event data is not sent to the subscriber components
● The publisher process places the data into shared memory (ideally already during its production)
● Descriptors holding the location of the data in shared memory are sent to the subscribers
● This requires buffer management in the publisher
[Diagram: the publisher keeps event M's data blocks 0..n in shared memory and sends only the event M descriptor with the New Event message to the subscriber]

11 Event Descriptor Details
● Event descriptors contain:
  ● The ID of the event
  ● The number of data blocks described
  ● For each data block:
    ● The size of the block
    ● The shared memory ID
    ● The starting offset in the shared memory
    ● The type of data
    ● The ID of the originating node
[Diagram: descriptor for event M with block count n; each block entry (Shm ID, Offset, Size, Datatype, Producer ID) points to one of event M's data blocks in shared memory]
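
Expressed as plain data structures, the descriptor layout described above could look roughly like this; field names and types are assumptions, not the framework's actual definitions.

```cpp
// Hypothetical sketch of the event descriptor layout.
#include <cstdint>
#include <vector>

// Describes one data block residing in a shared memory segment.
struct BlockDescriptor {
    uint64_t size;        // size of the block in bytes
    uint32_t shmId;       // ID of the shared memory segment holding the block
    uint64_t offset;      // starting offset of the block inside that segment
    uint32_t dataType;    // type of the data contained in the block
    uint32_t producerId;  // ID of the node that produced the block
};

// Describes one event: its ID plus one descriptor per data block.
struct EventDescriptor {
    uint64_t eventId;                     // the ID of the event
    std::vector<BlockDescriptor> blocks;  // blocks.size() is the block count
};
```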

12 Persistent and Transient
Two kinds of subscribers are supported: persistent and transient.
● Persistent subscribers get all events.
● The publisher frees an event only when all persistent subscribers are finished with it.
● Transient subscribers can specify event selection criteria (event number modulo, trigger words), as sketched below.
● Transient subscribers can have events canceled by the publisher.
[Diagram: New Event / Event Done / Free Event message sequence for persistent subscribers; New Event / Cancel Event sequence for transient subscribers]
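
How a transient subscription with selection criteria might be expressed is sketched below; the EventSelection fields and the SubscribePersistent/SubscribeTransient methods are purely hypothetical, building on the Subscriber interface sketched earlier.

```cpp
// Hypothetical illustration of persistent vs. transient subscriptions.
#include <cstdint>

struct EventSelection {
    uint32_t modulo = 1;       // accept only every modulo-th event number
    uint32_t triggerMask = 0;  // accept only events matching these trigger words (0 = any)
};

class SelectivePublisher {
public:
    virtual ~SelectivePublisher() = default;
    // Persistent subscribers get every event; an event is freed only after
    // all of them have reported EventDone.
    virtual void SubscribePersistent(Subscriber& s) = 0;
    // Transient subscribers get a filtered stream and may have events
    // canceled again by the publisher.
    virtual void SubscribeTransient(Subscriber& s, const EventSelection& sel) = 0;
};
```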

13 Data Flow Components
Several components to shape the data flow in a chain:
● EventMerger: merges data streams belonging to one event
● EventScatterer and EventGatherer: split and rejoin a data stream (e.g. for load balancing)
[Diagram: an EventMerger combines Event N, Parts 1-3 into one Event N; an EventScatterer distributes Events 1-3 to parallel Processing instances and an EventGatherer rejoins the streams]

14 Data Flow Components
Several components to shape the data flow in a chain:
● Bridge: transparently transports data over the network to other computers
● The SubscriberBridgeHead has a subscriber class for the incoming data; the PublisherBridgeHead uses a publisher class to announce the data
[Diagram: SubscriberBridgeHead on Node 1 - Network - PublisherBridgeHead on Node 2]

15 Event Scatterer
● One subscriber input.
● Multiple publisher outputs.
● Incoming events are distributed to the output publishers in round-robin fashion (see the sketch below).
[Diagram: one subscriber feeding two output publishers, which announce New Event 1 and New Event 2 alternately]
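
The round-robin distribution itself is simple; the sketch below builds on the hypothetical Publisher, Subscriber, and EventDescriptor types from the earlier sketches (names and structure assumed for illustration).

```cpp
// Hypothetical sketch of an event scatterer: one subscriber input,
// several publisher outputs, events handed out round-robin.
#include <cstddef>
#include <utility>
#include <vector>

class EventScatterer : public Subscriber {
public:
    explicit EventScatterer(std::vector<Publisher*> outputs)
        : outputs_(std::move(outputs)) {}

    void NewEvent(const EventDescriptor& event) override {
        // Forward the incoming event to the next output publisher in turn.
        outputs_[next_]->AnnounceEvent(event);
        next_ = (next_ + 1) % outputs_.size();
    }

private:
    std::vector<Publisher*> outputs_;
    std::size_t next_ = 0;
};
```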

16 Event Gatherer
● Multiple subscribers attached to different publishers.
● One output publisher.
● Each incoming event is published unchanged by the output publisher.
[Diagram: two subscribers feeding one output publisher, which announces New Event 1 and New Event 2]

17 Event Merger
● Multiple subscribers attached to different publishers.
● One output publisher.
● Data blocks from incoming events are merged into one outgoing event (see the sketch below).
[Diagram: merging code combines Event M, Block 0 and Event M, Block 1 arriving from two subscribers into one Event M with Blocks 0 and 1; the blocks themselves stay in shared memory]
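
A sketch of the merging idea: descriptors with the same event ID are collected from the different inputs and re-announced as one combined event, while the data itself stays in shared memory. Bookkeeping details, time-outs, and error handling are omitted, and all types come from the earlier hypothetical sketches.

```cpp
// Hypothetical sketch of an event merger.
#include <cstdint>
#include <map>

class EventMerger {
public:
    EventMerger(Publisher& output, unsigned expectedInputs)
        : output_(output), expectedInputs_(expectedInputs) {}

    // Called whenever one input delivers its part of an event.
    void AddPart(const EventDescriptor& part) {
        EventDescriptor& merged = pending_[part.eventId];
        merged.eventId = part.eventId;
        // Only the descriptors are copied; the data blocks stay in shared memory.
        merged.blocks.insert(merged.blocks.end(),
                             part.blocks.begin(), part.blocks.end());
        if (++partsSeen_[part.eventId] == expectedInputs_) {
            output_.AnnounceEvent(merged);  // publish the combined event
            pending_.erase(part.eventId);
            partsSeen_.erase(part.eventId);
        }
    }

private:
    Publisher& output_;
    unsigned expectedInputs_;
    std::map<uint64_t, EventDescriptor> pending_;  // per event ID: merged descriptor so far
    std::map<uint64_t, unsigned> partsSeen_;       // per event ID: number of parts received
};
```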

18 Bridge
The Bridge consists of two programs (sketched below):
● SubscriberBridgeHead
  ● Contains a subscriber.
  ● Gets the input data from a publisher.
  ● Sends the data over the network.
● PublisherBridgeHead
  ● Reads the data from the network.
  ● Publishes the data again.
[Diagram: data producer and publisher on Node A - SubscriberBridgeHead - network data transport - PublisherBridgeHead on Node B - subscriber and data consumer]
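
The sketch below shows what the sending half conceptually does, combining the earlier hypothetical interfaces (Subscriber, Publisher, BlockCommunication, EventDescriptor). The MapSharedMemory helper is an assumption standing in for whatever shared memory access the real programs use.

```cpp
// Hypothetical sketch of the sending side of a Bridge.
#include <cstdint>

class SubscriberBridgeHeadSketch : public Subscriber {
public:
    SubscriberBridgeHeadSketch(BlockCommunication& net, Publisher& localPublisher)
        : net_(net), local_(localPublisher) {}

    void NewEvent(const EventDescriptor& event) override {
        for (const BlockDescriptor& b : event.blocks) {
            // Locate the block in shared memory (MapSharedMemory is assumed here)
            // and ship it over the network to the PublisherBridgeHead.
            const void* data = MapSharedMemory(b.shmId, b.offset);
            net_.SendBlock(data, b.size);
        }
        local_.EventDone(*this, event.eventId);  // release the event locally
    }

private:
    const void* MapSharedMemory(uint32_t shmId, uint64_t offset);  // hypothetical helper
    BlockCommunication& net_;
    Publisher& local_;
};
// On the receiving node, the PublisherBridgeHead counterpart would read the
// blocks from the network, place them into its own shared memory, and
// announce them again through a Publisher.
```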

19 Component Templates
Templates for user components are provided (a usage sketch follows below):
● Data Source Template: reads out data from a source and inserts it into a chain (data source addressing & handling, buffer management, data announcing)
● Analysis Template: accepts data from the chain, processes it, and reinserts the result (accepting input data, input data addressing, data analysis & processing, output buffer management, output data announcing)
● Data Sink Template: accepts data from the chain and processes it, e.g. stores it (accepting input data, input data addressing, data sink addressing & writing)
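
One way such a template could be used from the analysis side is sketched below: the framework part of the template handles input addressing, output buffer management, and data announcing, while the user only fills in the processing step. Class and method names are assumptions, not the framework's actual template API.

```cpp
// Hypothetical sketch of an analysis component template.
#include <cstddef>

class AnalysisComponentTemplate {
public:
    virtual ~AnalysisComponentTemplate() = default;
    // Provided by the template: accept input data, call ProcessEvent(),
    // manage the output buffer, and announce the produced data downstream.
    void Run();  // framework side, not shown here

protected:
    // Filled in by the user: consume one input block, write the result into
    // the output buffer, and return the number of bytes produced.
    virtual std::size_t ProcessEvent(const void* input, std::size_t inputSize,
                                     void* output, std::size_t outputCapacity) = 0;
};

// Example user component, e.g. a cluster finder filling in only ProcessEvent().
class ClusterFinderComponent : public AnalysisComponentTemplate {
protected:
    std::size_t ProcessEvent(const void* input, std::size_t inputSize,
                             void* output, std::size_t outputCapacity) override {
        // ... analyse ADC data in 'input' and write space points to 'output' ...
        (void)input; (void)inputSize; (void)output; (void)outputCapacity;
        return 0;  // number of bytes written
    }
};
```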

20 HLT Fault Tolerance
● 500-1000 PCs in the Alice HLT (or more).
● Round-the-clock operation.
● Cost of one hour of Alice operation: more than 50,000 CHF.
What to do when one of the PCs fails?

21 HLT Fault Tolerance
Each level of processing in the (TPC) HLT splits its output and distributes it among multiple nodes of the next processing stage. Optional, normally unused (hot) spare nodes may be available.

22 HLT Fault Tolerance
If one of the nodes fails, the data is distributed among the remaining nodes. Alternatively, one of the spare nodes is activated and takes over the part of the faulty node.

23 HLT Fault Tolerance Test
Test of the HLT fault tolerance capability:
One node distributes its output data to three further worker nodes (plus one spare). The output data of the three worker nodes is collected by a sixth node. For one of the three worker nodes, the network cable is unplugged.

24 HLT Fault Tolerance Test
Test of the HLT fault tolerance capability - results:
After the "failure" of the node, the data is processed by the spare node, which has been activated. No data is lost, and there is only a temporary performance loss until the switch-over is complete. The switch-over happens completely automatically, without any user intervention.

25 HLT Fault Tolerance

26 Benchmarks
● Performance tests of the framework
● Dual Pentium III, 733 MHz-800 MHz
● Tyan Thunder 2500 or Thunder HEsl motherboard, ServerWorks HE/HEsl chipset
● 512 MB RAM, >20 GB disk space (system, swap, tmp)
● Fast Ethernet, switch with 21 Gb/s backplane
● SuSE Linux 7.2 with 2.4.16 kernel
● Benchmarks were run with unoptimized debugging code on a kernel with multiple debugging options, so the results can only get better

27 Benchmarks
Benchmark of the publisher-subscriber interface
● Publisher process
● 16 MB output buffer, event size 128 B
● The publisher does the buffer management and copies the data into the buffer; the subscriber just replies to each event
● Maximum performance: more than 12.5 kHz

28 Benchmarks
Benchmark of the TCP message class
● Client sending messages to a server on another PC
● TCP over Fast Ethernet
● Message size 32 B
● Maximum message rate: more than 45 kHz

29 Benchmarks
Publisher-subscriber network benchmark
● Publisher on node A, subscriber on node B
● Connected via a Bridge, TCP over Fast Ethernet
● 31 MB buffer in the publisher and the receiving bridge
● Data size from 128 B to 1 MB
[Diagram: Publisher and SubscriberBridgeHead on Node A - network - PublisherBridgeHead and Subscriber on Node B]

30 Benchmarks
Publisher-subscriber network benchmark
Notes: dual-CPU nodes, 100% ≙ 2 CPUs; theoretical rate = 100 Mb/s / event size
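
To illustrate the quoted formula with two of the tested event sizes (these numbers are derived here, not taken from the slides): for 128 B events the theoretical limit is 100·10^6 bit/s / (128 · 8 bit) ≈ 97.7 kHz, while for 1 MB events it is 100·10^6 bit/s / (2^20 · 8 bit) ≈ 11.9 Hz, i.e. the full Fast Ethernet payload bandwidth of 12.5·10^6 B/s.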

31 Benchmarks
● CPU load and event rate decrease with larger blocks
● The receiver is more loaded than the sender
● Minimum CPU load: 20% sender, 30% receiver
● Maximum CPU load (at 128 B events): 90% receiver, 80% sender
● Management of more than 250,000 events in the system at the same time

32 Benchmarks Publisher-Subscriber Network Benchmark

33 Benchmarks
● From 32 kB event size on, the network bandwidth is the limit
● At 32 kB event size: more than 10 MB/s
● At maximum event size: 12.3·10^6 B/s out of the theoretical 12.5·10^6 B/s

34 "Real-World" Test
● 13-node test setup
● Simulation of read-out and processing of 1/36 (one slice) of the Alice Time Projection Chamber (TPC)
● Simulated piled-up (overlapped) proton-proton events
● Target processing rate: 200 Hz (maximum read-out rate of the TPC)

35 "Real-World" Test
[Diagram: data flow for one TPC slice with patches 1-6. Components: File Publisher (FP), ADC Unpacker (AU), Cluster Finder (CF), Event Scatterer (ES), Tracker (T, 2x per patch), Event Gatherer (EG), Event Merger (EM), Patch Merger (PM), Slice Merger (SM)]

36 "Real-World" Test
[Diagram: FP - AU - CF data path for one patch; the run-length-encoded input "1, 2, 123, 255, 100, 30, 5, 4*0, 1, 4, 3*0, 4, 1, 2, 60, 130, 30, 5, ..." is expanded by the ADC Unpacker to "1, 2, 123, 255, 100, 30, 5, 0, 0, 0, 0, 1, 4, 0, 0, 0, 4, 1, 2, 60, 130, 30, 5, ..."]

37 "Real-World" Test
● The first line of nodes unpacks the zero-suppressed, run-length-encoded ADC values and calculates the 3D space-point coordinates of the charge depositions
● The second line of nodes finds curved particle track segments going through the charge space-points
● The third (line of) node(s) connects the track segments to form complete tracks (track merging)

38 Test Results
Rate: 270 Hz
Per processing stage, in the order shown in the diagram:
● CPU load: 2 * 100%; network: 6 * 130 kB/s event data
● CPU load: 2 * 75%-100%; network: 6 * 1.8 MB/s event data
● CPU load: 2 * 60%-70%

39 Conclusion & Outlook
● The framework allows flexible creation of data flow chains while still maintaining efficiency
● No dependencies are created during compilation
● Applicable to a wide range of tasks
● Performance is already good enough for many applications using TCP on Fast Ethernet
Future work: use the dynamic configuration ability for fault tolerance purposes; further performance improvements.
More information: http://www.ti.uni-hd.de/HLT

