
1 ALICE High Level Trigger Interfaces and Data Organisation CHEP 2006 – Mumbai (13.02. – 17.02.2006)
Sebastian Robert Bablok, Matthias Richter, Dieter Roehrich, Kjetil Ullaland (Department of Physics and Technology, University of Bergen, Norway) Torsten Alt, Volker Lindenstruth, Timm M. Steinbeck, Heinz Tilsner (Kirchhoff Institute of Physics, Ruprecht-Karls-University Heidelberg, Germany) Harald Appelshaeuser (Institute for Nuclear Physics, University of Frankfurt, Germany) Haavard Helstrup (Faculty of Engineering, Bergen University College, Norway) Bernhard Skaali, Thomas Vik (Department of Physics, University of Oslo, Norway) Cvetan Cheshkov (CERN, Geneva, Switzerland) for the ALICE Collaboration

2 Outline HLT Overview HLT - ECS Interface HLT - DAQ Data Format
HLT - GRID Interface Summary & Outlook

3 Outline HLT Overview HLT - ECS Interface HLT - DAQ Data Format
HLT - GRID Interface Summary & Outlook

4 HLT Overview
Purpose:
- online event reconstruction and analysis
- providing trigger decisions to DAQ
- monitoring and performance analysis of the ALICE detectors
- usage of its spare computing power (HLT cluster) for GRID
Architecture:
- ca. 500-node cluster (similar to Tier 2)
- dual-SMP boards with dual-core CPUs (500 × 2 × 2 → up to 2000 processing units in total)
- Gigabit Ethernet
- includes up to 300 FEP (Front-End Processing) nodes receiving data from the detectors
- dynamic software data transport framework (PubSub and TaskManager system; see the sketch below)
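The slides do not show code for the data transport framework; as a purely conceptual sketch (hypothetical class and function names, not the actual ALICE PubSub API), the following illustrates how a publisher/subscriber chain moves data blocks between processing components:

```python
# Conceptual sketch of a publisher/subscriber processing chain.
# Hypothetical names only -- this is NOT the actual ALICE HLT PubSub API.

class Publisher:
    """Announces data blocks to all attached subscribers."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, event_block):
        for callback in self._subscribers:
            callback(event_block)

# Chain: FEP node publishes raw data -> tracker component -> trigger component.
fep = Publisher()
tracker_out = Publisher()

def track_finder(raw_block):
    # Placeholder reconstruction step: wrap the raw payload as "tracks".
    tracker_out.publish({"tracks": raw_block["payload"], "event": raw_block["event"]})

def trigger(reco_block):
    # Placeholder trigger decision based on reconstructed content.
    print(f"event {reco_block['event']}: accept={len(reco_block['tracks']) > 0}")

fep.subscribe(track_finder)
tracker_out.subscribe(trigger)
fep.publish({"event": 1, "payload": [0.5, 1.2]})
```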

5 HLT Overview [architecture diagram: detector data arriving over DDLs into ca. 300 FEP nodes of the HLT cluster]

6 HLT Overview
Detector interfaces:
- raw data input copied via D-RORC from DAQ
- output of processed data and trigger decisions to the DAQ LDCs
- ECS interface for integration into the ALICE controlling system
- DCS interface to receive configuration data
- AliRoot interface to the analysis code and for online monitoring
- AliEn interface to use idle nodes for GRID computing
- internal node interfaces using the PubSub and TaskManager system
[diagram: HLT cluster connected to L2 Accept Trigger, ECS, DCS, GRID, Online/Offline Software, Monitoring/Logging, and Event Building/DAQ (HLT Accept)]

7 Outline HLT Overview HLT - ECS Interface HLT - DAQ Data Format
HLT - GRID Interface Summary & Outlook

8 HLT - ECS Interface
Control of the HLT through the Experiment Control System (ECS) for initialisation and runtime operations.
Interface:
- represented as well-defined states and transitions, managed by Finite State Machines (FSM)
- ECS controls the HLT via the HLT proxy, implemented in SMI++ (external communication via DIM)
- internal communication uses the interface library of the TaskManager control system

9 HLT - ECS Interface
HLT proxy states and transitions:
- subdivided into two types of states:
  - "stable" states (OFF, INITIALIZED, CONFIGURED, READY, RUNNING, ERROR): transitions only via well-defined commands
  - intermediate states (RAMPING_UP_&_INITIALIZING, CONFIGURING, ENGAGING, DISENGAGING, RAMPING_DOWN): longer transition procedures; the transition completes implicitly when the procedure is finished
- the configuration is passed as parameters of the configure command (a sketch of these parameters follows below):
  - mode of the HLT:
    - "A" (no HLT) → no output to DAQ
    - "B" (data processing only) → processed event data to DAQ, HLT trigger decisions ignored by DAQ
    - "C" (full HLT functionality) → HLT trigger decisions and pre-processed event data provided to DAQ
  - output data format: format version number of the data sent back to DAQ
  - trigger classes: tags identifying the desired physics triggers
  - list of DDLs: on which H-RORCs receive event data for the HLT
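As a hedged illustration of what the configure command carries (the field names are assumptions derived from the slide, not the documented ECS interface), the parameters could be modelled as:

```python
# Sketch of the parameters passed with the ECS configure command.
# Field names are illustrative, taken from the slide, not from the real interface.
from dataclasses import dataclass, field
from typing import List

@dataclass
class HLTConfiguration:
    mode: str                      # "A" (no HLT), "B" (processing only), "C" (full HLT)
    output_format_version: int     # format version of data sent back to DAQ
    trigger_classes: List[str]     # tags identifying the desired physics triggers
    ddl_list: List[int] = field(default_factory=list)  # DDLs on which H-RORCs receive data

cfg = HLTConfiguration(mode="C", output_format_version=1,
                       trigger_classes=["dimuon", "jet"], ddl_list=[768, 769])
assert cfg.mode in ("A", "B", "C")
```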

10 HLT proxy - States
[state diagram, main path; modelled in the sketch below:]
OFF --initialize--> RAMPING_UP_&_INITIALIZING <<intermediate>> --implicit--> INITIALIZED
INITIALIZED --configure--> CONFIGURING <<intermediate>> --implicit--> CONFIGURED
CONFIGURED --engage--> ENGAGING <<intermediate>> --implicit--> READY
READY --start--> RUNNING, RUNNING --stop--> READY
READY --disengage--> DISENGAGING <<intermediate>> --implicit--> CONFIGURED
CONFIGURED --reset--> INITIALIZED, INITIALIZED --shutdown--> RAMPING_DOWN <<intermediate>> --implicit--> OFF
ERROR: reachable from the other states, left via reset/shutdown
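A minimal sketch of this state machine, assuming the transitions read off the diagram above (ERROR handling reduced to a single reset path for brevity):

```python
# Simplified model of the HLT proxy state machine from the diagram above.
# Intermediate states resolve via "implicit" transitions.

TRANSITIONS = {
    ("OFF", "initialize"): "RAMPING_UP_&_INITIALIZING",
    ("RAMPING_UP_&_INITIALIZING", "implicit"): "INITIALIZED",
    ("INITIALIZED", "configure"): "CONFIGURING",
    ("CONFIGURING", "implicit"): "CONFIGURED",
    ("CONFIGURED", "engage"): "ENGAGING",
    ("ENGAGING", "implicit"): "READY",
    ("READY", "start"): "RUNNING",
    ("RUNNING", "stop"): "READY",
    ("READY", "disengage"): "DISENGAGING",
    ("DISENGAGING", "implicit"): "CONFIGURED",
    ("CONFIGURED", "reset"): "INITIALIZED",
    ("INITIALIZED", "shutdown"): "RAMPING_DOWN",
    ("RAMPING_DOWN", "implicit"): "OFF",
    ("ERROR", "reset"): "OFF",
}

def step(state, command):
    """Apply a command; raise on transitions the diagram does not allow."""
    try:
        return TRANSITIONS[(state, command)]
    except KeyError:
        raise ValueError(f"no transition from {state} on '{command}'")

# Walk the main path from OFF to RUNNING.
state = "OFF"
for cmd in ["initialize", "implicit", "configure", "implicit",
            "engage", "implicit", "start"]:
    state = step(state, cmd)
print(state)  # RUNNING
```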

11 HLT - ECS Interface
Implementation:
- SMI++ (common among all ALICE systems)
- single instance, not partitioned
- the HLT proxy is only active during the (de-)initialisation and configuration phases; in the RUNNING state the cluster processes event data autonomously
- internal communication: interface library of the TaskManager (TM) control system
- the HLT proxy contacts multiple master TMs of the cluster (majority vote for fault tolerance, avoiding a single point of failure; see the sketch below)
- master TMs are networked with all slave TMs
- slave TMs control the PubSub system (RORC publisher and others)
- the HLT proxy is not sensitive once RUNNING is reached
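The majority vote among the redundant master TaskManagers could, in principle, look like the following sketch (illustrative only; the real TM protocol is not described on the slides):

```python
# Sketch of the majority-vote idea used across the redundant master
# TaskManagers. Illustrative only -- not the actual TM interface library.
from collections import Counter

def majority_vote(responses):
    """Return the answer reported by more than half of the masters, or None."""
    if not responses:
        return None
    answer, count = Counter(responses).most_common(1)[0]
    return answer if count > len(responses) / 2 else None

# Three masters report the cluster state; one is faulty.
print(majority_vote(["CONFIGURED", "CONFIGURED", "ERROR"]))  # CONFIGURED
```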

12 HLT - ECS Interface
[diagram: internal communication between HLT proxy, master and slave TaskManagers (simplified; not all arrows/connections actually in the system are shown)]

13 Outline HLT Overview HLT - ECS Interface HLT - DAQ Data Format
HLT - GRID Interface Summary & Outlook

14 HLT - DAQ Data Format
HLT output to DAQ:
- DAQ handles HLT output like data from any other detector (except for the trigger decision)
- HLT output is received by the DDLs on D-RORCs (DAQ ReadOut Receiver Cards) located on the LDCs (Local Data Concentrators)

15 HLT - DAQ Data Format
Output data format:
- trigger decision:
  - list of DDLs to read (detector DDLs [e.g. RoI] and HLT DDLs)
  - Event Summary Data (ESD) in case of an accept
  - candidate ID (ID from the trigger class used)
- preprocessed event data [optional]:
  - (partially) reconstructed events
  - compressed raw data
  - preprocessed detector-specific data (e.g. Photon Spectrometer pulse shape analysis)
- Start-Of-Data (SOD) and End-Of-Data (EOD)
A sketch of a possible trigger-decision layout follows below.
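As an illustrative layout of the trigger decision record described on this slide (the field names are assumptions, not the documented ALICE data format):

```python
# Illustrative layout of an HLT trigger decision record.
# Field names are assumptions derived from the slide, not the real format.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class HLTTriggerDecision:
    accept: bool
    readout_ddls: List[int]             # detector DDLs (e.g. RoI) plus HLT DDLs
    candidate_id: Optional[int] = None  # ID from the trigger class that fired
    esd: Optional[bytes] = None         # Event Summary Data, only on accept

decision = HLTTriggerDecision(accept=True, readout_ddls=[101, 102, 768],
                              candidate_id=3, esd=b"...")
```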

16 Outline HLT Overview HLT - ECS Interface HLT - DAQ Data Format
HLT - GRID Interface Summary & Outlook

17 HLT - GRID Interface
HLT cluster is similar to a Tier 2 center: up to 2000 processing units
Cluster usage:
- HLT processing (ALICE data-taking periods)
- GRID job execution (non-data-taking periods and unused nodes during data taking)
  → priority lies on the HLT functionality
- the GRID middleware in ALICE is AliEn (Alice Environment)
  → the HLT cluster offers Computing Element (CE) functionality to AliEn
  → due to the priority of HLT tasks, no (permanent) Storage Element (SE)
- in pp running the luminosity is low → not all nodes are used

18 HLT - GRID Interface
A dedicated node acts as GRID portal node:
- has AliEn installed
- contacts the AliEn master
- takes care of certificates and authentication
- administration of GRID nodes:
  - includes unused nodes into the GRID
  - informs the AliEn master about the estimated Time-To-Live (TTL) of included nodes
  - excludes nodes required by the HLT from the GRID
- uses Condor as the batch system underneath → Condor is also installed on each node
- GRID applications are shared inside the cluster via the network

19 HLT - GRID Interface
Free-node administration (1):
- nodes unused by the HLT are collected in a "buffered" pool
- the pool consists of three regions divided by two thresholds:
  - GRID usage: nodes above the soft-reserve threshold are offered for GRID jobs
  - below soft reserve: no more nodes are offered to the GRID; unused HLT nodes are only collected into the reserve; nodes finishing GRID jobs are recollected into the reserve ("soft" stop); nodes needed for the HLT are taken from the pool
  - below hard reserve: the number of reserve nodes is below the safety minimum; GRID nodes are immediately recollected into the reserve ("hard" stop, cancelling their GRID jobs) until the hard-reserve threshold is exceeded again
[pool diagram: GRID usage | soft reserve | reserve | hard reserve]

20 HLT - GRID Interface
Free-node administration (2):
- aim of the hard reserve: immediately available spare nodes for the HLT
- aim of the soft reserve: avoiding "hard" stops as far as possible
- a sophisticated algorithm will estimate the TTL of free nodes (this will evolve with the experience gathered during use of the HLT cluster)
- non-data-taking periods are, in this context, just special cases of the normal pool administration
A sketch of the pool mechanism follows below.
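A minimal sketch of the pool mechanism, assuming illustrative thresholds and method names (the actual administration logic, including the TTL estimation, is not specified beyond the slides):

```python
# Sketch of the "buffered" free-node pool with soft and hard reserve
# thresholds. Numbers and method names are illustrative assumptions.

class NodePool:
    def __init__(self, soft_reserve=20, hard_reserve=5):
        assert hard_reserve < soft_reserve
        self.soft_reserve = soft_reserve
        self.hard_reserve = hard_reserve
        self.free = []        # reserve: nodes idle and immediately available
        self.on_grid = []     # nodes currently running GRID jobs

    def release_from_hlt(self, node):
        """HLT no longer needs this node; collect it in the pool."""
        self.free.append(node)
        # Offer nodes for GRID only while the reserve exceeds the soft limit.
        while len(self.free) > self.soft_reserve:
            self.on_grid.append(self.free.pop())

    def claim_for_hlt(self):
        """HLT needs a node back; take it from the reserve."""
        node = self.free.pop()
        if len(self.free) < self.hard_reserve:
            # "Hard" stop: recollect GRID nodes (cancelling their jobs)
            # until the hard threshold is met again.
            while self.on_grid and len(self.free) < self.hard_reserve:
                self.free.append(self.on_grid.pop())
        return node

pool = NodePool(soft_reserve=3, hard_reserve=1)
for n in range(6):
    pool.release_from_hlt(f"node{n}")
print(len(pool.free), len(pool.on_grid))  # 3 3
```

Nodes finishing GRID jobs would re-enter via release_from_hlt, which realises the "soft" stop: below the soft threshold they are simply kept in the reserve instead of being offered to the GRID again.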

21 Outline HLT Overview HLT - ECS Interface HLT - DAQ Data Format
HLT - GRID Interface Summary & Outlook

22 Summary & Outlook
HLT:
- will consist of a large PC cluster (similar to Tier 2), internally managed by the TaskManager control system
- the overall system is configured and controlled by the ECS
  → via well-defined states and transitions (commands)
  → interface implemented in SMI++
  → internal communication to multiple master TaskManagers via its own interface library
- will provide (pre-)processed data and trigger decisions to DAQ
- planned to offer unused nodes for GRID computing
  → organised in a "buffered" pool (three regions divided by two thresholds)
  → interface to AliEn, Condor as batch system
  → utilisation of significant computing power for multiple applications

23 HLT proxy – States (additional slide)

24 HLT - GRID Interface (additional slide)

