1 Overview of DAQ at CERN experiments - E. Radicioni, INFN - 2005-09-01 - MICE DAQ and Controls Workshop

2 Overview
DAQ architectures
Implementations
Software and tools

3 Some basic parameters

4 Overall architecture
This is LHCb, but at this level they all look the same.
Buffering on the FE, waiting for the LV-0/1 decision
Building network
HLT filtering before storage

5 … but they DO differ right after the FEs:
ALICE: the entry point is already a standard PC with Ethernet (1 EVB level)
CMS: the entry point is a customized Myrinet network (2 EVB levels)

6 DAQ: taking care of
Data flow
Building
Processing
Storage
Get the data to some network as fast as possible
Custom vs. standard network technology: CMS and ALICE as extremes
–The others are in between

7 CMS approach
Only 1 hardware trigger, do the rest in the HLT → high DAQ bandwidth
–Flexible trigger, but rigid DAQ. Also partitioning less flexible.
Barrel-shifter approach: deterministic but rigid, customized network
2-stage building
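A minimal sketch (hypothetical, not CMS code) of the barrel-shifter idea in C++: the destination builder unit for an event is computed deterministically from the event number, so every readout unit independently sends its fragment of event N to the same builder, in a fixed round-robin pattern. The names and counts below are illustrative.

```cpp
#include <cstdint>
#include <iostream>

// Deterministic (barrel-shifter style) destination assignment: every source
// sends the fragment of a given event to the same builder, chosen purely
// from the event number.
std::size_t builderFor(std::uint64_t eventNumber, std::size_t nBuilders) {
    return static_cast<std::size_t>(eventNumber % nBuilders);
}

int main() {
    const std::size_t nBuilders = 4;  // assumed number of builder units
    const std::size_t nSources  = 3;  // assumed number of readout units

    // For a few events, show that all sources agree on the destination and
    // that destinations rotate deterministically with the event number.
    for (std::uint64_t evt = 0; evt < 8; ++evt) {
        const std::size_t dest = builderFor(evt, nBuilders);
        for (std::size_t src = 0; src < nSources; ++src) {
            std::cout << "event " << evt << ": source " << src
                      << " -> builder " << dest << '\n';
        }
    }
}
```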

8 ALICE approach
HLT embedded in the DAQ
More than one hardware trigger → not straightforward to change trigger
–But DAQ can be very flexible with standard technology
Easy to partition down to the level of the single front-end
(Diagram label: HLT / Monitoring)

9 ALICE HLT

10 List of DAQ functions (one may expect)
Run control (state machine) with GUI
–Configure DAQ topology
–Select trigger
–Start/stop (by user or by DCS)
–Communicate status to DCS
Partitioning. Its importance is never stressed enough.
Minimal set of hardware access libraries (VME, USB, S-LINK), and ready-to-use methods to initialize interfaces
Data flow
–Push (or pull …) data from FE to storage via (one or more) layers of event building
DAQ performance check with GUI
Data quality monitoring (or a framework to do it)
–GUI most likely external
Logging (DAQ-generated messages) with GUI to select/analyze logs

11 What can you expect to be able to use out of these systems?
MICE → test-beam system: who's providing the best "test beam" system?
–Reduced scale, but keeping rich functionality
–And software already available
–Not only a framework, but also ready-to-use applications and GUIs

12 All experiments more or less implement this:
Main data flow ~ 10-100 kHz
Spying a data subset for monitoring
–Also good for test beams, ~ 1 kHz
Support for test beams varies from one experiment to the other, from a bare-bones system (just the framework) to full-fledged support
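A minimal sketch, in C++ and with an assumed prescale factor, of how a data subset can be spied for monitoring without disturbing the main flow: a simple prescaler passes roughly one event in N to the monitoring path, turning a 10-100 kHz main stream into a ~1 kHz sample.

```cpp
#include <cstdint>
#include <iostream>

// Hypothetical prescaler: accept roughly 1 event in N for monitoring,
// so e.g. a 100 kHz main flow yields a ~1 kHz monitoring sample.
class Prescaler {
public:
    explicit Prescaler(std::uint32_t factor) : factor_(factor) {}
    bool accept() { return (counter_++ % factor_) == 0; }
private:
    std::uint32_t factor_;
    std::uint64_t counter_ = 0;
};

int main() {
    Prescaler spy(100);  // assumed prescale factor
    for (int evt = 0; evt < 1000; ++evt) {
        if (spy.accept()) {
            // In a real DAQ this would copy the event to a monitoring buffer;
            // here we just report which events were spied.
            std::cout << "event " << evt << " spied for monitoring\n";
        }
    }
}
```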

13 CMS: public-domain framework, called xdaq: http://xdaq.web.cern.ch/xdaq/
Just a framework (data and message passing, event builder). For the rest, you are on your own.
ALICE tends to support its detector teams with a full set of DAQ tools
–Partitioning
–Data transport
–Monitoring
–Logging
ATLAS similar to ALICE in this respect
–However, at the time of HARP construction it was not yet ready for release to (external) groups.

14 Readout
Clear separation of readout and recording functions
Readout at high priority (or real-time), recorder at low priority (quasi-asynchronous)
Large memory buffer to accommodate fluctuations
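A minimal C++ sketch of the readout/recording split described above, assuming a simple in-memory queue: a readout thread (which in a real system would run at high or real-time priority) fills a large buffer, and a recorder thread drains it quasi-asynchronously, so the buffer absorbs rate fluctuations. All names here are illustrative.

```cpp
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

struct Event { std::vector<char> payload; };

// Thread-safe queue standing in for the large memory buffer between
// the readout (producer) and the recorder (consumer).
class EventQueue {
public:
    void push(Event e) {
        std::lock_guard<std::mutex> lock(m_);
        q_.push_back(std::move(e));
        cv_.notify_one();
    }
    bool pop(Event& e) {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty() || done_; });
        if (q_.empty()) return false;   // producer finished and buffer drained
        e = std::move(q_.front());
        q_.pop_front();
        return true;
    }
    void finish() {
        std::lock_guard<std::mutex> lock(m_);
        done_ = true;
        cv_.notify_all();
    }
private:
    std::deque<Event> q_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
};

int main() {
    EventQueue buffer;

    // Readout: in a real system this would run at high (real-time) priority
    // and read the front-end hardware instead of producing dummy events.
    std::thread readout([&] {
        for (int i = 0; i < 1000; ++i)
            buffer.push(Event{std::vector<char>(1024, 0)});
        buffer.finish();
    });

    // Recorder: low priority, quasi-asynchronous writer to storage.
    std::thread recorder([&] {
        Event e;
        std::size_t written = 0;
        while (buffer.pop(e)) written += e.payload.size();
        std::cout << "recorded " << written << " bytes\n";
    });

    readout.join();
    recorder.join();
}
```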

15 User-provided functions
A clear, simple way for the user to initialize and read out their own hardware
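One possible shape for such a user hook, sketched in C++ (the interface names are hypothetical, not taken from any specific framework): the DAQ calls initialize() at configuration time and readEvent() once per trigger, and the user only implements these two methods for their hardware.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Hypothetical user-hardware interface: the framework drives it, the user
// only fills in how to set up and read out their own equipment.
class UserEquipment {
public:
    virtual ~UserEquipment() = default;
    virtual void initialize() = 0;                       // set up the hardware
    virtual std::vector<std::uint8_t> readEvent() = 0;   // read one fragment
};

// Example user implementation for a fake detector module.
class DummyAdc : public UserEquipment {
public:
    void initialize() override { std::cout << "DummyAdc configured\n"; }
    std::vector<std::uint8_t> readEvent() override {
        return {0xCA, 0xFE, 0x00, 0x01};                 // fixed dummy fragment
    }
};

int main() {
    DummyAdc adc;
    adc.initialize();
    auto fragment = adc.readEvent();
    std::cout << "read fragment of " << fragment.size() << " bytes\n";
}
```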

16 Event builder
Running on a standard PC
Able to perform, at the same time:
–Global or partial on-demand building
–EVB strategies to be matched to trigger configurations
–Event consistency checks
–Recording to disk
–Serving events to subscribers (e.g. monitoring), with basic data selections
Possibly with multi-staging after the event building
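A greatly simplified C++ sketch of the event-builder core, with hypothetical names: fragments are keyed by event number, a duplicate-source check is applied, and the event is declared built once every source has contributed one fragment. A real builder would add timeouts, error reporting and the recording/subscriber fan-out listed above.

```cpp
#include <cstdint>
#include <iostream>
#include <map>
#include <vector>

struct Fragment {
    std::uint64_t eventNumber;
    std::uint32_t sourceId;
    std::vector<std::uint8_t> data;
};

// Collect fragments per event number; declare an event built when one
// fragment per source has arrived, rejecting duplicate sources.
class EventBuilder {
public:
    explicit EventBuilder(std::size_t nSources) : nSources_(nSources) {}

    void addFragment(const Fragment& f) {
        auto& frags = pending_[f.eventNumber];
        if (!frags.emplace(f.sourceId, f).second) {      // consistency check
            std::cout << "duplicate fragment from source " << f.sourceId
                      << " in event " << f.eventNumber << '\n';
            return;
        }
        if (frags.size() == nSources_) {                 // event complete
            std::cout << "event " << f.eventNumber << " built from "
                      << frags.size() << " fragments\n";
            pending_.erase(f.eventNumber);
        }
    }
private:
    std::size_t nSources_;
    std::map<std::uint64_t, std::map<std::uint32_t, Fragment>> pending_;
};

int main() {
    EventBuilder evb(2);                 // assume two front-end sources
    evb.addFragment({7, 0, {1, 2}});
    evb.addFragment({8, 0, {3}});
    evb.addFragment({7, 1, {4, 5}});     // completes event 7
    evb.addFragment({8, 1, {6}});        // completes event 8
}
```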

17 Event format: headers and payloads, one payload per front-end
Precise time-stamping, numbering and IDs in the header of each payload
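A possible payload-header layout, sketched in C++ with illustrative field names (not any experiment's actual format): each front-end payload carries a marker, a source ID, an event number, a precise timestamp and the payload size.

```cpp
#include <chrono>
#include <cstdint>
#include <iostream>

// Illustrative payload header: the fields any event builder needs to align
// and validate fragments from different front-ends.
struct PayloadHeader {
    std::uint32_t magic;         // fixed marker to detect misalignment
    std::uint32_t sourceId;      // which front-end produced the payload
    std::uint64_t eventNumber;   // event/trigger counter
    std::uint64_t timestampNs;   // precise time stamp, e.g. ns since epoch
    std::uint32_t payloadBytes;  // size of the payload that follows
};

int main() {
    PayloadHeader h{};
    h.magic       = 0xDA0DA0u;
    h.sourceId    = 3;
    h.eventNumber = 42;
    h.timestampNs = static_cast<std::uint64_t>(
        std::chrono::duration_cast<std::chrono::nanoseconds>(
            std::chrono::system_clock::now().time_since_epoch()).count());
    h.payloadBytes = 1024;

    std::cout << "event " << h.eventNumber << " from source " << h.sourceId
              << ", " << h.payloadBytes << " bytes\n";
}
```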

18 Ready-to-use GUIs
Run control should be implemented as a state machine for proper handling of state changes
Configure and partition
Set run parameters and connect
Select active processes and start them
Start/stop
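A minimal C++ sketch of a run-control state machine, with assumed states (Idle, Configured, Running) and commands (configure, start, stop, reset): only legal transitions are accepted, which is what makes state changes safe to handle.

```cpp
#include <iostream>
#include <string>

enum class State { Idle, Configured, Running };

// Only transitions listed in the public methods are possible; anything else
// is rejected and reported instead of silently corrupting the DAQ state.
class RunControl {
public:
    bool configure() { return transition(State::Idle, State::Configured, "configure"); }
    bool start()     { return transition(State::Configured, State::Running, "start"); }
    bool stop()      { return transition(State::Running, State::Configured, "stop"); }
    bool reset()     { return transition(State::Configured, State::Idle, "reset"); }
private:
    bool transition(State from, State to, const std::string& cmd) {
        if (state_ != from) {
            std::cout << cmd << " rejected in current state\n";
            return false;
        }
        state_ = to;
        std::cout << cmd << " ok\n";
        return true;
    }
    State state_ = State::Idle;
};

int main() {
    RunControl rc;
    rc.start();       // rejected: must configure first
    rc.configure();
    rc.start();
    rc.stop();
    rc.reset();
}
```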

19 Run-control
One run-control agent per DAQ "actor"

20 Run-control partitioning
Warning: to take advantage of DAQ partitioning, the TRIGGER also has to support partitioning … → a requirement on the TRIGGER system

21 Logging
Informative logging with an effective user interface
Log filtering and archiving
Run-statistics collection and reporting also useful
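A minimal C++ logging sketch, with assumed severity levels: messages carry a severity and a source, and a threshold filter decides what is emitted; the same kind of filter would back the GUI used to select and analyze archived logs.

```cpp
#include <iostream>
#include <string>

enum class Severity { Debug = 0, Info = 1, Warning = 2, Error = 3 };

// Threshold filter: messages below the configured severity are dropped.
class Logger {
public:
    explicit Logger(Severity threshold) : threshold_(threshold) {}
    void log(Severity s, const std::string& source, const std::string& msg) {
        if (s < threshold_) return;                      // filtered out
        std::cout << "[" << label(s) << "] " << source << ": " << msg << '\n';
    }
private:
    static const char* label(Severity s) {
        switch (s) {
            case Severity::Debug:   return "DEBUG";
            case Severity::Info:    return "INFO";
            case Severity::Warning: return "WARN";
            case Severity::Error:   return "ERROR";
        }
        return "?";
    }
    Severity threshold_;
};

int main() {
    Logger log(Severity::Info);
    log.log(Severity::Debug, "evb01", "fragment queue depth 12");   // suppressed
    log.log(Severity::Warning, "ldc03", "buffer 80% full");
    log.log(Severity::Error, "recorder", "disk write failed");
}
```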

22 Monitoring
A ready-to-use monitoring architecture
With a monitoring library as a software interface
Depending on the DAQ system, a global monitoring GUI (to be extended for specific needs) might already be available
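A minimal C++ sketch of the subscriber side of such a monitoring library (names are illustrative): monitoring clients register callbacks with a service, and the service fans spied events out to all of them, keeping clients decoupled from the main data flow.

```cpp
#include <cstdint>
#include <functional>
#include <iostream>
#include <vector>

using Event = std::vector<std::uint8_t>;
using MonitorCallback = std::function<void(const Event&)>;

// Simple publish/subscribe service: the DAQ side publishes spied events,
// each monitoring client only sees the callback it registered.
class MonitoringService {
public:
    void subscribe(MonitorCallback cb) { subscribers_.push_back(std::move(cb)); }
    void publish(const Event& e) {
        for (auto& cb : subscribers_) cb(e);             // fan out to all clients
    }
private:
    std::vector<MonitorCallback> subscribers_;
};

int main() {
    MonitoringService mon;
    mon.subscribe([](const Event& e) {
        std::cout << "histogram client got " << e.size() << " bytes\n";
    });
    mon.subscribe([](const Event& e) {
        std::cout << "event-display client got " << e.size() << " bytes\n";
    });
    mon.publish(Event(512, 0));                          // one spied event
}
```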

23 Recent trends: the ECS
An overall controller of the complete status of the experiment, including DCS
Partitionable state machine
Configuration databases
Usually interfaced to PVSS, but EPICS should also be possible

24 Conclusions: never underestimate …
Users are not experts: provide them with the tools to work and to report problems effectively
Flexible partitioning
Event building with accurate fragment alignment and validity checks, state reporting and reliability
Redundancy / fault tolerance
A proper run control with a state machine
–And simplifying for the users the tasks of partitioning, configuring, trigger selection, start, stop
A good monitoring framework, with a clear-cut separation between DAQ services and monitoring clients
Extensive and informative logging
GUIs

