
1 Summary: Computing and DAQ. Walter F.J. Müller, GSI, Darmstadt. 5th CBM Collaboration Meeting, GSI, March 9-12, 2005

2 Computing and DAQ Session
Thursday 14:00 – 17:00, Theory Seminar Room
- Handle 20 PB a year
- CBM Grid: first steps
- Controls: not an afterthought this time
- Network & processing for p-p, p-A: >10^8 interactions/sec

3 Computing and DAQ Session

4 Data rates
Data rates into HLPS:
- Open charm: 10 kHz × 168 kbyte = 1.7 Gbyte/sec
- Low-mass di-lepton pairs: 25 kHz × 84 kbyte = 2.1 Gbyte/sec
Data volume per year (no HLPS action): 10 Pbyte/year
(ALICE = 10 Pbyte/year: 25% raw, 25% reconstructed, 50% simulated)
slide from D. Rohrich
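A quick back-of-the-envelope check of these numbers. The rates and event sizes are taken from the slide; everything else in this sketch is our own arithmetic, including the implied effective beam time behind the 10 Pbyte/year figure:

```cpp
#include <cstdio>

// Check of the slide's rate arithmetic. Only the rates and event sizes
// are from the slide; the rest is derived.
int main() {
    const double openCharmRate = 10e3;   // events/sec
    const double openCharmSize = 168e3;  // bytes/event
    const double dileptonRate  = 25e3;   // events/sec
    const double dileptonSize  = 84e3;   // bytes/event

    double oc = openCharmRate * openCharmSize;   // bytes/sec
    double dl = dileptonRate  * dileptonSize;
    std::printf("open charm : %.2f Gbyte/sec\n", oc / 1e9);
    std::printf("di-leptons : %.2f Gbyte/sec\n", dl / 1e9);

    // The slide's 10 Pbyte/year then corresponds to an effective
    // running time of 1e16 bytes / (oc + dl) bytes/sec, about 3e6 s.
    std::printf("implied beam time: %.1e s/year\n", 1e16 / (oc + dl));
}
```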

5 Processing concept
HLPS' tasks:
- Event reconstruction with offline quality
- Sharpen open charm selection criteria (reduce event rate further)
- Create compressed ESDs
- Create AODs
No offline re-processing:
- Same amount of CPU time is needed for unpacking and dissemination of data as for reconstruction
- RAW -> ESD: never
- ESD -> ESD': only exceptionally
slide from D. Rohrich

6 Data Compression Scenarios
Loss-less data compression:
- Run-length encoding (standard technique)
- Entropy coder (Huffman)
- Lempel-Ziv
Lossy data compression:
- Compress 10-bit ADC into 8-bit ADC using a logarithmic transfer function (standard technique)
- Vector quantization
- Data modeling
Perform all of the above wherever possible.
slide from D. Rohrich
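To illustrate the lossy scheme named on the slide, here is a minimal sketch of a 10-bit to 8-bit logarithmic transfer function. The exact curve in a real front-end would differ; this only shows the standard idea of spending code density on small amplitudes:

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>

// Map a 10-bit ADC value (0..1023) to 8 bits (0..255) with a logarithmic
// transfer function: fine resolution for small amplitudes, coarse for
// large ones. The curve is illustrative, not a CBM specification.
uint8_t compress10to8(uint16_t adc10) {
    const double scale = 255.0 / std::log1p(1023.0);
    return static_cast<uint8_t>(std::lround(scale * std::log1p((double)adc10)));
}

// Approximate inverse, applied offline to restore the 10-bit scale.
uint16_t expand8to10(uint8_t adc8) {
    const double scale = std::log1p(1023.0) / 255.0;
    return static_cast<uint16_t>(std::lround(std::expm1(adc8 * scale)));
}

int main() {
    for (uint16_t v : {0, 5, 50, 500, 1023}) {
        uint8_t c = compress10to8(v);
        std::printf("%4d -> %3d -> %4d\n", v, c, expand8to10(c));
    }
}
```

Small amplitudes survive the round trip almost exactly, while large ones are quantized more coarsely, which is acceptable when the electronics noise grows with amplitude.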

7 Offline and online issues
Requirements on the software:
- offline code = online code
Emphasis on:
- Run-time performance
- Clear interfaces
- Fault tolerance and error recovery
- Alignment
- Calibration
- "Prussian" programming
slide from D. Rohrich

8 Storage concept
Main challenge of processing heavy-ion data: logistics
- No archival of raw data
- Storage of ESDs:
  - advanced compression techniques: 10-20%
  - only one pass
- Multiple versions of AODs
slide from D. Rohrich

9 Dubna educational and scientific network
Dubna-Grid Project (2004): more than 1000 CPUs
Participants:
- Laboratory of Information Technologies, JINR
- University "Dubna"
- Directorate of the programme for development of the science city Dubna
- University of Chicago, USA
- University of Lund, Sweden
Goal: creation of a Grid testbed on the basis of the resources of Dubna scientific and educational establishments, in particular the JINR Laboratories, International University "Dubna", secondary schools and other organizations
slide from V. Ivanov

10 Summary (middlewares)
- LCG-2: GSI and Dubna
  pro: large distribution, support
  contra: difficult to set up, no distributed analysis
- AliEn: GSI, Dubna, Bergen
  pro: in production since 2001
  contra: uncertain future, no support
- Globus 2: GSI, Dubna, Bergen?
  pro/contra: simple, but functioning (no RB, no FC, no support)
- gLite/GT4: new on the market
  pro/contra: nobody has production experience (gLite)
slide from K. Schwarz

11 CBM Grid – Status
- CBM VO server setup
- First certificate in work
- To be used for MC transport production this summer
- Initial participants: Bergen, Dubna, GSI, ITEP
- Initial middleware: AliEn (available on all 4 sites, a good workhorse)

12 ECS (Experiment Control System)
- Definition of the functionality of ECS and DCS
- Draft of URD (user requirements document)
- Constitute an ECS working group

13 FEE – DAQ Interface
3 logical interfaces to the FEE (sketched in code below):
- Hit data (out only)
- Clock and time (in only)
- Control (bidirectional)
FEE -> concentrator or read-out controller (cave / shack)
3 specs: Time, DAQ, DCS
First drafts ready for the fall 2005 CBM TB Meeting
Diversity inevitable, common interfaces indispensable
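A hypothetical C++ rendering of the three logical interfaces, just to make the directionality explicit. All type and method names here are our invention, not a CBM specification:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical data types, for illustration only.
struct Hit     { uint32_t channel; uint64_t time; uint16_t amplitude; };
struct Command { uint32_t id; uint32_t value; };
struct Reply   { uint32_t id; uint32_t status; };

// The three logical FEE interfaces from the slide, with their directions.
class FeeInterface {
public:
    virtual ~FeeInterface() = default;
    // Hit data: out only (FEE -> concentrator / read-out controller).
    virtual std::vector<Hit> readHits() = 0;
    // Clock and time: in only (distributed to the FEE).
    virtual void setTime(uint64_t clockTick) = 0;
    // Control: bidirectional (request in, reply out).
    virtual Reply control(const Command& cmd) = 0;
};
```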

14 DAQ BNet
Currently investigated structure (diagram): n × n switches in two layers (CNet side, PNet side); n * (n - 1) / 2 bidirectional connections; n - 1 ports per switch towards the other switches. Legend: H: histogrammer, TG: event tagger, HC: histogram collector, BC: scheduler, DD: data dispatcher, ED: event dispatcher; active buffer; BNet controller. Example: n = 4 gives a 16x16 network.
slide from H. Essel
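Reading the numbers off the slide (our interpretation: n switches, n end nodes per switch, every pair of switches cross-connected), a few lines suffice to check the n = 4 -> 16x16 example and the 100-end-node simulation setup mentioned on the next slide:

```cpp
#include <cstdio>

// Our interpretation of the slide's parameters, not an official formula:
// n switches, n end nodes each, n*(n-1)/2 bidirectional switch-to-switch links.
int main() {
    for (int n : {4, 10}) {
        int interconnects = n * (n - 1) / 2;  // switch-to-switch links
        int endNodes = n * n;                 // n switches x n end nodes each
        std::printf("n=%2d: %2d interconnects, %dx%d network\n",
                    n, interconnects, endNodes, endNodes);
    }
}
```

With n = 4 this reproduces the 16x16 network of the diagram; with n = 10 it matches the 10-switch, 100-end-node simulation below.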

15 DAQ BNet
Simulation with SystemC. Modules:
- event generator
- data dispatcher (sender)
- histogram collector
- tag generator
- BNet controller (schedule)
- event dispatcher (receiver)
- transmitter (data rate, latency)
- switches (buffer capacity, max. packet queue length, 4K)
Running with 10 switches and 100 end nodes. The simulation takes 1.5 × 10^5 times longer than the simulated time. Various statistics (traffic, network load, etc.)
slide from H. Essel
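For readers unfamiliar with SystemC, here is a minimal sketch of how one of the listed modules, the transmitter with its (data rate, latency) parameters, can be modelled. This is our illustration under assumed parameter values, not the actual BNet simulation code:

```cpp
#include <systemc.h>

// Minimal illustrative transmitter: serialises each packet at a given link
// rate, then delays it by a fixed latency. Not the actual BNet model.
SC_MODULE(Transmitter) {
    sc_fifo_in<int>  in;          // packet ids from the data dispatcher
    sc_fifo_out<int> out;         // packet ids towards a switch
    double  gbit_per_s = 10.0;    // assumed link data rate
    sc_time latency{100, SC_NS};  // assumed fixed link latency

    void forward() {
        const double packet_bits = 4096.0 * 8.0;  // 4K packets, as on the slide
        while (true) {
            int pkt = in.read();                             // blocks on empty fifo
            wait(sc_time(packet_bits / gbit_per_s, SC_NS));  // serialisation time
            wait(latency);                                   // link latency
            out.write(pkt);
        }
    }
    SC_CTOR(Transmitter) { SC_THREAD(forward); }
};

int sc_main(int, char*[]) {
    sc_fifo<int> a(16), b(16);
    Transmitter t("t");
    t.in(a);
    t.out(b);
    a.write(1);                    // inject one packet
    sc_start(sc_time(10, SC_US));  // run the simulated time window
    return 0;
}
```

Chaining ~100 such end nodes through switch models with finite buffer capacity gives exactly the kind of discrete-event setup the slide describes, which also explains the large slowdown factor relative to simulated time.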

16 DAQ BNet
Some statistics examples (single buffers excluded!)
slide from H. Essel

17 DAQ BNet
Topics for investigation:
- Event shaping
- Separate meta-data transfer system
- Addressing/routing schemes
- Broadcast
- Synchronization
- Determinism
- Fault tolerance
- Real test bed
slide from H. Essel

18 Overview of Processing Architecture
Processing resources:
- Hardware processors: L1/FPGA
- Software processors: L1/CPU
- Active buffers
- Sub-farm network: PNet
slide from J. Gläß (Univ. Mannheim, Institute of Computer Engineering)

19 Architecture of R&D Prototype
Communication via backplane:
- 4 boards, all-to-all
- different lengths of traces
- up to 10 Gbit/s serial => FR4 / Rogers PCB material
FPGA with MGTs:
- up to 10 Gbit/s serial
- => XC2VPX20 (8 x MGT)
- => XC2VPX70 (20 x MGT)
Externals:
- 2 x ZBT SRAM
- 2 x DDR SDRAM
- for PPC: Flash, Ethernet, ...
Initialization and control:
- standalone board/system
- microcontroller running Linux
(Board diagram: XC2VPX20 FPGA with PPC, SFP connectors, zeroXT 10GB crosspoint, DDR and ZBT memories, Flash, RS232, Ethernet, Linux microcontroller)
slide from J. Gläß (Univ. Mannheim, Institute of Computer Engineering)

20 Conclusion
R&D prototype to learn:
- Physical layer of communication:
  - 2.5 Gbit/s up to 10 Gbit/s
  - chip-to-chip, board-to-board (-> connectors, backplane)
  - PCB layout, impedances
  - PCB material (FR4, Rogers, ...)
- Next step: communication protocols
- More resources needed => XC2VPX70? Virtex4? (availability?)
- External memories:
  - fast controllers for ZBT and DDR RAM
  - PCB layout, termination, ...
slide from J. Gläß (Univ. Mannheim, Institute of Computer Engineering)

21 DAQ Challenge
Incredibly small (unknown) cross-section: pp → x at 90 GeV beam energy
Q = 13.4 - 9.5 - 1.0 - 1.0 = 1.9 GeV (near threshold)
What is the theoretical limit for the hardware and DAQ?
How can one improve the sensitivity by clever algorithms?
More questions than answers.
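The Q-value on the slide follows the usual fixed-target kinematics. As a reminder, in general notation (with the slide's √s ≈ 13.4 GeV and the final-state masses in GeV):

```latex
% Fixed-target invariant energy and Q-value (general relations).
\sqrt{s} = \sqrt{2\, m_p E_{\mathrm{beam}} + 2\, m_p^2}, \qquad
Q = \sqrt{s} - \sum_i m_i = 13.4 - 9.5 - 1.0 - 1.0 = 1.9\ \mathrm{GeV}.
```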

22 Algorithms
Performance of L1 feature extraction algorithms is essential; critical in CBM:
- STS tracking + vertex reconstruction
- TRD tracking and PID
Look for algorithms which allow massively parallel implementation:
- Hough transform tracker: needs lots of bit-level operations, well suited for FPGAs (see the sketch below)
- Cellular automaton tracker
Co-develop tracking detectors and tracking algorithms:
- L1 tracking is necessarily speed-optimized (>10^9 tracks/sec) → possibly more detector granularity and redundancy needed
Aim for CBM: validate the final hardware design with at least 2 trackers suitable for L1
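A toy illustration of why the Hough transform parallelises so well: every hit votes independently into an accumulator array, simple fixed-point work that maps naturally onto FPGA logic. This sketch (made-up 2D hits and bin counts, straight-line parametrisation) is ours, not a CBM tracker:

```cpp
#include <array>
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <utility>
#include <vector>

int main() {
    constexpr int NT = 32;        // theta bins (track angle)
    constexpr int NR = 64;        // rho bins (signed distance from origin)
    const float pi = 3.14159265f;
    const float rmax = 10.0f;     // assumed rho range

    // Hypothetical 2D hits; a real tracker works on detector space points.
    const std::vector<std::pair<float, float>> hits = {
        {1.0f, 2.1f}, {2.0f, 4.0f}, {3.0f, 6.2f}, {4.0f, 7.9f}};

    // Accumulator: each hit votes once per theta bin, independently of all
    // other hits -- the part that can run massively in parallel on an FPGA.
    std::array<std::array<uint16_t, NR>, NT> acc{};
    for (auto [x, y] : hits)
        for (int t = 0; t < NT; ++t) {
            float theta = t * pi / NT;
            float rho = x * std::cos(theta) + y * std::sin(theta);
            int r = static_cast<int>((rho + rmax) / (2 * rmax) * NR);
            if (r >= 0 && r < NR) ++acc[t][r];
        }

    // The highest bin is a track candidate (here: the common straight line).
    int bt = 0, br = 0;
    for (int t = 0; t < NT; ++t)
        for (int r = 0; r < NR; ++r)
            if (acc[t][r] > acc[bt][br]) { bt = t; br = r; }
    std::printf("track candidate: theta bin %d, rho bin %d, %d votes\n",
                bt, br, acc[bt][br]);
}
```

In hardware the trigonometry would be replaced by lookup tables and the accumulator by block RAM, which is why the slide stresses bit-level operations.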

