DAQ for Test&Commissioning at Point 2
Klaus Schossmaier, CERN PH-AID
ALICE TPC Collaboration Meeting, Heidelberg, Germany, 12-13 February 2004



DAQ Test&Commissioning at Point 2 – TPC Collaboration meeting, Feb. 2004

Planning Parameters
- System: DAQ system for Test&Commissioning (= LHCC milestone)
- Size: 12 DDLs for TPC, 8 DDLs for others
- Place: LHC Point 2
- Date: Q 2004
- People: ALICE DAQ team

DAQ Installation at Point 2
[Layout diagram] Surface: SXL Hall (ALICE sub-detector assembly), SX Hall, SR Hall (Networking); underground: UX25 (Experimental Area), reached via the access shaft; counting rooms: PX24/CR1 (DAQ), PX24/CR2 (HLT), PX24/CR3 (DCS), PX24/CR4 (Misc.); racks ACR, WR1, WR2; connected by DDL and LAN.

Installation at PX24/CR1
- 10 LDCs (Local Data Concentrator): rackmount PCs of 1U, 2U, and 4U height, equipped with 20 D-RORC cards → DDL readout, sub-event building, HLT interface
- 4 GDCs (Global Data Collector): rackmount PCs of 1U height, equipped with FC cards → full-event building
- 1 DSS (DAQ Server Services): rackmount PC of 4U height with hot-swap SCSI disks → RunControl, Logger, On-line Monitoring
- 1 TDS (Transient Data Storage): rackmount disk array of 2U height with IDE or FC disks → event buffering
- 1 KVM switch, 1 FC switch, GE switches
- 20 DDLs via DDL patch panels
- Links to the HLT (High Level Trigger) in PX24/CR2 and to the PDS (Permanent Data Storage) in the CERN computer center
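The two-stage event building above (sub-events assembled on the LDCs, full events merged on the GDCs) can be sketched as a toy model; this is an illustration of the idea only, not DATE code:

```python
def build_full_events(ldc_streams):
    """ldc_streams: list of per-LDC lists of (event_number, payload) sub-events."""
    events = {}
    for ldc_id, stream in enumerate(ldc_streams):
        for event_number, payload in stream:
            events.setdefault(event_number, {})[ldc_id] = payload
    # A full event is complete once every LDC has contributed its sub-event.
    n_ldcs = len(ldc_streams)
    return {ev: parts for ev, parts in events.items() if len(parts) == n_ldcs}

# Two LDCs; event 2 is still missing its sub-event from the second LDC:
ldc0 = [(1, b"payload A"), (2, b"payload B")]
ldc1 = [(1, b"payload C")]
full = build_full_events([ldc0, ldc1])
print(sorted(full))  # [1]
```

In the real system the GDC additionally has to handle time-outs and incomplete events; the sketch only shows the merge-by-event-number step.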

Installation Issues
- Hardware/Mechanical issues:
  - Equipment: rackmount PCs, switches, disk array, …
  - Rack assembly, DDL installation, cabling, …
  - Infrastructure: power supply, networking, cooling, …
- Software/Testing issues:
  - Operating System: Linux + customizations
  - DAQ Software: DATE + configuration
  - Standalone tests
  - Integration tests
- Reference System in our AID lab:
  - small, complete DAQ system with 6 DDLs
  - to spot problems in advance
  - to tackle problems during operation

Reference Setup – Front (L3 rack)
- KVM switch: Raritan Paragon UMT2161
- 4x LDCs: 1U, 2U, 4U height, dual Xeon, 6x D-RORC cards → DDLs to the DDL lab
- 2x GDCs: 1U height, dual Xeon, Qlogic QLA2310F cards
- 2x disk arrays: Infortrend IFT-6330 and DotHill SANnet II, Fibre Channel
- DSS: 4U height, quad Xeon, 3x 36 GB SCSI disks
- 2x Gb Ethernet switches: 3COM SuperStack 3
- Fibre Channel switch: Brocade SilkWorm 3800

Reference Setup – Rear (L3 rack)
- Power distributor
- Cat5 cables: Ethernet and KVM, RJ45 connectors
- Fibre Channel cable: 2 Gbit/s multimode, LC-LC connectors
- Mounting rails (!)

DDL & D-RORC
DDL link card:
- uniform half-CMC card for both SIU (= Source Interface Unit) and DIU (= Destination Interface Unit)
- Gbit/s components (Agilent) for 200 MB/s bandwidth
- bridges up to 200 meters
D-RORC with DDL patch panel:
- two D-RORC types: 1 DDL link card plugged in, or 2 integrated DDL channels
- PCI 64 bit, 66 MHz
- ALTERA FPGA, IP core from PLDA
- 200 MB/s throughput with DATE
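A quick cross-check of the quoted 200 MB/s figure, assuming a 2.125 Gbaud serial line rate with 8b/10b encoding (an assumption for illustration; the slide does not state the exact rate of the Agilent components):

```python
# Hedged sanity check of the DDL payload bandwidth under assumed parameters.
line_rate_gbaud = 2.125                          # assumed serial line rate
payload_bit_s = line_rate_gbaud * 1e9 * 8 / 10   # 8b/10b: 8 data bits per 10 line bits
payload_mb_s = payload_bit_s / 8 / 1e6           # bits -> bytes -> MB/s
print(f"{payload_mb_s:.1f} MB/s")                # 212.5 MB/s, i.e. the "200 MB/s" class
```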

Rackmount PCs – Evaluation/Benchmarking

Item                        | LDC                                              | GDC                                              | DSS
----------------------------|--------------------------------------------------|--------------------------------------------------|----------------------------------------------------
Rackmount height            | 1U, 2U, 4U                                       | 1U                                               | 4U
CPU / chipset / motherboard | dual Xeon 2.4 GHz / E7501 / Supermicro X5DPE-G2  | dual Xeon 2.66 GHz / E7501 / Supermicro X5DPI-G2 | quad Xeon 1.5 GHz / ServerWorks GC-HE / Intel SRSH4
Main memory                 | 2 GB DDR 266 MHz                                 | 2 GB DDR 266 MHz                                 | 4 GB DDR 266 MHz
Ethernet                    | dual Gigabit Ethernet on-board                   | dual Gigabit Ethernet on-board                   | Gigabit and Fast Ethernet on-board
Disk                        | 40 GB IDE                                        | 40 GB IDE                                        | hot-swap 36 GB SCSI disks
PCI slots                   | up to 6 PCI-X for D-RORC cards                   | 1 PCI-X for Fibre Channel card                   | 6 PCI-X + 2 PCI 5V
Linux installation          | Red Hat 7.3                                      | Red Hat 7.3                                      | Red Hat 7.3 (rather old)

Disk Array Systems
- Infortrend IFT-6330 F2D:
  - 12x 300 GB IDE-based disks
  - dual 2 Gbit/s Fibre Channel ports
  - embedded manager via front panel, RS232C, or IP
  - 2x logical volumes: each one has 1.1 TB, RAID5, with 5 disks
- DotHill SANnet II 200:
  - 12x 36.7 GB FC-based disks
  - dual 2 Gbit/s Fibre Channel ports
  - RAID configurations via RS232C
(front and rear views of the IFT-6330 shown)
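The quoted 1.1 TB per logical volume is consistent with 5-disk RAID5 arithmetic:

```python
# RAID5 over n disks stores the capacity of n-1 disks; one disk's worth
# of space is consumed by distributed parity.
disks_per_volume = 5
disk_gb = 300                                            # 300 GB IDE disks (1 GB = 1e9 bytes)
usable_bytes = (disks_per_volume - 1) * disk_gb * 1e9
usable_tib = usable_bytes / 2**40                        # binary TB, as OS tools report it
print(f"{usable_tib:.2f} TiB")                           # 1.09 TiB, matching the quoted 1.1 TB
```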

DAQ Software: DATE
- DATE (= Data Acquisition and Test Environment) is the software framework of the ALICE DAQ:
  - runs on Linux platforms (e.g. Red Hat 7.3)
  - released (latest is version 4.7) and documented
  - evolves with requirements and technology
- Features/Packages:
  - Dataflow: DDL readout, event building
  - System configuration and Run control
  - Monitoring: on-line performance, on/off-line data quality
  - Utilities: message logger, control/configure front-end electronics, …

Readout with DATE
Chain: TPC FEC → RCU 3 with SIU → DDL → D-RORC with DIU → LDC
- Step 0: Hardware is in place
- Step 1: Run Linux with the physmem and rorc drivers
- Step 2: Install DATE with a general configuration for the LDC
- Step 3: Set up the configuration file to enter parameters like D-RORC serial number, channel number, page size, checks, etc.
- Step 4: Start/stop the acquisition
- Step 5: Analyse the raw data files, e.g. /tmp/run…, /tmp/run…, …
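The parameters Step 3 refers to could be collected as in the sketch below; the key names and values are placeholders for the quantities the slide lists, not the actual DATE configuration-file syntax:

```python
# Illustrative only: the per-LDC readout parameters mentioned in Step 3,
# gathered as a dict. All names and values here are hypothetical.
ldc_readout_config = {
    "dRorcSerialNumber": 123,   # which D-RORC card to read out (hypothetical value)
    "ddlChannel": 0,            # DDL channel on that card
    "pageSizeKB": 256,          # physmem page size handed to the card (assumed value)
    "dataChecks": True,         # enable consistency checks on the received data
}
print(sorted(ldc_readout_config))
```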

DATE Run Control
(screenshot of the DATE run-control window)

FEE Configuration/Control
Chain: TPC FEC → RCU 3 with SIU → DDL → D-RORC with DIU → LDC
- Step 0: Hardware is in place
- Step 1: Run Linux with the rorc driver on the LDC
- Step 2: Write a script (ASCII file) for the FeC2 interpreter
- Step 3: Run this script, e.g. type: FeC2 123 example.fec2 (123 = S/N of the D-RORC)

Example FeC2 script:

  # send command, 19 bit information
  write_command 0x10f
  # read status, 19 bit address, 19 bit reply
  read_and_print 0x100 "Register XYZ: 0x%x"
  # download data block, 19 bit address, file
  write_block 0x600 pedestal.hex "%x"
  # verify data block, 19 bit address, file
  read_and_check_block 0x600 pedestal.hex "%x"
  stop_if_failed -1
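To illustrate the control flow of such a script (send command, download block, verify, stop on failure), here is a toy interpreter for a simplified version of the command set; the real FeC2 utility talks to the front-end electronics over the DDL, whereas this sketch uses a plain dictionary as a stand-in:

```python
def run_fec2(script_lines, frontend):
    """Execute a simplified FeC2-style script against a dict 'front end'."""
    ok = True
    for raw in script_lines:
        line = raw.split("#", 1)[0].strip()      # drop comments and blank lines
        if not line:
            continue
        op, *rest = line.split(None, 1)
        args = rest[0].split() if rest else []
        if op == "write_command":
            frontend["last_command"] = int(args[0], 16)
        elif op == "write_block":                # pretend to download a file
            frontend[int(args[0], 16)] = args[1]
        elif op == "read_and_check_block":       # verify the download
            ok = ok and frontend.get(int(args[0], 16)) == args[1]
        elif op == "stop_if_failed" and not ok:
            break
    return ok

script = """
write_command 0x10f                      # send command
write_block 0x600 pedestal.hex           # download data block
read_and_check_block 0x600 pedestal.hex  # verify data block
stop_if_failed -1
"""
fe = {}
result = run_fec2(script.splitlines(), fe)
print(result)  # True: the verification matched the download
```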

On-line Monitoring
- MOOD Framework:
  - DATE + ROOT environment
  - interfaces to detector code
- MOOD Applications:
  - visualization, e.g. TPC sector test data (Hall 167)
  - data integrity
  - detector performance
- Execution: on the DSS machine

DAQ Performance
- ALICE Computing Data Challenge:
  - 1750 MB/s event building without recording, over 5 days
  - 280 MB/s event building and recording to CASTOR, over 7 days
- D-RORC readout of one DDL channel with DATE:
  - 206 MB/s readout only
  - 118 MB/s readout with event building
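For scale, assuming the data-challenge rates above were sustained continuously over the full stated periods:

```python
# Total data volumes implied by the sustained-rate figures above.
MB = 1e6
DAY = 24 * 3600
no_recording_bytes = 1750 * MB * 5 * DAY   # event building without recording, 5 days
to_castor_bytes = 280 * MB * 7 * DAY       # event building + recording to CASTOR, 7 days
print(f"{no_recording_bytes / 1e12:.0f} TB without recording")  # 756 TB
print(f"{to_castor_bytes / 1e12:.0f} TB to CASTOR")             # 169 TB
```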

DAQ Installations

DAQ System                                        | Location   | DDL | LDC | GDC | Networking       | Storage
--------------------------------------------------|------------|-----|-----|-----|------------------|---------------
Development / Evaluation                          | AID lab    | 2   | 4   | 4   | Gb Ethernet      | disk
DDL Development                                   | DDL lab    | 2   | 1   | 0   | Gb Ethernet      | disk
Detector Groups (e.g. SDD, HMPID, TOF, Muon, TRD) | Institutes | 1   | 1   | 0   | Fast/Gb Ethernet | disk
TPC Sector Test §                                 | PS T10     | 2   | 1   | 0   | Fast Ethernet    | disk or CASTOR
ALICE Data Challenges                             | CERN IT    | 0   | 8   | 0   | Gb Ethernet      | TDS + CASTOR
Reference Setup                                   | AID lab    | 6   | 4   | 2   | Gb Ethernet      | TDS
Test&Commissioning                                | Point 2    | 20  | 10  | 4   | Gb Ethernet      | TDS + CASTOR
Final System                                      | Point 2    | …   | …   | …   | Gb Ethernet      | TDS + CASTOR

§ Given that the RCU 3 is fully functional in time, see Joachim's presentation.

Summary + Future Work
- Objective:
  - DAQ system for Test&Commissioning at Point 2
  - surface tests of ALICE sub-detectors (e.g. TPC sector)
  - operational in Q 2004
- Installation Layout:
  - 20 DDL links between the PX24/CR1 counting room and the SXL mounting hall
  - L3 racks: DDL patch panels, 15 rackmount PCs, 1 disk array, switches for Gigabit Ethernet, Fibre Channel, and KVM
  - DAQ Reference System in the AID lab
- Future work concerning TPC and DAQ:
  - Bergen: development of the RCU 3 firmware for the DDL interface
  - Luciano's lab: integration test of the RCU 3 with the DDL
  - TPC sector test at T10 §: DAQ chain with RCU 3 + DDL + DATE