3rd April 2001, A. Polini and C. Youngman (slide 1)
GTT status
Items reviewed:
– Results of GTT tests with 3 MVD-ADC crates:
  Aims
  Hardware and software setup used
  Credit control implementation
  Latency and rate results - test 1
  Latency and rate results - test 2
  System acceptability
– GTT time schedule and requirements on ZEUS DAQ.

3rd April 2001, A. Polini and C. Youngman (slide 2)
Results of tests with 3 MVD-ADC crates
Aims
– Implement the credit-controlled GTT described in the ZEUS Note, with data input from 3 MVD-ADC crates.
– Measure latency and rate characteristics, in the absence of algorithm data processing, in the week available.
– Make a statement about the acceptability of the system w.r.t. the ZEUS DAQ.
Hardware and software setup used
– Parts of the MVD test system as of 23/3/01:
  3 complete and fully debugged ADC/readout crates.
  Clock & Control system, incl. pulse generator external trigger.
– 7 dual 1 GHz Intel PCs.
– 1 Intel 12-port giga/fast ethernet switch. PPC VME (fast) and PC (giga) CPUs connected to separate ports.
– MVD run control and monitoring software.

3rd April 2001, A. Polini and C. Youngman (slide 3)
GTT credit control implementation
Key points:
– code derived from a non-hardware simulation:
  allows RC configurations to be tested
  allows message and IP connectivity to be debugged
– uses a common function library for:
  buffering
  ordering of events by GFLT# before sending to GSLT/EVB
  gathering complete events at GTT and EVB
– network messages sent consist of:
  a short control message (<128 B) in an XDR tagged union:
    endian independent
    contents (GFLT#, etc.) remove the need to look into the data
  and, optionally, a data message.
– all programs are currently single threaded.
(A sketch of the credit bookkeeping is shown below.)
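The slides do not include code, so the following is only a minimal C sketch, under assumed names (gtt_node_t, send_event, etc.), of the kind of per-node credit bookkeeping the credit-controlled dispatch described above implies: a sender only forwards an event to a GTT process that still holds a credit, and a credit is returned when that process signals completion.

```c
/*
 * Minimal sketch (assumption, not the actual ZEUS GTT code) of
 * credit-based event dispatch: each GTT process owns a number of
 * credits; sending an event consumes one, a credit notification
 * message returns one.
 */
#include <stddef.h>

#define MAX_GTT_NODES 30          /* up to 30 GTT processes were tested */

typedef struct {
    int fd;                       /* TCP socket to this GTT process     */
    int credits;                  /* events this node may still accept  */
} gtt_node_t;

static gtt_node_t nodes[MAX_GTT_NODES];
static size_t     n_nodes;

/* Called when a credit notification arrives from a GTT process. */
void credit_returned(size_t node)
{
    nodes[node].credits++;
}

/* Pick a node holding a credit, decrement it and dispatch the event.
 * Returns 0 on success, -1 if no credit is currently available
 * (the event then waits in the sender's buffer: back-pressure). */
int dispatch_event(const void *event, size_t len,
                   int (*send_event)(int fd, const void *buf, size_t len))
{
    for (size_t i = 0; i < n_nodes; i++) {
        if (nodes[i].credits > 0) {
            nodes[i].credits--;
            return send_event(nodes[i].fd, event, len);
        }
    }
    return -1;
}
```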

3rd April 2001, A. Polini and C. Youngman (slide 4)
[Diagram: GTT process, network and trigger connections - credit based. Shows GFLT ACCEPT, GSLT DECISION, EVB, GSLT, MVD-ADC, STT-SLT, CTD-SLT and GTT, with the TOEVB, TOGSLT and FROMGSLT connections.]

3rd April 2001, A. Polini and C. Youngman (slide 5)
Control message definitions
Credit
– 0. credit number allocation (SETUP)
– 1. credit/socket resolution (SETUP)
– 2. credit list (SETUP) and credit notification (ACTIVE)
Trigger
– 3. SLT data to GTT (ACTIVE)
– 4. GTT end algorithm credit notification (ACTIVE)
– 5. GTT algorithm result to GSLT (ACTIVE)
– 6. GSLT decision to MVD and specific GTT (ACTIVE)
EVB
– 7. GTT result and MVD cluster data (ACTIVE)
– 8. MVD strip data (ACTIVE)
(A possible XDR encoding of these message types is sketched below.)
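As an illustration of the endian-independent, XDR tagged-union control message mentioned on slide 3, here is a hedged C sketch using the standard Sun RPC xdr routines. The struct and field names (gtt_ctrl_msg, gflt_number, data_length) are assumptions; a full tagged union would additionally switch on the message type to encode type-specific fields.

```c
/*
 * Sketch (not the actual ZEUS code) of an endian-independent control
 * message carrying the nine message types listed above, encoded with
 * the Sun RPC XDR routines so PPC (VME) and Intel PCs interoperate.
 */
#include <rpc/rpc.h>

enum gtt_msg_type {                  /* discriminant = message number     */
    CREDIT_ALLOC       = 0,         /* credit number allocation (SETUP)  */
    CREDIT_SOCKET      = 1,         /* credit/socket resolution (SETUP)  */
    CREDIT_NOTIFY      = 2,         /* credit list / notification        */
    SLT_DATA_TO_GTT    = 3,
    GTT_END_ALGORITHM  = 4,
    GTT_RESULT_TO_GSLT = 5,
    GSLT_DECISION      = 6,
    GTT_RESULT_TO_EVB  = 7,
    MVD_STRIP_DATA     = 8
};

struct gtt_ctrl_msg {                /* short control message, < 128 B   */
    enum gtt_msg_type type;
    u_int gflt_number;               /* GFLT event number                 */
    u_int data_length;               /* length of optional data message   */
};

/* XDR filter: encodes or decodes the control message independently of
 * the host byte order, so the receiver never has to inspect the data. */
bool_t xdr_gtt_ctrl_msg(XDR *xdrs, struct gtt_ctrl_msg *m)
{
    return xdr_enum(xdrs, (enum_t *)&m->type)
        && xdr_u_int(xdrs, &m->gflt_number)
        && xdr_u_int(xdrs, &m->data_length);
}
```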

3rd April 2001, A. Polini and C. Youngman (slide 6)
[Diagram: GTT hardware realization. Same trigger and network connections as slide 4 (GFLT ACCEPT, GSLT DECISION, EVB, GSLT, MVD-ADC, STT-SLT, CTD-SLT, GTT, TOEVB/TOGSLT/FROMGSLT), mapped onto the hardware.]

3rd April 2001, A. Polini and C. Youngman (slide 7)
Latency and rate results - test 1
Fix SLT & EVB data size/crate and vary N GTT.
– measurement parameters:
  SLT size/crate 2.6, 2.7 and 1.9 kB
  EVB size/crate 4.6, 4.6 and 3.3 kB
  vary N GTT through 1, 2, 4, 8, 12, 18, 24 and 30 (i.e. 5/PC)
  pulse generator rate 600 Hz
  GSLT accept prescale factor 10
– results are independent of N GTT:
  sustained rate 500 Hz mean, with 60 Hz peak-to-peak fluctuation
  latency at TOGSLT 1.7 ms, with a low-level but long tail
  latency at TOEVB 2.9 ms - did not look at tail
– conclusions:
  credit turn-round of 1.7 ms, i.e. message transit time ~0.9 ms (cf. Table 4 TCP PPC->PC performance tests = 3/ /12306)
  N GTT independence surprising, especially for 1 GTT - understood?
  rate and fluctuation not understood - credit bursting?
  latency tails not understood, more detailed tests required
(A minimal latency timestamping sketch follows below.)
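For illustration only, a minimal C sketch of how per-event latencies such as "GFLT accept to result at TOGSLT" can be measured by timestamping against the GFLT number. The function and array names are assumptions; one-way latencies across machines would additionally require synchronized clocks, so measurements of this kind are normally taken at a single host.

```c
/*
 * Sketch (assumption, not the test code actually used) of per-event
 * latency measurement: record a timestamp when the event enters the
 * chain and subtract it when the corresponding message arrives.
 */
#include <stdio.h>
#include <sys/time.h>

#define MAX_EVENTS (1 << 16)

static struct timeval t_start[MAX_EVENTS];   /* indexed by GFLT# mod 2^16 */

void mark_event_start(unsigned gflt)
{
    gettimeofday(&t_start[gflt % MAX_EVENTS], NULL);
}

/* Elapsed time in milliseconds since mark_event_start() for this GFLT#. */
double event_latency_ms(unsigned gflt)
{
    struct timeval now, *t0 = &t_start[gflt % MAX_EVENTS];
    gettimeofday(&now, NULL);
    return (now.tv_sec  - t0->tv_sec)  * 1e3 +
           (now.tv_usec - t0->tv_usec) * 1e-3;
}

/* Example: log one entry of the TOGSLT latency distribution. */
void log_togslt_latency(unsigned gflt)
{
    printf("GFLT %u latency at TOGSLT: %.2f ms\n",
           gflt, event_latency_ms(gflt));
}
```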

3rd April 2001, A. Polini and C. Youngman (slide 8)
[Plot: GTT latency at TOGSLT.]

3rd April 2001, A. Polini and C. Youngman (slide 9)
Latency and rate results - test 2
Fix N GTT and vary SLT & EVB data size/crate.
– measurement parameters:
  pulse generator rate 600 Hz
  GSLT accept prescale factor 10
  N GTT fixed at 6, i.e. 1/PC
– results
– conclusions:
  bigger SLT/EVB data sizes have lower rates and higher latencies!
  too early to say how stable the measured values are.

3rd April 2001, A. Polini and C. Youngman (slide 10)
Acceptability of GTT performance
Non-algorithm processing results
– 3 MVD-ADC crate system:
  the measured rates (>500 Hz), for SLT and EVB data sizes of <2.5 kB/crate and <4.5 kB/crate, are close to the ZEUS DAQ requirement of ~550 Hz.
  the measured SLT and EVB latencies appear to be stable, and are acceptable if the tails can be understood and reduced further.
– CTD-SLT connection:
  the latency of the CTD-SLT at the GSLT has already been measured in the ZEUS DAQ system.
  the CTD-SLT data is available a few ms after the GFLT accept, so network contention at the GTT is not anticipated.
Conclusions
– The measurements indicate that the GTT will work.

3rd April 2001, A. Polini and C. Youngman (slide 11)
Time schedule and ZEUS DAQ requirements
MVD related
– week 17 (23/4)? GFLT trigger interface to C&C
EVB related
– week 18 (30/4)? tests of PC/PCI-TP interface needed for:
  sending SLT result
  receiving GSLT decision
– week 19 (7/5)? tests of MVD event building via ethernet
– by week 19 (7/5): DDL definitions of data banks
GSLT related
– by week 19 (7/5): definition of GTT decision
– week 19 (7/5)? tests of GSLT connection
DAQ chain
– week 20 (14/5) and thereafter: tests with the full DAQ system, including the new EVB subsystems.