The ANTARES Offshore Data Acquisition: A Highly Distributed, Embedded and COTS-based System
S. Anvar, H. Le Provost, F. Louis – CEA Saclay DAPNIA


A Neutrino Telescope

The "0.1 km²" Project
- 13 strings
- 30 detecting nodes per string (every 10 m)
- 3 optical modules (PMs) per detecting node
- 2 "ARS" digitizing chips per optical module
Offshore data acquisition system: real-time readout of 2340 ARS chips spread out over the 0.1 km² detector area, 2000 m underwater.
Onshore: perform real-time trigger computations on incoming data with a farm of 100 PC-Linux workstations.

Detector Topology (diagram):
- Optical Module (OM)
- Local Control Module (LCM)
- String Control Module (SCM)
- Junction Box (JB)
- Electro-Mechanical Cable (EMC)
- Electro-Optical Cable (EOC) to shore
Links carry slow-control data, clock and energy.

In Each LCM: One DAQ Board (block diagram, labels translated from French):
- COTS processor: MPC860, max 80 MHz, max power ~700 mW; integrated peripherals: 4 serial ports, 1 10/100 Ethernet
- ARS readout: a sequencer reads the 6 ARS chips (serial-to-parallel conversion) into ARS memory through write and read-out memory controllers, with MPC860 bus arbitration
- Process unit: data reduction, time reordering, CRC
- DAQ memory: holds three event types; accessed via the MPC860 bus manager and bus interface (addresses/data)
- Slow-control system: control & status registers, system memory status, ARS memory status, self-test

Network Topology (1)
Design constraints: cable rigidity, cable diameter, connection complexity.
Each LCM has its own I/O data flow (slow control & physics data).
LCMs are organized in sectors, each with a master LCM (M) and slave LCMs (S); sectors connect through the SCM to shore (DWDM).

Network Topology (2)
- Each LCM outputs ~15 Mb/s
- Sector switch, used as data concentrator: many input ports, one output port (in: 5×100 Mb/s, out: 1×1000 Mb/s)
- On-shore switch (in: 78×1000 Mb/s, out: 100×1000 Mb/s) feeding the PCs
- Risks of congestion: intelligent flow control in the LCM

Massively Distributed System
Final system: a network of ~500 nodes
- ~400 offshore nodes (MPC860 + RTOS)
- ~100 onshore nodes (PC + Linux)
3 communicating parallel applications:
- DAQ
- Trigger
- Slow Control

Software Technologies
Programming languages:
- C++ for fast software components
- Java for non-real-time components (not yet mature for RT)
Design & development:
- Object paradigm
- UML (Unified Modeling Language) + extensions
- Trigger/DAQ design patterns (functional/distribution separation of concerns)
- Specific development methodology for distributed systems (MORDICUS)

Distribution Design Patterns (diagram: LCMs → cascaded switches → PCs)
- Performance parallelism (scalability)
- Intrinsic parallelism (progressive deployment)

Sequence Diagram (Functional):
- :ARS_Read → :FrameFormatter: acq(in frameID:int)
- :FrameFormatter → :EventBuilder: accept(in f:Frame)
- :EventBuilder → :Trigger: trig(e:Event)
- [TRIG = OK] :EventBuilder → :Storage: store()
- [TRIG ≠ OK] :EventBuilder: dump()

Class Diagram (Functional):
- Trigger: +trig(in d:Event): bool
- ARS_Read: +fillBuffers(id:int)
- FrameFormatter: +acq(in frameID:int); 6 formatters (+formatter) per EventBuilder (+builder)
- EventBuilder: +accept(in x:Frame), #dump(in x:Frame); one Trigger (+processor), one Storage (+storage)
- Storage: +store(in d:Event)
- FrameEvent: +rawParts: 0..n Data

Specialized Deployment Diagram:
- «ALTERA20K» readout node hosting ARS_Read, linked by interrupt + shared memory to the framers
- 12 «PowerPC-RTOS» framer nodes hosting FrameFormatter
- «PC-Linux» processor nodes hosting EventBuilder, Trigger and Storage, reached over «CORBA»

Automatic Model Transformation (FrameFormatter → EventBuilder, remote + farm):
- An «interface» EventBuilderInterface (+accept(in x:Frame)) adapts FrameFormatter to the EventBuilder; FrameFormatter keeps its single +builder reference and +acq(in frameID:int)
- An EventBuilderProxy (+accept(in x:Frame)) handles communication on the caller side and implements the farm management POLICY
- The concrete EventBuilder (+accept(in x:Frame)) is unchanged

New Challenge in HEP Real-Time Computing
- Massively distributed: 400 offshore processors, 100 onshore processors
- Deeply embedded: heat dissipation, limited space, limited power
- Numerous modules, hence an MTBF problem: 400 "satellites" with 1/10th of the budget; robustness is critical
=> DESIGN & DEVELOPMENT METHODOLOGY