“L1/L2 farm: some thoughts” G. Lamanna, R. Fantechi & D. Di Filippo (CERN) Computing WG – 28.9.2011.

The “old” online farm
L1 farm = M PCs, L2 farm = N PCs
Blue switch → (M+40+1) mono-directional ports
Green switch → (N+M+40+1) bi-directional ports
[Diagram: detector data (180 Gb/s) through the blue switch into the L1 farm; CREAM data (40×10 Gb/s links, <50 Gb/s used) through the green switch into the L2 farm; services (DCS, boot, …); CDR output at 2.5 Gb/s on a 1×10 Gb/s link]
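As a back-of-the-envelope check, a minimal sketch of the port arithmetic above; M and N are left open in the slide, so the values below are placeholders:

```python
# Port budget for the "old" two-level farm (sketch; M and N are
# placeholder values, the slide leaves them open).
M = 30      # assumed number of L1 PCs
N = 20      # assumed number of L2 PCs
FORTY = 40  # the "40" in the slide's formulas (the 40 x 10 Gb/s links)
ONE = 1     # the "+1" (presumably the 1 x 10 Gb/s CDR/uplink port)

blue_ports = M + FORTY + ONE        # (M+40+1) mono-directional ports
green_ports = N + M + FORTY + ONE   # (N+M+40+1) bi-directional ports

print(f"blue switch:  {blue_ports} ports")   # 71 with these placeholders
print(f"green switch: {green_ports} ports")  # 91 with these placeholders
```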

The “new” online farm
Blue switch → (N+M) bi-directional ports
Number of ports available for computing = N_tot (128) − 83 = 45
TCP or UDP?
[Diagram: detector data (180 Gb/s) and CREAM data (40×10 Gb/s links, <50 Gb/s used) into a single switch feeding the merged L1/2 farm; services (DCS, boot, …); CDR output at 2.5 Gb/s on a 1×10 Gb/s link to the storage farm]
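The same budget for the single-switch layout, using the 128-port chassis and the 83 fixed ports quoted in the slide (the breakdown of the 83 is not spelled out here):

```python
# Port budget for the single-switch "new" farm (sketch).
N_TOT = 128       # assumed total ports of the switch
FIXED_PORTS = 83  # fixed links (slide value; breakdown not given)

computing_ports = N_TOT - FIXED_PORTS
print(f"ports left for farm PCs: {computing_ports}")  # 45
```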

Advantages: more efficient reuse of PCs, smaller number of PCs
Disadvantages: bottleneck, transmission protocol, switch cost and support, limited flexibility and upgradability (up to the switch backplane capacity)

Blue switch as before
Each PC processes one single event (both L1 and L2) instead of a fraction of every event
No routing needed among the PCs (no protocol issues, no latency due to the network, …)
[Diagram: detector data (180 Gb/s) and CREAM data (40×10 Gb/s, <50 Gb/s used) into the switch feeding the L1/2 farm; services (DCS, boot, …); only the final results (2.5 Gb/s) leave towards the storage farm and CDR on a 1×10 Gb/s link]
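A minimal sketch of the per-PC flow this implies: one PC owns an event end to end, running L1 on the fragments and L2 on the merged event, so only the final result leaves the machine. All function names here are illustrative, not from the slide:

```python
# Hypothetical single-PC event pipeline (sketch, not NA62 code):
# the whole event lives on one machine, so no PC-to-PC routing is needed.

def l1_decision(fragments):
    # placeholder L1 algorithm over the sub-detector fragments
    return {"RICH", "CHOD", "MUV"} <= fragments.keys()

def l2_decision(event):
    # placeholder L2 algorithm over the merged event
    return True

def process_event(fragments, fetch_lkr, send_to_cdr):
    if not l1_decision(fragments):
        return                                 # rejected at L1: nothing leaves the PC
    event = {**fragments, "LKr": fetch_lkr()}  # L1-accepted: pull the LKr data and merge
    if l2_decision(event):
        send_to_cdr(event)                     # only final results go out (~2.5 Gb/s total)

# usage with dummy I/O:
process_event({"RICH": b"", "CHOD": b"", "MUV": b""},
              fetch_lkr=lambda: b"lkr-block",
              send_to_cdr=print)
```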

From the PCs to the switch: L1 trigger requests for the LKr (peanuts) and final events for CDR (2.5 Gb/s)
Max rate per PC: 180 Gb/s / (number of PCs)
Multiprocessor PCs: the parallelism is exploited at the PC level instead of at the network level.
Routing strategies at TEL62 level similar to the Jonas farm (round robin with a “per-burst routing table update” is a little safer)
[Diagram: the fragments of event #123 (L1 – RICH, L1 – CHOD, L1 – MUV, …) travel through the switch to a single PC, which runs L1, adds the LKr data, merges the full event and runs L2; events #124, #125, #126 are handled the same way on other PCs]
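A sketch of the round-robin routing with a per-burst routing table update mentioned above. The idea (an interpretation, not code from the experiment) is that within a burst the table is frozen, so every TEL62 maps a given event number to the same PC and all fragments of an event land on one machine; the table only changes in the inter-burst gap:

```python
# Round-robin event routing with per-burst routing table updates (sketch).

class BurstRouter:
    def __init__(self, pcs):
        self.table = list(pcs)          # frozen for the duration of a burst

    def pc_for_event(self, event_number):
        # pure function of the event number: every TEL62 holding the same
        # table sends fragments of this event to the same PC
        return self.table[event_number % len(self.table)]

    def update_for_next_burst(self, alive_pcs):
        # applied only between bursts; safer than re-routing mid-burst
        self.table = list(alive_pcs)

router = BurstRouter([f"pc{i:02d}" for i in range(40)])
print(router.pc_for_event(123))                                  # same answer on every board
router.update_for_next_burst([f"pc{i:02d}" for i in range(39)])  # drop a dead PC
```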

Since the routing is one-to-one (injective; the PCs don't need to talk to each other), a tree structure can be foreseen: upgradability and smaller switches (lower cost)
Still open: the possibility to use 1 Gb instead of 10 Gb links
[Diagram: tree of smaller switches fanning out from the main switch to the farm PCs]
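A quick check of the 1 Gb/s option, under the (assumed) simplification that the 180 Gb/s input is spread evenly over the farm PCs:

```python
# Is 1 Gb/s per PC enough? (sketch, even-spread assumption)
TOTAL_INPUT_GBPS = 180

for n_pcs in (100, 180, 300):
    per_pc = TOTAL_INPUT_GBPS / n_pcs
    verdict = "1 Gb/s link OK" if per_pc <= 1.0 else "needs 10 Gb/s"
    print(f"{n_pcs:3d} PCs -> {per_pc:.2f} Gb/s per PC ({verdict})")
```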