Worldwide event filter processing for calibration
Calorimeter Calibration Workshop
Sander Klous, September 2006

Slide 2: Long-existing idea: remote computing (2004)
[Architecture diagram: the local event processing farms (ROB, L2PU, SFI, PF) sit on the Data Collection Network behind the ATLAS detectors and the Level 1 trigger, with SFOs writing to mass storage via the Back End Network (North Area, Bldg. 513). Remote event processing farms (PF) in Copenhagen, Edmonton, Krakow and Manchester, including the "Magni" cluster, are reached over a packet-switched WAN (GEANT) and a switched lightpath. Credits: Bob Dobinson, Krzysztof Korcyl, Catalin Meirosu, John Renner-Hansen, Jim Pinfold, Richard Hughes-Jones.]

Slide 3: Dataflow detail
[Diagram: the remote farm is connected to the local dataflow through an Event Filter concentrator (EF conc.).]
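The EF concentrator in this diagram is essentially a forwarding proxy between the local dataflow and the remote farm. Below is a minimal, hypothetical sketch of such a concentrator; the port numbers, the remote endpoint, and the 4-byte length-prefixed framing are all assumptions for illustration, not the actual ATLAS dataflow protocol.

```python
# Toy "EF concentrator": accepts serialized events on a local port and
# forwards each one to a remote event-processing farm. All names, ports
# and the framing are illustrative assumptions.
import asyncio
import struct

LOCAL_PORT = 9000                            # hypothetical local dataflow port
REMOTE_FARM = ("remote.example.org", 9001)   # hypothetical remote farm endpoint

async def read_event(reader: asyncio.StreamReader) -> bytes:
    """Read one length-prefixed event from the stream."""
    header = await reader.readexactly(4)
    (size,) = struct.unpack("!I", header)
    return await reader.readexactly(size)

async def handle_local(reader, writer):
    """Forward every event from the local dataflow to the remote farm."""
    r_reader, r_writer = await asyncio.open_connection(*REMOTE_FARM)
    try:
        while True:
            event = await read_event(reader)
            r_writer.write(struct.pack("!I", len(event)) + event)
            await r_writer.drain()
    except asyncio.IncompleteReadError:
        pass  # local side closed the stream
    finally:
        r_writer.close()
        writer.close()

async def main():
    server = await asyncio.start_server(handle_local, "0.0.0.0", LOCAL_PORT)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```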

Slide 4: StreamTag scenarios (2006)
[Dataflow diagram: events flow from LVL1 through the LVL2 components (RoIB, L2SV, L2PU, ROS/ROBIN, pROS, DFM) to the SFI, where they are (partially) built, and on to the SFO. Inside the EFD (Input, ExtPT, Output and Trash nodes), processing tasks running Athena-based selection and calibration code (Ath/PESA, Ath/CALid, Ath/CALIB, Ath/CalStr) create the StreamTag and RoutingTag: stream selection adds events to StreamTag 1 or StreamTag 2, stripping produces partial events, and the routing/duplicating steps deliver each (partial) event, together with its LVL1 info, RoutingTag and StreamTags, to streams 1..n, some of them remote. Described in ATL-DQ-ES-0076 (in preparation); see the Sushkov presentation.]
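To make the tagging scheme concrete, here is a toy sketch of the StreamTag/RoutingTag bookkeeping described above, under the assumption that each processing task simply appends stream names to the event and the router duplicates the event into every tagged stream, with untagged events going to Trash. All class and field names are invented for illustration and do not reflect the real EFD data model.

```python
# Toy StreamTag/RoutingTag bookkeeping; names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Event:
    lvl1_info: dict                  # LVL1 trigger summary travels with the event
    data: bytes                      # full or partial (stripped) event payload
    stream_tags: list = field(default_factory=list)
    routing_tag: str | None = None   # e.g. "remote" for calibration streams

def pesa_pt(event: Event) -> None:
    """Physics-selection PT: tag accepted events for the physics stream."""
    if event.lvl1_info.get("physics_accept"):
        event.stream_tags.append("physics")

def calib_pt(event: Event) -> None:
    """Calibration PT: tag calibration candidates and route them remotely."""
    if event.lvl1_info.get("calib_accept"):
        event.stream_tags.append("calibration")
        event.routing_tag = "remote"

def route(event: Event, streams: dict) -> None:
    """Duplicate the event into every stream named in its StreamTags."""
    for tag in event.stream_tags or ["trash"]:   # untagged events are dropped
        streams.setdefault(tag, []).append(event)

streams = {}
ev = Event(lvl1_info={"physics_accept": True, "calib_accept": True}, data=b"...")
pesa_pt(ev)
calib_pt(ev)
route(ev, streams)
print(sorted(streams), ev.routing_tag)   # ['calibration', 'physics'] remote
```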

Slide 5: Combining the two concepts
Streaming/routing makes it possible to prioritize the data, e.g. calibration events can be handled by remote farms. A sketch of this dispatch decision follows below.
[Phase 1 diagram: the calibration stream bypasses mass storage and is sent through the EF concentrator to the remote farm, with no file I/O.]
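A minimal sketch of the phase-1 bypass, assuming a routing_tag field like the one in the previous sketch: remotely routed events go over the network and never enter the SFO's file I/O path. The SFOWriter and RemoteLink classes are stand-ins, not real TDAQ interfaces.

```python
# Toy phase-1 bypass: calibration events skip the SFO's file I/O entirely.
from types import SimpleNamespace

class SFOWriter:
    def write(self, payload: bytes) -> None:
        print(f"SFO -> mass storage: {len(payload)} bytes")

class RemoteLink:
    def send(self, payload: bytes) -> None:
        print(f"EF conc. -> remote farm: {len(payload)} bytes (no file I/O)")

def dispatch(event, sfo: SFOWriter, remote: RemoteLink) -> None:
    if getattr(event, "routing_tag", None) == "remote":
        remote.send(event.data)    # bypass: calibration stream to remote farm
    else:
        sfo.write(event.data)      # default path: stream to mass storage

# Example: a calibration event takes the bypass, a physics event does not.
dispatch(SimpleNamespace(routing_tag="remote", data=b"calib"), SFOWriter(), RemoteLink())
dispatch(SimpleNamespace(routing_tag=None, data=b"physics"), SFOWriter(), RemoteLink())
```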

Slide 6: Advantages of this approach
Technical:
- Simpler, fewer problems with the QoS of remote farms
- Avoids the processing bottleneck in the Event Filter
- Avoids the file I/O bottleneck in the SFOs (320 MB/s); a back-of-the-envelope estimate follows below
- New bottleneck: the network limitation, but...
  - network capacity already exceeds local storage and processing
  - network bandwidth grows much faster than I/O or CPU
Organizational:
- Complies with the policy of the TDAQ and Streaming WG:
  - calibration events are allowed to be processed on remote farms
  - calibration events are allowed to overlap with physics events
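A quick check of the bottleneck argument. The 320 MB/s SFO file I/O figure is from the slide; the ~1.6 MB nominal event size and the 10 Gbit/s wide-area link are assumptions for illustration.

```python
# Back-of-the-envelope event rates: SFO file I/O limit vs. WAN capacity.
EVENT_SIZE_MB = 1.6    # assumed nominal ATLAS event size
SFO_IO_MBPS   = 320    # SFO file I/O limit quoted on the slide
WAN_GBITPS    = 10     # assumed long-haul network capacity

sfo_rate = SFO_IO_MBPS / EVENT_SIZE_MB              # events/s through the SFOs
wan_rate = WAN_GBITPS * 1000 / 8 / EVENT_SIZE_MB    # events/s over the WAN

print(f"SFO-limited rate: {sfo_rate:.0f} Hz")       # ~200 Hz
print(f"WAN-limited rate: {wan_rate:.0f} Hz")       # ~780 Hz
```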

Slide 7: Use cases
- Online detector calibration: access to offline compute power for online calibration
  - no compute power is available in the pit
  - no space is left to put extra compute power in the pit
- Testing of new and/or more complicated calibration triggers
- Wild ideas can be tested in parallel to standard HLT operation
- My personal favorite: Jet Energy Scale calibration with fully hadronic ttbar events. A toy sketch follows below.
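To illustrate the last bullet: in fully hadronic ttbar events, the two light jets from W -> qq' should reconstruct to the known W mass, so the ratio of m_W to the measured dijet-mass peak gives a multiplicative jet energy scale correction. The dijet masses below are invented numbers for illustration only.

```python
# Toy jet-energy-scale estimate from the hadronic W in ttbar events.
M_W = 80.4                                     # world-average W mass [GeV]
measured_mjj = [76.1, 78.3, 74.9, 77.6, 75.8]  # invented dijet masses [GeV]

mean_mjj = sum(measured_mjj) / len(measured_mjj)
jes = M_W / mean_mjj                           # multiplicative scale factor
print(f"<m_jj> = {mean_mjj:.1f} GeV, JES correction = {jes:.3f}")
```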

Slide 8: Dreaming about a better future
Increasing the SFI event building rate.
[Diagram: phase 1 and phase 2 configurations, with the remote farm, the EF concentrator and mass storage.]