ATLAS Distributed Computing perspectives for Run-2
Simone Campana, CERN-IT/SDC, on behalf of ADC
WLCG Workshop, 8 July 2014

The Challenges of Run 2
 LHC operation (approximate Run-1 values in parentheses)
   Trigger rate 1 kHz (~400 Hz)
   Pile-up above 30 (~20)
   25 ns bunch spacing (~50 ns)
   Centre-of-mass energy roughly doubled
 Different detector
 Constraints of a 'flat budget'
   Both for hardware and for operation and development
 Data from Run 1
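Taken together, these numbers imply a large jump in computing load under a budget that stays roughly flat. The back-of-the-envelope sketch below uses only the figures quoted above and a deliberately naive linear dependence on pile-up (an assumption for illustration, not an ATLAS estimate) to show the size of the gap the optimizations on the following slides have to close.

```python
# Rough scaling of Run-2 vs Run-1 computing load, using only the figures
# quoted on the slide.  Purely illustrative, not an official estimate.

run1 = {"trigger_rate_hz": 400, "pileup": 20}
run2 = {"trigger_rate_hz": 1000, "pileup": 30}

# More recorded events per second.
rate_factor = run2["trigger_rate_hz"] / run1["trigger_rate_hz"]   # 2.5x

# Higher pile-up makes each event bigger and slower to process;
# assume (naively) a linear dependence on pile-up.
event_cost_factor = run2["pileup"] / run1["pileup"]                # 1.5x

total_factor = rate_factor * event_cost_factor
print(f"Naive Run-2 / Run-1 load factor: {total_factor:.1f}x")     # ~3.8x

# A 'flat budget' buys only on the order of 20% more capacity per year
# (technology trend, assumed here), so most of the gap has to be closed
# by software and workflow optimizations.
```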

Where to optimize?
[Charts: CPU consumption; disk usage at T1s and T2s]
 Simulation: CPU
 Reconstruction: CPU, memory
 Analysis: CPU, disk space

Simulation
 Simulation is CPU intensive
 Integrated Simulation Framework (ISF)
   Mixing of full GEANT and fast simulation within an event (see the sketch after this slide)
   Baseline for MC production
 More events per 12-hour job, larger output files, fewer transfers and merge steps, less I/O
 Or shorter, more granular jobs for opportunistic resources
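The ISF decides, particle by particle within the same event, whether to run full GEANT simulation or a fast parametrized simulation. The sketch below shows that dispatch pattern in minimal form; the Particle class, the simulator functions and the low-pT calorimeter selection rule are illustrative assumptions, not actual ISF code.

```python
# Hypothetical sketch of ISF-style per-particle dispatch: route each particle
# in an event either to full (GEANT-like) or fast simulation.

from dataclasses import dataclass

@dataclass
class Particle:
    pdg_id: int
    pt_gev: float
    in_calorimeter: bool

def full_sim(p: Particle) -> str:
    return f"full sim for pdg={p.pdg_id}, pt={p.pt_gev:.1f} GeV"

def fast_sim(p: Particle) -> str:
    return f"fast sim for pdg={p.pdg_id}, pt={p.pt_gev:.1f} GeV"

def simulate_event(particles):
    """Mix full and fast simulation within one event."""
    results = []
    for p in particles:
        # Example rule (invented): low-pT calorimeter showers go to the fast
        # simulation, everything else gets the full treatment.
        if p.in_calorimeter and p.pt_gev < 10.0:
            results.append(fast_sim(p))
        else:
            results.append(full_sim(p))
    return results

event = [Particle(11, 45.0, False), Particle(22, 3.2, True), Particle(211, 1.1, True)]
for line in simulate_event(event):
    print(line)
```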

Reconstruction
 Reconstruction is memory hungry
   And requires non-negligible CPU
 AthenaMP is the default from 2014
   (Almost) all production runs on multi-core (see the sketch after this slide)
   Analysis remains on single core
 Optimization in code and algorithms
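AthenaMP reduces the memory footprint by forking worker processes after the expensive initialization, so large read-only structures are shared copy-on-write across cores. The following is a minimal sketch of that pattern using generic Python multiprocessing, not actual Athena code; the "conditions" payload and the event loop are invented.

```python
# Minimal sketch of the AthenaMP idea: fork workers *after* the expensive
# initialization so large read-only structures are shared copy-on-write.

import multiprocessing as mp

# Pretend this is detector geometry / conditions data loaded once (large).
CONDITIONS = {"geometry": list(range(1_000_000))}

def reconstruct(event_id: int) -> int:
    # Workers read CONDITIONS without copying it (on fork-based, POSIX platforms).
    return event_id * len(CONDITIONS["geometry"]) % 97

if __name__ == "__main__":
    ctx = mp.get_context("fork")          # fork => copy-on-write sharing (not on Windows)
    with ctx.Pool(processes=4) as pool:   # roughly one worker per core
        results = pool.map(reconstruct, range(100))
    print(sum(results))
```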

Analysis Model
 Common analysis data format: xAOD
   Replaces AOD and group ntuples of any kind
   Readable both by Athena and ROOT
 Data reduction framework
   AthenaMP used to produce group data samples, centrally via Prodsys
   Based on a train model: one input, N outputs, reducing data volumes from PB to TB (see the sketch after this slide)
[Diagram labels: Chaotic, Organized (trains), Organized (reprocessing)]
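The train model reads the input sample once and produces all the reduced group formats in the same pass. A minimal sketch of that idea, with invented event records and skim selections standing in for real xAOD content and derivation carriages:

```python
# Sketch of the 'train' idea in the derivation framework: read the input
# (xAOD-like) sample once and write N reduced outputs in the same pass.

def derivation_train(events, selections):
    """One input stream, N output skims, a single read of the input."""
    outputs = {name: [] for name in selections}
    for event in events:                       # input read exactly once
        for name, keep in selections.items():  # each 'carriage' applies its skim
            if keep(event):
                outputs[name].append(event)
    return outputs

events = [{"n_jets": 4, "n_muons": 1},
          {"n_jets": 2, "n_muons": 0},
          {"n_jets": 6, "n_muons": 2}]

skims = {
    "SUSY_jets": lambda e: e["n_jets"] >= 4,    # invented example selections
    "SM_dimuon": lambda e: e["n_muons"] >= 2,
}

for name, sample in derivation_train(events, skims).items():
    print(name, len(sample), "events")
```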

Computing Model: data processing
 Optional extension of first-pass processing from T0 to T1s
 More flexible usage of resources: the model needs more flexibility
   Reduce the difference between T1s and T2s in the model (some T2s are equivalent to T1s in terms of disk storage and CPU power)
   Loosen the job-to-data colocation (see the sketch after this slide)
 Still one or two full reprocessings from RAW per year, but multiple AOD2AOD reprocessings per year
 Derivation Framework (train model) for analysis datasets
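Loosening the job-to-data colocation means a job no longer has to run at a site that hosts its input dataset; the broker can trade data locality against free CPU. The sketch below shows one way such a ranking could look; the site list, the remote-I/O penalty and the scoring rule are invented for illustration and are not the actual PanDA brokering algorithm.

```python
# Illustrative sketch of loosened job-to-data colocation: rank all candidate
# sites by free CPU, with a penalty when the input must be read remotely.

def choose_site(sites, dataset_location):
    def score(site):
        # No penalty if the site already hosts the dataset, otherwise
        # discount its free slots by an (invented) remote-I/O penalty.
        penalty = 0.0 if site["name"] in dataset_location else site["remote_io_penalty"]
        return site["free_slots"] * (1.0 - penalty)
    return max(sites, key=score)["name"]

sites = [
    {"name": "T1_DE", "free_slots": 200,  "remote_io_penalty": 0.3},
    {"name": "T2_UK", "free_slots": 1500, "remote_io_penalty": 0.3},
]

# T2_UK wins despite not holding the data, because it has far more free CPU.
print(choose_site(sites, dataset_location={"T1_DE"}))
```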

Computing Model: Run-2 data placement
 Dynamic data placement and reduction of secondary (cache) replicas will continue
 More data will be stored on tape in addition to disk
   Only popular primary (pinned) datasets will be kept on disk after some time
 All datasets will have a lifetime (applying to both disk and tape copies); see the sketch after this slide
 Access to data on tape remains 'organized', not 'chaotic'
[Diagram: T0/T1 DATADISK with pinned and cache replicas; T2 DATADISK]
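A dataset lifetime combined with popularity-based cleaning of cache replicas could be expressed as a simple policy like the one sketched below; the field names, the 90-day idle threshold and the example datasets are assumptions for illustration, not the actual ATLAS lifetime model.

```python
# Sketch of the lifetime / popularity idea: every dataset gets a lifetime,
# and secondary (cache) replicas of unpopular data are cleaned from disk.

from datetime import datetime, timedelta

def disk_cleanup_candidates(datasets, now, idle_days=90):
    """Return names of datasets whose disk replicas can be reduced or removed."""
    candidates = []
    for ds in datasets:
        expired = ds["lifetime_end"] < now
        unpopular = (now - ds["last_access"]) > timedelta(days=idle_days)
        if expired or (ds["replica_type"] == "cache" and unpopular):
            candidates.append(ds["name"])
    return candidates

now = datetime(2014, 7, 8)
datasets = [
    {"name": "data12_8TeV.AOD", "replica_type": "cache",
     "last_access": datetime(2014, 1, 5), "lifetime_end": datetime(2015, 1, 1)},
    {"name": "mc14_13TeV.xAOD", "replica_type": "pinned",
     "last_access": datetime(2014, 7, 1), "lifetime_end": datetime(2016, 1, 1)},
]

print(disk_cleanup_candidates(datasets, now))   # ['data12_8TeV.AOD']
```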

Preparation for Run-2
 Alessandro will now present the work being done in preparation for Run-2
 We will have a 3-day Jamboree (Dec 3-5) at CERN focused on site-related aspects
   Placeholder agenda at