ATLAS in 2012 – Report from ATLAS to the LHCC (12.6.2012)

Presentation transcript:

ATLAS in 2012 – LHCC (slide 1): Report from ATLAS
– ATLAS Distributed Computing has been working at large scale, thanks to great efforts from shifters and experts. Automation and monitoring are essential. Networking is getting more and more important.
– Monte Carlo production for 8 TeV / high pileup is in full swing; ICHEP is an important milestone. Pileup causes a long time per event and, as everywhere, the production rate is limited despite access to unpledged CPU.
– Towards 2015 and beyond: planning / implementing the work to be done during LS1, in Distributed Computing as well as in Software (CPU, disk). Efficient resource usage is as important as having the resources available.

ATLAS in 2012 – LHCC (slide 2): 2012, LS1, towards LS2; LS3 Upgrade LoI
LHC in superb shape (again):
– Already collected 5/fb at 8 TeV; present slope 150/pb per day.
– Ten more days to go until the next Machine Development / Technical Stop.
– So hope for 6-7/fb at 8 TeV for ICHEP, in addition to the 5/fb at 7 TeV (a back-of-envelope projection is sketched below). We could use many more simulated events for this much real data.
The improvements during LS1 are the main focus of the present S&C week.
Between LS1 and LS2:
– Expect to run at … TeV, at a luminosity of 1.2×10^34 cm^-2 s^-1, with average pileup roughly as now (~25) and 25 ns bunch spacing.
– But if the SPS emittance can be improved early on, 2.2×10^34 could be reached even before LS2, with pileup ~48 (see Paul Collier's talk, ATLAS Week last week).
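For reference, the 6-7/fb hope quoted above is simply the collected dataset plus the present slope times the remaining days. A minimal sketch of that arithmetic (plain Python, written for this transcript; it uses only the numbers quoted on the slide):

```python
# Back-of-envelope projection of the 8 TeV dataset for ICHEP,
# using only the numbers quoted on the slide.
collected_fb = 5.0        # already collected at 8 TeV, in fb^-1
slope_pb_per_day = 150.0  # present delivery slope, in pb^-1 per day
days_remaining = 10       # days until the next Machine Development / Technical Stop

projected_fb = collected_fb + days_remaining * slope_pb_per_day / 1000.0  # pb -> fb
print(f"Projected 8 TeV dataset for ICHEP: ~{projected_fb:.1f} fb^-1")
# -> ~6.5 fb^-1, consistent with the slide's "hope for 6-7/fb"
```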

ATLAS in 2012 – LHCC (slide 3): Tier-0 / CERN Analysis Facility (slides prepared by I. Ueda)
[Plots: fast re-processing; physics recording average: 420 Hz prompt + 130 Hz delayed]

ATLAS in 2012 – LHCC (slide 4): Tier-0 / CERN Analysis Facility (slides prepared by I. Ueda)
[Plots: fast re-processing (Rolf Seuster); physics recording average: 420 Hz prompt + 130 Hz delayed]

ATLAS in 2012 – LHCC (slide 5): Grid Data Processing

ATLAS in 2012 – LHCC (slide 6): CVMFS
– CVMFS is becoming the only software deployment method.
– Importantly, nightly releases can now also be tested on the Grid (see the availability check sketched below).
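As an illustration of what relying on CVMFS means in practice, the sketch below checks whether the ATLAS CVMFS repository is visible on a worker node and lists what it exposes. The mount point /cvmfs/atlas.cern.ch is the standard repository name; anything probed beyond that is an illustrative assumption, not taken from the slide.

```python
# Minimal check that the ATLAS CVMFS repository is mounted on this node.
# Listing the directory triggers the CVMFS automount if needed.
import os

CVMFS_ATLAS = "/cvmfs/atlas.cern.ch"  # standard mount point of the ATLAS repository

def cvmfs_available(path: str = CVMFS_ATLAS) -> bool:
    """Return True if the repository directory is present and non-empty."""
    return os.path.isdir(path) and bool(os.listdir(path))

def list_top_level(path: str = CVMFS_ATLAS) -> list:
    """List the top-level entries of the repository (software areas, etc.)."""
    return sorted(os.listdir(path)) if os.path.isdir(path) else []

if __name__ == "__main__":
    if cvmfs_available():
        print("ATLAS CVMFS repository mounted; sample entries:", list_top_level()[:5])
    else:
        print("ATLAS CVMFS repository not visible on this node")
```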

ATLAS in 2012 – LHCC (slide 7): Distributed Data Management

ATLAS in 2012 – LHCC (slide 8): Disk usage [DATADISK plot]

ATLAS in 2012 – LHCC (slide 9): Disk usage [DATADISK plot]
Note:
– New DDM monitoring is taking shape; a pilot is in place.
– It is based on Hadoop (HDFS, Pig Latin, HBase) so that it stays scalable for a long time (an illustrative aggregation is sketched below).
– Hadoop is also being used in other ATLAS areas.
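To give an idea of the kind of query such monitoring answers, here is a plain-Python stand-in for a per-site transfer-volume aggregation; the real pipeline would express this in Pig Latin over HDFS and serve results from HBase, and the record fields and site names below are hypothetical.

```python
# Toy aggregation of DDM transfer events by destination site.
# Field names and site names are hypothetical, for illustration only;
# at scale this "group by site, sum bytes" would run as a Pig Latin job.
from collections import defaultdict

transfer_events = [
    {"dst_site": "SITE-A_DATADISK", "bytes": 4_000_000_000, "state": "done"},
    {"dst_site": "SITE-B_DATADISK", "bytes": 2_500_000_000, "state": "done"},
    {"dst_site": "SITE-A_DATADISK", "bytes": 1_000_000_000, "state": "failed"},
]

volume_per_site = defaultdict(int)
for ev in transfer_events:
    if ev["state"] == "done":          # count only successful transfers
        volume_per_site[ev["dst_site"]] += ev["bytes"]

for site, nbytes in sorted(volume_per_site.items()):
    print(f"{site}: {nbytes / 1e9:.1f} GB transferred")
```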

ATLAS in 2012 – LHCC (slide 10): Upgrade of S&C – ongoing work on one slide
Technical Interchange Meeting at Annecy, April:
– Data Management, Data Placement, Data Storage
– Production System, Group Production, Cloud Production
– Distributed Analysis, Analysis Tools
– Recent trends in Databases, Structured Storage
– Networking
S&C plenary week, June:
– Focus on upgrades, in distributed computing and in software (TDAQ and offline).
– How to make full use of future hardware architectures: implement parallel processing on multiple levels (event, partial event, between and within algorithms); see the toy event-level sketch below.
– Enormous potential for improving CPU efficiency, if with enormous effort.
– Beneficial to work with OpenLab, PH-SFT, IT-ES, CMS…
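To make the "multiple levels of parallelism" concrete, the sketch below shows only the coarsest level, event-level parallelism, with a toy per-event workload standing in for reconstruction; it is an illustration written for this transcript, not ATLAS software.

```python
# Toy illustration of event-level parallelism: independent events are
# processed on separate cores. Finer levels (within an event, between and
# within algorithms, vectorization) are not shown here.
from multiprocessing import Pool

def process_event(event_id: int) -> float:
    """Stand-in for per-event reconstruction: some CPU-bound work."""
    total = 0.0
    for i in range(1, 10_000):
        total += (event_id % 7 + i) ** 0.5
    return total

if __name__ == "__main__":
    events = range(1_000)
    with Pool(processes=4) as pool:   # one worker per core (example value)
        results = pool.map(process_event, events)
    print(f"Processed {len(results)} events")
```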

ATLAS in 2012 – LHCC (slide 11): CPU efficiency… talk by Andrzej Nowak / OpenLab, "The growth of commodity computing and HEP software – do they mix?"
– Gains from the different levels of parallelism are multiplicative (illustrated below); the lower levels are harder to exploit in software.
– Efficiency of CPU usage on new hardware: HEP reaches a few percent of the speedup gained by fully optimized code or by typical code.
– Omnipresent memory limitations hurt HEP and are to be overcome first.
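The "multiplicative" point can be illustrated with made-up round numbers; the factors below are assumptions for illustration, not measurements from the talk.

```python
# Illustrative only: assumed speedup factors at three levels of parallelism.
simd  = 4.0   # vectorization (assumed)
ilp   = 1.5   # instruction-level parallelism / pipelining (assumed)
cores = 8.0   # multi-core scaling (assumed)

print(f"Combined potential speedup: {simd * ilp * cores:.0f}x")  # -> 48x
# Code that exploits only one of these levels captures a small fraction of
# the total, which is the sense of the "few percent" remark above.
```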

ATLAS in 2012 – LHCC (slide 12): ATLAS resource request document to the CRRB (20 March)
– Scrutiny for 2013 was not severe, provided there will be no further decrease in October.
– Need to concentrate on 2014 and beyond (so far assuming 2 months of 14 TeV running during the WLCG year).