ATLAS Computing: Experience from first data processing and analysis Workshop TYL’10.

Presentation transcript:

ATLAS Computing: Experience from first data processing and analysis Workshop TYL’10

Outline
– Data collected by ATLAS in 2009–2010
– ATLAS data processing model
– Processing of ATLAS data
– Analysis of ATLAS data
– Report on the collaboration

Collected data
– 20 Nov – 23 Dec 2009: first physics run at √s = 900 GeV (a few hours at √s = 2.36 TeV); ATLAS recorded ~12 μb⁻¹, 0.5M events
– 16 Dec – 28 Feb: winter technical stop
– Since 30 March 2010: LHC running at √s = 7 TeV; first W and first Z candidates recorded

ATLAS Computing Model
Three sources of data:
– RAW data + first processing from the detector (Tier-0)
– Simulated data (Tier-2s)
– Data reprocessing + simulation reprocessing (Tier-1s)
[Plot: total data throughput through the Grid (MB/s per day), 1 January to 25 May 2010, showing the 2009 data reprocessing and MC reprocessing campaigns at ~2 GB/s (design rate) and the start of 7 TeV data taking with peaks near 6 GB/s; diagram of data flows between CERN, Lyon, Tokyo and the French Tier-2s]
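
The throughput figures quoted on this slide are easier to appreciate as daily data volumes. A minimal back-of-the-envelope sketch in Python (illustrative only, not part of the presentation) converting the two quoted rates into PB per day:

```python
# Convert the sustained Grid throughput rates quoted on the slide
# into the corresponding daily data volume.

def daily_volume_pb(throughput_gb_per_s: float) -> float:
    """Return the volume in PB moved per day at a given sustained rate in GB/s."""
    seconds_per_day = 86_400
    bytes_per_day = throughput_gb_per_s * 1e9 * seconds_per_day
    return bytes_per_day / 1e15

for label, rate in [("~2 GB/s (reprocessing design rate)", 2.0),
                    ("6 GB/s (peak during 7 TeV data taking)", 6.0)]:
    print(f"{label}: {daily_volume_pb(rate):.2f} PB/day")   # ~0.17 and ~0.52 PB/day
```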

Hardware resources in 2010 for ATLAS
– Lyon (Tier-1): CPU: HEP-SPEC06; Disk: 2.8 PB; Tape: 1.6 PB
– Tokyo (Tier-2): CPU: HEP-SPEC06; Disk: 1.0 PB
– FR Tier-2s (Irfu, LAL, LAPP, LPC, LPNHE): CPU: HEP-SPEC06; Disk: 1.4 PB
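
For a quick sense of the total capacity involved, a minimal sketch summing the disk and tape figures listed above (illustrative only; the CPU numbers are not included):

```python
# Aggregate the 2010 disk and tape figures quoted on the slide.
# CPU capacities (HEP-SPEC06) are not summed here.

disk_pb = {
    "Lyon Tier-1": 2.8,
    "Tokyo Tier-2": 1.0,
    "FR Tier-2s (Irfu, LAL, LAPP, LPC, LPNHE)": 1.4,
}
tape_pb = {"Lyon Tier-1": 1.6}

print(f"Total disk for ATLAS at the FR/JP sites: {sum(disk_pb.values()):.1f} PB")  # 5.2 PB
print(f"Total tape (Lyon Tier-1 only):           {sum(tape_pb.values()):.1f} PB")  # 1.6 PB
```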

Data reprocessing and distribution
– RAW data collected until April 2010 (~60 TB) reprocessed in 3 days; could still be faster
– Data distribution must keep pace with data production → >1.5 Gb/s for a site hosting 100% of the data (Tokyo, Paris)
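
As a sanity check on the quoted bandwidth requirement, a minimal sketch (assumption: a site hosting 100% of the data must absorb the ~60 TB of reprocessed data within the same ~3-day window in which it was produced; the slide only states the resulting >1.5 Gb/s figure):

```python
# Estimate the sustained network rate needed by a site hosting 100% of the
# reprocessed data, assuming the ~60 TB must arrive within the ~3-day
# reprocessing window quoted on the slide.

volume_bytes = 60e12          # ~60 TB of reprocessed data
window_s = 3 * 24 * 3600      # ~3 days

rate_gbps = volume_bytes * 8 / window_s / 1e9
print(f"Required sustained rate: {rate_gbps:.2f} Gb/s")   # ~1.85 Gb/s, consistent with >1.5 Gb/s
```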

Data distribution Lyon → Tokyo
– The data export rate from Lyon to Tokyo is similar to, or faster than, that from the Asian/American sites (BNL, TRIUMF, ASGC)
– Good results from the close Tokyo/France collaboration
– Data are available in Tokyo and the FR Tier-2s only a few hours after data production
[Network map: Lyon–Tokyo path over RENATER, GEANT and SINET via New York, 10 Gb/s, RTT = 300 ms; comparison sites: BNL (USA, Long Island), TRIUMF (Canada, Vancouver), ASGC (Taiwan)]
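
The 300 ms round-trip time is the key constraint on this 10 Gb/s path. A minimal bandwidth-delay-product calculation (illustrative only; the slide gives just the link speed and RTT) shows why TCP window tuning and parallel transfer streams matter on such a long-haul link:

```python
# Bandwidth-delay product of the Lyon-Tokyo path: the amount of data that
# must be "in flight" for a single stream to fill the 10 Gb/s pipe.

link_bps = 10e9   # 10 Gb/s from the slide
rtt_s = 0.300     # 300 ms round-trip time from the slide

bdp_bytes = link_bps * rtt_s / 8
print(f"Bandwidth-delay product: {bdp_bytes / 1e6:.0f} MB")   # ~375 MB per stream
```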

First analysis stress test: STEP09 (June 2009)
– Coordinated measurement of processing rates before data taking (2009)
– Tokyo and most FR sites were in the top 10
– Benefited from close discussions between site admins (e-mails, visits, monthly phone meetings)

Hosting data for physics groups
– Efficient/reliable sites identified during STEP09 → can take responsibility for hosting reduced data (usually ntuples of selected events) for the major physics/performance groups
– Opportunity to get data of interest for local teams
– The number of hosted groups scales with the size (CPU/storage) of the site
– Groups hosted in Tokyo: 6; in the French sites: 8; usually complementary

Data analysis
– In April and May: >900 users, 6.1 million successful jobs (plus ~30% more failed jobs)
– Activity is 2–3 times larger than in our pre-data exercises, although those earlier tests were essential preparation
– Real user analysis activity has started → optimisation of the analysis tools, benefiting from direct contact with the main developer (who is Japanese)
[Plot: analysis job activity, showing the x2–3 increase from STEP'09 to real data]
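
The failure figure quoted above ("~30% more failed") works out as follows; a quick arithmetic sketch, reading it as failed jobs = 30% of the successful ones:

```python
# Job statistics for April-May, as quoted on the slide.
successful = 6.1e6
failed = 0.30 * successful        # "30% more failed" read as 30% of the successful jobs
total = successful + failed

print(f"Failed jobs : {failed / 1e6:.2f} M")             # ~1.83 M
print(f"Total jobs  : {total / 1e6:.2f} M")              # ~7.93 M
print(f"Failure rate: {failed / total:.0%} of all jobs") # ~23%
```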

Conditions database access for analysis over the WAN
– Direct access from Tokyo to the Tier-1 Oracle database (RTT = 300 ms) limits a typical analysis job to ~1 evt/s
– With caching at the Tier-1 and Tier-2s, both Tokyo and the FR sites reach ~10 evt/s
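
A toy latency model reproduces the two quoted rates. The per-event CPU time and the number of synchronous database round trips per event below are assumptions chosen to match ~1 evt/s (direct WAN access) and ~10 evt/s (cached access); they are not from the presentation:

```python
# Toy model: event rate when each event costs some CPU time plus a number of
# synchronous conditions-DB round trips over the network.

def event_rate(cpu_s_per_evt: float, round_trips_per_evt: float, rtt_s: float) -> float:
    """Events per second for a job paying CPU time plus blocking DB round trips per event."""
    return 1.0 / (cpu_s_per_evt + round_trips_per_evt * rtt_s)

cpu = 0.1            # assumed CPU time per event (s) -> ~10 evt/s when DB access is local
round_trips = 3      # assumed synchronous DB round trips per event
rtt_tokyo = 0.300    # Tokyo -> Tier-1 Oracle RTT from the slide

print(f"Direct WAN access : {event_rate(cpu, round_trips, rtt_tokyo):.1f} evt/s")  # ~1 evt/s
print(f"Cached at T1/T2   : {event_rate(cpu, round_trips, 0.0):.1f} evt/s")        # ~10 evt/s
```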

Members
French group: E. Lançon* (CEA-Irfu), G. Rahal (IN2P3-CC), C. Biscarat (IN2P3-CC), D. Boutigny (IN2P3-CC), F. Hernandez (IN2P3-CC), J. Schwindling (CEA-Irfu), S. Jézéquel (IN2P3-LAPP)
Japanese group: T. Mashimo* (ICEPP), I. Ueda (ICEPP), H. Matsunaga (ICEPP), N. Matsui (ICEPP), J. Tanaka (ICEPP), T. Kawamoto (ICEPP)
(* leader)

Members (roles)
– The members cover all the different activities in ATLAS computing: central operations and production, Tier-1, Tier-2s
– Daily interactions with I. Ueda, to ensure that Tokyo has the same quality of service as the French sites and to share the cloud survey

Common meetings
– Tokyo (mainly I. Ueda) participates actively in the LCG-France meetings: annual meeting (global review) and monthly technical meetings
– Visits from ATLAS-France to Tokyo
– Participation in the ATLAS Asia-Pacific Computing workshop in Tokyo (December 2009)

Perspectives for 2010
– Continue the close collaboration between France and Japan: exchange of information between sites, common requests during ATLAS meetings
– Improve the attractiveness of the sites for Grid analysis, for both physics groups and individual users
– Adapt to the factor ~1000 increase in integrated luminosity (scalability issues)