
1 Grid: An LHC user point of view
S. Jézéquel (LAPP-CNRS/Université de Savoie)
First Chinese-French Workshop, 13 December 2006

2 Summary
First part: description of the Computing Model
Second part: effective Grid usage
● General remarks
● Experiment usage of the Grid: distributed MC production, distributed analysis

3 LHC experiments
Collision rate: 40 MHz
Detectors with ~10^6 electronic channels
pp or ion-ion collisions: high detector occupancy

4 Real data production
[Table: per-experiment rate [Hz], RAW [MB], ESD/rDST/RECO [MB], AOD [kB], Monte Carlo [MB/evt] and Monte Carlo % of real, for ALICE (pp and heavy ion), ATLAS, CMS and LHCb; most values are not recoverable from the transcript]
10^7 seconds of pp running per year (2008):
● ~2 × 10^9 events/year/experiment
● ~5 PB/year/experiment
● ~10 TB/day/experiment
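A rough cross-check of these volumes, under assumed values for the trigger rate and per-event size (the ~200 Hz and ~2.5 MB used below are illustrative, not taken from the table):

```python
# Back-of-the-envelope check of the yearly volumes quoted above.
# The trigger rate and per-event size are illustrative assumptions
# (roughly the ATLAS/CMS scale of the time), not values from the table.

seconds_per_year = 1e7        # effective pp running time per year (slide)
trigger_rate_hz = 200         # assumed rate to permanent storage
event_size_mb = 2.5           # assumed size per event, all formats combined

events_per_year = trigger_rate_hz * seconds_per_year         # ~2e9 events
volume_pb_per_year = events_per_year * event_size_mb / 1e9   # ~5 PB
volume_tb_per_day = volume_pb_per_year * 1e3 / 365           # ~14 TB/day

print(f"{events_per_year:.1e} events/year, "
      f"{volume_pb_per_year:.1f} PB/year, "
      f"{volume_tb_per_day:.0f} TB/day")
```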

5 Worldwide analysis
CMS, ATLAS, LHCb: ~5000 physicists around the world, around the clock
"Offline" software effort: 1000 person-years per experiment

6 Computing Center roles
[Diagram: Tier hierarchy — Tier 0 at CERN; Tier 1s (Germany, USA, UK, France, Italy, Taipei, SARA, Spain); Tier 2s at labs and universities serving regional groups; Tier 3s in physics departments and desktops for individual physics study groups]
Collaboration:
● LHC Experiments
● Grid projects: Europe, US
● Regional & national centres
Choices:
● Adopt Grid technology
● Go for a "Tier" hierarchy
Goal:
● Prepare and deploy the computing environment to help the experiments analyse the data from the LHC detectors

7 Computing Model: CMS
Tier-0:
● Accepts data from DAQ
● Prompt reconstruction
● Data archiving and distribution to Tier-1s
Tier-1s:
● Real data archiving
● Re-processing
● Skimming and other data-intensive analysis tasks
● Calibration
● MC data archiving
Tier-2s:
● User data analysis
● MC production
● Import skimmed datasets from Tier-1 and export MC data
● Calibration/alignment

8 CMS: T1-T2 association

9 CMS data transfer: PhEDEx
See G. Chen's presentation

10 ATLAS: Grid organisation — the "Tier Cloud Model"
[Diagram: CERN plus the Tier-1s (LYON, NG, BNL, FZK, RAL, CNAF, PIC, TRIUMF, SARA, ASGC), each with its associated Tier-2/Tier-3 sites, e.g. the LYON cloud (LPC, Tokyo, Beijing, Romania, GRIF, T3s), the BNL cloud (SWT2, GLT2, NET2, WT2, MWT2) and the ASGC cloud (TWT2, Melbourne)]
● VO box: dedicated computer to run the DDM services
● T1-T1 and T1-T2 associations follow the ATLAS Tier associations
● All Tier-1s have a predefined (software) channel with CERN and with each other Tier-1
● Tier-2s are associated with one Tier-1 and form a cloud
● Tier-2s have a predefined channel with their parent Tier-1 only
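The cloud structure can be pictured as a simple mapping from each Tier-1 to its Tier-2s; the sketch below (site names taken from the diagram, list not exhaustive) derives the transfer channels the model allows:

```python
# A minimal sketch of the "Tier Cloud Model" as a plain data structure.
# Cloud membership is taken from the diagram on this slide and is not a
# complete list of ATLAS sites.

CLOUDS = {
    "LYON": {"tier1": "CC-IN2P3", "tier2s": ["LPC", "Tokyo", "Beijing", "Romania", "GRIF"]},
    "BNL":  {"tier1": "BNL",      "tier2s": ["SWT2", "GLT2", "NET2", "WT2", "MWT2"]},
    "ASGC": {"tier1": "ASGC",     "tier2s": ["TWT2", "Melbourne"]},
}

def allowed_channels(cloud_name):
    """Transfer channels implied by the model: every T1 has a channel with CERN
    and with the other T1s; a T2 only has a channel with its parent T1."""
    cloud = CLOUDS[cloud_name]
    t1 = cloud["tier1"]
    channels = [("CERN", t1)]
    channels += [(other["tier1"], t1) for name, other in CLOUDS.items() if name != cloud_name]
    channels += [(t1, t2) for t2 in cloud["tier2s"]]
    return channels

print(allowed_channels("LYON"))
```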

11 ATLAS: T1-T2 links

12 ATLAS Computing Model
● Tier-0/1/2: roles similar to CMS
● Tier-3 / analysis farm: store user data, run user analysis and development
● No particular stream is attached to a T1 cloud
● Each cloud has a complete copy of the AODs

13 ATLAS: Data transfers
[Diagram: data flow T0 → T1 (IN2P3) → T2 (Beijing), with a DDM backup path]
● T0 or T1 (IN2P3): mass storage + critical services
● DDM: the ATLAS tool to manage transfers and to list files at cloud level
● Today's bottleneck: the dialogue between the transfers and the mass storage (~10% loss when heavily loaded, with painful recovery)
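DDM organises data in datasets that sites receive via subscriptions; the toy model below illustrates that idea only (class, method, dataset and site names are made up, not the real DQ2/DDM interface):

```python
# Toy model of the dataset-subscription idea behind DDM: a site asks for a
# dataset and the missing files are then copied there. Class, method, dataset
# and site names are illustrative, not the real DQ2/DDM interface.

class DatasetCatalog:
    def __init__(self):
        self.datasets = {}    # dataset name -> set of file identifiers
        self.replicas = set() # (site, file) pairs known to be present

    def register(self, dataset, files):
        self.datasets[dataset] = set(files)

    def missing_at(self, dataset, site):
        return [f for f in self.datasets[dataset] if (site, f) not in self.replicas]

    def fulfil_subscription(self, dataset, site):
        """Copy whatever is still missing at the site (real DDM would trigger
        grid transfers and retry on failure)."""
        for f in self.missing_at(dataset, site):
            self.replicas.add((site, f))

catalog = DatasetCatalog()
catalog.register("example.AOD.dataset", ["file-1", "file-2", "file-3"])
catalog.fulfil_subscription("example.AOD.dataset", "BEIJING-T2")
print(catalog.missing_at("example.AOD.dataset", "BEIJING-T2"))  # -> []
```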

14 ATLAS: First tests of T0 → T1 → T2 transfers
Challenge: make all grid components and hardware infrastructures work together
● End of July 2006, no other simultaneous transfers
● First operational EGEE cloud: LYON = CC-IN2P3
See G. Rahal's presentation

15 End-user life with the Grid: global remarks

16 Effective usage of the Grid
Each day a new failure of some Grid service occurs:
● Grid implementation (certificates, ...)
● Availability of services at sites
See H. Cordier's presentation

17 CMS: Example of failure rate
See A. Trunov's presentation

18 Grid: the global end-user view
● Grid tools have completely changed in 2 years and are still in development
● Tool development relies on very few people (→ delays)
Today's situation:
● Submission of jobs on the grid: almost OK
● Write and read access to data is not yet optimised to be transparent across the different storage technologies (CASTOR, dCache, DPM, ...)

19 First solutions during the debugging period
● Local experts have direct contacts with sites, especially the T1s, CERN and the T2s within the cloud (otherwise use GGUS, with a random response time)
● Press sites to provide reliable services
● ATLAS: recommend that users stick to a few sites at T1 level; close collaboration between IN2P3 (FR), FZK (DE) and BNL (US)

20 End-user life with the Grid: grid usage by ATLAS/CMS

21 ATLAS: dialogue between Athena and the Grid
ATLAS software: Athena (no direct interaction with the Grid)
Steps (sketched below):
● Check whether the local data can be accessed directly, or use Grid commands to transfer them locally
● Build the job options using information from the LFC catalogue
● Run the Athena job on the local files
● Copy the output files locally or to a grid SE
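A minimal sketch of this workflow, assuming the input files have to be staged in; the helper names and the catalogue lookup are placeholders rather than actual ATLAS tools, with lcg-cp and athena.py used as examples of the grid copy command and the Athena launcher:

```python
# Sketch of the per-job workflow listed above, assuming the input files have
# to be staged in. lookup_replicas is a placeholder for an LFC query; lcg-cp
# and athena.py are used as examples of the grid copy command and the Athena
# launcher.

import os
import subprocess

def lookup_replicas(lfn):
    """Placeholder: would query the LFC catalogue for the replica URLs of a
    logical file name."""
    raise NotImplementedError("LFC lookup not implemented in this sketch")

def stage_in(replica_url, local_dir="input"):
    """Copy a file locally with a grid command when direct access is not possible."""
    os.makedirs(local_dir, exist_ok=True)
    dest = os.path.abspath(os.path.join(local_dir, os.path.basename(replica_url)))
    subprocess.run(["lcg-cp", "--vo", "atlas", replica_url, "file:" + dest], check=True)
    return dest

def write_joboptions(local_files, path="myJobOptions.py"):
    """Athena job options are plain Python; point the input collection at the
    locally accessible files."""
    with open(path, "w") as jo:
        jo.write("from AthenaCommon.AppMgr import ServiceMgr\n")
        jo.write("ServiceMgr.EventSelector.InputCollections = %r\n" % local_files)
    return path

def run_analysis(lfns):
    local_files = [stage_in(lookup_replicas(lfn)[0]) for lfn in lfns]
    subprocess.run(["athena.py", write_joboptions(local_files)], check=True)
    # the output files would finally be copied to a local area or to a grid SE
```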

22 ATLAS: Production of simulated data
● The most reliable tool
● Production occurring at T2/T3 sites (also at T1s now)
● Output data are collected at the T1 of the cloud
● AODs are replicated to the other T1s
● ESD/AOD can be replicated to T2 sites

23 ATLAS MC Production system
[Diagram: tasks and jobs defined in the production database (ProdDB) are taken by the Eowyn supervisor (Python) and dispatched to grid-specific executors: Lexor and Lexor-CG for EGEE, Dulcinea for NorduGrid, PanDA for OSG, and an LSF executor for the Tier-0 (T0MS); data handling goes through DDM/DQ2 (Data Management)]
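The supervisor/executor split in the diagram can be illustrated with a toy loop: the supervisor pulls job definitions from the production database and hands each one to the executor of the targeted grid flavour (class names and the job record layout below are illustrative only):

```python
# Toy version of the supervisor/executor pattern shown in the diagram: the
# supervisor pulls job definitions from the production database and hands each
# one to the executor of the grid flavour it targets. Class names and the job
# record layout are illustrative only.

class Executor:
    """One executor per grid flavour (Lexor for EGEE, Dulcinea for NorduGrid,
    PanDA for OSG in the real system)."""
    def __init__(self, flavour):
        self.flavour = flavour

    def submit(self, job):
        print(f"submitting job {job['id']} to {self.flavour}")
        return "RUNNING"

class Supervisor:
    def __init__(self, executors):
        self.executors = executors        # flavour -> Executor

    def poll_proddb(self):
        """Placeholder for a query of the production database (ProdDB)."""
        return [{"id": 1, "flavour": "EGEE"}, {"id": 2, "flavour": "OSG"}]

    def cycle(self):
        for job in self.poll_proddb():
            state = self.executors[job["flavour"]].submit(job)
            print(f"job {job['id']}: {state}")  # real system writes state back to ProdDB

Supervisor({f: Executor(f) for f in ("EGEE", "NorduGrid", "OSG")}).cycle()
```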

24 ATLAS: Monitoring of MC production in the FR cloud

25 ATLAS: Simulation
The part of the production chain which is now able to use the Grid efficiently:
● needs one available CPU
● one input file
● few read/write accesses
● submission time << execution time
● needs only a few experts to run the production tools

26 ATLAS: Production rate
Recent evolutions:
● VOMS role (production) attributed to jobs
● Sites requested to give 80% priority to production jobs
[Plot: production rate versus time, with the end-2006 goal indicated]

27 ATLAS: MC production efficiency

28 Efficiency: CMS

29 Data Analysis on the Grid

30 Data analysis: mid-2006
● Work with simulated data representing a few days of data taking
● Use local copies of the data (the replication tool, DDM, is starting to be used)
● Analysis jobs are run locally, either interactively or within the local batch system

31 Requirements for data analysis on the Grid
● Analysis tools are used by physicists who are not grid-aware (the tools should work as prescribed in the twiki)
● Low-level grid commands, and the limitations due to the grid or the infrastructure, are unknown to them
● Simple and reliable tools are needed

32 Data analysis (2)
Key point: jobs are I/O intensive → they need fast access to the data
Local access to the data is better than access through Grid commands:
● much faster
● requires fewer resources on the disk servers (more concurrent accesses are possible)
It is more efficient to send jobs where the data are and to use local access protocols (see the sketch below):
● CMS: xrootd whenever possible
● ATLAS: 3 types of commands depending on the SE technology (CASTOR: rfio / DPM: rfio / dCache: dcap)
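A small sketch of that protocol choice: given the storage technology of the site hosting the file, build a URL for local access instead of copying the file with grid commands (the mapping mirrors the slide; the host and path are made-up examples):

```python
# Sketch of the protocol choice: pick the local access scheme from the storage
# technology of the hosting site rather than copying the file with grid
# commands. The mapping mirrors the slide; host and path are made-up examples.

ACCESS_PROTOCOL = {
    "castor": "rfio",   # CASTOR sites: rfio
    "dpm":    "rfio",   # DPM sites: rfio (DPM flavour)
    "dcache": "dcap",   # dCache sites: dcap
    "xrootd": "root",   # CMS preference: xrootd whenever possible
}

def local_access_url(se_technology, host, path):
    scheme = ACCESS_PROTOCOL[se_technology.lower()]
    return f"{scheme}://{host}{path}"

# e.g. a file served by the dCache pools of a T2 (hypothetical host and path):
print(local_access_url("dcache", "se.example-t2.org", "/pnfs/example-t2.org/atlas/aod.pool.root"))
```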

33 Data analysis: end 2006
Analysis outside CERN is starting (with the grid)
Mandatory: datasets (collections of files) must be gathered on sites
ATLAS: 2 user interfaces on the market (a Ganga-style session is sketched below):
● PanDA (made at BNL): working efficiently at BNL, first trials on LCG
● Ganga (made in Europe: LHCb/ATLAS): starting to be used by European users; IN2P3 is one of the few validation sites
CMS: CRAB, working at the T1 and T2s
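For illustration, a Ganga-style analysis submission might look roughly like the session below; it is a from-memory sketch meant to be run inside the ganga interpreter (where Job, Athena, DQ2Dataset and LCG are predefined), and the attribute names and dataset name are approximate or hypothetical rather than the exact API of the time:

```python
# A from-memory, Ganga-style sketch of an analysis submission; to be run inside
# the ganga interpreter, where Job, Athena, DQ2Dataset and LCG are predefined.
# Attribute names and the dataset name are approximate/hypothetical, not the
# exact Ganga API of the time.

j = Job()
j.application = Athena()                        # run an Athena analysis
j.application.option_file = "MyAnalysis_jobOptions.py"
j.inputdata = DQ2Dataset()                      # input described as a DDM dataset
j.inputdata.dataset = "example.recon.AOD"       # hypothetical dataset name
j.backend = LCG()                               # submit through the EGEE/LCG backend
j.submit()                                      # Ganga handles brokering and bookkeeping
```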

34 Distributed data analysis at T1s
Before data taking starts, it is possible to prepare data analysis at the T1s
Why start with the T1s?
● that is where the data are located nowadays
● availability of Grid experts
● huge CPU availability
● already organised to work 24/7
Coordinated work to share data: FZK, BNL and CC-IN2P3 (ATLAS)

35 Data analysis at T2s
One of the key T2 activities
● The Beijing T2 stores data on dCache servers, as also used by CC-IN2P3 → it can benefit directly from the CC-IN2P3 experience
Important to start analysis on T2 sites as soon as possible:
● to be ready for the data-taking period
● to create a common interest between local physicists and site managers

36 Conclusion
Grid tool usage today:
● becoming more and more popular
● not yet robust enough against intensive/chaotic usage
Simulation of LHC events:
● now performed only with the Grid
● next step: improve the efficiency, to speed up the production and to devote more resources to analysis
● active participation of the Beijing T2 is possible

37 Conclusion (2)
● Data useful to physicists are replicated to the T1s and T2s to enable distributed analysis (which data should be stored at the Beijing T2?)
● Distributed analysis at the T1s is becoming a reality
● Analysis at the T2s is starting: an opportunity to start now at the Beijing T2
● Being efficient with the Grid tools will be mandatory to participate actively in the LHC analysis groups

38 With a better network connection and more human involvement in the experiment/Grid interface, the Beijing T2 could:
● have a visible participation in the LHC grid activities
● enable competitive analysis in China, in collaboration with other sites