Tier-2  Data Analysis  MC simulation  Import data from Tier-1 and export MC data CMS GRID COMPUTING AT THE SPANISH TIER-1 AND TIER-2 SITES P. Garcia-Abia.

Tier-2  Data Analysis  MC simulation  Import data from Tier-1 and export MC data CMS GRID COMPUTING AT THE SPANISH TIER-1 AND TIER-2 SITES P. Garcia-Abia a, J.M. Hernández a, F. Martínez b, G. Merino b, M. Rodríguez b, F.J. Rodríguez-Calonge a a CIEMAT, Madrid, Spain b PIC, Barcelona, Spain Abstract CMS has chosen to adopt a distributed model for all computing in order to cope with the requirements on computing and storage resources needed for the processing and analysis of the huge amount of data the experiment will be providing from LHC startup. The architecture is based on a tier-organized structure of computing resources, based on a Tier-0 center at CERN, a small number of Tier-1centers for mass data processing, and a relatively large number of Tier-2 centers where physics analysis will be performed. The distributed resources are inter-connected using high-speed networks and are operated by means of Grid toolkits and services. We present in this poster, using the Spanish Tier-1 (PIC) and Tier-2 (federated CIEMAT-IFCA) centers as examples, the organization of the computing resource in CMS together with the CMS Grid Services, built on top of generic Grid Services, required to operate the resources and carry out the CMS workflows. The Spanish sites contribute with 5% of the CMS computing resources. We also present the current Grid-related computing activities performed at the CMS computing sites, like high-throughput and reliable data distribution, distributed Monte Carlo production and distributed data analysis, where the Spanish sites have traditionally played a leading role in development, integration and operation. CMS Computing Model Tier-0  Accepts data from DAQ  Prompt reconstruction  Archives and distributes data to Tier-1’s Tier-1  Real data archiving  Re-processing  Calibration  Skimming and other data- intensive analysis tasks  MC data archiving Data Transfer DB TMDB Data Book- keeping DB RefDB LCG replica catalogue User Interface MC Production McRunjob Analysis Tool CRAB Job Monitoring & Bookkeeping BOSS Data Location DB PubDB Resource Broker Grid Information System Computing Element Worker Node Storage Element Data Transfer PhEDEx Site Local Catalogue Application Database LocalGlobalLocal or remote Computing resource Interaction Job flow Data flow Workload Management System Data Management System CMS Computing Services/Workflows End 2005End 2006End 2007 CPU (kSI2k) Disk (TB) Tape (TB) WAN (Gbps)110 End 2005End 2006End 2007 CPU (kSI2k) Disk (TB) Tape (TB)--- WAN (Gbps)1110 Hardware Resources PIC Tier-1 Tier-2 Spain CIEMAT-IFCA  Data Distribution Computing Grid Activities Done with PhEDEx, the CMS large scale dataset replica management system. Managed, reliable, routed multi-hop transfers. Includes pre-staging data from tape Demonstrated sustained transfer rates of ~3TB/day to the Spanish sites.  Distributed Data Analysis Spanish sites host ~15% of CMS data (~15 Mio. events, ~15 TB) ~10K analysis jobs ran at the Spanish sites so far  Distributed Monte Carlo Production Leading role of the Spanish sites in MC production in LCG Grid in the areas of development, integration and operation ~ 15 Mio. 
Integration/test activities

LCG Service Challenge 3
- CMS goal: exercise the bulk data transfer and data serving infrastructure under realistic conditions.

SC3 throughput phase (July 2005)
- Goal: high-throughput transfers plus a storage system test.
- Sustained disk-to-disk rate of ~4 TB/day (50 MB/s average) for two weeks to the PIC Tier-1.

SC3 service phase (September-November 2005)
- Goal: concurrent data transfer and data access from storage.
- Test the Data Management components (data transfer system, data publication, data catalogues, storage systems) as well as the Workload Management components (job submission, monitoring, etc.) in large-scale real use.
- Automatic data publication for analysis as data get transferred.
- Automatic analysis job submission as data get published (the chain is sketched at the end of this page).
- 15 TB of data transferred to the Spanish sites.
- 10K jobs run at the Spanish sites (~20% of the CMS SC3 jobs run in LCG).
- 175 MB/s aggregate analysis-job read throughput at the PIC Tier-1 and 75 MB/s at the Spanish Tier-2.

For more information, contact: José M. Hernández, CIEMAT, Particle Physics Division, E-28040 Madrid, Spain
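The publish-then-submit chain exercised in the SC3 service phase can be pictured with a minimal sketch. Everything below (the Block and Site types, publish_block, submit_analysis_jobs, on_transfer_complete) is illustrative code of ours, not the real PhEDEx, PubDB or CRAB interfaces; it only shows the event-driven ordering: transfer completes, data are published, analysis jobs are submitted against the newly published data.

```python
# Minimal sketch of the SC3 service-phase chain: as data blocks finish
# transferring to a site they are published for analysis, and analysis jobs
# are then submitted against the published data. All names here are
# illustrative and are NOT the real PhEDEx, PubDB or CRAB interfaces.
from dataclasses import dataclass, field


@dataclass
class Block:
    name: str
    files: list[str]
    published: bool = False


@dataclass
class Site:
    name: str
    blocks: list[Block] = field(default_factory=list)


def publish_block(site: Site, block: Block) -> None:
    # Stand-in for publishing the transferred data in the local catalogue (PubDB-like step).
    block.published = True
    print(f"[{site.name}] published {block.name} ({len(block.files)} files)")


def submit_analysis_jobs(site: Site, block: Block) -> None:
    # Stand-in for CRAB-style job splitting: one job per file of the new block.
    for f in block.files:
        print(f"[{site.name}] submitted analysis job for {f}")


def on_transfer_complete(site: Site, block: Block) -> None:
    """Hypothetical callback fired when a block finishes transferring to a site."""
    site.blocks.append(block)
    publish_block(site, block)          # step 1: make the data visible for analysis
    submit_analysis_jobs(site, block)   # step 2: run analysis where the data now live


if __name__ == "__main__":
    pic = Site("PIC-Tier-1")
    on_transfer_complete(pic, Block("dataset-A#block1", ["f1.root", "f2.root"]))
```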