The CMS Top 5 Issues/Concerns wrt. WLCG services. WLCG-MB, April 3, 2007. Matthias Kasemann, CERN/DESY

CMS Top 5 WLCG services concerns (2/7)
Baseline Services (Rep.): CMS Priority A, high priority and mandatory:
1) Storage Element
   - Interfaces: SRM, gridFTP, POSIX-I/O, authentication/authorization/accounting
2) Basic data transfer tools
   - gridFTP and srmCopy
3) Reliable file transfer service
   - Wrapping basic gridFTP or srmCopy
   - Providing overall scheduling, control, monitoring as well as reliability (a minimal sketch follows this list)
6) Compute Element
   - Defined set of interfaces to local resources
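The "reliable file transfer service" under item 3 is, at its core, a retry/monitoring layer wrapped around the basic copy tools of item 2. The sketch below is illustrative only: it assumes globus-url-copy (the standard gridFTP client) is on the PATH with a valid grid proxy in the environment, and the two gsiftp URLs are placeholders; a production service such as FTS additionally does channel scheduling, state persistence and per-VO shares.

```python
# Minimal sketch of a "reliable transfer" wrapper around basic gridFTP.
# Illustrative only: the URLs are placeholders, and a real FTS does far more
# (channel scheduling, state persistence, per-VO shares, monitoring).
import subprocess
import time

def reliable_copy(src_url, dst_url, max_attempts=3, backoff_s=60):
    """Retry a basic gridFTP copy until it succeeds or attempts are exhausted."""
    for attempt in range(1, max_attempts + 1):
        # globus-url-copy is the standard gridFTP client; a valid grid proxy
        # is assumed to already exist in the environment.
        result = subprocess.run(
            ["globus-url-copy", src_url, dst_url],
            capture_output=True, text=True)
        if result.returncode == 0:
            print(f"transfer OK on attempt {attempt}")
            return True
        print(f"attempt {attempt} failed: {result.stderr.strip()}")
        time.sleep(backoff_s)
    return False

if __name__ == "__main__":
    reliable_copy(
        "gsiftp://source-se.example.org/store/file.root",  # placeholder source
        "gsiftp://dest-se.example.org/store/file.root")     # placeholder destination
```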

CMS Top 5 WLCG services concerns (3/7)
Baseline Services (Rep.): CMS Priority A, high priority and mandatory:
7) Workload Management
   - Overall WLMS essential for some experiments, but not all
8) VO agents
9) VOMS
   - VO management system providing extended proxies describing roles, groups, and sub-groups (a minimal sketch follows this list)
10) Database services
   - For catalogues, VOMS, FTS, as well as experiment-specific components
15) Information system
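Item 9 (VOMS) refers to grid proxies that carry VO attributes, expressed as FQANs for groups, roles and sub-groups. A minimal sketch, assuming the standard voms-proxy-init / voms-proxy-info clients and a configured user certificate are available; the "cms" VO and the production role shown are examples, not a statement of CMS policy:

```python
# Sketch: obtain a VOMS proxy carrying an extended attribute (FQAN) and inspect it.
# Assumes the standard VOMS command-line clients are installed and a user
# certificate is configured; the VO name and role below are illustrative.
import subprocess

def get_voms_proxy(vo="cms", fqan="/cms/Role=production"):
    # Request a proxy with the given role; prompts for the private-key passphrase.
    subprocess.run(["voms-proxy-init", "--voms", f"{vo}:{fqan}"], check=True)

def show_proxy_attributes():
    # Print the FQANs (groups/roles) embedded in the current proxy.
    out = subprocess.run(["voms-proxy-info", "--fqan"],
                         capture_output=True, text=True, check=True)
    print(out.stdout)

if __name__ == "__main__":
    get_voms_proxy()
    show_proxy_attributes()
```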

CMS Top 5 WLCG services concerns (4/7)
Baseline Services (Rep.): CMS Priority B, standard solutions required, but experiments could select different implementations:
3) Reliable file transfer service
   - Wrapping basic gridFTP or srmCopy
   - Providing overall scheduling, control, monitoring as well as reliability
4) Catalogue services
   - Interfaces: POOL, DLI/SI, POSIX-I/O
   - Including a reliable asynchronous update service (a catalogue-query sketch follows this list)
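To give a feel for the namespace side of the catalogue services under item 4, the sketch below queries an LFC-style catalogue directory. It is an assumption-laden illustration: it presumes the LFC command-line clients are installed and that the host name and path shown are placeholders, and it says nothing about the POOL or DLI/SI interfaces themselves.

```python
# Sketch: query a file catalogue namespace (here the LFC) for a directory listing.
# Assumes the LFC client tools are installed; host and path are placeholders.
import os
import subprocess

def list_catalogue_dir(lfc_host, path):
    env = dict(os.environ, LFC_HOST=lfc_host)  # LFC clients read the LFC_HOST variable
    out = subprocess.run(["lfc-ls", "-l", path],
                         capture_output=True, text=True, env=env)
    if out.returncode != 0:
        raise RuntimeError(out.stderr.strip())
    return out.stdout.splitlines()

if __name__ == "__main__":
    for entry in list_catalogue_dir("lfc.example.org", "/grid/cms"):
        print(entry)
```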

CMS Top 5 WLCG services concerns (5/7)
Baseline Services (Rep.): CMS Priority C, desirable to have common solutions, but not essential:
5) Catalogue and data management tools
11) POSIX-like I/O service
12) Application software installation
13) Job monitoring tools
14) Reliable messaging/information transfer service

CMS Top 5 WLCG services concerns (6/7)
The CMS Top 5 Issues & Concerns
1) Storage Element: CASTOR functionality and performance. Issues and concerns are:
   - The single CASTOR request queue is a potential bottleneck
   - Under the stress of data taking, the global priorities must work and there must be sufficient capacity in the system
   - Performance for use cases at CERN and at Tier-1 sites
   - Disk mover limitations; use of xrootd TBD
2) Storage Element: SRM interface with functionality and performance. CMS is especially interested in:
   - Name space interaction, which will allow CMS to alleviate the need for local VO services for data management (if it can be made to scale)
   - SRM explicit delete (a minimal namespace/delete sketch follows this slide)
   - Good support for SRM copy in push mode
   - Capability to control access rights consistently across different SEs using VOMS
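The SRM wishes above (namespace interaction, explicit delete) correspond to plain namespace operations against the storage element. A minimal sketch, assuming SRM v2.2 command-line clients such as srmls and srmrm are available together with a valid VOMS proxy; the SURL below is a placeholder, not a real CMS path:

```python
# Sketch: SRM namespace interaction and explicit delete against a storage element.
# Assumes SRM v2.2 command-line clients (srmls, srmrm) are installed and a valid
# VOMS proxy exists; the SURL below is a placeholder.
import subprocess

SURL = "srm://se.example.org:8443/srm/managerv2?SFN=/data/cms/file.root"  # placeholder

def srm_list(surl):
    # Namespace interaction: list the entry so the VO can verify what is on the SE.
    return subprocess.run(["srmls", surl], capture_output=True, text=True)

def srm_delete(surl):
    # Explicit delete: remove the file from the SE namespace and free the space.
    return subprocess.run(["srmrm", surl], capture_output=True, text=True)

if __name__ == "__main__":
    print(srm_list(SURL).stdout)
    result = srm_delete(SURL)
    print("delete return code:", result.returncode)
```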

CMS Top 5 WLCG services concerns (7/7)
The CMS Top 5 Issues/Concerns
3) FTS servers
   - Monitoring of transfers and queues
   - Backup servers
   - Heartbeat and similar "service-is-up" monitoring (a minimal probe sketch follows this slide)
4) Workload Management
   - Capable of fully utilizing available compute resources
   - Scaling to 200k jobs/day
   - Strategy to converge on a few (2-3) submission tools to be used as best fit
   - Candidates: gLite RB, Condor-G submission
   - We need to improve the transparency of job submission and diagnosis for the users
5) Scalability, reliability and redundancy of the information system
   - Mitigate the dependency on the central CERN BDII
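The "service-is-up" heartbeat asked for under item 3 can start as simply as a periodic probe of the FTS endpoint, and the 200k jobs/day target under item 4 corresponds to a sustained rate of roughly 2.3 job submissions per second. Below is a minimal sketch of such a probe in pure Python; the host name and port are placeholders, and a real heartbeat would also exercise the service interface rather than only opening a socket.

```python
# Sketch: a periodic "service-is-up" heartbeat for an FTS (or any web-service)
# endpoint. Illustrative only: host/port are placeholders, and a real probe
# would also exercise the service interface, not just open a TCP connection.
import socket
import time

def service_is_up(host, port, timeout_s=5):
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True
    except OSError:
        return False

def heartbeat(host, port, interval_s=300):
    while True:
        status = "UP" if service_is_up(host, port) else "DOWN"
        print(f"{time.strftime('%Y-%m-%d %H:%M:%S')} {host}:{port} {status}")
        time.sleep(interval_s)

if __name__ == "__main__":
    # Note: 200k jobs/day (item 4) is about 2.3 submissions per second sustained.
    heartbeat("fts.example.org", 8443)  # placeholder FTS endpoint
```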