D0 Use of OSG, June 10, 2008

Presentation transcript:

June 10, 2008   D0 Use of OSG
D0 relies on OSG for a significant fraction of its Monte Carlo simulation throughput, will use it again if another reprocessing is needed, and is testing analysis on the infrastructure. Average weekly OSG production over the past year has been 3.4M events; the goal is to increase this to 5.0M events. This use is expected to continue for more than the next 2-3 years. Efficiency is a large issue, both in terms of useful throughput and of effort.

June 10, 2008   Issues
The D0-OSG meeting raised several issues:
- Overall efficiency.
- Difficulty of mining Condor logs to diagnose problems on the D0 SAMGrid submission nodes.
- Regular collection of D0 accounting to compare and cross-check with OSG accounting information.
As a result, D0 now reports its successful throughput, together with the main issues, weekly to the OSG-accounting-info mail readers. For example, on May 30th: Purdue had a problem with the number of files allowed for DZero jobs, the only site with this problem; D0 stopped sending jobs there and a ticket was submitted; after negotiation the DZero file quota was raised, but production has not yet resumed. For troubleshooting, Jaime Frey of Condor is helping with understanding and diagnosing problems on the submission node. D0 now posts more monitoring information, which helps identify problem areas early. D0 has also identified that having local storage improves the efficiency of a site.
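The slide notes that mining Condor logs on the submission nodes is difficult. As an illustration only, here is a minimal Python sketch that scans a Condor user (job event) log and tallies event types to get a rough success rate; the log path and the success criterion (job terminated with return value 0) are assumptions for the example, not D0's actual tooling.

```python
import re
import sys
from collections import Counter

# Classic Condor user-log events start with a three-digit event code, e.g.
#   000 (1234.000.000) 06/10 12:34:56 Job submitted from host: ...
#   005 (1234.000.000) 06/10 13:05:01 Job terminated.
EVENT_RE = re.compile(r"^(\d{3}) \(\d+\.\d+\.\d+\)")

EVENT_NAMES = {
    "000": "submitted",
    "001": "executing",
    "005": "terminated",
    "009": "aborted",
    "012": "held",
}

def summarize(log_path):
    """Count job events in a Condor user log and spot clean terminations."""
    counts = Counter()
    clean_exits = 0
    last_event = None
    with open(log_path) as log:
        for line in log:
            match = EVENT_RE.match(line)
            if match:
                last_event = match.group(1)
                counts[EVENT_NAMES.get(last_event, last_event)] += 1
            # A "(return value 0)" line inside a 005 event marks a clean exit.
            elif last_event == "005" and "return value 0" in line:
                clean_exits += 1
    return counts, clean_exits

if __name__ == "__main__":
    # Hypothetical log location on a SAMGrid submission node.
    path = sys.argv[1] if len(sys.argv) > 1 else "d0_mc_jobs.log"
    counts, ok = summarize(path)
    submitted = counts.get("submitted", 0)
    print(dict(counts))
    if submitted:
        print("terminated with exit code 0: %d / %d submitted (%.1f%%)"
              % (ok, submitted, 100.0 * ok / submitted))
```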

June 10, 2008   Per-site statistics

Site                          | Number of Local Jobs | Code Application Efficiency | Use Local Storage | Overall Efficiency | Notes
grid1.oscer.ou.edu            |                      |                             | N                 |                    |
tier2-01.ochep.ou.edu         |                      |                             | N                 |                    |
iut2-grid6.iu.edu             |                      |                             | Y                 |                    |
msu-osg.aglt2.org             | 491                  | None                        | Y                 |                    | down due to power problems
caps10.phys.latech.edu        |                      |                             | N                 | 0.098              |
abitibi.sbgrid.org            |                      |                             | N                 | 0.006              |
condor1.oscer.ou.edu          |                      |                             | N                 |                    |
ouhep0.nhn.ou.edu             |                      |                             | Y                 | 0.609              |
pg.ihepa.ufl.edu              |                      |                             | N                 |                    |
hg.ihepa.ufl.edu              |                      |                             | N                 | 0.226              |
umiss001.hep.olemiss.edu      |                      |                             | N                 | 0.309              |
cit-gatekeeper.ultralight.org | 642                  | None                        | N                 | 0.000              |
osg1.loni.org                 |                      |                             | N                 |                    |
red.unl.edu                   |                      |                             | Y                 | 0.146              | authentication problem, since fixed
antaeus.hpcc.ttu.edu          |                      |                             | N                 | 0.098              |
d0cabosg2.fnal.gov            |                      |                             | Y                 | 0.718              |
osg-ce.sprace.org.br          |                      |                             | N                 | 0.152              | not sure if local storage is available to DZero because of CMS activities
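The later slides argue that sites with a local Storage Element run roughly twice as efficiently. A small Python sketch of that comparison follows; it uses only the overall-efficiency values that survived in the transcript of the table above (the job-count and application-efficiency columns were largely lost), so the result is indicative only.

```python
# Sites from the table above for which an overall efficiency is available;
# values are (uses_local_storage, overall_efficiency).
SITES = {
    "caps10.phys.latech.edu":        (False, 0.098),
    "abitibi.sbgrid.org":            (False, 0.006),
    "ouhep0.nhn.ou.edu":             (True,  0.609),
    "hg.ihepa.ufl.edu":              (False, 0.226),
    "umiss001.hep.olemiss.edu":      (False, 0.309),
    "cit-gatekeeper.ultralight.org": (False, 0.000),
    "red.unl.edu":                   (True,  0.146),
    "antaeus.hpcc.ttu.edu":          (False, 0.098),
    "d0cabosg2.fnal.gov":            (True,  0.718),
    "osg-ce.sprace.org.br":          (False, 0.152),
}

def mean(values):
    """Arithmetic mean, or 0.0 for an empty list."""
    return sum(values) / len(values) if values else 0.0

with_se = [eff for has_se, eff in SITES.values() if has_se]
without_se = [eff for has_se, eff in SITES.values() if not has_se]

print("mean overall efficiency, sites with local storage:    %.3f" % mean(with_se))
print("mean overall efficiency, sites without local storage: %.3f" % mean(without_se))
```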

June 10, 2008   Efficiency vs Number of Jobs (plot not reproduced in the transcript)
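As a rough illustration of how a per-site scatter plot like this one could be produced from the table above, a matplotlib sketch is shown below; matplotlib and the function name are assumptions for the example, not the tool actually used for the slide.

```python
import matplotlib.pyplot as plt

def plot_efficiency_vs_jobs(site_stats, output="efficiency_vs_jobs.png"):
    """Scatter-plot overall efficiency against the number of jobs per site.

    site_stats: iterable of (site_name, number_of_jobs, overall_efficiency).
    """
    names = [s[0] for s in site_stats]
    jobs = [s[1] for s in site_stats]
    eff = [s[2] for s in site_stats]

    fig, ax = plt.subplots()
    ax.scatter(jobs, eff)
    for name, x, y in zip(names, jobs, eff):
        ax.annotate(name, (x, y), fontsize=6)
    ax.set_xlabel("Number of local jobs")
    ax.set_ylabel("Overall efficiency")
    ax.set_title("Efficiency vs Number of Jobs")
    fig.savefig(output, dpi=150)

if __name__ == "__main__":
    # Only one site in the transcript retained both a job count and an
    # overall efficiency; a real run would pass the full per-site list.
    plot_efficiency_vs_jobs([("cit-gatekeeper.ultralight.org", 642, 0.000)])
```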

June 10, 2008   Request for allocation of Local Storage
The statistics suggest that efficiency increases by about a factor of two when there is a local, SRM-interfaced Storage Element on the site LAN, to which D0 data can be moved and then accessed by the application on the local worker nodes through GridFTP. The space needed is about 300 Gigabytes per site; D0 then manages this space as part of its job submissions. This has been tested with dCache SEs, should work with BeStMan and xrootd, and D0 is happy to test with these if storage is available.
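As a rough illustration of the pattern described here (not D0's actual SAMGrid tooling), a worker-node job could pull its input over GridFTP from the site-local SE and fall back to a remote server only if that fails. All hostnames, paths, and file names below are hypothetical.

```python
import os
import subprocess

# Hypothetical endpoints: a GridFTP door in front of the site-local SE and a
# remote fallback; real D0 jobs obtain these locations from SAMGrid.
LOCAL_SE = "gsiftp://se.example-site.edu/dzero/cache"
REMOTE_SE = "gsiftp://dcache-door.example.fnal.gov/pnfs/dzero"

def fetch_input(filename, dest_dir="."):
    """Copy one input file to the worker node, preferring the local SE."""
    dest = os.path.abspath(os.path.join(dest_dir, filename))
    for base in (LOCAL_SE, REMOTE_SE):
        cmd = ["globus-url-copy",
               "%s/%s" % (base, filename),
               "file://" + dest]
        # globus-url-copy returns 0 on a successful transfer.
        if subprocess.call(cmd) == 0:
            return base
    raise RuntimeError("could not fetch %s from any SE" % filename)

if __name__ == "__main__":
    source = fetch_input("mc_minbias_input.tar.gz", dest_dir="/tmp")
    print("fetched input from", source)
```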

June 10, 2008   Request to Council
Are there additional sites where D0 can run efficiently? Are there additional sites that can allocate and support D0 local and/or opportunistic storage?