US-CMS Tier 2 Report
Patricia McBride, Fermilab
GDB Meeting, August 31, 2007, TRIUMF, Vancouver

US Tier 2s for CMS
There are seven CMS Tier-2 centers in the US. The US Tier-2s are hosted at universities and are OSG sites. Funding comes through the US-CMS Software and Computing Project. L2 coordinator for the US-CMS T2s: Ken Bloom, University of Nebraska.

Current US CMS T2 Resources
(Table of per-site resources: CPU in kSI2K, disk in TB, and WAN bandwidth in Gb/s for Caltech, Florida, MIT, Nebraska, Purdue, UCSD, and Wisconsin.)

US-CMS Tier-2s
In 2008 we expect all US T2 sites to be operating with 1M SI2K of CPU, 200 TB of disk, and a 10 Gbit/s WAN, nearly double the current resource levels. Note: in CMS, about half of the T2 resources are dedicated to simulation and the other half to analysis activities.
Some details: all US T2 sites run SL4 with Condor (or PBS) for batch processing. Disk pools are managed by dCache, with SRM as the interface for remote data transfers. Gratia provides the accounting services. CMSSW software and CMS services such as PhEDEx are installed at the sites.
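For concreteness, here is a minimal sketch of the kind of CMSSW job an analysis slot at one of these sites would run. It is an illustration only, written in the Python configuration format of later CMSSW releases; the analyzer name and input file path are placeholders, not anything from the report.

```python
# Illustrative CMSSW configuration sketch (placeholder analyzer and input path;
# not taken from the report).
import FWCore.ParameterSet.Config as cms

process = cms.Process("T2Analysis")

# Read a dataset file hosted in the site's dCache-managed storage.
process.source = cms.Source("PoolSource",
    fileNames = cms.untracked.vstring("file:/store/placeholder/dataset.root")
)

# Keep the sketch short: process only 100 events.
process.maxEvents = cms.untracked.PSet(input = cms.untracked.int32(100))

# "ExampleAnalyzer" stands in for a user analysis module.
process.analysis = cms.EDAnalyzer("ExampleAnalyzer")
process.p = cms.Path(process.analysis)
```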

US Tier-2 coordination and operations
T2 sites are operated by the university groups:
- Close connection to the local CMS user community
- Local management of the facilities and purchases
- Coordination through the US CMS S&C project
Each T2 site has 2 FTEs for operations; this includes 1 FTE for operation of the storage system.
Good communication and planning have paid off:
- The T2s collaborate closely with Fermilab and the FNAL T1 group.
- There is a US T2 meeting every 2 weeks.
- There seems to be good coordination among the sites.
T2s provide help for US Tier-3 sites and for other OSG sites (Brazil, for example). The sites have often done early deployments and served as testers for upgrades to CMS and OSG software and services.

US-CMS T2 in CSA06
US-CMS T2 sites were leaders during CSA06 in the number of jobs hosted. Note the high success rates for these jobs.

Monte Carlo production for CSA07
US Tier-2 sites are making a major contribution to CMS MC event production: 66M of 196M events (roughly one third) in recent months. Expectation for 2007: 50M events per month across all T2 sites. This has been achieved.

Commissioning
The CMS Computing Model calls for analysis dataset transfers from Tier-1 to non-regional Tier-2s.
- The CMS commissioning team has set up a program of work to debug these links and bring them to production quality.
Metric for a commissioned T1-to-T2 link: 4 out of 5 days above 300 GB/day, and in excess of 1.7 TB over the 5 days. Links will be "decommissioned" (removed from production) after 7 days with <300 GB/day. Only commissioned links will be used in production for CSA07.
- This is a work in progress, and it has been vacation season. Stay tuned.
The seven US-CMS Tier-2 sites are commissioned and are used in production.
- All US Tier-2 links for transfers to/from the FNAL Tier-1 have been commissioned.
- These sites were used extensively in CSA06 and in pre-CSA07 MC production.
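For illustration, the commissioning metric above can be written as a simple check over daily transfer volumes. The thresholds (300 GB/day, 1.7 TB over a 5-day window, 7 days below threshold for decommissioning) come from the slide; the function names and the exact way the two conditions are combined are assumptions, not the actual CMS commissioning tooling.

```python
# Sketch of the link-commissioning metric described above (assumed logic,
# not the actual CMS commissioning scripts).

DAILY_THRESHOLD_GB = 300      # per-day volume a "good" day must exceed
WINDOW_TOTAL_GB = 1700        # ~1.7 TB required over the 5-day window

def link_commissioned(daily_gb):
    """daily_gb: transferred volume (GB) for the last 5 days on one T1-to-T2 link."""
    good_days = sum(1 for v in daily_gb if v > DAILY_THRESHOLD_GB)
    return good_days >= 4 and sum(daily_gb) > WINDOW_TOTAL_GB

def link_decommissioned(daily_gb):
    """daily_gb: transferred volume (GB) for the last 7 days on one link."""
    return all(v < DAILY_THRESHOLD_GB for v in daily_gb)

# Example: four good days plus one slow day, 1.75 TB total -> commissioned.
print(link_commissioned([400, 350, 500, 450, 50]))          # True
# Example: seven consecutive days below 300 GB -> decommissioned.
print(link_decommissioned([100, 80, 0, 120, 90, 10, 60]))   # True
```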

CMS Site Availability Testing
Hourly monitoring of sites with CMS SAM tests has improved site availability for CMS. A CMS SAM "green" result correlates with a high job success rate at the site. US T2 sites normally have good availability rankings, above 80%. (The availability plot includes OSG/CMS sites in Brazil.)
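As a rough illustration, an availability figure like the >80% quoted above can be thought of as the fraction of monitored hours in which the site's SAM tests came back green. The representation and function below are an assumed simplification, not the actual SAM or dashboard code.

```python
# Assumed illustration: availability as the fraction of hourly SAM results
# in which the site was "green" (all critical tests passed).

def availability(hourly_green):
    """hourly_green: list of booleans, one per monitored hour (True = green)."""
    if not hourly_green:
        return 0.0
    return sum(hourly_green) / len(hourly_green)

# Example: 20 of 24 hours green on a given day -> ~83% availability.
day = [True] * 20 + [False] * 4
print(f"{availability(day):.0%}")  # 83%
```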

Data Transfers to US-CMS T2 Centers
Snapshot of daily transfers.

Data transfer rates to T2s
The average daily transfer rate to US T2 sites is ~80 MB/s. These rates are dominated by transfers from Fermilab. Target: MB/s, depending on the link.
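For scale, here is a back-of-the-envelope conversion of the quoted ~80 MB/s aggregate rate into a daily volume; the even split across the seven sites is purely an illustrative assumption.

```python
# Back-of-the-envelope conversion of the quoted average rate (even split
# across the seven US T2 sites is assumed only for illustration).

AVG_RATE_MB_S = 80            # average aggregate rate to US T2s, from the slide
SECONDS_PER_DAY = 86_400
NUM_US_T2_SITES = 7

daily_total_gb = AVG_RATE_MB_S * SECONDS_PER_DAY / 1000      # ~6,900 GB/day
per_site_gb = daily_total_gb / NUM_US_T2_SITES               # ~990 GB/day/site

print(f"aggregate: ~{daily_total_gb:,.0f} GB/day")
print(f"per site (even split): ~{per_site_gb:,.0f} GB/day")
```

Even under this naive split, each site averages well above the 300 GB/day figure used as the link-commissioning threshold earlier in the talk.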

Transfers: sources to all US Tier-2s
Most transfers to US T2 sites are from FNAL, but CMS has started to commission links to the T2 sites from other (non-regional) T1 sites. (Plots: all transfers to US T2s; transfers from non-regional T1s only.)

User analysis at the Tier-2s
The number of jobs hosted at the seven US CMS T2s is routinely more than 1k/day. We expect the number of jobs to increase during CSA07, when more MC datasets will be hosted at T2 sites for analysis. (Users needed!)

Data hosting / data managers
Each T2 site will have a data manager who decides which datasets are hosted at the site.
- Users make requests, which are honored if space is available.
T2 disk is regarded as a cache; datasets will not reside permanently at a T2. The T2 system is designed for frequent replacement of datasets, driven by the needs of the local users or the physics groups. (Table: data managers at the US Tier-2 sites.) Coordinating the placement of skim datasets at the US T2s will be tested in CSA07.

Summary
The US Tier-2 sites for CMS have been working well. In early 2008, we expect all sites to be operating at the full capacity required for startup. For more details on US CMS T2 computing, see the CHEP poster by Kenneth Bloom, "US CMS Tier-2 Computing". The US Tier-2 facilities are well organized, and there are good communication paths among the sites and to the Tier-1 at Fermilab.
- We followed this example and created a new position in the CMS computing organization: the T2 liaison. Ken Bloom and Giuseppe Bagliesi are now the liaisons to the T2 sites around the globe.