BNL Wide Area Data Transfer for RHIC & ATLAS: Experience and Plans Bruce G. Gibbard CHEP 2006 Mumbai, India.

Presentation transcript:

BNL Wide Area Data Transfer for RHIC & ATLAS: Experience and Plans Bruce G. Gibbard CHEP 2006 Mumbai, India

CHEP 2006 Mumbai, India Bruce G. Gibbard 2 Introduction §The scale of computing required by modern High Energy and Nuclear Physics experiments cannot be met by a single institution, funding agency, or even country §Grid computing, integrating widely distributed resources into a seamless facility, is the solution of choice §A critical aspect of such Grid computing is the ability to move massive data sets over great distances in near real time l High bandwidth wide area transfer rates l Long term sustained operations

CHEP 2006 Mumbai, India Bruce G. Gibbard 3 Specific Needs at Brookhaven §HENP Computing at BNL l Tier 0 center for the Relativistic Heavy Ion Collider – RHIC Computing Facility (RCF) l US Tier 1 center for the ATLAS experiment at the CERN LHC – ATLAS Computing Facility (ACF) §RCF requires data transfers to collaborating facilities l Such as the RIKEN center in Japan §ACF requires data transfers from CERN and on to ATLAS Tier 2 centers (universities) l Such as Boston, Chicago, Indiana, Texas/Arlington

CHEP 2006 Mumbai, India Bruce G. Gibbard 4 BNL Staff Involved §Those involved in this work at BNL were members of the RHIC and ATLAS Computing Facility and the PHENIX and ATLAS experiments §Not named here, there were of course similar contributing teams at the far ends of these transfers: CERN, RIKEN, Chicago, Boston, Indiana, Texas/Arlington M. Chiu W. Deng B. Gibbard Z. Liu S. Misawa D. Morrison R. Popescu M. Purschke O. Rind J. Smith Y. Wu D. Yu

CHEP 2006 Mumbai, India Bruce G. Gibbard 5 PHENIX Transfer of Polarized Proton Data to the RIKEN Computing Facility (CCJ) in Japan §Near Real Time l In particular, data is not staged to tape first, so no added tape retrieval is required l Transfers should end very shortly after the end of the RHIC run §Part of the RHIC Run in 2005 (~270 TB) §Planned again for the RHIC Run in 2006
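To give a sense of the sustained rate such near-real-time streaming implies, the following minimal sketch divides the ~270 TB transferred in 2005 by an assumed run length; the run duration here is an illustrative assumption, not a figure from the talk.

```python
# Back-of-the-envelope rate implied by the 2005 PHENIX transfer volume.
# The ~270 TB figure comes from the slide above; the run length is an
# illustrative assumption.

TOTAL_TB = 270            # polarized proton data shipped to the RIKEN facility in 2005
ASSUMED_RUN_DAYS = 90     # assumption: roughly a three-month run

seconds = ASSUMED_RUN_DAYS * 24 * 3600
avg_mb_per_s = TOTAL_TB * 1e6 / seconds   # 1 TB = 1e6 MB (decimal units)

print(f"~{avg_mb_per_s:.0f} MB/s average needed to keep pace in near real time")
# With these assumptions: roughly 35 MB/s sustained; peak rates would be higher.
```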

CHEP 2006 Mumbai, India Bruce G. Gibbard 6

CHEP 2006 Mumbai, India Bruce G. Gibbard 7 Typical Network Activity During PHENIX Data Transfer

CHEP 2006 Mumbai, India Bruce G. Gibbard 8

CHEP 2006 Mumbai, India Bruce G. Gibbard 9 For ATLAS, (W)LCG Exercises §Service Challenge 3 l Throughput Phase (WLCG and computing sites develop, tune and demonstrate data transfer capacities): July ’05; rerun in Jan ’06 §Service Challenge 4 l To begin in April 2006

CHEP 2006 Mumbai, India Bruce G. Gibbard 10 BNL ATLAS dCache/HPSS Based SE [Diagram: dCache system with DCap, SRM and GridFTP doors; read and write pools; PnfsManager and PoolManager; control and data channels to DCap, SRM and GridFTP clients, including the Oak Ridge batch system; HPSS tape back end]
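As a rough illustration of how clients reach this storage element through its different doors, here is a sketch of typical dccp, globus-url-copy and srmcp invocations wrapped in Python; the hostnames, ports and paths are hypothetical placeholders, not BNL's actual endpoints.

```python
# Illustrative only: the same dCache-resident file reached via the SE's
# three door types. All endpoints and paths below are hypothetical.
import subprocess

PNFS_PATH = "/pnfs/usatlas.example.bnl.gov/data/example.root"   # hypothetical
LOCAL_URL = "file:///tmp/example.root"

commands = [
    # DCap door: POSIX-like access, typically for local batch jobs
    ["dccp", f"dcap://door.example.bnl.gov:22125{PNFS_PATH}", "/tmp/example.root"],
    # GridFTP door: wide area transfers (e.g. driven by FTS)
    ["globus-url-copy", f"gsiftp://door.example.bnl.gov{PNFS_PATH}", LOCAL_URL],
    # SRM door: managed transfers, aware of the HPSS tape back end
    ["srmcp", f"srm://srm.example.bnl.gov:8443{PNFS_PATH}", LOCAL_URL],
]

for cmd in commands:
    print("would run:", " ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment on a host with these clients installed
```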

CHEP 2006 Mumbai, India Bruce G. Gibbard 11 Disk to Disk Phase of SC3 §Transfer rates of up to 150 MB/sec were achieved during early standalone operations §This despite the fact that FTS (the transfer manager) failed to properly support dCache SRMCP, degrading the performance of the BNL Tier 1 dCache-based storage element

CHEP 2006 Mumbai, India Bruce G. Gibbard 12 Overall CERN Operations During Disk to Disk Phase §Saturation of the network connection at CERN required throttling of individual site performance

CHEP 2006 Mumbai, India Bruce G. Gibbard 13 Disk to Tape Phase

CHEP 2006 Mumbai, India Bruce G. Gibbard 14 dCache Activity During Disk to Tape Phase §Tape Writing Phase l Green indicates incoming data l Blue indicates data being migrated out to HPSS, the tape storage system §Rates of MBytes/sec were sustained

CHEP 2006 Mumbai, India Bruce G. Gibbard 15 SC3 T1 – T2 Exercises §Transfers to 4 Tier 2 sites (Boston, Chicago, Indiana, Texas/Arlington) resulted in aggregate rates of up to 40 MB/sec, but typically ~15 MB/sec and quite inconsistent §Tier 2 sites only supported GridFTP on classic storage elements and were not prepared to support sustained operations

CHEP 2006 Mumbai, India Bruce G. Gibbard 16 Potential Network Contention §BNL has been operating with an OC48 ESnet WAN connection and 2 x 1 Gb/sec connectivity to the ATLAS/RHIC network fabric [Chart: PHENIX sustained transfer to RIKEN CCJ shown alongside ATLAS Service Challenge test traffic]
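A quick unit conversion shows why contention was a concern; the sketch below converts the shared 2 x 1 Gb/sec path to MB/sec and subtracts the 150 MB/sec SC3 disk-to-disk figure from slide 11, treating the remainder as what PHENIX traffic could get (the split itself is illustrative).

```python
# Rough headroom estimate for the shared 2 x 1 Gb/s path. Link capacity
# and the SC3 rate are from the slides; the implied PHENIX share is not.

link_mb_per_s = 2 * 1e9 / 8 / 1e6   # 2 x 1 Gb/s ~= 250 MB/s (decimal units)
sc3_disk_to_disk = 150              # MB/s reached in early SC3 standalone tests

print(f"shared capacity : ~{link_mb_per_s:.0f} MB/s")
print(f"left for PHENIX : ~{link_mb_per_s - sc3_disk_to_disk:.0f} MB/s "
      "when ATLAS service challenge traffic peaks")
```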

CHEP 2006 Mumbai, India Bruce G. Gibbard 17 Network Upgrade §ESnet OC48 WAN connectivity is being upgraded to 2 x §BNL site connectivity from the border router to the RHIC/ATLAS facility is being upgraded to redundant 20 Gb/sec paths §Internally, in place of the previous channel bonding l ATLAS switches are being redundantly connected at 20 Gb/sec l RHIC switches are being redundantly connected at 10 Gb/sec §All upgrades will be complete by the end of this month
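For scale, a rough conversion of the upgraded link speeds into the MB/sec units used for the transfer goals (ignoring protocol overhead; the 200 MB/sec target is taken from the SC4 slide below).

```python
# Illustrative unit conversion: upgraded link capacities vs. the SC4
# disk-to-tape target. Ignores protocol overhead and competing traffic.

def gbps_to_mb_per_s(gbps):
    """Convert Gb/s to MB/s using decimal units."""
    return gbps * 1e9 / 8 / 1e6

for name, gbps in [("ATLAS switch uplinks (redundant)", 20),
                   ("RHIC switch uplinks (redundant)", 10)]:
    print(f"{name}: {gbps} Gb/s ~= {gbps_to_mb_per_s(gbps):.0f} MB/s")

sc4_target = 200  # MB/s, CERN disk to BNL tape (slide 19)
print(f"SC4 target is {sc4_target / gbps_to_mb_per_s(20):.0%} of a 20 Gb/s path")
```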

CHEP 2006 Mumbai, India Bruce G. Gibbard 18 RHIC/PHENIX Plans ‘06 §RHIC will run again this year with polarized protons, and so the data will again be transferred to the RIKEN center in Japan §Data taking rates will be somewhat higher, with a somewhat better duty factor, so the transfer may have to support rates as much as a factor of two higher §Such running is likely to begin in early March §Expect to use SRM for transfers rather than just GridFTP, for additional robustness
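The robustness argument for SRM over bare GridFTP is essentially about managed, retryable transfers; the following minimal sketch shows the kind of retry wrapper this enables. The srmcp command line, endpoints and retry policy are illustrative assumptions, not the PHENIX production tooling.

```python
# Minimal sketch of retry-wrapped SRM transfers; endpoints are hypothetical.
import subprocess
import time

def transfer_with_retry(src_url, dst_url, attempts=3, backoff_s=60):
    """Attempt an srmcp transfer a few times before giving up."""
    for attempt in range(1, attempts + 1):
        result = subprocess.run(["srmcp", src_url, dst_url])
        if result.returncode == 0:
            return True
        print(f"attempt {attempt} failed (rc={result.returncode}), "
              f"retrying in {backoff_s}s")
        time.sleep(backoff_s)
    return False

# Hypothetical BNL -> RIKEN CCJ endpoints:
transfer_with_retry(
    "srm://srm.example.bnl.gov:8443/pnfs/phenix/run6/raw/file0001.dat",
    "srm://srm.example.riken.jp:8443/pnfs/ccj/phenix/run6/raw/file0001.dat",
)
```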

CHEP 2006 Mumbai, India Bruce G. Gibbard 19 WLCG Service Challenge 4 §Service challenge transfer goals are the nominal real transfer rates required by ATLAS to the US Tier 1 in the first years of LHC operation l 200 MB/sec (Disk at CERN to Tape at BNL) l Disk to Disk to begin in April, with Disk to Tape to follow as soon as possible l BNL Tier 1 expects to be ready with its new tape system in April to do Disk to Tape l BNL is planning on being able to use dCache SRMCP in these transfers §Tier 2 exercises at a much more serious level are anticipated, using dCache/SRM on storage elements
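A simple conversion of the 200 MB/sec target into daily and monthly volumes, to show what sustained disk-to-tape operation at that rate implies (decimal units, no overhead assumed).

```python
# What a sustained 200 MB/s CERN-disk-to-BNL-tape rate amounts to.

target_mb_per_s = 200
per_day_tb = target_mb_per_s * 86400 / 1e6    # MB/day -> TB/day
per_month_pb = per_day_tb * 30 / 1000         # TB over ~30 days -> PB

print(f"{per_day_tb:.1f} TB/day, ~{per_month_pb:.2f} PB per 30-day month to tape")
# With these numbers: about 17 TB/day, roughly half a petabyte per month.
```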

CHEP 2006 Mumbai, India Bruce G. Gibbard 20 Conclusions §Good success to date in both ATLAS exercises and real RHIC operations §A new round with significantly higher demands comes within the next 1-2 months §Upgrades of the network, storage elements, tape systems, and storage element interfacing should make it possible to satisfy these demands