U.S. ATLAS Computing Facilities (Bruce G. Gibbard, GDB Meeting, 16 March 2005)


Slide 1: U.S. ATLAS Computing Facilities (Bruce G. Gibbard, GDB Meeting, 16 March 2005)

Slide 2: US ATLAS Facilities
- Grid3/OSG connected resources, including:
  - Tier 1 facility at Brookhaven
  - Tier 2 facilities
    - 2 prototype Tier 2's in operation: Indiana-Chicago and Boston
    - 3 (of 5) permanent Tier 2 sites recently selected: Boston-Harvard, Southwest (UTA, OU, UNM, LU), and Midwest (Chicago-Indiana)
  - Other institutional (Tier 3) facilities active: LBNL, Michigan, ANL, etc.
- Associated Grid/network activities:
  - WAN coordination
  - Program of Grid R&D, based on the work of the Grid projects (PPDG, GriPhyN, iVDGL, EGEE, etc.)
  - Grid production and production support

Slide 3: Tier 1 Facility
- Primary Tier 1 functions:
  - Archive a share of the raw data and perform its post-calibration (and all later) reconstruction
  - Group-level programmatic analysis passes
  - Store, reprocess, and serve ESD, AOD, TAG & DPD sets
- Co-located and co-operated with the RHIC Computing Facility; combined 2005 capacities:
  - CPU: 2.3 MSI2k (3100 CPUs) = RHIC 1.8 MSI2k (2600 CPUs) + ATLAS 0.5 MSI2k (500 CPUs)
  - Disk: 730 TBytes total; RHIC: 220 TBytes central RAID plus Linux-distributed disk; ATLAS: 25 TBytes central RAID plus Linux-distributed disk
  - Tape: 4.5 PBytes (~2/3 full); RHIC 4.40 PBytes, ATLAS ~100 TBytes

Slide 4: Tier 1 Facility 2005 Evolution
- Staff increase:
  - By 4 FTEs, 3 of them Grid/network development and support personnel
  - From a staff of 8.5 in 2004 to 12.5 by the end of the year
- Equipment upgrade, expected to be operational by the end of April:
  - CPU farm: 200 kSI2k to 500 kSI2k
    - 128 x (2 x 3.1 GHz, 2 GB RAM, 1000 GB disk), so also ~130 TB of local disk (see the arithmetic sketch below)
  - Disk: procurement this year focuses only on disk distributed across the Linux farm
    - Total ~225 TB = ~200 TB distributed + 25 TB central
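
As a sanity check on the upgrade arithmetic above, the short Python sketch below recomputes the aggregate local disk from the node count and per-node disk quoted on the slide; all inputs come from the slide, and the rounding to "~130 TB" and "~225 TB" is the slide's own.

```python
# Back-of-the-envelope check of the 2005 equipment-upgrade numbers quoted above.
# All inputs are taken from the slide; this is only illustrative arithmetic.

new_nodes = 128              # dual 3.1 GHz nodes, 2 GB RAM, 1000 GB disk each
disk_per_node_gb = 1000

local_disk_tb = new_nodes * disk_per_node_gb / 1000
print(f"Local disk added by the farm purchase: ~{local_disk_tb:.0f} TB")  # ~128 TB, the slide's "~130 TB"

distributed_tb = 200         # disk distributed on Linux nodes (slide)
central_tb = 25              # central disk (slide)
print(f"Total 2005 disk: ~{distributed_tb + central_tb} TB")              # ~225 TB
```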

Slide 5: Operation Activities at Tier 1
- Grid3-based ATLAS Data Challenge 2 (DC2) production
  - Major effort over much of 2004; allocated ~70% of the resource
- Grid3/OSG-based Rome Physics Workshop production
  - Major effort of early 2005
- Increasing general US ATLAS use of the facility
  - Priority (with preemption) scheme: DC2/Rome jobs, then US ATLAS jobs, then Grid3 jobs (sketched below)
  - Resource allocation among the highest-priority classes: DC2/Rome vs. US ATLAS at 70%/30%
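
A minimal sketch of the priority-with-preemption policy described above, assuming it can be modeled as a simple share check; the class names, share values, and the next_class_to_run function are illustrative, not the batch system's actual configuration.

```python
# Illustrative model (not the real batch configuration) of the Tier 1 policy:
# DC2/Rome and US ATLAS jobs split the farm roughly 70/30, while Grid3 jobs
# run opportunistically in idle slots and can be preempted.

SHARES = {"dc2_rome": 0.70, "usatlas": 0.30}   # target allocation for the priority classes
PRIORITY = ["dc2_rome", "usatlas", "grid3"]    # grid3 has no guaranteed share

def next_class_to_run(running, queued, total_slots):
    """Pick the job class that should receive the next free (or preempted) slot."""
    for cls in PRIORITY:
        if not queued.get(cls):
            continue                              # nothing waiting in this class
        if cls == "grid3":
            return cls                            # opportunistic: only reached if nothing above is runnable
        if running.get(cls, 0) < SHARES[cls] * total_slots:
            return cls                            # class is below its target share
    return None

# Example: a 500-slot farm where DC2/Rome is below its 70% target.
print(next_class_to_run({"dc2_rome": 300, "usatlas": 150, "grid3": 50},
                        {"dc2_rome": 12, "usatlas": 4, "grid3": 30},
                        total_slots=500))         # -> "dc2_rome"
```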

Slide 6: Tier 2 Facilities
- Tier 2 functions:
  - Primary ATLAS resource for simulation
  - Primary ATLAS location for final analyses
- Expect there to be 5 permanent Tier 2's
  - In aggregate comparable to the Tier 1 with respect to CPU and disk
- Defined scale for US Tier 2's:
  - Standard Tier 2's supported at a level of ~2 FTEs plus MST, with a four-year refresh for ~1000 CPUs plus infrastructure
  - 2 special "Cyber Infrastructure" Tier 2C's will receive additional funding but also carry additional responsibilities
- Selection of the first 3 (of 5) permanent sites recently announced:
  - Boston Univ. & Harvard Univ. (Tier 2C)
  - Midwest: Univ. of Chicago & Indiana Univ. (Tier 2)
  - Southwest: Univ. of Texas at Arlington, Oklahoma Univ., Univ. of New Mexico, Langston Univ. (Tier 2C)
- Remaining 2 sites to be selected in 2006

Slide 7: Prototype Tier 2 Centers
- Prototype Tier 2's in operation as part of Grid3/OSG
- Principal US contributors to ATLAS DC1 and DC2
- Currently major contributors to production in support of the Rome Physics Workshop
- Aggregate capacities similar to the Tier 1

Slide 8: DC2 Jobs Per Site
[Chart: DC2 jobs per site across 69 sites; major US sites labeled include UTA, BU, BNL, UC, and IU]

Slide 9: Rome Production Ramp-up
[Chart: production ramp-up for the Rome Physics Workshop]

Slide 10: Grid / OSG Related Activities
- Two SRMs tested, deployed, and operational at the Tier 1:
  - HRM/SRM from LBNL: HPSS-capable out of the box
  - dCache/SRM from Fermilab/DESY: dCache-compatible out of the box
  - Interoperability issues between the two addressed, and a basic level of compatibility with CASTOR/SRM verified
  - Evaluation of the appropriate SRM choice for the Tier 2's underway
- Cyber security and AAA for VO management development at the Tier 1:
  - GUMS: a Grid identity mapping service working with a suite including VOM/VOMS/VOX/VOMRS (see the sketch below)
  - Consolidation of the ATLAS VO registry, with US ATLAS as a subgroup
  - Privilege management project underway in collaboration with Fermilab/CMS: dynamic assignment of local access, role-based authorization, faster policy implementation
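
The snippet below is a minimal illustration of the kind of grid-identity-to-local-account mapping that GUMS provides; real GUMS is a web service driven by VO membership and role policies, whereas this sketch only parses a static grid-mapfile-style table, and the DNs and account names shown are invented for the example.

```python
# Minimal illustration of DN -> local-account mapping in the spirit of GUMS.
# GUMS itself is a policy-driven service integrated with VOMS; this sketch
# only shows the basic idea using a static grid-mapfile-like table with
# made-up DNs and account names.

import re

MAPFILE = '''
"/DC=org/DC=doegrids/OU=People/CN=Example User 12345" usatlas1
"/DC=org/DC=doegrids/OU=People/CN=Another User 67890" usatlas2
'''

def load_gridmap(text):
    """Return a dict mapping certificate DN -> local UNIX account."""
    mapping = {}
    for line in text.splitlines():
        m = re.match(r'\s*"([^"]+)"\s+(\S+)\s*$', line)
        if m:
            mapping[m.group(1)] = m.group(2)
    return mapping

gridmap = load_gridmap(MAPFILE)
print(gridmap["/DC=org/DC=doegrids/OU=People/CN=Example User 12345"])  # -> usatlas1
```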

Slide 11: Grid / OSG Related Activities (2)
- OSG integration and deployment activities
  - US ATLAS Tier 1 and Tier 2's are part of the Integration Test Bed, where OSG components are tested and certified for deployment
- LCG deployment
  - Using very limited resources and only modest effort
  - Nov '04: LCG installed; an upgrade to a newer LCG release recently completed, now being extended to use SLC
  - Primary function is to facilitate comparisons and interoperability studies between LCG and OSG

Slide 12: Grid / OSG Related Activities (3)
- "TeraPaths" project
  - Addresses contention between projects, and between activities within a project, on shared IP network infrastructure
  - Integrates LAN QoS with WAN MPLS for end-to-end network resource management (illustrated below)
  - Initial tests on both the LAN and the WAN have been performed
  - Effectiveness of LAN QoS has been demonstrated on the BNL backbone
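
As a rough illustration of the LAN QoS half of the TeraPaths idea, the sketch below marks a TCP flow with a DSCP value so that switches and routers can place it in a dedicated service class; TeraPaths itself works by configuring network devices and MPLS paths rather than by application-level marking, and the endpoint shown is hypothetical.

```python
# Illustrative only: tag a data flow with a DSCP code point so LAN devices can
# classify it. TeraPaths configures the network side (QoS classes, MPLS paths);
# this just shows what per-flow marking looks like from an application.

import socket

DSCP_EF = 46                       # "Expedited Forwarding" per-hop behavior
TOS_BYTE = DSCP_EF << 2            # DSCP occupies the upper 6 bits of the ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)   # mark outgoing packets
# sock.connect(("transfer-host.example.org", 2811))           # hypothetical GridFTP endpoint
```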

Slide 13: Service Challenge 1
- US ATLAS is very interested in Service Challenge participation
  - Very interested, technically capable, and committed staff at the Tier 1
  - Fully prepared in terms of hardware and software, though with no 10 Gb/sec connectivity at this time
- Proposed schedules and statements of readiness continue to be effectively ignored by the Service Challenge coordinators, leading to substantial frustration
- However, in conjunction with LCG Service Challenge 1, 2 RRD GridFTP servers doing bulk disk-to-disk data transfers achieved:
  - BNL to CERN: ~100 MByte/sec, limited by the site backbone bandwidth (a transfer sketch follows this slide)
  - Testing limited to 12 hours (night time) so as not to impact other BNL projects
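
The sketch below shows the general shape of such a bulk disk-to-disk transfer driven by globus-url-copy with parallel TCP streams; the host names, paths, and the particular option values are placeholders, since the slides do not give the exact settings used at BNL.

```python
# Sketch of a bulk GridFTP disk-to-disk transfer of the kind used in SC1.
# Host names, paths, and option values below are placeholders; -p (parallel
# streams), -tcp-bs (TCP buffer size), and -vb (performance readout) are
# standard globus-url-copy options.

import subprocess

SRC = "gsiftp://gridftp.example.bnl.gov/data/sc1/testfile"    # hypothetical source
DST = "gsiftp://gridftp.example.cern.ch/data/sc1/testfile"    # hypothetical destination

subprocess.run([
    "globus-url-copy",
    "-p", "8",               # parallel TCP streams to fill a high-latency path
    "-tcp-bs", "2097152",    # 2 MB TCP buffers
    "-vb",                   # report instantaneous and average throughput
    SRC, DST,
], check=True)
```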

Slide 14: Service Challenge 2 ...
- BNL SC2 configuration:
  - OC48 (2.5 Gb/sec) site connection, but a current internal limit of 1000 Mb/sec
  - dCache head node and 4 pool nodes (1.2 TB of disk) with an HPSS back-end store
  - Using dCache-SRM
- Proposed BNL SC2 goals (the bandwidth arithmetic is sketched below):
  - Week of 14 March: demonstrate functionality; verify full connectivity, interoperability, and performance (completed 14 March!)
  - Week of 21 March: demonstrate extended-period performance; SRM-to-SRM transfers for 12 hours at 80% of available bandwidth
  - Week of 28 March: demonstrate robustness; SRM-to-SRM transfers for a full week at 60% of available bandwidth, 75 MB/sec
- SC3 ...
  - Full OC48 capacity (2.5 Gb/sec) will be available at the Tier 1
  - Limited-time 100 MB/sec tape capacity if coordinated with the RHIC experiments
  - Strong interest in Tier 2 participation
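
The arithmetic behind the SC2 throughput targets is a one-liner, assuming the "available bandwidth" is the quoted 1000 Mb/sec internal limit; the sketch below just converts the link speed to MB/sec and applies the target fractions.

```python
# Bandwidth arithmetic for the SC2 goals above, assuming "available bandwidth"
# means the quoted 1000 Mb/sec internal limit.

link_mbit_per_s = 1000
link_MB_per_s = link_mbit_per_s / 8          # 125 MB/sec

for fraction in (0.80, 0.60):
    print(f"{fraction:.0%} of {link_MB_per_s:.0f} MB/s = {fraction * link_MB_per_s:.0f} MB/s")
# 60% of 125 MB/s = 75 MB/s, matching the robustness-week target on the slide.
```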