
1 U.S. ATLAS Computing Facilities Overview Bruce G. Gibbard Brookhaven National Laboratory U.S. LHC Software and Computing Review Brookhaven National Laboratory November 14-17, 2000

2 Facilities Presentations
- Overview – B. Gibbard
  - Requirements
  - Overall plan & rationale
  - Network considerations
  - Risk & contingency
- Details of Tier 1 – R. Baker
  - Tier 1 configuration
  - Schedule
  - Cost analysis
- Grid Software and Tier 2’s – R. Gardner (yesterday)
  - Grid plans
  - Tier 2 configuration
  - Schedule & cost analysis

3 US ATLAS Computing Facilities
- …to enable effective participation by US physicists in the ATLAS physics program!
  - Direct access to and analysis of physics data sets
  - Simulation, re-reconstruction, and reorganization of data as required to complete such analyses
- Facilities procured, installed and operated …to meet U.S. “MOU” obligations to ATLAS
  - Direct IT support (Monte Carlo generation, for example)
  - Support for detector construction, testing, and calibration
  - Support for software development and testing

4 Setting the Scale
- For US ATLAS
  - Start from ATLAS Estimate of Requirements & Model for Contributions
  - Adjust for US ATLAS perspective (experience, priorities and facilities model)
- US ATLAS facilities must be adequate to meet all reasonable U.S. ATLAS computing needs
  - Specifically, the U.S. role in ATLAS should not be constrained by a computing shortfall; it should be enhanced by computing strength

5 ATLAS Estimate (1)
- New Estimate Made As Part of Hoffmann LHC Computing Review
  - Current draft: “ATLAS Computing Resources Requirements”, V1.5, by Alois Putzer
- Assumptions for LHC / ATLAS detector performance

6 ATLAS Estimate (2)
- Assumptions for ATLAS data

7 Architecture
- Hierarchy of Grid Connected Distributed Computing Resources
  - Primary ATLAS Computing Centre at CERN (Tier 0 & Tier 1)
  - Remote Tier 1 Computing Centers
  - Remote Tier 2 Computing Centers
  - Institutional Computing Facilities
  - Individual Desktop Systems

8 Functional definitions
- Tier 0 Center at CERN
  - Storage of primary raw and ESD data
  - Reconstruction of raw data
  - Re-processing of raw data
- Tier 1 Centers (at CERN & in some major contributing countries)
  - Simulation and reconstruction of simulated data
  - Selection and redefinition of AOD based on the complete locally stored ESD set
  - Group- and user-level analysis
- Tier 2 Centers
  - Not described in the current ATLAS document; they function as reduced-scale Tier 1’s, bringing analysis capability effectively closer to individual users
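For reference, the same functional split expressed as a small lookup table (a sketch only: the tier names and duties restate the bullets above, while the dictionary layout is an illustrative choice, not something from the ATLAS document):

    # Functional split by tier, restating the slide's bullets (Python).
    TIER_FUNCTIONS = {
        "Tier 0 (CERN)": [
            "store primary raw and ESD data",
            "reconstruct raw data",
            "re-process raw data",
        ],
        "Tier 1 (CERN and major contributing countries)": [
            "simulate and reconstruct simulated data",
            "select and redefine AOD from the locally stored ESD set",
            "group- and user-level analysis",
        ],
        "Tier 2 (reduced-scale Tier 1)": [
            "bring analysis capability closer to individual users",
        ],
    }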

9 Required Tier 1 Capacities (1)
- Individual ATLAS Tier 1 Capacities in 2006

10 Required Tier 1 Capacities (2)
- Expect 6 Such Remote Tier 1 Centers
  - USA, FR, UK, IT, +2 to be determined
- ATLAS Capacity Ramp-up Profile
  - Perhaps too much too early, at least for US ATLAS facilities … see Rich Baker’s talks

11 US ATLAS Facilities
- Requirements-Related Considerations
  - Analysis is the dominant Tier 1 activity
  - Experience shows analysis will be compute-capacity limited
  - The scale of US involvement is larger than that of the other Tier 1 countries … by authors, by institutions, by core detector fraction (x 1.7, x 3.1, x 1.8; see the rough average below), so the US will require more analysis capacity than a single canonical Tier 1
  - The US Tier 1 must be augmented by additional capacity (particularly for analysis)
  - The appropriate US facilities level is ~ twice that of a canonical Tier 1
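A rough reading of those scale factors (a back-of-envelope, not a calculation that appears on the slide): averaging the three ratios gives (1.7 + 3.1 + 1.8) / 3 ≈ 2.2, i.e. US involvement runs a bit more than twice that of a typical Tier 1 country, consistent with the stated target of roughly twice a canonical Tier 1 facility.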

12 US ATLAS Facilities Plan
- US ATLAS will have a Tier 1 Center, as defined by ATLAS, at Brookhaven
- The Tier 1 will be augmented by 5 Tier 2 Centers whose aggregate capacity is comparable to that of a canonical Tier 1 (see the capacity sketch after this list)
- This model will …
  - exploit high performance US regional networks
  - leverage existing resources at sites selected as Tier 2’s
  - establish an architecture which supports the inclusion of institutional resources at other (non-Tier 2) sites
  - focus on analysis: both increasing capacity and encouraging local autonomy, and therefore presumably creativity, within the analysis effort
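A sketch of the capacity bookkeeping this plan implies (the assumption that each Tier 2 supplies about one fifth of a canonical Tier 1 is inferred from “aggregate capacity comparable to that of a canonical Tier 1”, not stated on the slide): C_US ≈ C_Tier1 + 5 x (C_Tier1 / 5) = 2 x C_Tier1, which matches the “roughly twice a canonical Tier 1” level argued for on the previous slide.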

13 Tier 1 (WBS 2.3.1)
- The Tier 1 is now operational with significant capacities at BNL
- Operating in coordination with the RHIC Computing Facility (RCF)
  - Broad commonality in requirements between ATLAS and RHIC
  - Long-term synergy with RCF is expected
- Personnel & cost projections for US ATLAS facilities are based on recent experience at RCF … see Rich Baker’s talk
- Technical choices, beyond simple price/performance criteria, must address issues of maintainability, manageability and evolutionary flexibility
- Current default technology choices used for costing will be adjusted to exploit future technical evolution (toward more cost-effective new options)

14 Tier 1 (continued)
- Full Tier 1 Functionality Includes...
  - Hub of US ATLAS Computing GRID
  - Dedicated High Bandwidth Connectivity to CERN, US Tier 2’s, etc.
  - Data Storage/Serving
    - Primary site for caching/replicating data from CERN & other data needed by US ATLAS
  - Computation
    - Primary site for any US re-reconstruction (perhaps the only site)
    - Major site for Simulation & Analysis
    - Regional support plus catchall for those without a region
  - Repository of Technical Expertise and Support
    - Hardware, OS’s, utilities, other standard elements of U.S. ATLAS
    - Network, AFS, GRID, & other infrastructure elements of the WAN model

15 GRID R&D (WBS 2.3.2.1 - 2.3.2.6)
- Transparent, Optimized Use of Wide Area Distributed Resources by Means of “Middleware”
- Significant Dependence on External (to both US ATLAS and ATLAS) Projects for GRID Middleware
  - PPDG, GriPhyN, European DataGrid
- Direct US ATLAS Role
  - Specify, Adapt, Interface, Deploy, Test and Use … Rob Gardner’s talks (yesterday)

16 Tier 2 (WBS 2.3.2.7 - 2.3.2.11)
- The standard Tier 2 configuration will focus on the CPU and cache disk required for analysis
- Some Tier 2’s will be custom configured to leverage particularly strong institutional resources of value to ATLAS (the current assumption is that there will be 2 HSM-capable sites)
- Initial Tier 2 selections (2 sites) will be based on their ability to contribute rapidly and effectively to development and testing of this Grid computing architecture

17 Network
- Tier 1 connectivity to CERN and to Tier 2’s is critical to the facilities model
  - Must have adequate bandwidth
  - Must eventually be guaranteed and allocable bandwidth (dedicated and differentiated)
  - Should grow with need: OC12 to CERN in 2005, with OC48 needed in 2006 (see the throughput sketch below)
- While the network is an integral part of the US ATLAS plan, its funding is not part of the US ATLAS budget
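As a back-of-envelope check on what those link speeds deliver (a minimal sketch; OC-12 ≈ 622 Mb/s and OC-48 ≈ 2.5 Gb/s are the standard SONET line rates, while the 50% usable-fraction figure is an assumption, not from the slide):

    # Rough daily transfer volume achievable on the proposed CERN links (Python).
    OC12_MBPS = 622.08        # OC-12 line rate, megabits per second
    OC48_MBPS = 2488.32       # OC-48 line rate, megabits per second
    SECONDS_PER_DAY = 86400
    UTILIZATION = 0.5         # assumed usable fraction after protocol/contention overhead

    def tb_per_day(mbps, utilization=UTILIZATION):
        """Terabytes deliverable per day at a given line rate."""
        bits_per_day = mbps * 1e6 * SECONDS_PER_DAY * utilization
        return bits_per_day / 8 / 1e12

    print(f"OC-12: ~{tb_per_day(OC12_MBPS):.1f} TB/day")   # ~3.4 TB/day
    print(f"OC-48: ~{tb_per_day(OC48_MBPS):.1f} TB/day")   # ~13.4 TB/day

At half utilization an OC-12 moves a few terabytes per day and an OC-48 roughly four times that, the kind of headroom a Tier 1 caching and replicating data from CERN would need as data volumes grow.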

18 WAN Bandwidth Requirements

19 Capacities of US ATLAS Facilities

20 Risks and Contingency
- Have Developed Somewhat Conservative but Realistic Plans and Now Expect to Build Facilities to Cost
  - Contingency takes the form of reduction in scale (the design is highly scalable)
  - The option to trade one type of capacity for another is retained until very late (~80% of capacity procurement occurs in ’06)
- Risk Factors
  - Requirements may change
  - Price/performance projections may be too optimistic (see the sketch below)
  - Tier 2 funding remains less than certain
  - Grid projects are complex and may be less successful than hoped
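To make the price/performance risk concrete (a hedged illustration; the 1.5x and 1.3x annual improvement rates and the fixed-budget framing are assumptions chosen for illustration, not project numbers):

    # Capacity a fixed 2006 budget buys if price/performance improves more
    # slowly than planned; all rates below are illustrative (Python).
    YEARS = 6  # 2000 -> 2006

    def capacity_bought(annual_gain, budget=1.0, capacity_per_unit_now=1.0):
        """Capacity purchasable after YEARS for a fixed budget (arbitrary units)."""
        return budget * capacity_per_unit_now * annual_gain ** YEARS

    planned = capacity_bought(1.5)   # assumed planning curve
    slower = capacity_bought(1.3)    # pessimistic curve
    shortfall = 1 - slower / planned
    print(f"Shortfall if gains are 1.3x/yr instead of 1.5x/yr: {shortfall:.0%}")  # ~58%

Because the project builds to cost, a shortfall of this kind shows up as the reduction in scale described above, and deferring ~80% of the procurement to ’06 keeps most of that trade-off open until the actual curves are known.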

21 Summary
- The US ATLAS Facilities Project Has Three Components
  - Tier 1 Center at BNL … as will be detailed in Rich Baker’s talk
  - 5 Distributed Tier 2 Centers … as was discussed in Rob Gardner’s talk
  - Network & Grid tying them together … as was discussed in Rob Gardner’s talk
- The Project’s Integral Capacity Meets the ATLAS/LHC Computing Contribution Guidelines and Permits Effective Participation by US Physicists in the ATLAS Research Program
- The Approach Will Make Optimal Use of Resources Available to US ATLAS and (in a Funding Limited Context) Will Be Relatively Robust Against Likely Project Risks

