U.S. ATLAS Computing Facilities Overview
Bruce G. Gibbard, Brookhaven National Laboratory
U.S. LHC Software and Computing Review
Brookhaven National Laboratory, November 14-17, 2000

16 November, 2000 B. Gibbard US ATLAS Computing Facilities Overview 2
Facilities Presentations
- Overview – B. Gibbard
  - Requirements
  - Overall plan & rationale
  - Network considerations
  - Risk & contingency
- Details of Tier 1 – R. Baker
  - Tier 1 configuration
  - Schedule
  - Cost analysis
- Grid software and Tier 2s – R. Gardner (yesterday)
  - Grid plans
  - Tier 2 configuration
  - Schedule & cost analysis

16 November, 2000 B. Gibbard US ATLAS Computing Facilities Overview 3
US ATLAS Computing Facilities
- Facilities procured, installed and operated …
  - … to enable effective participation by US physicists in the ATLAS physics program!
    - Direct access to and analysis of physics data sets
    - Simulation, re-reconstruction, and reorganization of data as required to complete such analyses
  - … to meet U.S. "MOU" obligations to ATLAS
    - Direct IT support (Monte Carlo generation, for example)
    - Support for detector construction, testing, and calibration
    - Support for software development and testing

16 November, 2000 B. Gibbard US ATLAS Computing Facilities Overview 4
Setting the Scale
- For US ATLAS
  - Start from the ATLAS estimate of requirements & model for contributions
  - Adjust for the US ATLAS perspective (experience, priorities, and facilities model)
- US ATLAS facilities must be adequate to meet all reasonable U.S. ATLAS computing needs
  - Specifically, the U.S. role in ATLAS should not be constrained by a computing shortfall; it should be enhanced by computing strength

16 November, 2000 B. Gibbard US ATLAS Computing Facilities Overview 5
ATLAS Estimate (1)
- New estimate made as part of the Hoffmann LHC Computing Review
  - Current draft: "ATLAS Computing Resources Requirements", V1.5, by Alois Putzer
- Assumptions for LHC / ATLAS detector performance

16 November, 2000 B. Gibbard US ATLAS Computing Facilities Overview 6
ATLAS Estimate (2)
- Assumptions for ATLAS data

16 November, 2000 B. Gibbard US ATLAS Computing Facilities Overview 7
Architecture
- Hierarchy of grid-connected distributed computing resources:
  - Primary ATLAS Computing Centre at CERN (Tier 0 & Tier 1)
  - Remote Tier 1 Computing Centers
  - Remote Tier 2 Computing Centers
  - Institutional Computing Facilities
  - Individual Desktop Systems

16 November, 2000 B. Gibbard US ATLAS Computing Facilities Overview 8
Functional Definitions
- Tier 0 Center at CERN
  - Storage of primary raw and ESD data
  - Reconstruction of raw data
  - Re-processing of raw data
- Tier 1 Centers (at CERN & in some major contributing countries)
  - Simulation and reconstruction of simulated data
  - Selection and redefinition of AOD based on the complete, locally stored ESD set
  - Group- and user-level analysis
- Tier 2 Centers
  - Not described in the current ATLAS document; they function as reduced-scale Tier 1s, bringing analysis capability effectively closer to individual users

16 November, 2000 B. Gibbard US ATLAS Computing Facilities Overview 9
Required Tier 1 Capacities (1)
- Individual ATLAS Tier 1 capacities in 2006

16 November, 2000 B. Gibbard US ATLAS Computing Facilities Overview 10
Required Tier 1 Capacities (2)
- Expect 6 such remote Tier 1 centers
  - USA, FR, UK, IT, +2 to be determined
- ATLAS capacity ramp-up profile
  - Perhaps too much too early, at least for US ATLAS facilities … see Rich Baker's talks

16 November, 2000 B. Gibbard US ATLAS Computing Facilities Overview 11
US ATLAS Facilities
- Requirements-related considerations
  - Analysis is the dominant Tier 1 activity
  - Experience shows analysis will be compute-capacity limited
  - The scale of US involvement is larger than that of other Tier 1 countries … by authors, by institutions, and by core detector fraction (x 1.7, x 3.1, x 1.8), so the US will require more analysis capacity than a single canonical Tier 1
- The US Tier 1 must be augmented by additional capacity (particularly for analysis)
- The appropriate US facilities level is ~twice that of a canonical Tier 1 (a rough worked example follows below)
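
As a rough illustration of how the factor of two can be motivated from the ratios quoted above, the sketch below averages the three US-to-canonical-Tier-1 ratios and rounds to a facilities scale factor. The unweighted averaging is an assumption made here for illustration; it is not the sizing method from the original slides.

```python
# Back-of-the-envelope sketch (illustrative only): motivate the "~2x a
# canonical Tier 1" target from the ratios quoted on the slide.
# The unweighted averaging is an assumption, not part of the original plan.

ratios = {
    "authors": 1.7,                  # US / typical Tier 1 country
    "institutions": 3.1,
    "core detector fraction": 1.8,
}

# Simple unweighted average of the three measures of US involvement.
average_ratio = sum(ratios.values()) / len(ratios)

print(f"Average US involvement ratio: {average_ratio:.2f}")          # ~2.2
print(f"Suggested US facilities scale: ~{round(average_ratio)}x a canonical Tier 1")
```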

16 November, 2000 B. Gibbard US ATLAS Computing Facilities Overview 12
US ATLAS Facilities Plan
- US ATLAS will have a Tier 1 Center, as defined by ATLAS, at Brookhaven
- The Tier 1 will be augmented by 5 Tier 2 Centers whose aggregate capacity is comparable to that of a canonical Tier 1
- This model will …
  - exploit high-performance US regional networks
  - leverage existing resources at sites selected as Tier 2s
  - establish an architecture which supports the inclusion of institutional resources at other (non-Tier 2) sites
  - focus on analysis: both increasing capacity and encouraging local autonomy, and therefore presumably creativity, within the analysis effort

16 November, 2000 B. Gibbard US ATLAS Computing Facilities Overview 13
Tier 1 (WBS )
- The Tier 1 is now operational with significant capacities at BNL
- Operating in coordination with the RHIC Computing Facility (RCF)
  - Broad commonality in requirements between ATLAS and RHIC
  - Long-term synergy with RCF is expected
- Personnel & cost projections for US ATLAS facilities are based on recent experience at RCF … see Rich Baker's talk
- Technical choices, beyond simple price/performance criteria, must address issues of maintainability, manageability and evolutionary flexibility
  - Current default technology choices used for costing will be adjusted to exploit future technical evolution (toward more cost-effective new options)

16 November, 2000 B. Gibbard US ATLAS Computing Facilities Overview 14
Tier 1 (continued)
- Full Tier 1 functionality includes …
  - Hub of the US ATLAS computing grid
    - Dedicated high-bandwidth connectivity to CERN, US Tier 2s, etc.
  - Data storage/serving
    - Primary site for caching/replicating data from CERN & other data needed by US ATLAS (a toy sketch of this caching role follows after this slide)
  - Computation
    - Primary site for any US re-reconstruction (perhaps the only site)
    - Major site for simulation & analysis
    - Regional support, plus catchall for those without a region
  - Repository of technical expertise and support
    - Hardware, OSs, utilities, other standard elements of U.S. ATLAS
    - Network, AFS, grid, & other infrastructure elements of the WAN model
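
The "primary site for caching/replicating data from CERN" role is essentially that of a regional replica cache. The toy sketch below is purely illustrative; the dataset names, sites, and catalog structure are hypothetical and do not represent an actual US ATLAS service. It shows the lookup logic such a hub implies: serve a dataset from the BNL cache when a replica exists, otherwise pull the master copy from CERN and record the new replica.

```python
# Toy illustration of the Tier 1 "replica cache" role (hypothetical catalog;
# not an actual US ATLAS service or API).

MASTER_SITE = "CERN"
TIER1_SITE = "BNL"

# Hypothetical replica catalog: dataset name -> set of sites holding a copy.
replica_catalog = {
    "raw/run0001": {"CERN"},
    "esd/run0001": {"CERN", "BNL"},
}

def locate(dataset: str) -> str:
    """Return the site a US user would read the dataset from,
    caching a replica at the Tier 1 on a miss."""
    sites = replica_catalog.setdefault(dataset, {MASTER_SITE})
    if TIER1_SITE in sites:
        return TIER1_SITE          # already cached at the Tier 1
    # Cache miss: replicate from CERN to BNL, then serve locally.
    sites.add(TIER1_SITE)
    return TIER1_SITE

print(locate("esd/run0001"))   # cache hit  -> served from BNL
print(locate("raw/run0001"))   # cache miss -> replicated, then served from BNL
```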

16 November, 2000 B. Gibbard US ATLAS Computing Facilities Overview 15
Grid R&D (WBS )
- Transparent, optimized use of wide-area distributed resources by means of "middleware" (a toy sketch of the kind of decision such middleware automates follows below)
- Significant dependence on external (to both US ATLAS and ATLAS) projects for grid middleware
  - PPDG, GriPhyN, European DataGrid
- Direct US ATLAS role
  - Specify, adapt, interface, deploy, test and use … Rob Gardner's talks (yesterday)
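
To make "transparent, optimized use of wide-area distributed resources" concrete, the sketch below shows the kind of decision such middleware automates: choosing an execution site for an analysis job by weighing data locality against current load. It is a toy model with hypothetical site names, loads, and scoring; it does not represent the PPDG, GriPhyN, or European DataGrid interfaces.

```python
# Toy resource broker (illustrative only): choose where to run an analysis
# job given which sites hold the input data and how loaded each site is.
# Site names, loads, and the scoring rule are assumptions for illustration.

sites = {
    "BNL":     {"load": 0.80, "has_data": True},
    "Tier2-A": {"load": 0.30, "has_data": True},
    "Tier2-B": {"load": 0.10, "has_data": False},
}

def score(site: dict) -> float:
    # Prefer idle sites; penalize sites that would have to stage in the data.
    transfer_penalty = 0.0 if site["has_data"] else 0.5
    return (1.0 - site["load"]) - transfer_penalty

best = max(sites, key=lambda name: score(sites[name]))
print(f"Submit job to: {best}")   # -> Tier2-A in this toy example
```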

16 November, 2000 B. Gibbard US ATLAS Computing Facilities Overview 16
Tier 2 (WBS )
- The standard Tier 2 configuration will focus on the CPU and cache disk required for analysis (a sizing sketch follows below)
- Some Tier 2s will be custom configured to leverage particularly strong institutional resources of value to ATLAS (the current assumption is that there will be 2 HSM-capable sites)
- Initial Tier 2 selections (2 sites) will be based on their ability to contribute rapidly and effectively to the development and testing of this grid computing architecture
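
Since the plan sizes the five Tier 2s so that their aggregate is comparable to a canonical Tier 1, a standard per-site configuration can be sketched by dividing a Tier 1-scale analysis capacity by five. The capacity numbers below are placeholders chosen purely for illustration, not the figures tabulated in the original capacity slides.

```python
# Sketch of per-Tier-2 sizing under the "5 Tier 2s ~ 1 canonical Tier 1" rule.
# The canonical Tier 1 capacities below are PLACEHOLDER values for
# illustration only; the actual 2006 figures were tabulated elsewhere.

canonical_tier1 = {
    "cpu_si95": 10_000,    # placeholder analysis CPU capacity (SPECint95)
    "cache_disk_tb": 100,  # placeholder analysis cache disk (TB)
}
n_tier2_sites = 5

per_tier2 = {k: v / n_tier2_sites for k, v in canonical_tier1.items()}
print("Standard Tier 2 target (placeholder numbers):", per_tier2)
```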

16 November, 2000 B. Gibbard US ATLAS Computing Facilities Overview 17
Network
- Tier 1 connectivity to CERN and to the Tier 2s is critical to the facilities model
  - Must have adequate bandwidth
  - Must eventually be guaranteed and allocable bandwidth (dedicated and differentiated)
  - Should grow with need: OC12 to CERN in 2005, with OC48 needed in 2006 (a transfer-time sketch follows below)
- While the network is an integral part of the US ATLAS plan, its funding is not part of the US ATLAS budget
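
For a sense of what the OC12-to-OC48 step buys, the sketch below estimates how long a bulk dataset transfer from CERN to BNL would take at each line rate. The 10 TB dataset size and the 50% effective link utilization are assumptions made here for illustration; the OC12 and OC48 line rates are the standard ~622 Mb/s and ~2.5 Gb/s.

```python
# Rough transfer-time estimate for a bulk CERN -> BNL dataset copy.
# Dataset size and effective utilization are illustrative assumptions.

LINKS_MBPS = {"OC12": 622, "OC48": 2488}   # nominal line rates, Mb/s
DATASET_TB = 10                            # assumed bulk dataset size
EFFICIENCY = 0.5                           # assumed usable fraction of the link

dataset_megabits = DATASET_TB * 1e12 * 8 / 1e6   # TB -> megabits

for name, rate in LINKS_MBPS.items():
    seconds = dataset_megabits / (rate * EFFICIENCY)
    print(f"{name}: ~{seconds / 3600:.1f} hours for {DATASET_TB} TB")
    # -> roughly 72 hours on OC12 versus 18 hours on OC48 under these assumptions
```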

16 November, 2000 B. Gibbard US ATLAS Computing Facilities Overview 18 WAN Bandwidth Requirements

16 November, 2000 B. Gibbard US ATLAS Computing Facilities Overview 19 Capacities of US ATLAS Facilities

16 November, 2000 B. Gibbard US ATLAS Computing Facilities Overview 20
Risks and Contingency
- Have developed somewhat conservative but realistic plans and now expect to build facilities to cost
  - Contingency takes the form of a reduction in scale (the design is highly scalable)
  - The option to trade one type of capacity for another is retained until very late (~80% of procurement occurs in '06)
- Risk factors
  - Requirements may change
  - Price/performance projections may be too optimistic
  - Tier 2 funding remains less than certain
  - Grid projects are complex and may be less successful than hoped

16 November, 2000 B. Gibbard US ATLAS Computing Facilities Overview 21
Summary
- The US ATLAS Facilities Project has three components
  - Tier 1 Center at BNL … as will be detailed in Rich Baker's talk
  - 5 distributed Tier 2 Centers … as was discussed in Rob Gardner's talk
  - Network & grid tying them together … as was discussed in Rob Gardner's talk
- The project's integral capacity meets the ATLAS/LHC computing contribution guidelines and permits effective participation by US physicists in the ATLAS research program
- The approach will make optimal use of resources available to US ATLAS and (in a funding-limited context) will be relatively robust against likely project risks