
Slide 1: U.S. ATLAS Project Overview
John Huth, Harvard University
LHC Computing Review, FNAL, November 2001

Slide 2: Outline
- International and U.S. ATLAS
- Organization
- U.S. ATLAS
- External Groups
- Project Management Plan
- Milestones
- Status
- Software
- Facilities
- Grid efforts
- Physics
- Funding Profile
- Impact

Slide 3: International ATLAS
- Computing Oversight Board
- Computing Steering Group
  - Matrix of detector/task orientation
- PBS structure gives tasks, schedules, resource loading
  - Maps directly onto the U.S. ATLAS WBS
- Planning officer is now Torre Wenaus (U.S. ATLAS SW Mgr.)
- Software deliverables delineated in Software Agreements
- Major milestones associated with releases and data challenges
- Data Challenge coordinator: Gilbert Poulard (CERN)

Slide 4: ATLAS Detector/Task Matrix

                              Offline Coordinator  Reconstruction   Simulation     Database
  Chair                       N. McCubbin          D. Rousseau      K. Amako       D. Malon
  Inner Detector              D. Barberis          D. Rousseau      F. Luehring    S. Bentvelsen
  Liquid Argon                J. Collot            S. Rajagopalan   M. Leltchouk   S. Simion / R. Sobie
  Tile Calorimeter            A. Solodkov          F. Merritt       A. Solodkov    T. LeCompte
  Muon                        To Be Named          J.F. Laporte     A. Rimoldi     S. Goldfarb
  LVL2 Trigger / Trigger DAQ  S. George            S. Tapprogge     M. Weilers     A. Amorim
  Event Filter                F. Touchard          M. Bosman

Physics Coordinator: F. Gianotti
Chief Architect: D. Quarrie

Slide 5: International ATLAS Computing Organization
[Organization chart showing: Computing Oversight Board, Computing Steering Group, National Computing Board, Technical Group, Physics, the detector systems, the Architecture team, the QC group, the Event Filter, and the simulation / reconstruction / database coordinators.]

Slide 6: Recent Events
- Progress toward a coherent, integrated effort
- First Software Agreement signed! (control/framework)
  - Second one in progress (QA/AC)
- Data Challenge Coordinator named (Gilbert Poulard)
- Lund physics meeting (Lund Athena release)
- ARC report
  - Endorsement of Athena
- Upcoming Data Challenge 0
  - Continuity test – November Athena release
- Personnel changes
  - D. Malon now sole data management leader
  - Helge Meinhard (planning) moves to IT Division; now replaced by Torre Wenaus

Slide 7: Project Core SW FTE [chart]

Slide 8: FTE Fraction of Core SW [chart]

Slide 9: U.S. ATLAS Goals
- Deliverables to International ATLAS and the LHC projects
  - Software
    - Control/framework (SW agreement signed)
    - Portion of data management (event store)
    - Collaboratory tools
    - Detector subsystem reconstruction
    - Grid integration
  - Computing resources devoted to data analysis and simulation
    - Tier 1, Tier 2 centers
- Support of U.S. ATLAS physicists
  - Computing resources
  - Support functions (librarian, nightly builds, site support)

Slide 10: U.S. ATLAS Project [figure]

Slide 11: Project Management Plan
- New version: extensive revisions from last year
- Description of institutional MOUs
  - Two draft institutional MOUs exist
- Liaison list
- Performance metrics established
  - Personnel effort (FTEs)
  - Hardware: fraction of the turn-on facility functional per year
- Change control
- Reporting
  - Quarterly reports
- Transition to "Research Program" in FY07

Slide 12: U.S. ATLAS WBS Structure
- 2.1 Physics
  - Support of event generators
  - Data challenge support in the U.S.
- 2.2 Software
  - Core software (framework, database)
  - Subsystem efforts
  - Training
- 2.3 Facilities
  - Tier 1
  - Tier 2, infrastructure (networking)
- 2.4 Project Management

Slide 13: U.S. ATLAS Developments
- Athena (control/framework) – see the algorithm-lifecycle sketch below
  - Lund release done
  - DC0 release
  - Incorporation of the G4 interface
- Database
  - Effort augmented
  - Coordination of Oracle, Objectivity, and ROOT evaluations
- Facilities
  - Ramp delayed by the funding profile; DC preparation at reduced scope
  - Common grid plan worked out
- Modest personnel ramp – BNL SW/Facilities/ANL
- Librarian support and nightly builds at BNL (moved from CERN)
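
Athena algorithms follow the Gaudi control framework's initialize/execute/finalize lifecycle, with the framework driving the event loop. Below is a minimal Python sketch of that pattern, for orientation only: the real Athena/Gaudi interfaces are C++, and every class and method name here is a placeholder.

```python
# Illustrative sketch of a Gaudi/Athena-style algorithm lifecycle.
# Class and method names are placeholders, not the real Athena API.

class Algorithm:
    """Base class: the framework calls these hooks around the event loop."""
    def initialize(self):          # called once, before the first event
        pass
    def execute(self, event):      # called once per event
        pass
    def finalize(self):            # called once, after the last event
        pass

class HitCounter(Algorithm):
    """Hypothetical algorithm that counts hits across all events."""
    def initialize(self):
        self.total_hits = 0
    def execute(self, event):
        self.total_hits += len(event.get("hits", []))
    def finalize(self):
        print("total hits seen:", self.total_hits)

def run_event_loop(algorithms, events):
    """Minimal stand-in for the framework's event loop."""
    for alg in algorithms:
        alg.initialize()
    for event in events:
        for alg in algorithms:
            alg.execute(event)
    for alg in algorithms:
        alg.finalize()

run_event_loop([HitCounter()], [{"hits": [1, 2, 3]}, {"hits": [4]}])
```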

Slide 14: External Groups
- iVDGL funding (Tier 2 personnel, hardware) approved
  - But a 50% cut in hardware relative to original planning
- PPDG effort in progress
- ITR funding of Indiana (grid telemetry)
- Integrated planning on software and facilities for grids
- Liaisons named in the PMP
  - GriPhyN/iVDGL – R. Gardner (J. Schopf, CS liaison)
  - PPDG – T. Wenaus (J. Schopf, CS liaison)
  - EU DataGrid – C. Tull
  - HEP networking – S. McKee

Slide 15: Software
- Migration from SRT to CMT begun
  - Effort redirection, but long-term benefit
- Upcoming November release of Athena
  - Support of DC0
- Funding shortfall impacts
  - Shift of D. Day (USDP, Python scripting) – postdoc hire to fill
  - Possible delayed FY03 hire at BNL – loss of ROOT expertise
- Data management architecture proposed (non-product-specific)
  - ROOT I/O service (see the sketch below)
- G4 integration into Athena
- Development of "pacman" for deployment of software at remote sites (BU)
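
The point of a non-product-specific data management architecture is that client code talks to an abstract persistency interface, while ROOT I/O (or Objectivity, or a relational store) sits behind it as one interchangeable backend. The rough Python sketch below shows only that separation, with hypothetical names and a stand-in file backend rather than any actual Athena service.

```python
# Hypothetical sketch of a product-neutral persistency layer.
# Names are illustrative; this is not the actual Athena/Gaudi interface.

import json

class PersistencySvc:
    """Abstract interface the rest of the software codes against."""
    def write(self, key, obj):
        raise NotImplementedError
    def read(self, key):
        raise NotImplementedError

class JsonFileBackend(PersistencySvc):
    """Stand-in concrete backend (a real one might wrap ROOT I/O or Objectivity)."""
    def __init__(self, path):
        self.path = path
        self.store = {}
    def write(self, key, obj):
        self.store[key] = obj
        with open(self.path, "w") as f:
            json.dump(self.store, f)
    def read(self, key):
        with open(self.path) as f:
            return json.load(f)[key]

svc = JsonFileBackend("events.json")      # backends can be swapped without touching clients
svc.write("event_0001", {"ntracks": 12})
print(svc.read("event_0001"))
```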

Slide 16: Facilities Schedule
- LHC start-up projected to be a year later: 2005/2006 → 2006/2007
  - 30% facility in 2006
  - 100% facility in 2007
- ATLAS Data Challenges (DCs) have, so far, stayed fixed (scales checked in the sketch below)
  - DC0 – Nov/Dec 2001 – 10^5 events
    - Software continuity test
  - DC1 – Feb/Jul 2002 – 10^7 events
    - ~1% scale test
    - Data used for U.S. ATLAS Grid testbed integration tests
  - DC2 – Jan/Sep 2003 – 10^8 events
    - ~10% scale test
    - A serious functionality and capacity exercise
- A high level of U.S. ATLAS facilities participation is deemed very important
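
The quoted scale fractions are consistent with roughly 10^9 events in a nominal full year of data taking; under that assumption, 10^7 events is about 1% and 10^8 about 10%. A quick check of the arithmetic:

```python
# Back-of-the-envelope check of the data-challenge scales.
# Assumes ~1e9 events for a nominal full year of data taking.

full_year_events = 1e9
for name, n_events in [("DC0", 1e5), ("DC1", 1e7), ("DC2", 1e8)]:
    fraction = 100 * n_events / full_year_events
    print(f"{name}: {n_events:.0e} events ~ {fraction:.3g}% of a nominal year")
```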

Slide 17: Facilities
- Tier 1 particularly hard hit by the budget shortfall
  - Delays in hiring
  - Scalable online storage prototype work delayed approx. 7 months
  - DC2 capability reduced relative to plan (1% vs. 5%)
  - Small increments ($300k) can help substantially
  - Year-end funding of $284k from DOE (Aug 01)
- Major usage of Tier 1 for shielding calculations
  - Anticipate major usage in the DCs and in grid tests
- Examination of tape vs. disk for the event store at the start of data taking
- Tier 2
  - Selection of first prototype centers (I.U., B.U.)
  - iVDGL funding of prototype hardware
  - Deployment of SW on testbed sites in progress

Slide 18: CPU Capacity (kSi95) [chart]

Slide 19: U.S. ATLAS Persistent Grid Testbed
[Map of the testbed: Brookhaven National Laboratory, Argonne National Laboratory, LBNL-NERSC, UC Berkeley, Boston University, Indiana University, U Michigan, and Oklahoma University, interconnected via ESnet, Abilene, CalREN, NTON, MREN, and NPACI; the legend marks the prototype Tier 2s and the HPSS sites.]

Slide 20: Grid Efforts / Physics
- Many sources of shared effort
  - GriPhyN / iVDGL / PPDG / EU activities / new CERN management
- Common U.S. ATLAS plan
  - Use existing tools as much as possible
  - Use existing platforms as much as possible
- Gain experience in
  - Replica catalogs (see the sketch below)
  - Metadata description
  - Deployment/release of tools
- Philosophy is to gain expertise, not to await a grand international synthesis
- Physics: support hire for the generator interface and data challenges
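
At its core, the replica catalog being exercised is a mapping from logical file names to the physical copies held at testbed sites, plus a little metadata for dataset selection. The toy Python sketch below shows only that idea; it is not the interface of Globus or any other actual grid catalog, and the hostnames are made up.

```python
# Toy replica catalog: logical file name -> physical replicas, plus metadata.
# Illustrative only; not the interface of any actual grid catalog.

catalog = {
    "lfn:dc1.sample.0001.root": {
        "replicas": [
            "gsiftp://tier1.example.bnl.gov/data/dc1.sample.0001.root",
            "gsiftp://tier2.example.iu.edu/data/dc1.sample.0001.root",
        ],
        "metadata": {"data_challenge": "DC1", "events": 1000},
    },
}

def find_replicas(lfn, prefer_site=None):
    """Return physical replicas for a logical file, preferred site first."""
    replicas = list(catalog[lfn]["replicas"])
    if prefer_site:
        replicas.sort(key=lambda url: prefer_site not in url)
    return replicas

print(find_replicas("lfn:dc1.sample.0001.root", prefer_site="iu.edu"))
```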

Slide 21: Networking
- Networking is a crucial component for the success of the grid model of distributed computing
- It has not been included as part of the project funding profile
  - Agency guidance
  - Nevertheless, it must be planned for
- Transatlantic planning group report (H. Newman, L. Price)
  - Tier 1 – Tier 2 connectivity requirements
  - See talk by S. McKee
- Scale and requirements are being established (a rough transfer-time estimate is sketched below)
- Funding sources must be identified
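
One way such requirements get scoped is with simple transfer-time estimates: dataset size divided by the usable fraction of link bandwidth. The sketch below uses purely illustrative placeholder numbers, not figures from the transatlantic planning group report.

```python
# Rough transfer-time estimate for Tier 1 <-> Tier 2 traffic.
# All numbers are illustrative placeholders, not planning-group figures.

dataset_tb = 10        # hypothetical dataset to replicate, in TB
link_mbps = 622        # hypothetical OC-12 class link, in Mbit/s
efficiency = 0.4       # assumed usable fraction after protocol overhead and contention

seconds = dataset_tb * 8e6 / (link_mbps * efficiency)   # 1 TB = 8e6 Mbit
print(f"~{seconds / 3600:.1f} hours to move {dataset_tb} TB over a {link_mbps} Mbit/s link")
```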

Slide 22: Major Milestones [chart]

Slide 23: Comments on Budgeting
- The agencies have so far worked hard to come up with the profile but, given budget issues, this has been difficult and has come up short
  - Construction project borrowing
- We cannot plan based on our wishes, but rather on realistic expectations
- Current budgeting profile
  - Increase in grid activities – long-term benefit, short-term redirection of effort
  - Need to factor in networking costs for Tier 1 – Tier 2 connections (note: transatlantic report) – via external funds, but must be budgeted
- Relief in the form of overall NSF funding for the "research program"
  - Overall profile for M+O, upgrades, and computing
  - Major components in computing:
    - Tier 2 sites, grid integration with ATLAS software
    - SW professionals located at CERN (hires through universities)

Slide 24: Funding Guidance [chart]

Slide 25: Budget Profile by Item [chart]


Slide 27: FTE Profile [chart]

Slide 28: FTE by Category in 02 [chart]

Slide 29: FTEs in 07 [chart]

Slide 30: Matching of Profiles [chart]

Slide 31: Risks
- Software
  - Loss of expertise in control/framework (scripting)
    - New hire as mitigation
  - Loss of Ed Frank (U. Chicago) – data management
  - Delay of new hire at BNL – ROOT persistency expertise
  - Support questions in tools (e.g., CMT)
- Facilities
  - Slowed ramp-up in personnel and hardware
  - Facility preparation for DC2 implies reduced scale
- Grid
  - Some shift of effort from core areas into grid developments – following the development of an integrated model of computing centers

Slide 32: NSF Proposal
- Covers computing, upgrades, and M+O
- Computing:
  - 3 software FTEs located at CERN (hired through universities)
    - Alleviates a shortfall of 2 FTEs; covers the remaining portion in the out years
  - Physics support person
    - Support of generator interfaces and data challenges
  - Main source of Tier 2 funding
    - Tier 2 hardware
    - Personnel
  - Common team (with U.S. CMS) to debug last-mile networking problems
- Alleviates shortfalls in the program
- N.B. this also frees up DOE funds for the labs, allowing a better Tier 1 ramp and preparation for data challenges.


Slide 34: Summary
- Much progress on many fronts
- Funding profile is still an issue
  - Personnel ramp
  - Ramp of the facility
- Some possible solutions
  - Funds for loan payback are incremental
  - The "Research Program" NSF proposal is necessary
- Progress in international collaboration
  - Software agreements
- The single biggest help from the committee would be a favorable recommendation on the computing request in the NSF proposal
  - In addition, endorsement of the proposed project scope, schedule, budgets, and management plan.

