Computing at Fermilab
Fermilab Onsite Review
Stephen Wolbers
August 7, 2002


Outline
– Introduction
– Run 2 computing and computing facilities
– US CMS Computing
– GRID Computing
– Lattice QCD Computing
– Accelerator Simulations
– Conclusion/Future

Introduction
Computing at Fermilab:
– Serves the scientific program of the laboratory
– Participates in R&D in computing
– Prepares for future activities
– Contributes to nationwide scientific efforts
– Consists of an extremely capable computing facility with a very strong staff

Introduction (2)
Fermilab Computing's emphasis has shifted to include more participation in non-FNAL and non-HEP efforts. Fermilab plans to leverage its computing efforts and to broaden its outlook and involvement with computer scientists, universities, and other laboratories. Fermilab is in an excellent position to do this because of its involvement in Run 2 (CDF and D0), CMS, lattice QCD, SDSS, and other programs, and because of its strong computing facilities.

Run 2 Computing/Facilities
Run 2 computing is a major activity of the Fermilab Computing Division. The Computing Division also provides support and facilities for:
– Fixed-target experiments
– MiniBooNE and MINOS
– SDSS and Auger
– US CMS
– R&D efforts for future experiments and accelerators
– Lattice QCD/theory
– Data acquisition
[Figure: central farms use]

Run 2 Computing/Facilities
CD also supports and maintains facilities for the laboratory as a whole:
– Networks
– Mass storage/tape management
– System administration
– Windows 2000
– PREP
– Operations
– Computer security
– Electronics engineering
– Web
– Help for Beams Division
[Figure: network architecture, showing the on-site core network serving D0, CDF, fixed target, TD, SDSS, CD, PPD, BD, LSS, BSS, and ESHS, with 155 Mb/s off-site links (ESnet, MREN, Ameritech) plus 34 kb/s analog, 128 kb/s ISDN, and 1-2 Mb/s ADSL connections]

Run 2 Computing/Facilities
These activities map into facilities at the laboratory:
– Networks
– Mass storage robotics/tape drives
– Large computing farms
– Databases
– Operations
– Support

Run 2 Computing/Facilities
[Figure only]

Run 2 Computing/Facilities
Run 2 computing is a huge effort. Fermilab CD uses Run 2 joint projects to maximize the effectiveness of CD resources (especially people):
– Enstore storage management system
– Reconstruction farms procurement, management, and job scheduling
– Oracle database management
– C++ code development
– Compilers, debuggers, and code build systems
– ROOT for analysis
– SAM data handling system
– Analysis hardware purchases
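To illustrate the role a data handling system such as SAM plays, here is a minimal Python sketch of a dataset catalog that maps dataset names to files and reports which files still need to be staged from tape. The class, method names, and paths are hypothetical illustrations, not the actual SAM or Enstore interfaces.

```python
# Hypothetical sketch of a SAM-style dataset catalog (not the real SAM API).
# Analysis jobs ask for a dataset by name; the catalog resolves it to a list
# of files and reports which ones must be staged out of mass storage first.

class DatasetCatalog:
    def __init__(self):
        # dataset name -> list of {"lfn": logical file name, "on_disk": bool}
        self._datasets = {}

    def declare_file(self, dataset, lfn, on_disk=False):
        """Register a file as part of a dataset."""
        self._datasets.setdefault(dataset, []).append({"lfn": lfn, "on_disk": on_disk})

    def files(self, dataset):
        """Return all logical file names in a dataset."""
        return [f["lfn"] for f in self._datasets.get(dataset, [])]

    def to_stage(self, dataset):
        """Files that must be copied from tape before a job can run."""
        return [f["lfn"] for f in self._datasets.get(dataset, []) if not f["on_disk"]]


if __name__ == "__main__":
    catalog = DatasetCatalog()
    # file paths below are illustrative only
    catalog.declare_file("d0_raw_aug2002", "/store/d0/raw/run158000_001.raw", on_disk=False)
    catalog.declare_file("d0_raw_aug2002", "/store/d0/raw/run158000_002.raw", on_disk=True)
    print("all files:   ", catalog.files("d0_raw_aug2002"))
    print("need staging:", catalog.to_stage("d0_raw_aug2002"))
```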

D0 computing systems
[Figure: diagram of the D0 computing systems: the data logger, reconstruction farm, and central-analysis system connected to the Enstore Mass Storage System at rates of 100+ MB/s and 400+ MB/s, plus Linux analysis clusters, a Linux build cluster, the "clueD0" cluster of ~100 desktops, and the "d0-test" and "sam-cluster" systems]

Run 2 Computing/Futures
[Figure: projected Run 2 data volume versus time]
Data volume doubles every 2.4 years. Lots of data!
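As a rough illustration of what a 2.4-year doubling time implies, the short Python calculation below projects the growth; the ~500 TB/yr figure quoted on the next slide is used as an assumed starting volume, purely for illustration.

```python
# Project data volume V(t) = V0 * 2**(t / T_double) for a 2.4-year doubling time.
# V0 is an assumed starting point (~one year of data for one experiment).

T_DOUBLE = 2.4          # years per doubling
V0_TB = 500.0           # assumed starting volume in terabytes

for years in (0, 2.4, 4.8, 7.2):
    volume = V0_TB * 2 ** (years / T_DOUBLE)
    print(f"after {years:4.1f} years: ~{volume:6.0f} TB")
# After 7.2 years (three doublings) the volume is 8x the starting point.
```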

Status of Run 2 computing
– Total datasets ~500 TB/yr/experiment (including raw, reconstructed, derived, and simulated data)
– Both experiments are logging data reliably and moving data in and out of mass storage on a scale well beyond Run I capability (several TB/day)
– Both experiments are reconstructing data approximately in real time, with reasonable output for start-up analyses
– Both experiments are providing analysis CPU to users every day
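A quick consistency check of those numbers, as a sketch using only the approximate figures quoted on the slide:

```python
# Average daily data rate implied by ~500 TB/yr per experiment.
TB_PER_YEAR_PER_EXPT = 500.0
DAYS_PER_YEAR = 365.0

per_expt_daily = TB_PER_YEAR_PER_EXPT / DAYS_PER_YEAR
print(f"~{per_expt_daily:.1f} TB/day per experiment on average")
print(f"~{2 * per_expt_daily:.1f} TB/day for CDF and D0 combined")
# Consistent with "several TB/day" through mass storage once reprocessing
# and analysis reads are added on top of the raw logging rate.
```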

Run 2 Computing
D0 has a concept of Regional Analysis Centers (RACs):
– Distributed analysis around the globe (arbitrary imagined distribution)
[Figure: world map of candidate sites, including UO, UA, CINVESTAV, Rice, FSU, LTU, and UTA]

Run 2 Computing
[Figure: Fermilab off-site ESnet traffic]

US CMS Computing
Fermilab is the host lab of U.S. CMS and hosts the project management for the U.S. CMS Software and Computing Program in DOE:
– L1 project manager (Lothar Bauerdick)
– L2 projects: User Facilities (Tier 1 and Tier 2 centers) and Core Application Software

US CMS Computing
Fermilab and US CMS have been extremely successful and influential in CMS software and computing:
– Influence on CMS software: framework, persistency, and the CMS distributed production environment
– Tier 1 and Tier 2 regional centers: Tier 1 at Fermilab, Tier 2 at a small set of universities

US CMS Computing
[Figure: Tier 1/Tier 2 centers and prototypes]

US CMS Computing
The contribution to CMS is substantial. Spring 2002 production:
– 8.4 million events fully simulated, 50% in the US
– 29 TB processed, 14 TB in the US
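A back-of-the-envelope check on those production numbers, using only the figures on the slide (and assuming 1 TB = 10^12 bytes):

```python
# Average processed data size per fully simulated event, Spring 2002 production.
EVENTS = 8.4e6          # fully simulated events
PROCESSED_TB = 29.0     # total data processed, TB
US_TB = 14.0            # processed in the US, TB

bytes_per_event = PROCESSED_TB * 1e12 / EVENTS
print(f"~{bytes_per_event / 1e6:.1f} MB per event on average")
# US share: 50% of the events and a comparable fraction of the processed data.
print(f"US fraction of processed data: {US_TB / PROCESSED_TB:.0%}")
```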

US CMS Computing
The US CMS Software and Computing project has been reviewed and "baselined". Funding has been lower than hoped for, which has led to:
– Slower ramp-up of staff
– Smaller hardware purchases
– Fewer projects being completed
It is hoped that funding will improve so that the necessary people and facilities can be hired and built to properly prepare for CMS data-taking and physics analysis.

GRID Computing
Fermilab is participating in many GRID initiatives:
– PPDG (DOE SciDAC): Particle Physics Data Grid (Fermilab, SLAC, ANL, BNL, JLab, Caltech, UCSD, Wisconsin, SDSC)
– GriPhyN: Grid Physics Network
– iVDGL: International Virtual Data Grid Laboratory
GRID activities have been a very natural outgrowth of the distributed computing of the large collaborations.

GRID Computing
Working on GRID computing projects has many benefits for Fermilab computing:
– Connection with a leading technology in computing
– Connection with computer scientists in universities and labs in the US and around the world
– Interaction and participation in GRID initiatives and software development
– Experiment participation in GRID testbeds and production facilities (D0, CMS, SDSS, CDF)

GRID Computing
SciDAC funding for Fermilab is devoted to:
– D0 GRID work
– CMS production activities
– Authentication issues
– Mass storage access across the GRID
SDSS is working on projects associated with:
– SciDAC
– GriPhyN
– iVDGL

GRID Computing
Fermilab CD is involved in many projects:
– File replication, transfers, and data management
– Scheduling of processing and analysis applications
– Monitoring
– Authentication
– Applications and application support
HEP experiments provide excellent testbeds for GRID middleware:
– HEP experiments have data
– HEP experiments have people all over the world who want to access and analyze that data
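To make "file replication and data management" concrete, here is a minimal Python sketch of a replica catalog that maps a logical file name to physical copies held at different sites. It illustrates the idea only; the class, site names, and URLs are hypothetical and not the API of any particular grid middleware.

```python
# Toy replica catalog: one logical file name (LFN) -> physical replicas at sites.
# Site names and URLs below are illustrative, not real endpoints.

class ReplicaCatalog:
    def __init__(self):
        self._replicas = {}   # lfn -> {site: physical URL}

    def register(self, lfn, site, url):
        """Record that a copy of `lfn` exists at `site` under `url`."""
        self._replicas.setdefault(lfn, {})[site] = url

    def locate(self, lfn, preferred_sites=()):
        """Return a physical URL for `lfn`, preferring the listed sites."""
        copies = self._replicas.get(lfn, {})
        for site in preferred_sites:
            if site in copies:
                return copies[site]
        # fall back to any available replica
        return next(iter(copies.values()), None)


if __name__ == "__main__":
    rc = ReplicaCatalog()
    rc.register("cms/spring2002/evt_000123.root", "FNAL",
                "gridftp://fnal.example/store/evt_000123.root")
    rc.register("cms/spring2002/evt_000123.root", "Caltech",
                "gridftp://caltech.example/store/evt_000123.root")
    print(rc.locate("cms/spring2002/evt_000123.root", preferred_sites=("Caltech",)))
```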

Lattice QCD
SciDAC-funded lattice QCD facility at Fermilab:
– Part of a nationwide coordinated effort to provide facilities and software for lattice QCD calculations (Computational Infrastructure for Lattice Gauge Theory): Fermilab, Jefferson Lab, and Columbia/RIKEN/BNL (QCD On a Chip, QCDOC)
– Fermilab is pursuing PC-based solutions along with software from the MILC collaboration

Lattice QCD
[Figure: improvements from lattice QCD calculations (B. Sugar)]

Lattice QCD
Fermilab's contributions:
– Large PC clusters: acquisition (128 nodes now), operation, and maintenance
– Interconnect technologies: commercial (Myrinet), Ethernet R&D
– Software infrastructure: joint effort with the MILC collaboration
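A rough sketch of why interconnect performance matters for a PC cluster doing lattice QCD: with the 4D lattice split across nodes, every update of a node's local sub-lattice requires exchanging its surface ("halo") with neighboring nodes. The lattice size, node grid, data layout, and network rate below are assumptions for illustration only, not Fermilab's actual configuration.

```python
# Estimate per-node halo-exchange volume for a 4D lattice split across nodes.
# Assumptions (illustrative): 32^3 x 64 global lattice on 128 nodes,
# SU(3) gauge links stored as 3x3 complex doubles (144 bytes per link),
# counting only the gauge-link halo (no fermion fields).

GLOBAL = (32, 32, 32, 64)      # global lattice dimensions (x, y, z, t)
GRID = (2, 2, 4, 8)            # node tiling per dimension (2*2*4*8 = 128 nodes)
BYTES_PER_SITE = 4 * 144       # 4 link matrices per site, 144 bytes each

local = [g // n for g, n in zip(GLOBAL, GRID)]   # local sub-lattice, e.g. 16x16x8x8

halo_sites = 0
for d in range(4):
    face = 1
    for i, length in enumerate(local):
        if i != d:
            face *= length
    halo_sites += 2 * face      # forward and backward face in each split dimension

halo_mb = halo_sites * BYTES_PER_SITE / 1e6
ETHERNET_MBPS = 10.0            # assumed Fast-Ethernet-class sustained rate, MB/s
print(f"local volume {local}, halo sites per exchange: {halo_sites}")
print(f"~{halo_mb:.1f} MB exchanged per node per sweep")
print(f"~{halo_mb / ETHERNET_MBPS:.2f} s per exchange at {ETHERNET_MBPS:.0f} MB/s")
# Communication time on this scale motivates low-latency, high-bandwidth
# interconnects such as Myrinet for tightly coupled lattice calculations.
```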

Lattice QCD
[Figure: the 128-node cluster in the Muon Lab, its complex interconnects, and room to expand]

Lattice QCD
Goals:
– Larger clusters: 128 more nodes within 6 months (0.5 Tflops), substantially larger within years (>6 Tflops)
– Software framework and support
– Facility for research on lattice problems

Accelerator Simulations
Fermilab is collaborating on a SciDAC program of accelerator simulations. Other collaborators:
– BNL, LANL, LBNL, SLAC, SNL
– Stanford, UCLA, UCD, USC, Maryland
The plan is to work on many topics:
– Designing next-generation accelerators
– Optimizing existing accelerators
– Developing new accelerator technologies

Accelerator Simulations
– Incorporate ionization cooling modules in beam dynamics codes
– Develop code with 3D space-charge capabilities for circular accelerators (in collaboration with BD theory)
– Model and study the FNAL Booster; comparison of beam data with simulation is almost unique in this business (in collaboration with the Booster group)
Scaling example: 1 PE, 500 particles, 460 sec; 128 PEs, 500K particles, 4400 sec.
[Figures: ionization cooling channel (beam in/beam out, Px vs X, with and without space charge); longitudinal phase-space evolution of the Booster beam, RMS bunch length (μs) vs turn number, simulation and data at no current, 10 mA, 20 mA, and 42 mA]
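A quick look at what those timing numbers imply for parallel throughput, as a sketch using only the figures quoted on the slide:

```python
# Particle-pushing throughput implied by the quoted benchmark numbers.
runs = {
    "1 PE":    {"particles": 500,     "seconds": 460.0,  "pes": 1},
    "128 PEs": {"particles": 500_000, "seconds": 4400.0, "pes": 128},
}

for name, r in runs.items():
    rate = r["particles"] / r["seconds"]
    print(f"{name}: {rate:8.1f} particles/s total, {rate / r['pes']:6.2f} particles/s per PE")

# Total throughput grows by roughly a factor of 100 on 128 PEs
# (1000x the particles in about 10x the wall-clock time),
# i.e. parallel efficiency in the neighborhood of 80%.
```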

Summary
Fermilab Computing is in an excellent position to support the physics program of the laboratory and to participate in and foster partnerships with our collaborators, computer scientists, and other institutions. By participating in these programs and building working relationships, we are better positioned to extend our capabilities in:
– Distributed and GRID computing (important for most of our collaborations)
– Lattice QCD
– Beam simulations