Run II Computing and Software
Nick Hadley, DØ Prague Workshop, 23 September 1999

Slide 1: Run II Computing and Software
- Overview
- Organization
- Status (brief)
- Conclusion

Slide 2: Computing Lehman Baseline
[Block diagram of the baseline computing systems; costs as shown on the slide:]
- Trigger systems: $6.0 M
- DAQ system: $0.6 M
- Reconstruction farms: $1.6 M
- Central analysis systems: $2.8 M
- Mass storage: $2.9 M
- Workgroup servers and desktops
- Remote computing (Monte Carlo, ...): $0.8 M

Slide 3: Software
[Overview diagram of the software scope; components as labeled on the slide:]
- Trigger controls, trigger algorithms
- Controls & monitoring: run control, data logging, subdetector tasks, accelerator interface
- Calibration and hardware databases, production databases
- Farm control software
- Offline reconstruction
- Analysis tools, analysis software
- MSS control, persistency tools
- Remote systems, Monte Carlos
- Development environment
- Physicists' analyses

Slide 4: History (Run I Computing at DØ)
- Data collected at 2 Hz with very high efficiency
- Reconstruction kept up with collection; time lag of ~2 weeks for calibration
- Successes:
  - Robustness and timeliness of reconstruction
  - Easy access to a microDST data set of all events
- Problems/bottlenecks:
  - Tape access (8mm drives, no significant HSM system)
  - Network access
  - Reconstruction system difficult to test and verify

Slide 5: DØ Run II Data Set
Run II parameters for DØ (LHC comparison in parentheses):
- Trigger rate: 50 Hz (LHC / 2)
- Raw data event size: 250 kB (LHC / 4)
- Data collection: 600 M events/yr (LHC / 2)
- Summary event size: 150 kB (LHC x 2)
- Physics summary event size: 10 kB (LHC ??)
- Total dataset size: 300 TB/yr (LHC / 3)
Bottom line: the computing project is roughly O(Run I x 20), O(LHC / 3-4), and must be accomplished with the resources available in 2000, not 2005.
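A back-of-the-envelope check of the quoted total, assuming one copy of each data tier per event and ignoring reprocessing passes:

  600\times10^{6}\ \mathrm{evts/yr}\times(250+150+10)\ \mathrm{kB/evt}\approx 2.5\times10^{14}\ \mathrm{B/yr}\approx 250\ \mathrm{TB/yr},

which, allowing for reprocessing and overheads, is consistent with the quoted ~300 TB/yr.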

Slide 6: Run II Computing and Software Plan
- Original plan: January 1997
- Update to plan: January 1999 (see DØatwork -> Computing -> Reviews & Documentation)
- Key goals:
  - Maintainability: separately testable modules
  - Flexibility: replaceable packages (e.g., the implementation of persistency, the graphics package, etc.)
- Key decisions:
  - OOAD / C++
  - Interface to persistency mechanism (first implementations: DSPACK, EVPACK)
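To illustrate the "replaceable persistency" decision: user code talks only to an abstract interface, and concrete backends (DSPACK, EVPACK, later a database) can be swapped without touching reconstruction or analysis code. The sketch below is purely illustrative; the class and method names are hypothetical and are not the actual dØom API.

```cpp
// Illustrative sketch only: hypothetical names, not the real dØom interface.
#include <memory>
#include <string>

class Event;  // in-memory event model (edm) object

// Abstract persistency interface: user code depends only on this.
class PersistencyService {
public:
  virtual ~PersistencyService() = default;
  virtual bool read(Event& evt) = 0;         // fill the next event from storage
  virtual bool write(const Event& evt) = 0;  // append an event to storage
};

// One concrete backend; an EvpackBackend (or a DB-based one) would
// implement the same interface and be selected at configuration time.
class DspackBackend : public PersistencyService {
public:
  explicit DspackBackend(const std::string& file) : file_(file) {}
  bool read(Event& evt) override  { /* unpack next record from file_ */ return false; }
  bool write(const Event& evt) override { /* pack evt into file_ */ return true; }
private:
  std::string file_;
};

// Reconstruction/analysis code is written against the interface only:
void processAll(PersistencyService& io);
```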

Slide 7: R2CSP Organization
[Organization chart; Computing Planning Board, tightly connected to the Joint Projects:]
- Co-leaders: Hadley, Merritt
- Infrastructure (Greenlee, Li, Prosper): software tools (edm, dØom, framework, RCP, configuration management, graphics)
- Algorithms (Protopopescu, Womersley): subdetectors, calib/align, Level 3, global tracking, vertexing, EM ID, muon ID, tau ID, jets/missing Et
- Monte Carlo (Klima): generators, Geant sim, C++ wrapper, parametrized MC
- Production / Data Access (Diesburg, Lueking): procurements (farms, mass storage, analysis systems), software (farms, ENSTORE, LSF, etc.), databases, data handling (SAM)
- Online (Fuess, Slattery): hardware controls, DAQ (primary & secondary), event monitoring, control room applications

Slide 8: Joint Offline Projects
- ZOOM: C++ class libraries
- RIP: Reconstruction Input Pipeline (writes data into the robot in FCC)
- Support databases: using ORACLE; license negotiated through CD
- Configuration management: using SoftRelTools with a DØ interface (ctest)
- Hardware projects:
  - Mass storage: new robot purchased
  - Networking: new single-mode fiber to DØ
  - Farms: in progress (prototype purchase)
  - Physics analysis: 1/3 purchased, online soon, 5 TB disk
Note: so far, costed at or under the DMNAG estimates.

Slide 9: Joint Offline Projects (cont'd)
- Visualization: making use of Open Inventor licenses
- Physics analysis: ROOT chosen, after evaluations of candidate products
- Storage management: decision to use ENSTORE, linked to SAM
- Data access: SAM Working Group, charged with producing a proof-of-principle demonstration by Oct.: done!

Slide 10: Computing and Software Plan: Decision Timeline
- 1995: Use C++; write data using DSPACK
- 1996: Begin project; CPB structure; begin to understand a large C++ system; use DØOM to isolate I/O from user code
- 1997: (Jan) 60-page plan submitted; (Jul) use KAI & VC++ compilers; (Oct) use SAM; (Dec) use ZOOM products
- 1998: (Jan) use ORACLE for DB needs; (Mar, DAM review) use modified exclusive streaming; (May) use ENSTORE instead of HPSS, use Open Inventor graphics; (Aug, GCM review) use a tri-level analysis system, PCs on the desktop; (Sept) physics analysis software: ROOT

Slide 11: Status in a Nutshell
- SAM data handling: Version 0 released in KITS; used on the farm prototype; ready for general users on d02ka
- Reconstruction program: works from MC digi hits to produce central tracks, cal & PS clusters, muon system hits, and first versions of physics objects
- Online system: prototype DAQ working; able to log data (cal, muon); interface to control HV, etc.; framework for data monitoring; Level 3 readout software
- Monte Carlo: full GEANT 3.21 simulation of the detector; multiple-interaction capability; ~90K events available (background, signal); fast MC under development
- Analysis program: HBOOK ntuples from MCC1 available, MCC2 by 1/00; ROOT is the new Run II tool (can analyze old & new formats)
- Level 3 filters: tools & filters under development; excellent tutorial on the web
- Databases: Oracle Event/File DB in production, used by SAM; other DBs under development
- Event display: full detector due in Oct; displays for algorithm development available now
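Since ROOT is named above as the new Run II analysis tool, a minimal ROOT macro of the kind a user would run over an ntuple is sketched below. The file, tree, and branch names ("mcc1.root", "Events", "met") are hypothetical placeholders, not actual DØ data products.

```cpp
// minimal_plot.C -- illustrative ROOT macro; file/tree/branch names are made up.
// Run with:  root -l minimal_plot.C
#include "TFile.h"
#include "TTree.h"
#include "TH1F.h"
#include "TCanvas.h"

void minimal_plot() {
  TFile* f = TFile::Open("mcc1.root");          // hypothetical ntuple file
  if (!f || f->IsZombie()) return;
  TTree* t = (TTree*)f->Get("Events");          // hypothetical tree name
  if (!t) return;

  TCanvas c("c", "Missing ET");
  TH1F h("h_met", "Missing E_{T};E_{T} [GeV];events", 50, 0., 100.);
  t->Draw("met >> h_met");                      // fill the histogram from branch 'met'
  c.SaveAs("met.gif");
}
```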

Slide 12: Status in a Nutshell (cont'd)
- ADIC tape robot: operational; 750 TB capacity; in use for SAM & ENSTORE tests and MCC99-1
- Central analysis system: now d02ka + ~500 GB disk; coming: 1/3 of the final system, available to users by 22-Sep (64 SGI R12000s, Gigabit Ethernet interfaces, 5 TB disk growing to 30 TB)
- Online systems: DEC Alpha hosts, Level 3 farm, Linux & NT PCs for controls & monitoring, EXAMINE farm
- Networking upgrades: FC rewiring of DAB & PKs, 100 Mb to desktops, Gb uplinks to the main switch; done by Sept
- Desktops at DØ: NT or Linux PCs; Linux support person hired; operations strategy being worked out
- Project servers? NT, Linux; possible roles; model for offsite? specification; tax??
- Database servers: 2 Suns here, in use for the Event/File DB
- Farms: prototype (4 nodes) for MCC99-1; Fermilab purchase this year (50 nodes); offsite proposals for MC

Slide 13: Status: Infrastructure
- Packages to support batch reconstruction are in place and in use:
  - edm: stable; a few changes coordinated with dØom
  - rcp: flat-file version stable
  - dØom: DSPACK, EVPACK done; DB design work
  - framework: batch, single-threaded interactive; event displays exist for many detectors
- Still to do: multi-threaded interactive framework, DB version of RCP, integrated event display
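As a rough illustration of the batch framework + edm pattern described above (separately testable packages driven by the framework, event by event), here is a schematic module and event loop. The names and hook signatures are invented for illustration and do not match the real DØ framework classes.

```cpp
// Schematic only: hypothetical framework hooks, not the actual DØ framework API.
#include <vector>

struct Event { /* edm event contents would live here */ };

// A framework "package": configured once, then called for every event.
class Package {
public:
  virtual ~Package() = default;
  virtual void beginJob() {}
  virtual void processEvent(Event& evt) = 0;
  virtual void endJob() {}
};

class ExampleClusterFinder : public Package {
public:
  void processEvent(Event& evt) override {
    // read hits from evt, build clusters, attach them back to evt
  }
};

// The batch framework owns an ordered list of packages and loops over events.
void runBatch(std::vector<Package*>& packages, std::vector<Event>& events) {
  for (auto* p : packages) p->beginJob();
  for (auto& evt : events)
    for (auto* p : packages) p->processEvent(evt);
  for (auto* p : packages) p->endJob();
}
```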

Slide 14: Status: Infrastructure (cont'd)
- Configuration management status:
  - Golden releases ~every three weeks
  - Latest release on IRIX and Linux (~200 packages); OSF1 and NT (40 packages now)
  - Switching to CTB2, SRT2 (needed for NT)
  - Still with the KAI compiler & VC++; debugger: TotalView is a big improvement
- Releases: set a new goal of one-week releases; still working on t59
- Established subsystem coordinators
- t60 will use SRT2 (and this will be the only change!)

Slide 15: Status: Monte Carlo
- Full GEANT simulation
- Phase 1 of MCC99 generated 87K events
- Multiple interactions and better secondary tracing now implemented
- Production release of DØGSTAR/simpp certified for MCC99-2
- Monte Carlo presentations to the Run II Physics meeting: hope for additional users/feedback/help
- Fast MC and trigger simulation in progress

Slide 16: Status: Algorithms
- RECO program including first versions of all particle ID exists
  - First production version: March 99
  - Algorithms group proceeding toward the Oct. 30 production release
- Level 3 reviewed in April 99
  - Software infrastructure in good shape; more effort needed on releases
  - All basic tools being designed and coded
  - Number of people increasing

Slide 17: Status: Production and Data Access
- SAM and ENSTORE prototypes exist and have been used to access MCC data with production RECO; numerous problems solved along the way
- Good progress on framework integration for the Oct. 1 SAM release
- First 1/3 of the central analysis server has arrived, available for users soon; final system: 90K MIPS, 30 TB of disk
- New DØ tape robot (750 TB) installed and in use
- Serial media choice by early fall

Slide 18: Status: Online
- Version 0 of COOR and other infrastructure in place
- Two DEC Alpha hosts plus some PCs
- Beginning to use the system for commissioning tasks; milestones driven by users' needs
- Muon and calorimeter systems successfully read out through the complete chain

Slide 19: Von Rüden V Closeout, June 99: Joint Projects
- Procurement: spending on track; well under control
- Management & Organization: move into understanding the support phase
- Infrastructure: going well
- Data Access:
  - Networking: continue good collaboration with CD
  - RIP: stick to schedule
  - Serial media: work out a backup strategy for Mammoth II's
  - ENSTORE (DØ): very pleased with excellent management
  - SAM (DØ): pleased with further SAM progress; develop plans to further exercise the system with users
- Production System:
  - Procurements (robot): well done at the last review
  - Central analysis system: well done
  - Farms: another success

Slide 20: Von Rüden V Closeout (cont'd)
- DØ:
  - Ensure further attention to Level 3
  - Global Tracking should work to recruit more people, especially with prior experience in tracking
  - Start working on calibration & alignment
- Overall:
  - We are "very much on track"
  - The committee feels "rather confident" of a successful completion (and they request us not to embarrass them by proving them wrong)

Slide 21: Schedule
- MCC99-2:
  - Monte Carlo production release: 9/01/99
  - Monte Carlo event generation complete: 11/15/99
  - RECO 3rd production release: 10/31/99
  - Farm production for MCC99-2: 11/30/99
- Level 3:
  - Many skeleton physics tools: 7/31/99
  - 1st production release, working physics tools: 10/31/99
  - Physics tools with raw data: 2/28/00
- Online system:
  - Data logging rate test to FCC: 9/01/99
  - Integrated DAQ run: 11/22/99
- MCC99-2 analysis (SAM, DB, ROOT): 12/31/99
- MCC99-2 workshop for results: ~2/28/00
- Detector ready for data: 1/23/00
We still have a great deal to do: more time but no extra people. Set priorities and keep focus.

Slide 22: Summary of Computing and Software
- Integration of systems is beginning!
- Good reviews for the software infrastructure and hardware purchasing plans
- Reconstruction and Monte Carlo are moving to V2
- We are well positioned but still need manpower; hardware needs may make it difficult to add people at FNAL, so contributions from those based elsewhere are very important