Workfest Goals – Develop the Tools for CDR Simulations (HDFast, HDGEANT, Geometry Definitions, Remote Access); Education of the Rest of the Collaboration; Needs for the CDR

Workfest Goals
- Develop the tools for CDR simulations: HDFast, HDGEANT, geometry definitions, remote access
- Education of the rest of the collaboration
- Needs for the CDR
- Data model
- Other items: software web page, distributed meetings
- Long-term simulation effort – goals & design

Workfest Minutes
Speaker – Richard Jones
Subject – Status of GEANT
10 months until the CDR is due.
Software: Design → Create → Publish → Monitor development

HDFAST
- MCFAST: fairly mature (monitoring development); reasonably stable at this point.
- GEANT 3: designed and in prototype; how to compare with MCFAST?
- GEANT 4: foreseen; probably will have a compatible geometry definition (GEANT4 might change).
- Event generators: cwrap, genR8, weight
- Facilities: JLab, Regina, IU, FSU and UConn
- Expected simulation projects – PWA, detector optimizations, background studies

HDFAST – I/O Summary
- Genr8 → ascii → (ascii2stdhep) → stdhep
- Command & geometry → HDFast → Root (RDT) or stdhep → ascii
- Using Root (Root Data Tree)
- Root available at

UConn Cluster
- 36 processors, Pentium 450/800
- Rackmount (2U cases), dual-CPU nodes with 512 MByte each
- [Diagram: dual-CPU nodes on a switch with PVFS and RAID disks; link speeds of 80 Mb/s and 35 Mb/s]

UConn Computing Cluster [network diagram: cluster nodes and their Internet connection]

Mantrid – Indiana University
- Processing/storage model: 32 processors in 16 nodes
- Disks (1.44 TB total)
- [Diagram: /home/HallD served from Node 00; /data0/HallD and /data1/HallD data areas]
- Has slides
- Has a prototype web-based access system for event generation (cwrap): du/~teige/tests/cwrap_request.html

U Regina – 50 Alpha Cluster
- 50 nodes, PBS batch system
- 500 MHz Alphas, 9 GByte disk per node
- Note: a 500 MHz Alpha is roughly comparable to a 1 GHz Pentium III.
- Has slides; see: brash/openspace.ps, brash/connectivity.ps

Second Day: To Do
- Running HDFAST and HDGEANT. Paul gave a demo.
- Decide on data model – how information is moved from package to package (Data Model Working Group).
- Web site (Working Group)
- Features required to integrate clusters

Hall D Computing Web Site
Goals:
- Everyone can contribute without excessive site management.
- XML-based description of documents.
- Automatic searching and organization tools.
- Still need overview documents.

Contents: Hall D Website
- Everyone maintains their own site.
- Everyone has a summary page and a link to Hall D computing resources and searching tools.
- Links and searches will be managed automatically.
- Everyone contributes documents to the Hall D computing archive describing their computing activities.
- Each document has an XML metadata description of what it contains.

How To: Hall D Website
- Within your website, create a single XML metadata document describing all of your documents.
- Let me know where it is.
- Publish the DTD so local sites can be validated ( dtds/website.dtd); a hedged sketch of such a DTD follows below.
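
As an illustration of what such a DTD might declare, here is a minimal hedged sketch; every element name below (website, document, title, category, author, date, keywords, abstract, url) is an assumption for illustration, since the real structure is whatever the published website.dtd defines.

```xml
<!-- Hypothetical sketch of website.dtd; the real element names and content
     models are those of the published DTD, not these. -->
<!ELEMENT website  (document+)>
<!ELEMENT document (title, category, author, date, keywords, abstract, url?)>
<!ELEMENT title    (#PCDATA)>
<!ELEMENT category (#PCDATA)>
<!ELEMENT author   (#PCDATA)>
<!ELEMENT date     (#PCDATA)>
<!ELEMENT keywords (#PCDATA)>
<!ELEMENT abstract (#PCDATA)>
<!ELEMENT url      (#PCDATA)>
```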

How To: Hall D Website (example metadata entry)
- Title: Hall D Computing Design Page
- Category: Design
- Author: Larry Dennis
- Date: May 21, 2001
- Keywords: grid computing, acquisition, analysis, simulations
- Abstract: This is the final word on Hall D computing.
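
Written out as XML, the example entry above might look roughly like this; the tag names follow the hypothetical DTD sketch from the previous slide and are not the collaboration's adopted schema, and the url value is invented for illustration.

```xml
<?xml version="1.0"?>
<!DOCTYPE website SYSTEM "website.dtd">
<!-- Hypothetical rendering of the example metadata entry; tag names and the
     url value are illustrative only. -->
<website>
  <document>
    <title>Hall D Computing Design Page</title>
    <category>Design</category>
    <author>Larry Dennis</author>
    <date>May 21, 2001</date>
    <keywords>grid computing, acquisition, analysis, simulations</keywords>
    <abstract>This is the final word on Hall D computing.</abstract>
    <url>computing/design.html</url>
  </document>
</website>
```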

Geant I/O Package
[Diagram: binary streams for events in and events out; control in; stderr/stdout log; metadata]

GEANT 3 – Richard's Plan
- Produce a standard geometry (see ); an illustrative sketch follows below.
- Use the geometry for Monte Carlo, event display, and as a logical geometry model for use in analysis.
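
Purely as an illustrative sketch (not the standard geometry format being developed), a shared XML geometry description could name materials, volumes and placements so that Monte Carlo, the event display and analysis all read from one source; every element, attribute and value below is invented.

```xml
<!-- Hypothetical geometry fragment; element names, attributes and values are
     invented for illustration and do not reflect the standard geometry. -->
<geometry>
  <material name="straw_gas" composition="Ar 50%, CO2 50%"/>
  <volume name="BarrelDC" material="straw_gas" shape="tube"
          rmin="10.0" rmax="60.0" length="200.0"/>  <!-- assumed units: cm -->
  <placement volume="BarrelDC" x="0.0" y="0.0" z="65.0"/>
</geometry>
```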

Monte Carlo Data Model – Input
- Nested element structure: <event> <interaction> <vertex> … <particle> … (sketched below)
- Conceptual model / logical model / physical model → still open
- Start with an I/O API. Some others exist.
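
A minimal sketch of that nesting as an XML document; only the element order (event, interaction, vertex, particle) comes from the slide, while all attribute names and values are illustrative, since the physical model was explicitly still open.

```xml
<!-- Hypothetical input event; only the nesting event > interaction > vertex >
     particle comes from the slide, attribute names and values are illustrative. -->
<event run="1" number="42">
  <interaction type="photoproduction">
    <vertex x="0.0" y="0.0" z="65.0">
      <particle id="pi+" px="0.12" py="-0.05" pz="3.40" E="3.41"/>
      <particle id="pi-" px="-0.12" py="0.05" pz="4.10" E="4.11"/>
    </vertex>
  </interaction>
</event>
```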

Monte Carlo Data Model – Output
- Nested element structure: … <detector> <BarrelDC> <ring> <sector> <strawhit> <eloss> <time> … (sketched below)
- Conceptual model / logical model / physical model → still open
- Start with an I/O API.
- Stages: Monte Carlo event generator → interactions; simulation → "real" data; DAQ → digitized data; calibration translator → hits.
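
Correspondingly, a hedged sketch of the hit-level output; only the element nesting comes from the slide, and the ids, values and units are illustrative.

```xml
<!-- Hypothetical output hits; only the nesting detector > BarrelDC > ring >
     sector > strawhit > eloss/time comes from the slide. -->
<detector>
  <BarrelDC>
    <ring id="3">
      <sector id="17">
        <strawhit>
          <eloss>0.0021</eloss>  <!-- energy loss, assumed GeV -->
          <time>145.2</time>     <!-- drift time, assumed ns -->
        </strawhit>
      </sector>
    </ring>
  </BarrelDC>
</detector>
```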

DOE/NSF Initiatives & Resources
- Groups working on software: CMU, UConn, U Regina, FSU, IU, FIU, JLab (Watson, Bird, Heyes, Hall D), RPI, ODU, Glasgow
- Analysis chain: raw events → hits → tracks/clusters → particles. How much of this can be automated?

Larry – Things To Do
- Give everyone information about ITR and SciDAC
- Get web site started
- Design (Elliott, Scott)
- Prototype grid nodes (Ian, Elliott)

Richard – Things To Do
- Input interface to GEANT from event generators, XML input
- Finish geometry prototype
- Prototype output interface for GEANT
- Document and publish the above

Scott – Things To Do
- Web access for Mantrid
- Interfaces from generators to/from XML

Paul – Things To Do
- Maintain HDFast
- Teach people how to use HDFAST

Greg – Things To Do
- DTD for event structure (a hedged sketch follows below)
- DTD for cwrap input
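
A hedged DTD sketch for such an event structure, mirroring the illustrative input example given with the data-model slide; it is not the DTD Greg was asked to produce and may differ from it entirely.

```xml
<!-- Hypothetical event-structure DTD; names mirror the illustrative input
     example above, not Greg's actual DTD. -->
<!ELEMENT event       (interaction+)>
<!ATTLIST event       run CDATA #REQUIRED number CDATA #REQUIRED>
<!ELEMENT interaction (vertex+)>
<!ATTLIST interaction type CDATA #IMPLIED>
<!ELEMENT vertex      (particle+)>
<!ATTLIST vertex      x CDATA #REQUIRED y CDATA #REQUIRED z CDATA #REQUIRED>
<!ELEMENT particle    EMPTY>
<!ATTLIST particle    id CDATA #REQUIRED px CDATA #REQUIRED py CDATA #REQUIRED
                      pz CDATA #REQUIRED E CDATA #REQUIRED>
```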

Ed – Things To Do
- Full OSF support for genr8, HDFAST, GEANT3, translators
- Web interface for the UR farm
- Barrel calorimeter studies with GEANT3

Elliott – Things To Do
- Explore CODA event format
- Assist Greg with event DTD
- Explore GEANT4
- Hall D computing design
- Prototype for grid nodes
- Remote collaboration tools

Design Focus
- Get the job done.
- Minimize the effort required to perform computing: fewer physicists, lower development costs, lower hardware costs.
- Keep it simple.
- Provide for ubiquitous access and participation – improve participation in computing.

Goals for the Computing Environment
1. The experiment must be easy to conduct (coded software people → two-person rule).
2. Everyone can participate in solving experimental problems – no matter where they are located.
3. Offline analysis can more than keep up with the online acquisition.
4. Simulations can more than keep up with the online acquisition.
5. Production of tracks/clusters from raw data and simulations can be planned, conducted, monitored, validated and used by a group.
6. Production of tracks/clusters from raw data and simulations can be conducted automatically with group monitoring.
7. Subsequent analysis can be done automatically if individuals so choose.

Goal #1: Easy to Operate
- 100 MB/s raw data. Need an estimate of the designed good-event rate to set online trigger performance.
- Automated system monitoring
- Automated slow controls
- Automated data acquisition
- Automated online farm
- Collaborative environment for access to experts
- Integrated problem-solving database linking current to past problems and solutions
- Well-defined procedures
- Good training procedures

Goal #2: Ubiquitous Expert Participation
- Online system information available from the web.
- Collaborative environment for working with the online team.
- Experts can control systems from elsewhere when the data acquisition team allows it or the DAQ is inactive.

Goal #3: Concurrent Offline Production
Offline production (raw events → tracks/clusters) can be completed in the same length of time as is required for data taking (including detector and accelerator down time). This includes:
- Calibration overhead.
- Multiple passes through the data (average of 2).
- Evaluation of results.
- Dissemination of results.

Goal #4: Concurrent Simulations
Simulations can be completed in the same length of time as is required for data taking (including detector and accelerator down time). This includes:
- Simulation planning.
- Systematic studies (up to 5-10 times as much data as is required for experimental measurements).
- Production processing of simulation results.
- Dissemination of results.

Goal #5: Collaborative Computing
- Production processing and simulations can be planned by a group.
- Multiple people can conduct, validate, monitor, evaluate and use produced data and simulations without unnecessary duplication.
- A single individual or a large group can manage appropriate-scale tasks effectively.

Goal #6: Automated Computing
- Production processing and simulations can be conducted automatically without intervention.
- Progress is reported automatically.
- Quality checking can be performed automatically.
- Errors in automatic processing are automatically flagged.

Goal #7: Extensibility
- Subsequent analysis steps can be done automatically if individuals so choose.
- The computational management system can be extended to include any Hall D computing tasks.