MiniBooNE Computing
Description: Support MiniBooNE online and offline computing by coordinating the use of, and occasionally managing, CD resources.
Participants:
● P. Spentzouris (15%), CD, CD liaison
● Steve Brice (50%), PPD, offline
● Chris Green (100%), user, sysadmin & general computing issues
Timeline: end of run (2005 ???); end of run + 2 years (estimated) for offline-related issues.

[Computing-model diagram: FNALU Linux workstations plus Condor; AFS space; PNFS space; CD CVS server (NFS) hosting the code repository; DAQ; Enstore; project area (210 GB) holding products & libraries (ups/upd) and code, data, and scratch space; code development & binaries; analysis; calibration constants; distribution & setup scripts; FARMS; nearline/online monitoring; farm allocation of 100 nodes x 100% x 1 week/month.]

MiniBooNE Computing Summary: In the past, most of the effort went into defining and implementing the MiniBooNE computing model and utilizing available resources.
Status:
➔ Online/nearline: implemented & fully functional
➔ Offline: still some development (algorithms, calibrations); most important issues:
  – Monte Carlo tuning, which requires farm running
  – Data cache implementation

[νμ disappearance experiment overview diagram: all events pass the νμ cuts to give νμ candidates from data, compared with νμ candidates from MC built from the beam MC, cross-section models, and detector MC, i.e. νμ MC events x νμ flux x νμ cross-sections. Group responsibilities: Operations, Algorithms, Calibration/Det MC, Beam, Cross-sections, Exotics. FARM usage -> x 2.]

[νe counting experiment overview diagram: all events pass the νe cuts to give νe candidates from data (including νμ mis-ID and intrinsic νe backgrounds), compared with the detector-MC prediction built from the beam MC and cross-section models, i.e. νμ and νe MC events x νμ/νe fluxes x νμ/νe cross-sections. Group responsibilities: Operations, Algorithms, Calibration/Det MC, Beam, Cross-sections, Exotics. FARM usage -> x 2.]
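Both the νμ disappearance and νe counting analyses follow the same cut-and-count pattern: loop over reconstructed events, apply the selection cuts, and compare the surviving data candidates with the MC prediction scaled by flux and cross sections. The Root/C++ sketch below only illustrates that pattern; the tree name, branch names, and cut values are hypothetical placeholders, not the experiment's actual selection.

    // count_candidates.C -- minimal cut-and-count sketch (ROOT/C++).
    // Tree name, branch names, and cut values are hypothetical placeholders.
    #include "TFile.h"
    #include "TTree.h"
    #include <iostream>

    void count_candidates(const char* fname = "reco_events.root")
    {
      TFile* f = TFile::Open(fname);
      if (!f || f->IsZombie()) { std::cerr << "cannot open " << fname << "\n"; return; }

      TTree* t = static_cast<TTree*>(f->Get("events"));   // hypothetical tree name
      if (!t) { std::cerr << "no 'events' tree\n"; return; }

      float fitLikelihoodRatio = 0, recoEnergy = 0;        // hypothetical branches
      int   tankHits = 0, vetoHits = 0;
      t->SetBranchAddress("fitLikelihoodRatio", &fitLikelihoodRatio);
      t->SetBranchAddress("recoEnergy",         &recoEnergy);
      t->SetBranchAddress("tankHits",           &tankHits);
      t->SetBranchAddress("vetoHits",           &vetoHits);

      Long64_t nCand = 0;
      const Long64_t nEntries = t->GetEntries();
      for (Long64_t i = 0; i < nEntries; ++i) {
        t->GetEntry(i);
        // Illustrative cuts only: low veto activity, enough tank hits,
        // particle-ID likelihood ratio and reconstructed energy in a window.
        if (vetoHits < 6 && tankHits > 200 &&
            fitLikelihoodRatio > 1.0 &&
            recoEnergy > 0.2 && recoEnergy < 1.5)
          ++nCand;
      }
      std::cout << nCand << " candidates out of " << nEntries << " events\n";
      f->Close();
    }

Such a macro would be run over a reconstructed-event file (e.g. root -l -b -q count_candidates.C); the MC candidates would be counted the same way and then weighted by the flux and cross-section factors shown in the diagrams.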

System components (status)
● AFS project area -experiment- (OK)
● Enstore & usage monitoring -experiment- (OK)
  – will need more tapes than anticipated
  – disk cache system
● CVS repository -experiment- (OK)
● FARMS (100 nodes) -CD- (OK) ✗ usage will go up x ~2
● FNALU -CD- (not utilized for analysis)
● Local workstations (~50 CPUs/Condor) -experiment- (OK)
● Networking: detector, control room, WH10W -CD- (OK); work related to data servers
● DAQ -experiment, CD (hardware)- (OK)
● Online/nearline -experiment, CD-
  – new online machines (CD)
  – additional disk, 320 GB (experiment)
● Support/maintenance of the above: no issues

Software
● Offline -experiment, CD- (Phase 1 OK)
  – mixed language (f77, C++), Root, mySQL
  – Controls, Event Model, Design -CD-
  – Pattern Recognition, Reconstruction -experiment-
  ➔ will need help with "data set"-level reconstruction and consulting for bug fixes, etc.
● Online/nearline -experiment- (OK)
  – re-uses most of the offline components, but not Root (Tcl/Tk and Hbook)
● Event display -CD- (separate project)
● DAQ (real-time Linux) -experiment- (OK)
  – on "friendly terms": no CD commitment, consulting agreement
● Simulation: G3 (detector) and new G4 (beam) (OK); the real issue is production and cross-section models
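Because the offline code keeps calibration constants in mySQL and the framework is Root-based, one natural access path is Root's SQL classes (TSQLServer/TSQLResult/TSQLRow). The sketch below is a minimal illustration under that assumption; the server URL, credentials, table, and column names are invented placeholders, not the experiment's actual database schema.

    // read_calib.C -- minimal Root/mySQL calibration lookup sketch.
    // Server URL, credentials, table and column names are hypothetical.
    #include "TSQLServer.h"
    #include "TSQLResult.h"
    #include "TSQLRow.h"
    #include "TString.h"
    #include <iostream>

    void read_calib(int runNumber = 1000)
    {
      // TSQLServer::Connect takes a "mysql://host/database" URL.
      TSQLServer* db = TSQLServer::Connect("mysql://dbserver.fnal.gov/minicalib",
                                           "reader", "password");
      if (!db) { std::cerr << "connection failed\n"; return; }

      TString query;
      query.Form("SELECT channel, gain, pedestal FROM pmt_calib WHERE run = %d",
                 runNumber);
      TSQLResult* res = db->Query(query.Data());
      if (res) {
        while (TSQLRow* row = res->Next()) {
          std::cout << "channel "   << row->GetField(0)
                    << "  gain "     << row->GetField(1)
                    << "  pedestal " << row->GetField(2) << "\n";
          delete row;
        }
        delete res;
      }
      delete db;
    }

The same query could of course be issued from the mysql command-line client; the point is only that Root-based offline jobs can pull constants directly at run time.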

Disk Cache
● Need both read and write cache
● Would like to have (most of) our data set on disk
● 1 TB dCache is not safe for write cache. Options:
  – Buy hardware, add it to the existing cache system
  – Buy hardware, configure & administer it ourselves
● The experiment selected the self-configure option
  – Up to 3 TB of disk, up to 10 servers; Chris Green negotiated installation at Feynman
  – One server exists; move it from the 10th floor to FCC
  – Hardware ordered or existing (network)
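Once the cache is in place, the Root-based offline code can read files through a dCache dcap door instead of staging them to local disk first. The sketch below assumes a ROOT build with the dCache plugin (TDCacheFile) available; the door host, port, and pnfs path are invented placeholders.

    // open_dcache.C -- open a file through a dCache dcap door from ROOT.
    // Door host, port, and pnfs path are hypothetical; requires ROOT's
    // dCache plugin (TDCacheFile) to be available.
    #include "TFile.h"
    #include "TTree.h"
    #include <iostream>

    void open_dcache()
    {
      // TFile::Open dispatches "dcap://" URLs to TDCacheFile when the plugin exists.
      TFile* f = TFile::Open(
        "dcap://door.fnal.gov:22125/pnfs/fnal.gov/usr/miniboone/data/run1000.root");
      if (!f || f->IsZombie()) {
        std::cerr << "could not open file through dcap\n";
        return;
      }
      if (TTree* t = static_cast<TTree*>(f->Get("events")))   // hypothetical tree name
        std::cout << "events tree has " << t->GetEntries() << " entries\n";
      f->Close();
    }

Reads are the easy case; for writes the hardware options listed above still apply, since the shared 1 TB dCache is not considered safe as a write cache.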

Tape usage: ~20 tapes/week, up from initial estimates due to new triggers