CCJ Usage for Belle Monte Carlo production and analysis
K. Hasuko (RIKEN), CCJ User's Meeting, September 26, 2003

Presentation transcript:

Slide 1: CCJ Usage for Belle Monte Carlo production and analysis
K. Hasuko (RIKEN), CCJ User's Meeting, September 26, 2003
– CPU time: 170K hours (Aug. 1, 2002 ~ Aug. 22, 2003)
    MC production: 145K hours
    Analysis: 25K hours
– 1 TB HDD for work space
    to keep generated MC (mDST) files
    to keep histogram files for analysis
– 1 Linux box for DB server

Slide 2: Monte Carlo Production
– RIKEN duty: 10 fb⁻¹ equivalent (= 120M events) / year
– Job procedure
    Copy input files (generator) from KEK to CCJ
    Submit jobs
    Check output files (mDST)
    Send mDST files to KEK
– Typical MC job
    I/O file size: 10 MB in, 200 MB out
    CPU usage: 3.5 sec/event × 6k events = 5.8 hours
    Max memory 360 MB; max swap 1 GB
    Sending output files: scp, 0.7 MB/sec → ~285 sec
– belle_sim queue
    30–50 CPUs
    ~200 jobs/day (1M events/day) → ~40 GB output/day
– Submission schedule
    Depends on experiment schedule
    Oct 2003 – Apr 2004
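The throughput figures on this slide can be cross-checked with a short back-of-the-envelope calculation. The sketch below is not part of the original slides: the per-event CPU time, events per job, output size, scp rate and CPU counts are taken from the slide, and everything else is illustrative.

```python
# Back-of-the-envelope check of the MC production figures quoted on the slide.
# Inputs are the slide's numbers; the script itself is only illustrative.

SEC_PER_EVENT = 3.5      # CPU time per event [s]
EVENTS_PER_JOB = 6_000   # events per job
OUTPUT_MB = 200.0        # mDST output size per job [MB]
SCP_RATE_MB_S = 0.7      # observed scp rate to KEK [MB/s]

cpu_hours_per_job = SEC_PER_EVENT * EVENTS_PER_JOB / 3600.0   # ~5.8 h
scp_sec_per_job = OUTPUT_MB / SCP_RATE_MB_S                   # ~285 s

print(f"CPU time per job: {cpu_hours_per_job:.1f} h")
print(f"scp time per job: {scp_sec_per_job:.0f} s")

# Daily throughput for the 30-50 CPUs of the belle_sim queue.
for n_cpu in (30, 50):
    jobs_per_day = n_cpu * 24.0 / cpu_hours_per_job
    events_per_day = jobs_per_day * EVENTS_PER_JOB
    output_gb_per_day = jobs_per_day * OUTPUT_MB / 1000.0
    print(f"{n_cpu} CPUs: {jobs_per_day:.0f} jobs/day, "
          f"{events_per_day / 1e6:.1f}M events/day, "
          f"{output_gb_per_day:.0f} GB/day")
```

With the full 50 CPUs this reproduces the ~200 jobs/day, ~1M events/day and ~40 GB/day quoted above; the scp transfer adds only a few minutes per job on top of the ~5.8 CPU hours.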

Slide 3: Analysis
– Job procedure
    Make analysis (skimmed) histograms at the KEK farm
    Copy the histogram files to CCJ
    Merge files; detailed analysis at CCJ
    Keep output files on the 1 TB HDD
– Schedule
    Constant; basically small jobs
    Produce specific (toy) MC at CCJ
    Copy data from KEK to CCJ HPSS (3.5 TB for data; 10 TB for MC)
    Full data and MC analysis (150 fb⁻¹ → ~30K hours)
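Similarly, the 30K-hour estimate for a full 150 fb⁻¹ data + MC pass can be turned into a rough turnaround time. The sketch below is illustrative only: the 30K CPU hours come from the slide, while the concurrent-job counts are assumptions, not actual CCJ queue limits.

```python
# Rough turnaround estimate for the full 150 fb^-1 data + MC analysis pass.
# The 30K CPU-hour total is the figure quoted on the slide; the numbers of
# concurrent jobs below are illustrative assumptions, not CCJ queue limits.

TOTAL_CPU_HOURS = 30_000   # full data + MC analysis, 150 fb^-1 (from the slide)

for slots in (30, 50, 100):
    wall_hours = TOTAL_CPU_HOURS / slots
    print(f"{slots:3d} concurrent jobs: ~{wall_hours:.0f} h wall clock "
          f"(~{wall_hours / 24:.0f} days)")
```

At the 30–50 concurrent jobs typical of the MC production queue, a full pass would take on the order of a month of wall-clock time.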