Belle computing upgrade
Ichiro Adachi
22 April 2005, Super B Workshop, Hawaii

Slide 1: Belle's computing goal

Data processing
- 3 months to reprocess the entire data set accumulated so far, using all of KEK's computing resources
- efficient use of resources
- flexibility

Successful (I think, at least)
- all data processed and used for analysis at the summer conferences (good or bad?); this also validates software reliability
- Example: D_sJ(2317), from David Brown's CHEP04 talk
  - BaBar discovery paper: Feb 2003
  - Belle: confirm D_sJ(2317): Jun 2003
  - Belle: discover B → D_sJ(2317) D: Oct 2003
  - BaBar: confirm B → D_sJ(2317) D: Aug 2004

"How can we keep the computing power?"

Slide 2: Present Belle computing system

[System diagram] Components shown: Athlon 1.67 GHz, Xeon 2.8 GHz, Xeon 3.2 GHz and Xeon 3.4 GHz compute servers; Pentium III 1.26 GHz, Xeon 0.7 GHz and Sparc 0.5 GHz nodes; 50 TB disk, 50 TB IDE disk, 155 TB disk with a 1.29 PB S-AIT tape library, a 500 TB DTF2 tape library, an HSM with 4 TB disk and a 120 TB DTF2 tape library, and 8 TB disk.

Two major components are under a rental contract started in 2001; the rest is Belle's own system.

Slide 3: Computing resources evolving

- We have purchased what we needed as we accumulated integrated luminosity.
- Rental system contract
  - expires in Jan 2006
  - has to be replaced with a new one

[Charts: CPU (GHz), HSM volume (TB) and disk capacity (TB) versus time]

Processing power in 2005: 7 fb^-1/day, up from 5 fb^-1/day in 2004. A quick cross-check against the reprocessing goal of slide 1 follows below.
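A minimal sketch, in Python, of that cross-check; the 500 fb^-1 dataset size is an illustrative assumption, since the accumulated luminosity is not quoted on these slides.

    # Rough reprocessing-time estimate (sketch; the dataset size is assumed).
    rate_2005 = 7.0            # fb^-1 per day, the 2005 processing power quoted above
    assumed_dataset = 500.0    # fb^-1, illustrative only
    days = assumed_dataset / rate_2005
    print(f"{days:.0f} days, i.e. about {days / 30:.1f} months")   # ~71 days, ~2.4 months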

Slide 4: New rental system

[Chart: expected data growth over the rental period, roughly x6]

Specifications
- based on Oide's luminosity scenario
- 6-year contract, to Jan 2012
- in the middle of the bidding process
- 40,000 SPECint2000_rate of compute servers at 2006
- PB-scale tape (1 PB disk) storage system, with extensions
- network connection fast enough to read/write data at 2-10 GB/s (2 for DST, 10 for physics analysis)
- a user-friendly and efficient batch system that can be used collaboration-wide

In a single 6-year lease contract we hope to double the resources in the middle, assuming Moore's law in the IT commodity market (a growth sketch follows below).
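A minimal sketch of what that doubling assumption implies, with an assumed 18-month price/performance doubling time (the slide itself only invokes Moore's law and gives no doubling time).

    # Commodity price/performance growth under an assumed 18-month doubling time.
    initial_capacity = 40_000        # SPECint2000_rate at the start of the lease, from the slide
    doubling_time_months = 18        # assumed; the slide only says "Moore's law"
    for year in range(7):            # the 6-year lease, to Jan 2012
        factor = 2 ** (year * 12 / doubling_time_months)
        print(f"year {year}: the same budget buys ~{factor:.1f}x the capacity "
              f"(~{initial_capacity * factor:,.0f} SPECint2000_rate)")
    # Doubling the installed resource once, at mid-lease, is modest compared with this
    # curve, which is why it is treated here as a reasonable planning assumption.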

Slide 5: Lessons and remarks

- Data size and access
- Mass storage
  - hardware
  - software
- Compute server

Slide 6: Data size & access

Possible considerations
- Raw data
  - raw data size scales with integrated luminosity: 1 PB for 1 ab^-1 (at least)
  - read once or twice per year
  - keep in an archive (where to go?)
- Compact beam data for analysis ("mini-DST")
  - 60 TB for 1 ab^-1
  - accessed frequently and (almost) randomly
  - easy access preferable (on disk)
- MC
  - 180 TB for 1 ab^-1, i.e. 3x the beam data by "Belle's law"
  - all data files are read by most users (on disk?)

[Chart: Belle raw data per year (TB) versus integrated luminosity per year (fb^-1); detector and accelerator upgrades can change this slope.]

A storage-budget sketch built from these numbers follows below.
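A minimal sketch, in Python, using the per-ab^-1 figures above; the integrated-luminosity values in the loop are illustrative assumptions.

    # Storage budget per ab^-1, from the numbers quoted on this slide.
    raw_tb_per_abinv  = 1000    # raw data: ~1 PB per ab^-1 (at least)
    mdst_tb_per_abinv = 60      # mini-DST per ab^-1
    mc_tb_per_abinv   = 180     # MC, roughly 3x the beam mini-DST
    for lumi in (1, 2, 5):      # ab^-1, illustrative values only
        raw_pb  = raw_tb_per_abinv * lumi / 1000
        on_disk = (mdst_tb_per_abinv + mc_tb_per_abinv) * lumi    # data users want quick access to
        print(f"{lumi} ab^-1: raw ~ {raw_pb:.0f} PB (archive), mini-DST + MC ~ {on_disk} TB (disk)")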

Slide 7: Mass storage: hardware

The central system of the coming computing. Lessons from Belle:
- We have been using SONY DTF drive technology since ...
- SONY DTF2: no roadmap for future development, a dead end. SONY's next technology choice is S-AIT.
- We have been testing an S-AIT tape library from ...
- Data are already recorded on 5000 DTF2 tapes. We have to move...
- Watch the vendor's trend, and the cost and time involved.

The S-AIT test system, with front and back ends connected through a 2 Gbit FC switch (a quick capacity check follows below):
- front-end disks: 18 dual-Xeon PC servers with two SCSI channels, 8(10) connecting one (400) GB IDE-disk RAID system; total capacity 56(96) TB
- back-end S-AIT system: SONY PetaSite tape library in 7 racks of floor space; main system (12 drives) plus 5 cassette consoles, with a total capacity of 1.3 PB (2500 tapes)
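A quick consistency check of those figures; the per-tape and per-server numbers below are derived here, not quoted on the slide.

    # Back-of-the-envelope check of the S-AIT test system figures.
    library_tb = 1300            # 1.3 PB quoted for the PetaSite back end
    n_tapes = 2500
    print(f"~{library_tb * 1000 / n_tapes:.0f} GB per tape")   # ~520 GB, consistent with S-AIT cartridges
    for front_end_tb in (56, 96):                              # before (after) the extension
        print(f"{front_end_tb} TB over 18 servers -> ~{front_end_tb / 18:.1f} TB per server")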

Slide 8: Mass storage: software

Second lesson
- We are moving from direct tape access to a hierarchical storage system (HSM).
- We have learned that automatic file migration is quite convenient.
- But we need a lot of capacity so that we do not need operators to mount tapes.
- Most users go through all of the (MC) data available in the HSM, and each user access is random, not controlled at all.
- Each access requires a tape reload to copy data onto disk, and the number of reloads per tape is hitting its limit!
- In our usage the HSM is not an archive but a big cache.
- We need optimization of both the HSM control and the user I/O; a huge disk may help? (A toy illustration follows below.)
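A toy Python sketch of the "huge disk may help?" question: uniformly random file accesses against an HSM whose disk cache holds only a fraction of the dataset, with LRU eviction. All concrete numbers are made up for illustration; none come from the slide.

    import random
    from collections import OrderedDict

    def tape_recalls(n_files, cache_files, n_accesses, seed=1):
        """Count cache misses (tape recalls) for random accesses with an LRU disk cache."""
        random.seed(seed)
        cache = OrderedDict()          # files currently staged on disk, in LRU order
        recalls = 0
        for _ in range(n_accesses):
            f = random.randrange(n_files)
            if f in cache:
                cache.move_to_end(f)               # hit: refresh its LRU position
            else:
                recalls += 1                       # miss: stage the file in from tape
                if len(cache) >= cache_files:
                    cache.popitem(last=False)      # evict the least recently used file
                cache[f] = True
        return recalls

    n_files, n_accesses = 10_000, 50_000           # illustrative dataset and workload
    for frac in (0.05, 0.25, 0.75):
        misses = tape_recalls(n_files, int(frac * n_files), n_accesses)
        print(f"disk cache holding {frac:.0%} of the files -> {misses} tape recalls")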

Slide 9: Compute server

- 40,000 SPECint2000_rate at 2006, assuming Moore's law is still valid for the coming years.
- A bunch of PCs is difficult for us to manage
  - limited human resources at Belle
  - Belle software distribution
- "Space" problem
  - one floor of the Tsukuba experimental hall, B3 (~10 m x 20 m)
  - 2002: cleared and floored
  - 2005: full! No more space!
  - an air-conditioning system must be installed
- "Electricity" problem: ~500 W for a dual 3.5 GHz CPU server (a rough power estimate follows below).
- Moore's law alone is not enough to solve this problem.
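A minimal power-budget sketch based on the ~500 W figure above; the node count is an illustrative assumption, since the slide does not say how many servers make up the 40,000 SPECint2000_rate.

    # Rough power estimate for a PC farm (the node count is assumed, not from the slide).
    watts_per_node = 500        # ~500 W for a dual 3.5 GHz CPU server, from the slide
    assumed_nodes  = 400        # illustrative only
    it_load_kw = watts_per_node * assumed_nodes / 1000
    print(f"{assumed_nodes} nodes -> ~{it_load_kw:.0f} kW of IT load alone")
    # Cooling typically adds a comparable load on top, so floor space, air conditioning
    # and electricity in hall B3 constrain the farm at least as much as CPU cost does.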

Slide 10: Software

Simulation & reconstruction
- A Geant4 framework for the Super Belle detector is underway.
- Simulation with beam background is being done.
- For reconstruction, robustness against background can be a key.

Slide 11: Grid

Distributed computing at Belle
- MC production carried out at 20 sites outside KEK
- ~45% of MC events produced at remote institutes since 2004
- Infrastructure
  - Super-SINET, 1 Gbps to major universities inside Japan
  - improvements needed for other sites

Grid
- should help us
- effort with the KEK Computing Research Center
  - SRB (Storage Resource Broker)
  - Gfarm, with the Grid Technology Research Center, National Institute of Advanced Industrial Science and Technology (AIST)

A transfer-time sketch for such links follows below.
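A minimal transfer-time sketch for a 1 Gbps Super-SINET link; the sample size and the effective link utilisation are assumptions, not numbers from the slide.

    # Transfer-time sketch over a 1 Gbps link (all concrete values are assumptions).
    link_gbps  = 1.0
    efficiency = 0.5             # assumed effective utilisation of the shared link
    sample_tb  = 10              # assumed size of an MC sample to ship to or from a remote site
    seconds = sample_tb * 8e12 / (link_gbps * 1e9 * efficiency)
    print(f"{sample_tb} TB at {efficiency:.0%} of {link_gbps:.0f} Gbps: "
          f"~{seconds / 3600:.0f} h (~{seconds / 86400:.1f} days)")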

Slide 12: Summary

- Computing for physics output: try to keep the present goal.
- Rental system: renewal from Jan 2006.
- Mass storage at the PB scale: not only the size but also the type of access matters; technology choice and the vendor's roadmap are key.
- CPU: Moore's law alone does not solve the "space" problem.
- Software: Geant4 simulation underway.
- Grid: infrastructure getting better in Japan (Super-SINET).