

1 CCJ: Computing Center in Japan for spin physics at RHIC
T. Ichihara, Y. Watanabe, S. Yokkaichi, O. Jinnouchi, N. Saito, H. En’yo, M. Ishihara, Y. Goto (1), S. Sawada (2), H. Hamagaki (3)
RIKEN, RBRC (1), KEK (2), CNS (3)
Presented on 17 October 2001 at the First Joint Meeting of the Nuclear Physics Divisions of the APS and JPS (Hawaii 2001)

2 RIKEN CCJ: Overview
• Scope
- Center for the analysis of RHIC spin physics
- Principal site of computing for PHENIX simulation; PHENIX CC-J aims to cover most of the simulation tasks of the whole PHENIX experiment
- Regional computing center in Japan
• Size
- Data amount: handling 225 TB/year
- Disk storage: ~20 TB; tape library: ~500 TB
- CPU performance: 13k SPECint95 (~300 Pentium 3/4 CPUs)
• Schedule
- R&D for the CC-J started in April 1998 at RBRC at BNL
- Construction began in April 1999 over a three-year period
- CCJ started operation in June 2000
- CCJ will reach full scale in 2002
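As a sanity check on the scale quoted above, 225 TB handled per year corresponds to a sustained average of roughly 7 MB/s. A minimal sketch (plain Python; decimal units, 1 TB = 10^12 bytes, and a 365-day year are assumed):

```python
# Average sustained data rate implied by the CCJ figure of 225 TB/year.
# Assumes decimal units (1 TB = 10**12 bytes) and a 365-day year.
def avg_rate_mb_per_s(tb_per_year: float) -> float:
    bytes_per_year = tb_per_year * 10**12
    seconds_per_year = 365 * 24 * 3600
    return bytes_per_year / seconds_per_year / 10**6  # MB/s

print(round(avg_rate_mb_per_s(225), 1))  # ~7.1 MB/s sustained average
```

This is only the average; peak I/O demands are far higher, which is why the requirements slide below specifies 112 MB/s for the tape system and 520 MB/s for disk.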

3 Concept of RIKEN CCJ System

4 System Requirements for CCJ
• Annual data amount
- DST: 150 TB
- micro-DST: 45 TB
- Simulated data: 30 TB
- Total: 225 TB
• Hierarchical storage system
- Handles a data amount of 225 TB/year
- Total I/O bandwidth: 112 MB/s
- HPSS system
• Disk storage system
- 15 TB capacity, all RAID
- I/O bandwidth: 520 MB/s
• CPU (SPECint95)
- Simulation: 8200
- Sim. reconstruction: 1300
- Sim. analysis: 170
- Theor. model: 800
- Data analysis: 2000
- Total: 12470 SPECint95 (= 120k SPECint2000)
• Data accessibility (exchange of 225 TB/year)
- Data duplication facility (tape cartridges)
- Wide area network (IMnet/APAN/ESnet)
• Software environment
- AFS, Objectivity/DB, batch queuing system
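The CPU budget can be cross-checked by summing the per-task figures; the slide's total of 12470 SPECint95 and its quoted equivalence to ~120k SPECint2000 imply a conversion factor near 9.6 (a factor inferred from the two numbers on the slide, not an official SPEC benchmark ratio):

```python
# Cross-check of the CCJ CPU requirement table (values taken from the slide).
tasks = {
    "Simulation": 8200,
    "Sim. reconstruction": 1300,
    "Sim. analysis": 170,
    "Theor. model": 800,
    "Data analysis": 2000,
}
total_specint95 = sum(tasks.values())
print(total_specint95)                      # 12470, matching the slide's total
# The slide equates this to ~120k SPECint2000:
print(round(120_000 / total_specint95, 1))  # implied conversion factor ~9.6
```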

5 Requirements as a Regional Computing Center
• Software environment
- The CCJ software environment needs to be compatible with the PHENIX software environment at the RHIC Computing Facility (RCF) at BNL
- AFS (/afs/rhic): remote access over the network is very slow and unstable (AFS server at BNL), so the tree is mirrored daily at CCJ and accessed via NFS from the Linux farms
- Objectivity/DB: remote access over the network is very slow and unstable (AMS server at BNL), so a local AMS server runs at CCJ (DB regularly updated)
- Batch queuing system: Load Sharing Facility (LSF 4.1)
- RedHat Linux 6.1 (kernel 2.2.19 with NFSv3 enabled), gcc 2.95.3, etc.
- VERITAS File System (journaling file system on Solaris): free from fsck for ~TB disks
• Data accessibility
- Need to exchange 225 TB/year of data with the BNL RCF
- Most of the data exchange is carried out with SD3 tape cartridges, using data duplicating facilities at both sites (~10 MB/s effective rate with air shipment)
- Some of the data exchange is carried out over the wide area network (WAN)
- CC-J will use the Asia-Pacific Advanced Network (APAN) for the US-Japan connection: http://www.apan.net/
- APAN currently has ~100 Mbps bandwidth for the Japan-US (STAR TAP) connection
- A 3 MB/s transfer rate was obtained using the bbftp protocol
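The two transfer paths quoted above can be compared directly: at the stated effective rates, moving a full year's 225 TB would itself take most of a year even by tape, and far longer over the network, which explains why tape shipment carries most of the traffic. A rough sketch (rates taken from the slide; 1 MB = 10^6 bytes assumed):

```python
# Days needed to move 225 TB at the two effective rates quoted on the slide:
# ~10 MB/s for duplicated SD3 cartridges shipped by air, ~3 MB/s via bbftp over APAN.
def days_to_transfer(tb: float, mb_per_s: float) -> float:
    return tb * 10**12 / (mb_per_s * 10**6) / 86400

print(round(days_to_transfer(225, 10)))  # ~260 days by tape shipment
print(round(days_to_transfer(225, 3)))   # ~868 days over the WAN
```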

6 Plan and current status of RIKEN CCJ

7 Current configuration of CCJ

8 Components of RIKEN CCJ

9 File Transfer over WAN
• Route: RIKEN - (IMnet) - APAN (~100 Mbps) - STAR TAP - ESnet - BNL
• Round-trip time (RTT) for RIKEN-BNL: 170 ms
- RFC 1323 (TCP Extensions for High Performance, May 1992) describes the use of large TCP window sizes (> 64 KB)
- FTP performance: 290 KB/s (64 KB window size), 640 KB/s (512 KB window)
- A large TCP window size is necessary to obtain a high transfer rate
• bbftp (http://doc.in2p3.fr/bbftp/)
- Software designed to transfer files quickly across a wide area network; written for the BaBar experiment to transfer big files
- Transfers data through several parallel TCP streams
- RIKEN-BNL bbftp performance (10 TCP streams per session): 1.5 MB/s (1 session), 3 MB/s (3 sessions)
• MRTG graph during bbftp transfer between RIKEN and BNL
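The window-size figures above follow from the standard single-stream TCP bound, throughput ≤ window / RTT. A quick check against the quoted 170 ms RTT (a sketch only; the measured rates fall below these ceilings because real throughput also depends on loss and buffering, which is what bbftp's parallel streams mitigate):

```python
# Single-stream TCP throughput ceiling: window_size / round_trip_time.
RTT = 0.170  # seconds, RIKEN-BNL round-trip time as quoted on the slide

def max_throughput_kb_s(window_bytes: int) -> float:
    return window_bytes / RTT / 1000  # KB/s

print(round(max_throughput_kb_s(64 * 1024)))   # ~386 KB/s ceiling (290 KB/s measured)
print(round(max_throughput_kb_s(512 * 1024)))  # ~3084 KB/s ceiling (640 KB/s measured)
# bbftp sidesteps the per-stream limit with parallel TCP streams:
# 10 streams/session gave 1.5 MB/s (1 session) and 3 MB/s (3 sessions).
```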

10 Analysis for the first PHENIX experiment with CCJ
• Quark Matter 2001 conference (Jan 2001): 21 papers using CCJ out of 33 PHENIX papers in total
• Simulation
- 100k-event simulation/reconstruction (Hayashi)
- DC simulation (Jane)
- Muon Mock Data Challenge (MDC) (Satohiro); 100k Au+Au (June 2001)
• DST production
- 1 TB of raw data (Year-1) was transferred via WAN from BNL to RIKEN
- PHENIX official DST (v03) production (Sawada) (Dec. 2000): about 40% of DSTs were produced at CCJ and transferred to the BNL RCF
• Detector calibration/simulation
- TOF (Kiyomochi, Chujo, Suzuki)
- EMCal (Torii, Oyama, Matsumoto)
- RICH (Akiba, Shigaki, Sakaguchi)
• Physics/simulation
- Single-electron spectrum (Akiba, Hachiya)
- Hadron particle ratio/spectrum (Ohnishi, Chujo)
- Photon [π0 spectrum] (Oyama, Sakaguchi)

11 CCJ Operation
• Operation, maintenance and development of the CC-J are carried out under the charge of the CCJ Planning and Coordination Office (PCO).

12 Summary
• Construction of the CCJ started in 1999
• CCJ operation started in June 2000 at 1/3 scale; 43 user accounts have been created
• Hardware upgrades and software improvements: 224 Linux CPUs (90k SPECint2000), 21 TB disk, 100 TB HPSS tape library; the data duplicating facility at the BNL RCF started operation in Dec. 2000
• PHENIX Year-1 experiment: summer 2000
- Analysis of the first PHENIX experiment with CCJ: official DST (v03) production, simulations, detector calibration/simulation, physics analysis/simulation
- Quark Matter 2001 conference (Jan 2001): 21 papers using CCJ out of 33 PHENIX papers in total
• The PHENIX Year-2 (2001) run started in August 2001
• Spin experiments at RHIC will start in December 2001!

