
An Overview of PHENIX Computing
Ju Hwan Kang (Yonsei Univ.) and Jysoo Lee (KISTI)
International HEP DataGrid Workshop, November 8 ~ 9, 2002, Kyungpook National University, Daegu, Korea

RHIC
- Configuration: two concentric superconducting magnet rings (3.8 km circumference) with 6 interaction regions
- Ion beams: Au + Au (or p + A), √s = 200 GeV/nucleon, luminosity = 2 × 10^26 cm^-2 s^-1
- Polarized protons: p + p, √s = 500 GeV, luminosity = 1.4 × 10^31 cm^-2 s^-1
- Experiments: PHENIX, STAR, PHOBOS, BRAHMS

PHENIX Experiment
- Physics goals: search for the Quark-Gluon Plasma, hard scattering processes, spin physics
- Experimental apparatus: PHENIX Central Arms (e±, γ, hadrons), PHENIX Muon Arms (μ±)

PHENIX Data Size
Peak DAQ bandwidth in PHENIX is 20 MB/sec.
- Ion beams (Au + Au):
  1. Minimum bias events (0.16 MB/event): raw event rate = 1400 Hz (224 MB/sec), trigger rate = 124 Hz (20 MB/sec)
  2. Central events (0.4 MB/event): trigger rate = 50 Hz (20 MB/sec)
- Polarized protons (p + p): all events (25 kB/event): raw event rate = 250 kHz (6250 MB/sec), trigger rate = 800 Hz (20 MB/sec)
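Each bandwidth above is simply event size times event rate; a small standalone cross-check of the quoted numbers (an illustrative sketch, not PHENIX code):

```cpp
// Cross-check of the bandwidth figures quoted above:
// bandwidth (MB/s) = event size (MB) x event rate (Hz).
#include <cstdio>

int main() {
    struct Stream { const char* name; double sizeMB; double rateHz; };
    const Stream streams[] = {
        {"Au+Au min. bias, raw",     0.16,    1400.0},  // -> 224 MB/s
        {"Au+Au min. bias, trigger", 0.16,     124.0},  // -> ~20 MB/s
        {"Au+Au central, trigger",   0.40,      50.0},  // -> 20 MB/s
        {"p+p raw (25 kB/event)",    0.025, 250000.0},  // -> 6250 MB/s
        {"p+p trigger",              0.025,    800.0},  // -> 20 MB/s
    };
    for (const Stream& s : streams)
        std::printf("%-26s %8.1f MB/s\n", s.name, s.sizeMB * s.rateHz);
    return 0;
}
```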

RCF
The RHIC Computing Facility (RCF) provides computing for the four RHIC experiments (PHENIX, STAR, PHOBOS, BRAHMS). Typically the RCF receives ~30 MB/sec (a few TB/day) from the PHENIX counting house alone, over a Gigabit network, so it needs substantial data storage and data handling systems. The RCF has established an AFS cell for sharing files with remote institutions, and NFS is the primary means through which data is made available to users at the RCF. A similar facility has been established at RIKEN (CC-J) as a regional computing center for PHENIX, and a compact but effective system is also installed at Yonsei.

PHENIX Computing Environment
- Linux OS with ROOT
- PHOOL (PHenix Object Oriented Library): C++ class library built on top of ROOT
- GNU build system
[Diagram: raw data flows from the Counting House into HPSS and a large disk pool; mining & staging feed the reconstruction farm and analysis jobs (with local disks and a second large disk); a database provides calibrations & run info and the tag DB.]
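The PHOOL pattern (a base class for analysis modules, data organized in a tree, ROOT I/O) is spelled out on the offline-software slide below. Purely as an illustration of that pattern, here is a sketch using plain ROOT; the class and method names (AnalysisModule, process_event, TrackCounter) are hypothetical and are not the actual PHOOL API:

```cpp
// Illustrative sketch only: a module base class plus a ROOT tree for output,
// mimicking the PHOOL pattern described in the text. The names AnalysisModule,
// process_event, and TrackCounter are hypothetical, not real PHOOL classes.
#include <TFile.h>
#include <TTree.h>

class AnalysisModule {               // base class for analysis modules
public:
    virtual ~AnalysisModule() {}
    virtual int process_event() = 0; // called once per event
};

class TrackCounter : public AnalysisModule {
public:
    explicit TrackCounter(const char* fname)
        : fFile(TFile::Open(fname, "RECREATE")),
          fTree(new TTree("tracks", "per-event track count")) {
        fTree->Branch("ntrack", &fNtrack, "ntrack/I");
    }
    int process_event() override {
        fNtrack = 0;                 // a real module would read the event data here
        fTree->Fill();
        return 0;
    }
    void end() { fFile->Write(); fFile->Close(); }
private:
    TFile* fFile;
    TTree* fTree;
    Int_t  fNtrack;
};
```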

Data Carousel using HPSS
To handle an annual data volume of 500 TB from PHENIX alone, the High Performance Storage System (HPSS) is used as a hierarchical storage system combining tape robotics and a disk cache. An IBM server (AIX 4.2) organizes users' retrieval requests so that tape access stays orderly. PHENIX uses ten 9840 and eight 9940 drives from StorageTek; the tape media cost is about $1/GB.

Data carousel architecture [diagram]: the carousel server ("ORNL" software) keeps its file list in a MySQL database; requested files are staged from HPSS tape to the HPSS disk cache and transferred by pftp to the rmine0x client and to the CAS data movers, which write to CAS local disk and NFS disk.
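The point of the carousel is to batch outstanding file requests by tape, so each cartridge is mounted once and read sequentially rather than re-mounted for every request. A minimal sketch of that grouping idea (the FileRequest fields and tape IDs are made up; this is not the actual "ORNL" carousel software):

```cpp
// Minimal sketch of the data-carousel idea: collect outstanding file requests,
// group them by the tape that holds each file, and serve one tape at a time.
// The FileRequest struct and the tape IDs are illustrative only.
#include <cstdio>
#include <map>
#include <string>
#include <vector>

struct FileRequest {
    std::string file;   // file to stage out of HPSS
    std::string tape;   // tape cartridge holding the file (from the file-list DB)
};

int main() {
    std::vector<FileRequest> requests = {
        {"run123/raw.prdf", "T001"}, {"run125/raw.prdf", "T002"},
        {"run124/raw.prdf", "T001"}, {"run126/raw.prdf", "T002"},
    };

    // Group requests by tape so each tape is processed in a single pass.
    std::map<std::string, std::vector<std::string>> byTape;
    for (const FileRequest& r : requests)
        byTape[r.tape].push_back(r.file);

    for (const auto& entry : byTape) {
        std::printf("mount %s\n", entry.first.c_str());
        for (const std::string& f : entry.second)
            std::printf("  stage %s\n", f.c_str());
    }
    return 0;
}
```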

Disk Storage at RCF
The storage resources are provided by a group of Sun NFS servers with 60 TB of SAN-based RAID arrays, backed by a series of StorageTek tape libraries managed by HPSS. The storage disk vendors are Data Direct, MTI, ZZYZX, and LSI.

Linux Farms at RCF
CRS (Central Reconstruction Server) farms are dedicated to processing raw event data into reconstructed events (strictly batch systems, not available to general users). CAS (Central Analysis Server) farms are dedicated to analysis of the reconstructed events (a mix of interactive and batch systems). Batch jobs are managed by LSF (Load Sharing Facility). There are about 600 machines (dual CPU, 1 GB memory, 30 GB local disks) at the RCF, of which about 200 are allocated to PHENIX.

Offline Software Technology
- Analysis framework: C++ class library (PHOOL) based on top of ROOT; a base class for analysis modules; a "tree" structure for holding and organizing data that can contain raw data, DSTs, and transient results; uses ROOT I/O
- Database: Objectivity OODB for calibration data, file catalog, run info, etc.; expecting ~100 GB/year of calibration data
- Code development environment: based heavily on GNU tools (autoconf, automake, libtool)

PHENIX CC-J
The PHENIX CC-J at RIKEN is intended to serve as the main site of computing for PHENIX simulations, a regional Asia computing center for PHENIX, and a center for spin physics analysis. Network switches are required to connect the HPSS servers, the SMP data servers, and the CPU farms, as at RCF. To exchange data between RCF and CC-J, adequate WAN bandwidth between the two sites is required. CC-J has CPU farms totaling 10K SPECint95, 100 TB of tape storage, 15 TB of disk storage, tape I/O of 100 MB/sec, disk I/O of 600 MB/sec, and 6 Sun SMP data server units.

Situation at Yonsei
- A comparable mirror image at Yonsei, built by an "explicit" copy of the remote system: the local cluster machines run in a similar operating environment (same OS, similar hardware specs)
  1. Disk sharing through NFS: one installation of the analysis library, shared by the other machines
  2. Easy upgrade and management
- Local clustering: ample network bandwidth between the cluster machines over the 100 Mbps LAN (no NFS lag and instant X display, for example); current number of cluster machines = 2 (2 CPUs) + 2 (as RAID)
- File transfers from RCF: software updates by copying the shared libraries (once per week, taking less than about an hour); raw data copied using scp or BBFTP (~1 GB/day)

Yonsei Computing Resources
- Yonsei Linux boxes for PHENIX analysis: 4 desktop boxes behind a firewall (Pentium III/IV)
- Linux (RedHat 7.3), ROOT 3.01/05
- One machine carries all software required for PHENIX analysis: event generation, reconstruction, and analysis
- The remaining desktops share that one library directory (same kernel, compiler, etc.) via NFS
- 2 large RAID disk boxes with several IDE HDDs (~500 GB × 2) plus several small disks (total ~500 GB) in 2 desktops
- A compact but effective system for a small user group

Yonsei Computing Resources
Linux (RedHat 7.3), ROOT 3.01/05.
[Diagram of the Yonsei cluster: a gateway behind the firewall on a 100 Mbps link; desktop nodes (P4 2 GHz, P4 1.3 GHz, P3 1 GHz); two 480 GB disks (Big Disk, 480 GB × 2) managed with RAID tools for Linux holding raw data & DST; the PHENIX library shared via NFS, with Objectivity (OBJY); a database for calibrations & run info and the tag DB; reconstruction and analysis jobs run on the cluster.]