PHENIX CC-J Updates in Nov. 99 - New Hardware -
N. Hayashi / RIKEN
November 10, 1999, PHENIX Computing Meeting

Slide 2: CC-J Development & Construction in Wako

Hardware & system software:
- T. Ichihara - manager, network
- Y. Watanabe - HPSS, data duplication facility
- N. Hayashi - network, Linux, job control
- S. Sawada (KEK) - PBS
- S. Yokkaichi (Kyoto) - monitor
- H. Hamagaki (CNS, Tokyo) - AFS

Simulation & offline software (at BNL):
- H.I., K. Shigaki (KEK), S. Nishimura (CNS, Tokyo), A. Kiyomichi (Tsukuba)
- spin: A. Taketani, Y. Mao, YG, H. Sato (Kyoto)

Slide 3: New Hardware Arrived

3rd & 4th Alta clusters:
- RH 5.2 preinstalled (not 6.0)
- 3rd and 4th: dual Pentium III 600 MHz x 16 nodes
- 1st and 2nd: dual Pentium II 450 MHz x 16 nodes
- Total: 1360 SPECint95 (see the cross-check sketch below)
- The UPS has enough capacity to serve two more boxes

1.6 terabytes of RAID disk

2nd Sun E450 as a new NFS server (4-CPU model)

Located on the 2nd floor of the RIKEN main building
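For orientation, here is a rough cross-check of the quoted total. This is a minimal sketch assuming "x 16 nodes" counts 16 dual-CPU nodes per speed grade (the reading that makes the quoted total come out) and using typical published SPECint95 ratings for these CPUs; none of the per-CPU figures appear on the slide.

```python
# Rough cross-check of the ~1360 SPECint95 total quoted on the slide.
# Assumptions (not from the slide): 16 dual-CPU nodes per speed grade,
# and per-CPU ratings typical of published SPECint95 values for these CPUs.
GRADES = [
    # (label, nodes, cpus_per_node, assumed SPECint95 per CPU)
    ("1st+2nd: Pentium II 450 MHz", 16, 2, 17.2),
    ("3rd+4th: Pentium III 600 MHz", 16, 2, 24.0),
]

total = 0.0
for label, nodes, cpus, rating in GRADES:
    cap = nodes * cpus * rating
    total += cap
    print(f"{label}: {cap:.0f} SPECint95")

print(f"total: {total:.0f} SPECint95")  # ~1320, close to the quoted 1360
```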

Slide 4: CCJSUN and CCJNFS0

Slide 5: Sun and Disk
[Photo with labels: 1.6 TB RAID, 288 GB RAID, ccjsun (gateway), ccjnfs0 (NFS server), UPS for Sun and disk]

Slide 6: Alta Clusters
[Photo with labels: 1st, 2nd, 4th, 3rd clusters; UPS for Alta cluster]

Slide 7: Alta Cluster Controller & Gbit
[Photo with labels: AltaCluster controller, Gbit switch, Gbit hub]

Slide 8: 1.6 TB Disk
- 1.6 TB RAID, split as 800 GB x 2 partitions (see the arithmetic below)
  - Solaris 2.6 supports filesystems only up to 1 TB
  - Solaris 7 (the 64-bit OS) is too fragile
- VxFS (Veritas File System)
  - fsck-free!
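The two-partition layout follows from the Solaris 2.6 filesystem limit; a minimal sketch of the arithmetic, using only the sizes stated on this slide:

```python
import math

# Partitioning arithmetic behind the slide: a 1.6 TB array must be split
# because Solaris 2.6 supports filesystems only up to 1 TB each.
TOTAL_TB = 1.6        # RAID array size from the slide
FS_LIMIT_TB = 1.0     # Solaris 2.6 per-filesystem limit

partitions = math.ceil(TOTAL_TB / FS_LIMIT_TB)
size_each = TOTAL_TB / partitions
print(f"{partitions} partitions of {size_each:.1f} TB each")  # 2 x 0.8 TB
```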

Slide 10: Schedule
- Installation (disk reconfiguration etc.) is almost done
- Needs a stress test: a new MDC?
- HPSS stops for certain on Dec. 10, 1999
  - due to the move of the HPSS equipment to a new building
  - the Alta clusters can operate without HPSS
- HPSS restarts on Jan. 5, 2000
  - it could be restarted on Dec. 27, but the week-long New Year holiday follows
- The HPSS final readiness review begins in Feb. 2000
- Another move (Linux PCs) around April?

Slide 11: Schedule (cont'd)
- We are still in the "pre-test-operation" period
- Test operation starts in February?
- Official opening in the spring, April?

Slide 12: New Data Challenge at CC-J? MDC3? VRDC?
- Our stress test can be combined with part of the PRDF production for the VRDC
- Need PWG inputs
  - How many events? What kinds? Required CPU & disk?
- Share the tasks between RCF (rcas & rcrs) and CC-J?
- Set up the environment
  - PISA99 and its PRDF-writing routine
  - automated scripts, monitoring tools, ..., DB? (see the hypothetical sketch below)
- Data transfer of PRDF to RCF? Mostly by tape?
- Required manpower?
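The automation itself is left as an open question on this slide. Purely as a hypothetical illustration (the queue name, paths, and the pisa99 invocation are all invented for the example, and the talk does not specify any of them), a driver along these lines could generate and submit PBS job scripts for PRDF production:

```python
#!/usr/bin/env python3
"""Hypothetical sketch of an automated PRDF-production driver for PBS.

Everything here (paths, queue name, script contents) is illustrative only;
the talk raises the need for such scripts but does not describe them.
"""
import subprocess
from pathlib import Path

QUEUE = "ccj_batch"               # assumed queue name
WORKDIR = Path("/ccj/prdf_jobs")  # assumed scratch area

def make_job_script(job_id: int, n_events: int) -> Path:
    """Write a PBS job script that runs one chunk of event generation."""
    script = WORKDIR / f"prdf_{job_id:04d}.pbs"
    script.write_text(f"""#!/bin/sh
#PBS -q {QUEUE}
#PBS -N prdf_{job_id:04d}
# Hypothetical invocation of PISA99 with its PRDF-writing routine.
pisa99 --events {n_events} --output prdf_{job_id:04d}.dat
""")
    return script

def submit_all(n_jobs: int, events_per_job: int) -> None:
    WORKDIR.mkdir(parents=True, exist_ok=True)
    for i in range(n_jobs):
        script = make_job_script(i, events_per_job)
        subprocess.run(["qsub", str(script)], check=True)  # PBS submission

if __name__ == "__main__":
    submit_all(n_jobs=60, events_per_job=1000)
```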

Slide 13: Memo from Yuji & Shin'ya
- PISA99 preparation
  - should be available by the end of this week
  - will be installed from the beginning of next week
  - should be installed by the middle of next week
- Input data/files
  - should be available by the middle of next week
  - should be prepared by Charlie and each PWG
  - note: requests from the PWGs should be communicated to CC-J by the middle of next week
- Job scripts
  - are modified from those used for MDC2 by the CC-J WG
- Events
  - will be generated during Nov. 22 - Dec. 5 (two weeks)
  - assumptions and rough estimates (worked through in the sketch below):
    - available number of CPUs = 60
    - Au-Au: CPU time 1 ksec/event, data size 10 MB/event; 50k events => 10 days, 500 GB
    - p-p: CPU time 10 sec/event, data size 50 kB/event (after compression); 5M events => 10 days, 250 GB
- Data transfer
  - will be by magnetic tape (1 TB = 50 GB/tape x 20 tapes)
  - note: temporary disk storage at RCF is needed
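The memo's round numbers follow directly from its per-event assumptions; a minimal sketch reproducing them, using only the values stated above (plus 86400 seconds per day):

```python
# Reproduce the memo's estimates: wall time on 60 CPUs and output volume.
CPUS = 60
SECONDS_PER_DAY = 86400

def estimate(events, cpu_sec_per_event, size_per_event_gb):
    days = events * cpu_sec_per_event / CPUS / SECONDS_PER_DAY
    volume_gb = events * size_per_event_gb
    return days, volume_gb

# Au-Au: 1 ksec/event, 10 MB/event, 50k events
print("Au-Au: %.0f days, %.0f GB" % estimate(50_000, 1000, 0.01))
# p-p: 10 sec/event, 50 kB/event (after compression), 5M events
print("p-p:   %.0f days, %.0f GB" % estimate(5_000_000, 10, 0.00005))

# Tape count for transferring ~1 TB at 50 GB per tape
print("tapes: %d" % (1000 // 50))  # 20 tapes
```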

Slide 14: Summary
- New hardware (Linux PCs, Sun, disk) delivered and installed
- It is awaiting a stress test
- Possibility of combining this stress test with the VRDC (MDC3?)
- HPSS unavailable Dec. 11 to Jan. 5