
STAR Off-line Computing Capabilities at LBNL/NERSC
Doug Olson, LBNL
STAR Collaboration Meeting, 2 August 1999, BNL

Outline
Background of PDSF for STAR
NERSC PDSF history
Present configuration of STAR cluster
Primary activities at PDSF/NERSC

RHIC Computing Model
Offline Computing at RHIC, Kumar et al. (ROCC), Feb.
Conceived late 1995
BNL onsite - main data center - 50% CPU
Remote - modest data - 50% CPU - leverage additional facilities

Implementation of RHIC Computing Model
Incorporation of Offsite Facilities
[Diagram: offsite facilities linked to BNL, including the T3E, SP2, and HPSS tape store at Berkeley, plus Japan, MIT, and many universities]

NERSC
One of the nation's most powerful unclassified computing resources.
–NERSC is funded by the Department of Energy, Office of Science, and is part of Computing Sciences at Lawrence Berkeley National Laboratory.
Production facilities
–1 Teraflop (late 1999), 3 Teraflops (late 2000) (Cray & IBM)
–600 Terabytes robotic tape storage (HPSS)
–Parallel Distributed Systems Facility (PDSF) Linux farm (more later…)
Large R&D program
Adjacent to ESnet headquarters

ESnet Backbone
LBNL - ESnet: 612 Mbps
BNL - ESnet: 45 Mbps (current; achieves ~0.6 MB/s between LBL/NERSC and RCF)
BNL - ESnet: 145 Mbps (late 1999, sufficient for planned activities)
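For scale, a quick sketch of what these rates mean for bulk data movement; the 100 GB data-set size is an illustrative assumption, not a figure from the talk:

    # Rough transfer-time estimate for the WAN links quoted above.
    # The 100 GB data-set size is an assumed example, not from the talk.

    def transfer_days(dataset_gb, rate_mb_per_s):
        """Days needed to move dataset_gb gigabytes at rate_mb_per_s (MB/s)."""
        return dataset_gb * 1024 / rate_mb_per_s / 86400

    observed = 0.6        # MB/s actually achieved, LBL/NERSC <-> RCF
    planned = 145 / 8.0   # 145 Mbps link, ~18 MB/s theoretical peak

    print(f"100 GB at 0.6 MB/s observed: {transfer_days(100, observed):.1f} days")
    print(f"100 GB at 145 Mbps peak:     {transfer_days(100, planned):.2f} days")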

NERSC PDSF - Mar.
[Photo/diagram of the PDSF cluster: Sun Ultra server and disk; capacity figures lost in extraction]

Overview of PDSF for STAR
Developed in response to RHIC computing plan
Simulations and analysis capability for STAR
–at LBNL under control of STAR collaboration
–processor farm and disk space coupled to NERSC mass storage
–ESnet interface to RCF
Goal of PDSF for STAR
–Provide a scalable system for simulations and downstream analysis for the STAR collaboration
–Long term: ~= STAR share of RCF CPU: ~6K SPECint95, 40 TB/year robotic tape, 15 TB disk
–Buildup now funded by LBNL Nuclear Science Division and NERSC
–Expect DOE funding beginning in FY00 via NERSC allocation

History of PDSF hardware
Arrived from SSC (RIP), May 1997
October 1997
–32 SUN Sparc 10, 32 HP 735
–2 SGI data vaults
January 1998
–added 12 single-CPU Intel (LBNL/E895)
June 1998
–added SUN E450, 8 dual-CPU Intel (LBNL/STAR)
October 1998
–added 16 single-CPU Intel (NERSC)
–added 500 GB network disk (NERSC)
–subtracted SUN, HP
–subtracted 160 GB SGI data vaults
January 1999
–added 12 single-CPU Intel (NERSC)
–added 200 GB disk (LBNL/STAR)
July 1999
–added 900 GB disk (LBNL/STAR)
September 1999
–adding 18 dual-CPU Intel (LBNL/STAR)
–adding 1 TB network RAID disk (LBNL/STAR)
Totals (Sept. 1999)
–1.2 TB disk on SUN E450
–1.5 TB network disk
–1500 SPECint95
–archival storage (via NERSC allocation of 600 TB system)

Comparison of RCF to PDSF (Oct. 1999)
[Comparison table lost in extraction; per the Summary, PDSF reaches 49% of RCF CPU and 85% of RCF disk in Oct. 1999]

PDSF Software
STAR off-line environment
–Linux, Solaris/Sparc
Packages
–AFS, CERNlib, DPSS, EGCS, Framereader, GNU, HPSS, itcl, Java, KAI, LSF, Objectivity, Orbacus, Orbix, PDSF Login, Perl, PGI, PVM, ROOT, SSH, SUN Workshop, Tcl, Tk
–>250 software modules for Solaris/Sparc
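Batch production on the farm runs through LSF (listed above). A minimal sketch of scripting job submission; the queue name "star" and the gstar_run.sh script are hypothetical illustrations, not actual PDSF configuration:

    # Minimal sketch of driving LSF job submission from Python.
    # Queue name and job script below are illustrative assumptions.
    import subprocess

    def submit(command, queue="star", logfile="job.%J.out"):
        """Submit one command via LSF's bsub; return the job-ID banner."""
        result = subprocess.run(
            ["bsub", "-q", queue, "-o", logfile, command],
            capture_output=True, text=True, check=True)
        return result.stdout.strip()

    # Example: queue ten simulation jobs, one per random seed.
    for seed in range(10):
        print(submit(f"gstar_run.sh {seed}"))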

Current PDSF staff
2.5 FTE NERSC + 1/2 FTE NSD
–Craig Tull - group leader
–Thomas Davis (1) - PDSF project lead
–Cary Whitney (1), John Milford (1/2)
–Dieter Best (1/4 NSD), Qun Li (1/4 NSD)

Supercomputer time for Gstar production
FY99
–Used 250K hours of Cray T3E at PSC + NERSC
–Amortized over 4 months: ~1000 SPECint95
FY2000
–Have requested 300K hours at NERSC on the T3E and the new IBM SP2 system
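The ~1000 SPECint95 equivalence is simple averaging of CPU-hours over wall-clock time; in the sketch below, the per-processor T3E rating is an assumed round number, not a figure from the talk:

    # How 250K T3E CPU-hours over ~4 months averages out as capacity.
    # The ~11.6 SPECint95 per-processor rating is an illustrative
    # assumption for a late-1990s T3E PE, not a quoted figure.
    cpu_hours = 250_000
    wall_hours = 4 * 30 * 24              # ~4 months of wall clock
    avg_pes = cpu_hours / wall_hours      # ~87 PEs busy around the clock
    spec_per_pe = 11.6                    # assumed per-PE SPECint95
    print(f"~{avg_pes:.0f} PEs sustained "
          f"-> ~{avg_pes * spec_per_pe:.0f} SPECint95")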

STAR Activities at PDSF/NERSC
Interactive s/w development on PDSF
Individual data analysis (ROOT, Gstar, …)
–now: 58 STAR users, 27 from LBL
Gstar production (T3E, SP2 systems)
Detector response simulation
BFC reconstruction of simulated data
–full simulation and embedding studies
Mirror subset of real data for analysis (planned for FY2000)

Summary
Simulations/analysis capability at LBNL for STAR is growing
–Oct. 99: 49% CPU, 85% disk of RCF
Currently 58 STAR users, 31 from outside LBNL
PDSF receiving strong support ($, people) from LBNL Nuclear Science Division and NERSC
In FY00, focus on:
–Simulation - detector response, reconstruction, embedding
–Analysis of real data
Learn more about PDSF & become a user!