SUMS Storage Requirement
- 250 TB fixed disk cache
- 130 TB annual increment for permanently on-line data
- 100 TB work area (not controlled by SUMS)
- 2 PB near-line tape storage
- 1+ GB/s sustained I/O delivered to clients
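
As a rough cross-check of these requirements (not taken from the slides), the Python sketch below relates the 2 PB near-line figure to the 130 TB/yr growth of the permanent archive, and converts the 1 GB/s sustained I/O target into a daily data volume.

```python
# Back-of-the-envelope check of the SUMS storage numbers (illustrative only).

annual_increment_tb = 130      # permanently on-line data added per year
nearline_capacity_tb = 2000    # 2 PB near-line tape storage
sustained_io_gbps = 1.0        # 1+ GB/s delivered to clients

# Years of archive growth that fit in the near-line store.
years_of_growth = nearline_capacity_tb / annual_increment_tb
print(f"2 PB near-line holds about {years_of_growth:.1f} years at 130 TB/yr")

# Data volume moved per day at the sustained I/O requirement.
tb_per_day = sustained_io_gbps * 86400 / 1000
print(f"1 GB/s sustained corresponds to roughly {tb_per_day:.0f} TB per day")
```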

SUMS Disk: SGI IS
- 1 TB drives in two racks (960 drives max)
- 480 TB usable space
- 2.5 GB/s sustained I/O
- Parity check on read
- Same write speed as read
- $900K
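
For scale, a minimal sketch of the per-drive load implied by the quoted array throughput; the populated drive count is not legible in the transcript, so 480 data drives (matching the 480 TB usable figure with 1 TB drives) is assumed.

```python
# Per-drive load implied by the array throughput (illustrative).
# The populated drive count is not legible in the transcript; 480 data
# drives is assumed from "480 TB usable" with 1 TB drives.

array_throughput_mb_s = 2500   # 2.5 GB/s sustained, in MB/s
assumed_data_drives = 480      # assumption, see comment above

per_drive_mb_s = array_throughput_mb_s / assumed_data_drives
print(f"~{per_drive_mb_s:.1f} MB/s sustained per drive")  # a modest per-spindle load
```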

SUMS Tape Library: Spectra Logic T-series
- LTO-4 drives: 800 GB per tape, 120 MB/s per drive
- 2200 slots in 2 frames
- Expandable to 8 frames, 10K slots
- $650K including media and spare parts
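
The capacity figures follow from slot count times cartridge size. The sketch below works them out; the installed drive count is garbled in the transcript, so it appears only as a hypothetical parameter.

```python
# Library capacity and aggregate drive bandwidth (illustrative).
# The installed drive count is garbled in the transcript, so it is a
# hypothetical parameter here.

tape_capacity_gb = 800     # native LTO-4 cartridge capacity
slots_installed = 2200     # 2 frames
slots_expanded = 10000     # 8-frame expansion ("10K slots")
drive_rate_mb_s = 120      # per LTO-4 drive
assumed_drives = 8         # hypothetical count, not from the slide

print(f"Installed: {slots_installed * tape_capacity_gb / 1e6:.2f} PB native")
print(f"Expanded:  {slots_expanded * tape_capacity_gb / 1e6:.2f} PB native")
print(f"{assumed_drives} drives stream up to "
      f"{assumed_drives * drive_rate_mb_s / 1000:.2f} GB/s aggregate")
```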

SUMS Server: SGI Altix
- 1.67 GHz IA64 cores
- 96 GB memory
- 6 x 1 Gb/s Ethernet
- 2 x 10 Gb/s Ethernet
- 12 x 4 Gb/s Fibre Channel
- 4 x 20 Gb/s InfiniBand
- $140K
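
A quick unit conversion (illustrative only, ignoring protocol overhead) shows the raw bandwidth of each interface class on this server, which puts the 2.5 GB/s disk array and the 1+ GB/s client-delivery requirement in context.

```python
# Raw link bandwidth per interface class (unit conversion only; protocol
# overhead and real-world efficiency are ignored).

def gbit_to_gbyte(gbit_per_s):
    """Convert a link rate from Gb/s to GB/s (8 bits per byte)."""
    return gbit_per_s / 8

links = {
    "Ethernet":      6 * 1 + 2 * 10,   # 6 x 1 Gb/s + 2 x 10 Gb/s
    "Fibre Channel": 12 * 4,           # 12 x 4 Gb/s
    "InfiniBand":    4 * 20,           # 4 x 20 Gb/s
}

for name, gbit in links.items():
    print(f"{name:13s}: {gbit:3d} Gb/s raw = {gbit_to_gbyte(gbit):.1f} GB/s")
```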

JSOC Processor Farm
- 64 cluster nodes, 512 CPU cores, 2 TB memory
- Currently: 18 nodes, 110 cores, 292 GB memory
- NFS-RDMA over InfiniBand for SUMS access
- User login on head node only
- Batch jobs only
- Interactive analysis machines to be added later
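
The full-cluster totals are consistent with the per-node configuration on the next slide (8 cores, 32 GB per node); the check below shows the arithmetic. The current 18-node installation does not match those per-node figures, so only the full build-out is checked here.

```python
# Full-cluster totals from the per-node figures on the compute-node slide
# (8 cores, 32 GB per node).

nodes = 64
cores_per_node = 8
mem_per_node_gb = 32

print(f"{nodes} nodes x {cores_per_node} cores = {nodes * cores_per_node} cores")
print(f"{nodes} nodes x {mem_per_node_gb} GB   = "
      f"{nodes * mem_per_node_gb / 1024:.0f} TB memory")
```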

JSOC Compute Node: SGI Gallup-2
- Intel "Harpertown" 45 nm quad-core CPUs
- 8 cores, 32 GB memory, 90 GFLOP/s per node (lws2: 16 cores, 64 GB, 96 GFLOP/s)
- Two nodes per 1U (1.75") of rack space
- 16 nodes now, 48 later
- $160K (16 nodes)
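
The ~90 GFLOP/s per-node figure is consistent with the peak double-precision rate of a two-socket Harpertown node, as the sketch below shows; the clock speed is an assumption, since it is not stated in the transcript.

```python
# How the ~90 GFLOP/s per-node figure can be reproduced (illustrative).
# A Harpertown core can retire up to 4 double-precision FLOPs per cycle
# (packed SSE add + multiply); the clock speed is assumed, not from the slide.

cores_per_node = 8
flops_per_cycle = 4        # peak DP FLOPs per core per cycle
assumed_clock_ghz = 2.83   # assumption; not stated in the transcript

peak_gflops = cores_per_node * flops_per_cycle * assumed_clock_ghz
print(f"Peak: ~{peak_gflops:.0f} GFLOP/s per node")  # ~91, close to the quoted 90
```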

JSOC Hardware Status
- SUMS and pipeline hardware ready to buy
- Database servers soon
- Interactive analysis machines later
- Machine room power ready; may need a 3rd cooling unit for the full 64-node cluster