IBM 1350 Cluster Expansion
Doug Johnson, Senior Systems Developer

Statewide Users Group: Mar 13,

Recommendation Summary
Expand the existing IBM E1350 cluster with a BioSciences focus:
–The proposed expansion, while compatible with our existing system, will have features that are a good fit for biosciences
–The proposed expansion would propel the system's rank to 4th among U.S. academic supercomputing centers
–The system would be ranked 21st in the overall Top 500
–Will provide researchers with a total of 59 TF of peak performance
–Increases memory from 8.4 TB to 17.5 TB
–Adds 240 TB of integrated storage
–The system will be set up for dynamic changes in the operating environment to benefit innovation by the biosciences community
–The system will also be suited to many other applications and will fit into our current environment, including staff knowledge and capacity
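The 59 TF figure can be sanity-checked against the node specifications on the next slide. The sketch below is a back-of-the-envelope estimate, not a vendor number: the 4 FLOPs-per-cycle value and the ~22 TF peak assumed for the existing Glenn system are assumptions, not figures from the slides.

```python
# Back-of-the-envelope check of the quoted 59 TF peak figure.
# Node count, socket count, and clock speed are taken from the
# "Upgrade Details" slide; FLOPs per cycle and the existing
# cluster's peak are assumptions.

NEW_NODES = 500          # "approximately 500 nodes"
SOCKETS_PER_NODE = 2     # "2 quad-core CPUs per system"
CORES_PER_SOCKET = 4
CLOCK_GHZ = 2.3          # "2.3 GHz likely clock speed"
FLOPS_PER_CYCLE = 4      # assumed: typical for CPUs of that generation
EXISTING_PEAK_TF = 22.0  # assumed peak of the current Glenn system

new_tf = (NEW_NODES * SOCKETS_PER_NODE * CORES_PER_SOCKET
          * CLOCK_GHZ * FLOPS_PER_CYCLE) / 1000.0
print(f"expansion peak: {new_tf:.1f} TF")                     # ~36.8 TF
print(f"combined peak:  {new_tf + EXISTING_PEAK_TF:.1f} TF")  # ~58.8 TF
```

With those assumptions the expansion alone contributes about 36.8 TF, and adding the existing system's peak lands within rounding of the 59 TF quoted on the slide.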

Statewide Users Group: Mar 13,

Glenn Cluster Upgrade - Upgrade Details
The cluster expansion will add approximately 500 nodes:
–2 quad-core CPUs per system, 2.3 GHz likely clock speed
–16/24 GB memory (24 GB in ~1/3 of the systems)
–20 Gbit/sec IB network
–1 Gigabit Ethernet
–500 GB or larger internal disk drive
There will also be approximately 5 larger-memory systems with a total of 16 processor cores and 64 GB of memory.
Newer versions of the IB network hardware will deliver 20 Gb/sec of throughput to each cluster node, with a communication latency less than half that of the existing network. The newer generation of IB network will integrate with the existing network, allowing parallel applications to run across both the new and existing nodes.
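A latency claim like "less than half" is typically verified with a small-message ping-pong micro-benchmark between two nodes. The sketch below is one minimal way to do that; it assumes mpi4py is available on the cluster and that the job is launched with two ranks placed on different nodes (the launch command in the comment is an assumption about the local environment, not a documented procedure).

```python
# Minimal MPI ping-pong latency probe for comparing the old and new
# IB fabrics. Assumed launch: mpiexec -n 2 python pingpong.py
from mpi4py import MPI
import time

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
reps = 1000
buf = bytearray(8)  # small message, so timing is latency-dominated

comm.Barrier()
start = time.perf_counter()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1, tag=0)
        comm.Recv(buf, source=1, tag=0)
    else:
        comm.Recv(buf, source=0, tag=0)
        comm.Send(buf, dest=0, tag=0)
elapsed = time.perf_counter() - start

if rank == 0:
    # Each iteration is a full round trip (two messages),
    # so one-way latency is elapsed / reps / 2.
    print(f"one-way latency: {elapsed / reps / 2 * 1e6:.2f} us")
```

Running the same probe across a pair of old nodes and a pair of new nodes would show directly whether the new fabric halves the small-message latency.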

Statewide Users Group: Mar 13,

Glenn Cluster Upgrade - Current Project Plan

Statewide Users Group: Mar 13,

Glenn Cluster Upgrade - Milestones