
1 On the road to petascale processing with IU's Big Red Supercomputer and IBM BladeCenter H. Gregory P. Rodgers, IBM; Craig A. Stewart, Indiana University. November 12, 2006

2 License Terms
Rodgers, G.P. and C.A. Stewart. On the road to petascale processing with IU's Big Red Supercomputer and IBM BladeCenter H. 2006. Presentation. Presented at: SC|06 (Tampa, FL, 12 Nov 2006). Available from: http://hdl.handle.net/2022/14749
Portions of this document that originated from sources outside IU are shown here and used by permission or under licenses indicated within this document. Items indicated with a © are under copyright and used here with permission; such items may not be reused without permission from the holder of copyright except where license terms noted on a slide permit reuse.
Except where otherwise noted, the contents of this presentation are copyright 2006 by the Trustees of Indiana University. This content is released under the Creative Commons Attribution 3.0 Unported license (http://creativecommons.org/licenses/by/3.0/), which includes the following terms: you are free to share (to copy, distribute and transmit the work) and to remix (to adapt the work) under the following conditions: attribution – you must attribute the work in the manner specified by the author or licensor (but not in any way that suggests that they endorse you or your use of the work). For any reuse or distribution, you must make clear to others the license terms of this work.

3 Outline
–A snapshot of where we are with real applications (Stewart)
–Challenges and technologies: the IBM perspective (Greg Rodgers)
–The programming challenge
–A 2% solution… or at least 2% of the way to a solution: Big Red
–Users wanted: sign up to use Big Red

4 Path to PetaFLOPS - peak

5 Path to PetaFLOPS - achieved?

6 HPC Development
[Diagram: node performance, capability, complexity, and cost plotted against cluster scalability, positioning rack-mounted nodes, Blue Gene, and Linux blade clusters; an arrow labeled "HPC Development" points toward higher values on both axes.]
HPC development: increase node performance AND increase overall scalability, with scalable systems and scalable applications.
This slide © IBM and may not be reused without permission

7 Multi-paradigm Systems
Given frequency scaling limitations, parallelism is the generally accepted way to reach new performance levels. We need to leverage parallelism at multiple levels:
1. Node-level parallelism -> advanced fabrics and Linux scaling
2. Thread-level parallelism -> multi-core processors
3. Instruction-level parallelism -> SSE3 128-bit vectors
4. And the possibility of multiple interconnects
IBM: do this in a standards framework, e.g. Linux, Open MPI, OpenMP, HPF, UPC (a sketch combining these levels follows below).
This slide © IBM and may not be reused without permission
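
The three levels above map naturally onto MPI ranks across nodes, OpenMP threads within a node, and compiler-vectorized inner loops. The following is a minimal sketch in C of how the levels can be combined; the kernel, the array size, and the compile line are illustrative only and are not taken from the Big Red software stack.

/* Minimal hybrid sketch: MPI across nodes, OpenMP across the cores of one
 * node, and a simple loop the compiler can vectorize for a 128-bit SIMD unit.
 * Illustrative compile line: mpicc -O3 -fopenmp hybrid_axpy.c -o hybrid_axpy */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1000000                      /* local problem size per rank (illustrative) */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);            /* level 1: node-level parallelism (MPI ranks) */

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double *x = malloc(N * sizeof *x);
    double *y = malloc(N * sizeof *y);
    for (int i = 0; i < N; i++) { x[i] = 1.0; y[i] = 2.0; }

    double local_sum = 0.0;

    /* level 2: thread-level parallelism (OpenMP threads on one node's cores);
     * level 3: instruction-level parallelism (the compiler can vectorize this loop) */
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = 0; i < N; i++) {
        y[i] = 2.5 * x[i] + y[i];
        local_sum += y[i];
    }

    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("ranks = %d, global sum = %f\n", size, global_sum);

    free(x);
    free(y);
    MPI_Finalize();
    return 0;
}

On a PowerPC system such as the JS21, the counterpart of the SSE3 example is the VMX/AltiVec unit mentioned on the next slide; the same source compiles either way because vectorization is left to the compiler.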

8 Building Clusters with Blades
Building large clusters of blades with IBM BladeCenter provides a valuable, scalable HPC cyberinfrastructure for research and private-industry customers of IBM:
–Reusable infrastructure
–Simplified node
–Power efficiency
–Out-of-band systems management
The JS21 blade is one of the densest computing solutions today:
–BG = 2.1 GF/core
–P575 = 6.2 GF/core
–Itanium = 5.1 GF/core
–Pentium = 4.8 GF/core
–Opteron = 3.3 GF/core
–970 = 5.8 GF/core
–970MP = 7.4 GF/core
–Additional capability with VMX (vector media extensions SIMD engine)
The BladeCenter H cluster ecosystem is evolving with various fast IPC fabric options:
–GigE, multiple 802.3ad GigE, Myrinet 2000, IB 4x and 10G
This slide © IBM and may not be reused without permission

9 Indiana's "Big Red" JS21 Cluster Project
–Design/configuration discussions started in February 2006
–Ordered via RFP by Indiana University in April 2006
–Built and benchmarked at CSC, Rochester, MN (6 weeks)
–Delivered in June as an Infrastructure Solution to IU Bloomington
–#23 on the Summer 2006 Top500 list, #31 on the November 2006 list; fastest supercomputer at an American university
–Sustains 15 TF out of a theoretical peak of 20 TF (Linpack on 504 Linux JS21 nodes, run over the Indy 500 weekend)
–A very data-rich diskless cluster!
With Big Red, IU has created a base "cyberinfrastructure" that integrates some of the nation's most powerful grid resources, enabling scientists to explore new ways of doing computational science. Users are getting good results using this system connected to the TeraGrid in a variety of ways, ranging from coupled supercomputers to complex workflows, and at a scale that would not be possible with stand-alone data centers.

10 The Big Red Cluster
An IBM 1350 diskless Linux cluster with DDN S2A9500-based GPFS storage.
Cluster components:
–512 JS21 compute nodes + 6 user nodes + 8 spare nodes = 526 nodes total
–2048 compute CPUs (2048 cores * 2.5 GHz * 4 FLOPs/cycle = 20.48 TF peak; see the arithmetic sketch below)
–4.096 TB DDR2 533 MHz RAM (2 GB/processor)
–38 BladeCenter H chassis
–2 Myrinet 2000 256+256 spine switches
–10 p505 servers for distributed systems management, plus 1 p505 management node and 1 spare
–1 petabyte GPFS storage subsystem
18-rack summary:
–10 compute racks, each with 4 BC-H chassis and 1 p505 image server
–1 Myrinet rack with two CLOS256 switches
–1 management rack with user nodes and Force10 networking
–5 racks of DDN S2A9500 disk
–1 rack of GPFS NSD servers (p505)
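
The peak figure on this slide is just cores × clock × floating-point operations per cycle, with 4 FLOPs per cycle per core as implied by the slide's own formula. The short check below reproduces that arithmetic and, using the 15 TF sustained Linpack figure from the previous slide, the resulting efficiency; it is a stand-alone sketch, not part of any Big Red tooling.

/* Quick check of the peak and sustained numbers quoted on these slides. */
#include <stdio.h>

int main(void)
{
    const double cores       = 2048.0;  /* 512 JS21 nodes x 4 cores each */
    const double clock_ghz   = 2.5;     /* PowerPC 970MP clock */
    const double flops_cycle = 4.0;     /* double-precision FLOPs per cycle per core */
    const double linpack_tf  = 15.0;    /* sustained Linpack result (slide 9) */

    double peak_tf = cores * clock_ghz * flops_cycle / 1000.0;  /* GFLOPS -> TFLOPS */

    printf("theoretical peak   = %.2f TFLOPS\n", peak_tf);                 /* 20.48 */
    printf("Linpack efficiency = %.0f%%\n", 100.0 * linpack_tf / peak_tf); /* about 73 percent */
    return 0;
}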

11 This slide © IBM and may not be reused without permission

12 BC-H Rear View Today
[Photo of the rear of a BladeCenter H chassis, with cabling labeled: fiber cables to the Myrinet 2000 switch; short blue cables to the boot and root network; green cables to the service network; yellow cable (bonded Gb, global network) to the Force10 user network.]
This slide © IBM and may not be reused without permission

13 BC-H Chassis and Rack View
[Rack diagram: chassis BC01–BC04 plus the rack's image server.]
Per chassis, 4 of 12 BC-H module bays are used:
–4-port Gb switch for the global Gb network
–4-port Gb switch for distributed boot and root
–1 OPM for Myrinet 2000
–1 management module for the service network
External cables from each rack:
–5 copper 10/100/1000 to the service network hub (10.3) – green cables
–5 copper 10/100/1000 to the install network hub (10.4) – short blue cables
–17 Gb Ethernet to the global gigabit network (10.2) – yellow cables
–56 Myrinet 2000 fiber (10.1) – orange fiber cables
This slide © IBM and may not be reused without permission

14 Distributed Image Management
–Created for the MareNostrum diskless environment and fast image management
–DHCP server for JS21 network boot, plus a distributed network root file system
–56 JS21 blades in one rack share a p505 image server
–Fast hierarchical synchronous or asynchronous image update (a hypothetical sketch of the hierarchical update idea follows below)
–Coming next year to IBM alphaWorks
–Hierarchical model scalable to thousands of Linux nodes
[Diagram: image server with disks connected to the blades over the boot network ("bootnet").]
This slide © IBM and may not be reused without permission
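
The talk does not show DIM's internals, so the following is purely a hypothetical illustration of the hierarchical fan-out idea: a rack-level image server pushing one root image to its blades in parallel. The hostnames, the image path, and the use of rsync are assumptions made for the sketch, not a description of the actual DIM implementation.

/* Hypothetical illustration only: a rack-level image server pushing a root
 * image to each of its blades in parallel with rsync.  Hostnames, paths, and
 * the use of rsync are assumed for this sketch; DIM's real mechanism is not
 * documented in this presentation. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define BLADES_PER_RACK 56   /* 56 JS21 blades share one p505 image server */

int main(void)
{
    for (int b = 1; b <= BLADES_PER_RACK; b++) {
        pid_t pid = fork();
        if (pid == 0) {                            /* child: update one blade */
            char target[64];
            snprintf(target, sizeof target, "blade%02d:/", b);  /* hypothetical hostnames */
            execlp("rsync", "rsync", "-a", "--delete",
                   "/export/compute-root/", target, (char *)NULL);
            _exit(127);                            /* reached only if exec fails */
        }
    }

    /* synchronous update: wait for every push to finish before reporting;
     * an asynchronous variant would return immediately and poll later */
    while (wait(NULL) > 0)
        ;
    puts("image update pushed to all blades in this rack");
    return 0;
}

Scaling this to thousands of nodes is then a matter of repeating the same push one level up, from a central server to the ten rack-level image servers.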

15 Petascale challenges
–Expose complexity to application programmers so that it can be exploited
–Hide complexity from most application scientists
–Deal with data

16 Selective management of complexity
Synthetic Programming Environment
–Chris Mueller, Andrew Lumsdaine, Open Systems Lab, Pervasive Technology Labs at IU
Open MPI
–Libraries assembled at compile time, sensitive to system characteristics
–Capable of supporting multiple interconnects in multiple ways
–Fault tolerance: data integrity; absorb the loss of a member of the MPI world (MPI_COMM_WORLD) – a minimal error-handling sketch follows below
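
As a small illustration of the fault-tolerance point: standard MPI aborts the whole job on a communication error by default, so a first step toward absorbing failures is asking the library to return errors to the application instead. The sketch below shows only that error-handling hook; actually surviving the loss of a rank in MPI_COMM_WORLD required fault-tolerance support beyond the MPI standard of the time, which is the kind of capability this slide refers to.

/* Minimal sketch: have MPI report communication errors to the caller instead
 * of aborting the job.  This alone does not let a job absorb the loss of a
 * member of MPI_COMM_WORLD; it only shows where recovery logic would hook in. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* the default handler is MPI_ERRORS_ARE_FATAL; switch to returning codes */
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int token = rank;
    int err = MPI_Bcast(&token, 1, MPI_INT, 0, MPI_COMM_WORLD);
    if (err != MPI_SUCCESS) {
        char msg[MPI_MAX_ERROR_STRING];
        int len = 0;
        MPI_Error_string(err, msg, &len);
        fprintf(stderr, "rank %d: broadcast failed: %s\n", rank, msg);
        /* application-level recovery (checkpoint restart, communicator
         * rebuild, ...) would go here */
    }

    MPI_Finalize();
    return 0;
}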

17

18 Big Red in Context
–From June to October 2006, the largest academic supercomputer in the US; originally 23rd, now 31st on the Top500 list
–Most important point: Big Red is part of the TeraGrid, meaning that with a TeraGrid allocation you can use Big Red and the associated equipment
–The software stack includes typical Linux HPC applications and libraries as well as Globus-based grid software
So we are taking an existing community of users that is very accustomed to the client-server way of accessing supercomputers and asking: what are some grid resources we could offer that would be interesting to those kinds of users? That is where IU came up with its initial set of services across the TeraGrid, which allows a user to have an allocation that can be used on any of the machines.

19 IU's Cyberinfrastructure

20 Software
–Absoft compilers
–Vampir
–MILC, BLAST (AltiVec-optimized versions coming)
–NAMD, WRF, CPMD, Quantum Espresso, GeoFest, Jaguar, Pacer3D, Gromacs
–"We do requests"

21

22 The TeraGrid wants you on the path to a PetaFLOPS!
Why a TeraGrid account?
–You get up to 30,000 hours of computing time for less than 30 minutes of your time.
–You also get, by default, 1 TB of storage space on IU's HPSS system.
Prerequisites:
–A CV in ps, htm/html, doc, or pdf format. It can be very short.
–A short proposal.
How to apply (easy version):
–Go to http://kb.iu.edu/
–Enter the following in the search window: teragrid start
–Follow the directions

23 Once you have applied…
…and gotten a DAC, start using Big Red!
For help or requests, send email to rc@iu.edu.
And if you like using Big Red, apply for larger allocations through the MRAC and LRAC processes.

24 Acknowledgements
Lilly Endowment, Inc.
–Pervasive Technology Labs: create new inventions, devices, and software that extend the capabilities of information technology in advanced research
–INGEN: building on the genomics revolution to improve health and the economy of Indiana
–METACyt: complements INGEN while expanding biological scope
IBM Life Sciences Institutes of Innovation
–Partnership between IBM and IU to develop computer programs for 3D cell modeling using genomic, proteomic, and cell physiological data
The work described here has been funded by several federal and state grants, including the following:
–NSF Grants No. 0116050, 0338618, 0504075, 0451237, 0521433 to Indiana University
–The integration of IU's cyberinfrastructure with Purdue's and the TeraGrid has been supported in part by grants from the 21st Century Fund of the State of Indiana
–The contents of this report are solely the responsibility of the authors and do not necessarily represent the official views of the National Science Foundation or other funding agencies

25 Questions? And thanks for your attention…

