1 System G And CHECS Cal Ribbens
Center for High-End Computing Systems (CHECS), Department of Computer Science, Virginia Tech

2 Introducing System G System G (Green) was sponsored in part by the National Science Foundation and the VT College of Engineering (CHECS) to address the gap in scale between research and production machines. The purpose of System G is to provide a research platform for developing high-performance software tools and applications with extreme efficiency at scale. A primary goal was to demonstrate that supercomputers can be both fast and environmentally green. System G is the largest power-aware research system and one of the largest computer science systems research clusters in the world.

3 System G Stats
325 Mac Pro nodes, each with two 4-core 2.8 GHz Intel Xeon processors (2592 cores total).
Each node has eight gigabytes (GB) of RAM and 6 MB of cache.
Mellanox 40 Gb/s (QDR) InfiniBand interconnect.
LINPACK result: 22.8 TFLOPS.
Over 10,000 power and thermal sensors.
Variable power modes: DVFS control (2.4 and 2.8 GHz), fan-speed control, concurrency throttling, etc.
Intelligent power distribution units: Raritan Dominion PX (remotely control the servers and network devices; also monitor current, voltage, power, and temperature through Raritan's KVM switches and secure console servers).
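As a concrete illustration of the DVFS power mode, the following is a minimal sketch of switching one core between the two frequency levels through the Linux cpufreq sysfs interface. It assumes the nodes run Linux with the "userspace" cpufreq governor available and that the caller has root privileges; the paths and the tool name dvfs_set are illustrative assumptions, not System G's actual tooling.

/* dvfs_set.c - minimal sketch: request a CPU frequency via the Linux
 * cpufreq sysfs interface (assumes the "userspace" governor and root). */
#include <stdio.h>
#include <stdlib.h>

static int write_sysfs(const char *path, const char *value)
{
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return -1; }
    fprintf(f, "%s\n", value);
    fclose(f);
    return 0;
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <cpu_id> <freq_khz>\n", argv[0]);
        return 1;
    }
    char gov[128], freq[128];
    snprintf(gov, sizeof gov,
             "/sys/devices/system/cpu/cpu%s/cpufreq/scaling_governor", argv[1]);
    snprintf(freq, sizeof freq,
             "/sys/devices/system/cpu/cpu%s/cpufreq/scaling_setspeed", argv[1]);

    /* select the userspace governor, then request the target frequency */
    if (write_sysfs(gov, "userspace") != 0) return 1;
    if (write_sysfs(freq, argv[2]) != 0) return 1;
    return 0;
}

For example, ./dvfs_set 0 2400000 would drop core 0 to 2.4 GHz and ./dvfs_set 0 2800000 would restore 2.8 GHz (frequencies are written in kHz).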

4 Center for High-End Computing Systems
Srinidhi Varadarajan, Director

5 Vision Our goal is to build a world-class research group focused on high-end systems research. This involves research in architectures, networks, power optimization, operating systems, compilers and programming models, algorithms, scheduling, and reliability. Our faculty hiring in systems is targeted to cover the breadth of these research areas. The center is involved in research and development work, including the design and prototyping of systems and the development of production-quality systems software. The goal is to design and build the software infrastructure that makes HPC systems usable by the broad computational science and engineering community. The center also provides support to high-performance computing users on campus. This involves supporting actual applications, which are then profiled to gauge the performance impact of the center's research.

6 Research Labs
Computing Systems Research Lab (CSRL)
Distributed Systems and Storage Lab (DSSL)
Laboratory for Advanced Scientific Computing and Applications (LASCA)
Scalable Performance Laboratory (SCAPE)
Systems, Networking and Renaissance Grokking Lab (SyNeRGY)
Computational Science Laboratory
Software Innovations Lab

7 Faculty/Students
Faculty: Godmar Back (04), Barbara Ryder (08), Ali Butt (06), Adrian Sandu (03), Kirk Cameron (05), Eli Tilevich (06), Wu Feng (05), Srinidhi Varadarajan (99), Dennis Kafura (82), Layne Watson (78), Cal Ribbens (87), Danfeng Yao (10)
Ph.D. students: 35
MS students: 20+

8 Deployment Details
* 13 racks total, with 24 nodes per rack and 8 nodes per layer.
* 5 PDUs per rack (Raritan PDU model DPCS). Each PDU in System G has a unique IP address, and users can use IPMI to access and retrieve information from the PDUs and also to control them, e.g., remotely shutting down and restarting machines, recording system AC power, etc. (see the sketch below).
* There are two types of switch: 1) Ethernet: a 1 Gb/s Ethernet switch shared by 36 nodes. 2) InfiniBand: a 40 Gb/s InfiniBand switch shared by 24 nodes (one rack).
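As a sketch of the kind of IPMI access described above, the following queries a PDU's sensors and power state by shelling out to ipmitool. It assumes ipmitool is installed and that the PDU accepts IPMI-over-LAN; the hostname and credentials are placeholders, not System G's real values.

/* pdu_query.c - minimal sketch: read PDU sensors and power state over
 * IPMI by invoking ipmitool. Host, user, and password are placeholders. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *host = "pdu-rack01.example";   /* placeholder PDU address */
    const char *user = "admin";                /* placeholder credentials */
    const char *pass = "secret";

    char cmd[256];

    /* list current sensor readings (current, voltage, power, temperature) */
    snprintf(cmd, sizeof cmd,
             "ipmitool -I lanplus -H %s -U %s -P %s sensor list",
             host, user, pass);
    if (system(cmd) != 0)
        fprintf(stderr, "sensor query failed\n");

    /* report power state; "chassis power off|on|cycle" would remotely
     * shut down or restart the attached machine */
    snprintf(cmd, sizeof cmd,
             "ipmitool -I lanplus -H %s -U %s -P %s chassis power status",
             host, user, pass);
    if (system(cmd) != 0)
        fprintf(stderr, "power status query failed\n");

    return 0;
}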

9 More Information About System G
A. Question: Why not purchase regular blades for System G?
1. The Mac Pro machines have the same system configuration at a much lower price.
2. Every node has two extra PCI Express x16 slots; we already use one for InfiniBand.
3. Better heat dissipation for thermal research.
B. InfiniBand vs. Ethernet:
1. Users can always run MPI programs over regular Ethernet.
2. Compared to Ethernet, InfiniBand is much faster (40x), but it requires proper use of InfiniBand APIs and related programming techniques.
3. For MPI programs, the InfiniBand counterparts are MVAPICH (for MPICH) and MVAPICH2 (for MPICH2). Users can compile MPI programs with the MVAPICH implementations to run parallel programs over InfiniBand if necessary, as sketched below.
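The following is a minimal MPI sketch to make the MVAPICH/MPICH point concrete: the same source compiles under either implementation, and only the mpicc wrapper used (MVAPICH2's for InfiniBand, MPICH's for Ethernet) determines which interconnect the binary uses. The file name and launch command below are illustrative.

/* hello_mpi.c - minimal MPI sketch; identical source runs over InfiniBand
 * when built with MVAPICH/MVAPICH2 and over Ethernet when built with MPICH. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                   /* start the MPI runtime     */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* this process's rank       */
    MPI_Comm_size(MPI_COMM_WORLD, &size);     /* total number of processes */
    MPI_Get_processor_name(name, &name_len);  /* node this rank runs on    */

    printf("rank %d of %d on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}

A typical (illustrative) build and launch with MVAPICH2's wrappers would be: mpicc hello_mpi.c -o hello_mpi, then mpirun -np 16 ./hello_mpi.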

10 Useful Links and Contact Info
System G reservation page (wiki):
System G administrator:
System G listener:
MVAPICH and MVAPICH2:

11 A Power Profile for the HPCC Benchmark Suite

