©2009 HP Confidential template rev. 12.10.09

BUILDING THE GREENEST PRODUCTION SUPERCOMPUTER IN THE WORLD
Ed Turkel, Manager, Worldwide HPC Marketing
4/7/2011
TSUBAME 2.0 – THE GOALS
– Achieve 30x the performance of TSUBAME 1.0
– Support diverse research workloads: weather, informatics, protein modeling, etc.
– Balanced design for scale: high bandwidth to the GPUs, large CPU and GPU memory capacity, a high-bandwidth, low-latency, full-bisection network, and solid-state disks for high-bandwidth, scalable storage I/O
– Green: stay within the power threshold; the system is estimated to require 1.8 MW with a PUE of 1.2
– Density (GPUs per CPU footprint) to reduce cost and floor space
– Expand support for Tokyo Tech's 2,000 users; provide dynamic "cloud-like" provisioning for both Microsoft® Windows® and Linux workloads
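The power goal above combines an IT load estimate with a PUE target. As a minimal sketch of how those two figures relate, assuming the 1.8 MW figure is the IT (compute) load rather than total facility draw:

```python
# Sketch: facility power implied by the slide's goals.
# Assumption: 1.8 MW is IT load; PUE = facility power / IT power.
it_power_mw = 1.8        # estimated IT load for TSUBAME 2.0 (MW)
pue = 1.2                # target Power Usage Effectiveness

facility_power_mw = it_power_mw * pue            # total draw incl. cooling/overhead
overhead_mw = facility_power_mw - it_power_mw    # cooling and distribution losses
print(f"Facility power: {facility_power_mw:.2f} MW "
      f"({overhead_mw:.2f} MW cooling/distribution overhead)")
# prints "Facility power: 2.16 MW (0.36 MW cooling/distribution overhead)"
```

A PUE of 1.2 was aggressive for 2010-era datacenters, which is why the rack-level liquid cooling described later in the deck matters.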
HP PROLIANT SL6500 SERVER
Designed for scalable performance and efficiency.
– Performance and flexibility: mix and match multiple compute- and storage-optimized nodes for varying requirements
– Convenient serviceability: individually serviceable nodes; cool-aisle cabling and node access; hot-swap servers, fans, and power supplies
– Energy efficiency: shared power and cooling architecture; energy-efficient fans and common-slot power supplies
– Advanced Power Management: power capping, on/off control, and consumption/utilization logging
HP PROLIANT SL390s HPC SERVER (NEW)
Designed for a broad set of HPC workloads.
SL390s 2U SIMPLIFIED ARCHITECTURE
[Block diagram: two Intel Xeon 5500/5600 processors with DDR3 memory, linked by QPI; dual I/O Hubs (IOH), each connected via QPI x16, for maximum GPGPU bandwidth; two PCIe x16 risers for GPGPUs plus a PCIe x8 slot for 10GbE or QDR InfiniBand.]
Supports NVIDIA Tesla M1060, M2050, M2070, and M2070-Q.
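The dual-IOH design exists because a single I/O hub cannot feed several x16 GPU slots at full rate. A back-of-envelope sketch of the per-node GPU bandwidth, assuming PCIe Gen2 (roughly 500 MB/s effective per lane per direction, a common rule of thumb) and three x16 GPU slots per node as configured in TSUBAME 2.0:

```python
# Rough aggregate GPU bandwidth for the dual-IOH SL390s layout.
# Assumptions: PCIe Gen2 (~500 MB/s per lane per direction), 3 x16 GPU slots.
lanes_per_slot = 16
mb_per_lane = 500            # effective per-direction bandwidth, PCIe Gen2
gpu_slots = 3                # GPUs per SL390s node in the TSUBAME 2.0 config

per_gpu_gbs = lanes_per_slot * mb_per_lane / 1000   # GB/s per GPU slot
total_gbs = per_gpu_gbs * gpu_slots                 # aggregate per node
print(f"~{per_gpu_gbs:.0f} GB/s per GPU, ~{total_gbs:.0f} GB/s aggregate per node")
# prints "~8 GB/s per GPU, ~24 GB/s aggregate per node"
```

Splitting the slots across two IOHs keeps each hub's upstream QPI link from becoming the bottleneck when all three GPUs stream data simultaneously.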
[Photos: SL390s server, s6500 chassis, MCS G2 rack]
COMPUTE RACK – BUILDING BLOCK FOR TSUBAME 2.0
The HP Modular Cooling System G2 rack enables maximum power density. Each 42U MCS G2 rack contains:
– 30 SL390s G7 2U servers (60 CPUs and 90 GPGPUs)
– 8 chassis with Advanced Power Management
– 1 HP network switch for the shared console and admin local area network
– 2 airflow dams
– 4 Voltaire QDR InfiniBand 4036 36-port leaf switches
– Power distribution units (PDUs)
Power per rack: approximately 35 kW.
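The rack figures above imply a per-server power budget well beyond what air-cooled racks of the era typically handled, which is the motivation for the liquid-cooled MCS G2 enclosure. A quick sketch from the slide's numbers:

```python
# Back-of-envelope per-server budget from the slide's rack figures.
rack_power_kw = 35       # approximate power per rack
servers_per_rack = 30    # SL390s G7 2U servers
gpus_per_rack = 90       # 3 GPGPUs per server

kw_per_server = rack_power_kw / servers_per_rack
print(f"~{kw_per_server:.2f} kW per 2U server "
      f"(2 CPUs + {gpus_per_rack // servers_per_rack} GPUs + share of fans/switches)")
# prints "~1.17 kW per 2U server (2 CPUs + 3 GPUs + share of fans/switches)"
```

At ~35 kW in 42U, a single rack draws several times the 5–10 kW that conventional air-cooled datacenter racks of 2010 were designed for.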
PULLING IT ALL TOGETHER
– 2.4 petaflops across 1,440 nodes
– ~50 compute racks + 6 switch racks in two rooms (160 m² total)
– 1.4 MW (max, Linpack); 0.48 MW (idle)
INDUSTRY-LEADING PERFORMANCE/WATT/$
The Greenest Production Supercomputer in the World²
– #4 supercomputer on the TOP500¹
– 142% more TFLOPS per server¹
– 21% better price/performance¹
– 51% better performance/watt²
Innovation on top of industry standards delivers price and power advantages.

¹ Nov 2010 TOP500 results: Tianhe-1A (7,168 servers with Xeon X5670 processors drawing 4.04 MW, 4.7 PF peak, HP internally estimated price) compared to 1,442 HP SL390s G7 servers with Xeon X5670 processors drawing 1.39 MW, 2.3 PF peak, HP internally estimated price. http://www.top500.org/list/2010/11/100
² Nov 2010 Green500 results: http://www.green500.org/lists/2010/11/top/list.php
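The deck gives peak performance and maximum Linpack power draw but not the measured Linpack Rmax, so the official Green500 MFLOPS/W figure cannot be reproduced exactly from these slides. As a hedged sketch, the numbers given bound the efficiency from above:

```python
# Upper-bound energy-efficiency estimate from the deck's figures.
# Caveat: Green500 uses measured Linpack Rmax, not peak, so this overestimates
# the official MFLOPS/W ranking; treat it as a bound, not the listed number.
peak_pflops = 2.4        # system peak (earlier slide)
linpack_power_mw = 1.4   # max draw during Linpack (earlier slide)

gflops_per_watt = (peak_pflops * 1e6) / (linpack_power_mw * 1e6)
print(f"Upper-bound efficiency: {gflops_per_watt:.2f} GFLOPS/W")
# prints "Upper-bound efficiency: 1.71 GFLOPS/W"
```

Even this rough bound illustrates the claim: a CPU-only system of the same generation delivering 4.7 PF peak at 4.04 MW works out to well under half the GFLOPS/W.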
EXAMPLE APPLICATIONS
– 3D protein rigid docking (node 3-D FFT)
– ASUCA weather forecast [SC10 Best Student Paper finalist]
– Multiscale simulation of cardiovascular flows [SC10 Gordon Bell finalist]
– Materials science (dendrite solidification)
– Earthquake simulation
"... Tokyo Tech Tsubame2 is by far the most power-dense supercomputer ever built, with unprecedented performance and power capacity per rack."
Satoshi Matsuoka, Professor, Director of Research Infrastructure, GSIC, Tokyo Tech
Q&A
http://www.gsic.titech.ac.jp/en
http://www.hp.com/go/hpc