HPC Business update HP Confidential – CDA Required


Presentation on theme: "HPC Business update HP Confidential – CDA Required"— Presentation transcript:

1 HPC Business update
HP Confidential – CDA Required
Madhu Matta, VP & GM, Service Provider and HPC Business
January 2010

2 The HP HPC Business New & Expanded WW Coverage
Focus on High Performance Computing: new and expanded worldwide coverage, with dedicated regional sales leaders.

The message here is that we continue to have a strong organizational focus on HPC, at the worldwide level and in the regions. Note that we have added to the team; introduce the key players, especially the new folks: Marc and Glenn. The worldwide HPC team is growing!

3 HP leadership in HPC: Q1–Q3 YTD HPC market share by vendor, by revenue (IDC)

We're very pleased that our continued strong focus on HPC is translating into success in the marketplace, as demonstrated in IDC's tracking of the HPC market. (Note: IDC has only reported up to Q2'10; Q3 numbers will be out shortly.) In the first half of 2010, HP led the market again, with 32.9% share by revenue, 4% ahead of IBM and almost double Dell's share. Narrowing that down to just the cluster market, what IDC calls "bright clusters", HP again leads the market with 27.7% share by revenue, 2% above IBM and 7% above Dell.

It's important to note that after HP, IBM, and Dell, all other vendors have shares of under 5%, including specialty HPC vendors like SGI and Cray. Sun has been declining in share over the years and, with its acquisition by Oracle, is effectively exiting the HPC market.

Source: Q3'10 IDC Technical Server QView, Dec 2010

4 ACCELERATE into overdrive with Converged Infrastructure for HPC
Coaching tips: As you walk through the architecture conceptually, show how HP will deliver based on a position of strength, starting with proven technologies customers rely on today. Point out how we will take those innovations forward, then integrate and extend their capabilities in a converged infrastructure. We recommend reading the relevant whitepapers and the more extensive speaker notes for this slide; below is a high-level summary.

** Spend some time on this slide ** This slide summarizes HP's strategy for HPC. Our focus is on accelerating our customers' innovation; we say "Accelerate into overdrive." We do that with our Converged Infrastructure for HPC, which brings together all of our infrastructure products, including servers, storage, networking, infrastructure software, and datacenter power and cooling, into integrated solutions that focus all of HP's added value on creating complete HPC solutions, building on HP's overall Converged Infrastructure strategy. We break that into four pillars:

Purpose-built solutions for scale: lean and green computation and storage. This includes all the servers, storage, networking, and infrastructure products needed to create compute and storage solutions. Working with the product teams in ISS, across ESSN, and with our partners, we define the specific requirements from our HPC customers and deliver HPC-specific solutions that meet the performance requirements of our customers' HPC workloads, from small workgroups to the top of the TOP500, highly optimized for density and power efficiency. Examples: SL390s G7, BL2x220c G7 with ConnectX-2 LOM for 10GbE/IB and GPGPU integration, with high density and low power.

Holistic energy efficiency: intelligent energy management across systems and facilities; performance and power density. Energy efficiency has become a critical issue for HPC datacenters. As customers scale their systems to meet increasing performance needs, power becomes a growing percentage of OpEx and an impediment to scaling beyond the power limits of their datacenters. We view the solution holistically, from the design of our systems and the management of their power utilization through the design and management of the entire datacenter. Examples: Power Capping, off-line UPS, 94% Platinum power supplies, SL Advanced Power Manager, Environmental Edge, POD.

Modular and adaptable solutions: grow-on-demand, from entry cluster to POD to cloud; agile, tested solutions for HPC. This is our modular approach to building HPC solutions, based on the Unified Cluster Portfolio: build-to-order solutions that are fully qualified and tested, and that grow easily as users' needs expand. This extends to POD as a modular datacenter, where customers can grow their datacenter footprint by adding PODs rather than having to build new datacenters sized for multi-year growth. Examples: Unified Cluster Platforms, HP Cluster Platforms, HP POD-Works, Factory Express, SL; IVEC and Purdue, blade clusters in POD.

Experience, expertise & services: designed-in expertise in standard offerings; dedicated design and support services; worldwide Competency Centers; tailored services. The years of experience we have building HPC solutions are built into our products and build-to-order solutions, along with the HPC and application expertise to consult with our customers to best design and deliver solutions that meet their needs, all the way to specialized services for scale-out systems, including HPC clusters. Talk about our worldwide team and the regional competency centers.

5 Purpose-built systems for scale: SL6500
Common modular architecture optimized for performance and efficiency. Superior performance & efficiency vs. the #1 TOP500 system (Jun'10): 8.8x performance density (Rmax/server) and 2.6x power efficiency (Rmax/Watt), with 1,440 HP ProLiant SL390s G7 servers, 92% fewer servers.

The focus of these next few slides is what we launched on Oct 5. With our new SL6500 architecture, we are delivering one architecture that can run any application at any scale. The SL6500 is a single, high-performance, scalable platform for HPC: a common modular architecture that can scale from one node to thousands while delivering breakthrough energy efficiency and up to one teraflop per unit of rack space. It is built on a standard platform that can be highly customized and tuned with the HP ProLiant SL390s and SL170s servers to meet varying application demands. The SL6500 with ProLiant SL390s servers can accommodate as many as 8 servers, or up to 4 servers with 12 GPUs.

You heard Satoshi Matsuoka-san talk about Tsubame2, and these are the servers that have enabled that world-class system. To show the impact of the SL390s' performance and power density, we've compared Tsubame2 to the #1 system on the June TOP500 list: Jaguar at Oak Ridge National Labs, a Cray XT5 system with a proprietary design targeting the high end of HPC systems. Looking at sustained performance per server, Tsubame2 provides 8.8x the performance density while providing 2.6x more performance per watt. Compared with prior approaches to building high-performance HPC systems, it is this performance and power density in a fully integrated solution that enables the SL390s G7 to deliver maximum performance in minimal space with minimal power consumption. Combined with iLO3 management and the SL Advanced Power Manager, which together make managing a large cluster simple and efficient, and single-node serviceability from the front, the SL6500 Scalable System and SL390s G7 server are ideally suited for high-performance HPC clusters.

Key attributes: modularity, flexibility, serviceability, customizable to different demands.

Tech specs:
- SL170 (world-class performance per watt and density): 16 DDR3 DIMM slots, LO100i management, 2x 1Gb Ethernet, 2 LFF or 4 SFF HDDs, 1U half width.
- SL390 (balanced density, high-performance fabric and management): 12 DDR3 DIMM slots, iLO 3 management, 40Gb IB / 10Gb Ethernet.

Jaguar, Oak Ridge National Labs (USA): Cray XT5, 18,688 compute nodes, each with 2 Opteron 2435 CPUs; Rpeak 2.33 PF, Rmax 1.76 PF.
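The 8.8x and 92% figures on this slide follow directly from numbers quoted elsewhere in the deck (Tsubame2: 1.19 PF Linpack, 1,440 servers; Jaguar: 1.76 PF Rmax, 18,688 nodes). A minimal sketch of the arithmetic:

```python
# Sanity-check of the slide's density comparison, using only figures quoted in this deck.
tsubame_rmax_tf, tsubame_servers = 1190.0, 1440   # Tsubame2: 1.19 PF Linpack, 1,440 SL390s G7
jaguar_rmax_tf, jaguar_servers = 1760.0, 18688    # Jaguar: 1.76 PF Rmax, 18,688 nodes

# Sustained performance per server (Rmax/server), and the ratio between the two systems.
density_ratio = (tsubame_rmax_tf / tsubame_servers) / (jaguar_rmax_tf / jaguar_servers)

# Server-count reduction.
fewer_servers = 1 - tsubame_servers / jaguar_servers

print(f"{density_ratio:.1f}x performance density")  # 8.8x
print(f"{fewer_servers:.0%} fewer servers")         # 92%
```

The 2.6x power-efficiency ratio is not recomputed here because Jaguar's power figure is truncated in this transcript.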

6 Purpose-built systems for scale: SL6500
Common modular architecture optimized for performance and efficiency. Superior performance & efficiency vs. the largest conventional cluster, #5 on the TOP500 (Jun'10): 10.1x performance efficiency (Rmax/node) and 2.7x power efficiency (Rmax/Watt), with 1,440 HP ProLiant SL390s G7 servers, 85% fewer servers.

This slide repeats the SL6500 story from the previous slide but changes the comparison: here Tsubame2 is compared to the largest conventional cluster system on the June TOP500 list, Pleiades at NASA Ames, an SGI ICE system using industry-standard servers with features targeting the high end of HPC systems. Looking at sustained performance per server, Tsubame2 provides 10.1x the performance density while providing 2.7x more performance per watt.

(Key attributes and tech specs are identical to the previous slide.)

Pleiades, NASA/Ames: SGI ICE, 9,472 servers, various Intel Xeon; Rmax 772 TF, Rpeak 973 TF, power 3,096 kW.
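As with the Jaguar comparison, the 10.1x and 85% figures here can be reproduced from the quoted Rmax values and server counts:

```python
# Sanity-check of the Pleiades comparison, using figures quoted in this deck.
tsubame_rmax_tf, tsubame_servers = 1190.0, 1440   # Tsubame2: 1.19 PF Linpack, 1,440 servers
pleiades_rmax_tf, pleiades_servers = 772.0, 9472  # Pleiades: 772 TF Rmax, 9,472 servers

# Sustained performance per server (Rmax/node) ratio.
density_ratio = (tsubame_rmax_tf / tsubame_servers) / (pleiades_rmax_tf / pleiades_servers)

# Server-count reduction.
fewer_servers = 1 - tsubame_servers / pleiades_servers

print(f"{density_ratio:.1f}x performance density")  # 10.1x
print(f"{fewer_servers:.0%} fewer servers")         # 85%
```

The 2.7x power ratio is not recomputed, since the deck does not state which Tsubame2 power figure it used for this comparison.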

7 HP Innovation at Tokyo Institute of Technology
Next-generation Tsubame 2: breakthrough performance, less space, lower power. Storage: DDN Lustre (SFA10000).

Goals: 30x performance increase over Tsubame 1.0; a diverse research workload; fit in 200 m² and 1.8 MW of power; target PUE of 1.2; "cloud-like" provisioning for both Microsoft® Windows® and Linux workloads. Result: 2.4 PetaFLOPS peak, 1.19 PetaFLOPS Linpack.

Key to the Tokyo Tech Tsubame2 project were their stringent goals: 30x the performance of Tsubame1 while supporting a very diverse research workload, but within only 200 square meters, with only 1.8 MW of available power and a target PUE of 1.2, plus support for a dynamic, cloud-like environment that easily provisions Linux or Microsoft Windows and provides broad access to the system.

Don't try to explain the design of the cluster; just make the point that it's based on about 1,400 SL390s G7 systems, each with 3 NVIDIA Tesla GPGPUs. The system was designed by HP, NEC, and Tokyo Tech, with contributions from all of the listed vendors: HP and NEC with Tokyo Tech on design; HP the SL390s servers; NEC as prime, for deployment and integration; NVIDIA the GPGPUs and help with software tuning; Intel the Xeon CPUs; Mellanox the IB adapters in the SL390s; Voltaire the IB switches; Microsoft help bringing up and tuning Windows HPC Server. This really shows our partnering strategy at its best, with best-of-breed technology from each vendor.

Configuration: 1,408 SL390s G7 2U nodes with 3 NVIDIA M2050 GPGPUs each; 34 DL580 G7 nodes with 2 NVIDIA S1070 GPGPUs each.
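Two quick derivations from the figures on this slide, as a speaker aid: the Linpack (sustained vs. peak) efficiency implied by the 2.4 PF / 1.19 PF numbers, and the M2050 count implied by the node configuration:

```python
# Derived from the Tsubame2 figures quoted on this slide.
rpeak_pf, rmax_pf = 2.4, 1.19

# Fraction of peak performance sustained on the Linpack benchmark.
linpack_efficiency = rmax_pf / rpeak_pf

# Tesla M2050 GPUs across the SL390s nodes (3 per node).
m2050_count = 1408 * 3

print(f"Linpack efficiency: {linpack_efficiency:.0%}")       # 50%
print(f"M2050 GPUs in SL390s nodes: {m2050_count}")          # 4224
```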

8 #4 on TOP500 and #2 on Green500
MCS G2, s6500, SL390s

9 ~50 compute racks + 6 switch racks; two rooms, 160 m² total
2.4 Petaflops, 1,408 nodes; 1.4 MW (Rmax, Linpack), 0.48 MW (idle)
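The density this footprint implies can be worked out from the slide's own numbers (peak performance per rack and per square meter, and sustained performance per watt under Linpack load):

```python
# Density figures implied by this slide's numbers.
peak_tf, racks, floor_m2 = 2400.0, 50 + 6, 160.0   # 2.4 PF peak, ~56 racks, 160 m2
rmax_tf, linpack_mw = 1190.0, 1.4                  # 1.19 PF Linpack at 1.4 MW

per_rack = peak_tf / racks     # peak TF per rack
per_m2 = peak_tf / floor_m2    # peak TF per square meter
mflops_per_watt = (rmax_tf * 1e6) / (linpack_mw * 1e6)  # sustained MFLOPS per watt

print(f"{per_rack:.0f} TF/rack, {per_m2:.0f} TF/m2, {mflops_per_watt:.0f} MFLOPS/W")
```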

10 From 2002 to 2010

Earth Simulator, #1 on the TOP500 in June 2002:
- Peak performance: 40 TFLOPS
- Space: 3,000 m²
- Power: 6 megawatts
- Efficiency: 6.67 MFLOPS/Watt
- Cost: US $450 million

One rack of TSUBAME 2.0 (30 SL390s servers, with 3 GPUs each):
- Peak performance: 53 TFLOPS
- Space: 1.4 m², 2,000x smaller
- Power: ~35 kW, 170x less power
- Efficiency: 1, MFLOPS/Watt, 190x perf/watt
- Cost: ~US $1M, more than 450x lower cost
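The space, power, and cost multipliers above can be checked directly from the two columns of figures (the slide rounds the exact ratios down to headline numbers):

```python
# Checking the "2002 vs 2010" ratios from the figures on this slide.
es = {"space_m2": 3000.0, "power_kw": 6000.0, "cost_musd": 450.0}    # Earth Simulator
rack = {"space_m2": 1.4, "power_kw": 35.0, "cost_musd": 1.0}         # one TSUBAME 2.0 rack

space_ratio = es["space_m2"] / rack["space_m2"]   # ~2,143x; slide rounds to 2,000x
power_ratio = es["power_kw"] / rack["power_kw"]   # ~171x; slide rounds to 170x
cost_ratio = es["cost_musd"] / rack["cost_musd"]  # 450x

print(f"{space_ratio:,.0f}x smaller, {power_ratio:.0f}x less power, {cost_ratio:.0f}x cheaper")
```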

11

12 Holistic energy portfolio
HP POD-Works: the world's first assembly line for data centers. Data centers on demand.

POD vs. brick-and-mortar datacenters: 37% less energy, 45% lower costs, PUE of 1.2. POD-Works: a complete data center in weeks vs. years. IVEC, Australia: 4 months from design to deployment. Factory Express, Georgia Tech "Keeneland": 7 days from delivery to Linpack.

Our common modular architecture can help you accelerate deployment of new data centers, moving from years to weeks; our deployment model is 9 times faster than brick and mortar. You can scale out seamlessly, and in very large numbers. That's the case with our PODs, produced on our new POD-Works data-center assembly line. These solutions are redefining the data center: we are providing customers with 37% more efficient power and 45% lower costs than a traditional data center, with a PUE as low as 1.2 (and driving PUE further down in the future). Basically, we're talking about delivering a fully functional Converged Infrastructure as a turnkey solution. This level of extreme scalability isn't a vision of the future; we're already doing this for customers, quickly adding incremental DC capacity.
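To make the PUE claim concrete: PUE is total facility power divided by IT equipment power, so a PUE of 1.2 means only 20% overhead on top of the IT load. A small illustrative sketch; the 1.5 MW IT load and the 2.0 legacy-datacenter PUE below are assumed example values, not figures from this deck:

```python
# PUE = total facility power / IT equipment power.
def total_facility_power_kw(it_load_kw: float, pue: float) -> float:
    """Total power drawn by the facility for a given IT load and PUE."""
    return it_load_kw * pue

it_load = 1500.0  # hypothetical 1.5 MW of IT equipment (assumption)

pod = total_facility_power_kw(it_load, pue=1.2)     # POD at the deck's claimed PUE of 1.2
legacy = total_facility_power_kw(it_load, pue=2.0)  # assumed legacy datacenter PUE

print(f"POD: {pod:.0f} kW total; legacy: {legacy:.0f} kW total")
print(f"overhead saved: {legacy - pod:.0f} kW")
```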

13

14 Experience and expertise
Unparalleled design, deployment & support: expert services delivered by expert resources.
- Standardized systems design based on years of successful HPC deployments, with expertise from worldwide competency centers
- Tailored technical services, prescriptive to your environment
- Pay-as-you-grow financing options: leasing, asset recovery & tech refresh, capacity on demand
- Datacenter consulting and optimization (Critical Facility Services): facility design, programming & cost modeling

Our worldwide team of experts has built our years of experience building HPC solutions into our systems, clusters, data centers, etc. To that we add: Services, with regional competency centers providing expert guidance on system design, plus fixed/flexible Care Packs, per datacenter; HP Financial Services, with leasing, asset recovery, technology refresh, flexible pay-as-you-grow leasing terms, and capacity on demand; and Critical Facilities Services, with facility & technology assessment services, facility design, programming & cost modeling, and commissioning & testing.

15 Accelerating Innovation in 2011
NDA. HP ProLiant SL390s G7: even greater performance per square foot, with up to 8 GPGPUs per server. Air-cooled PODs: increased modularity, efficiency, capacity, and serviceability. Later: next-generation platforms with next-gen CPUs, more memory lanes, PCIe Gen 3, and FDR InfiniBand.

Just highlight that a lot is coming in CY11, including a new SL390s with up to 8 GPGPUs and a new air-cooled POD (avoid saying EcoPOD), plus next-generation platforms with next-gen CPUs, more memory lanes, PCIe Gen 3, and FDR InfiniBand, to lead into Scott's talk.

16 HP Redefines HPC Success
The industry's first Converged Infrastructure for HPC:
- Purpose-built solutions for scale: lean and green computation and storage
- Holistic energy efficiency: intelligent energy management across systems and facilities; performance & power density
- Modular and adaptable solutions: grow-on-demand, from entry cluster to POD to cloud; agile, tested solutions for HPC
- Experience and expertise: designed-in expertise in standard offerings; dedicated design and support services

HP Converged Infrastructure: a common modular infrastructure powers applications, and IT systems are tuned to the demands of each application. Application advantages: application service delivery is streamlined and simplified, and services are delivered more efficiently and securely.

17 THANK YOU Q & A

