
1 Step up, Scale out with NeXtScale System Delivering Insight Faster….
System x WW Marketing, Hyper Scale Computing Solutions − October 2014. © 2014 LENOVO INTERNAL. All rights reserved.

2 Agenda

NeXtScale Overview: NeXtScale Family, Client Benefits. Introducing IBM NeXtScale System M5: M5 Enhancements, Target Market Segments. Messaging: Scale, Flexible, Simple. NeXtScale with Water Cool Technology. Timeline. Solutions.

"IBM delivered server hardware of exceptional performance and provided superior support, allowing us to rapidly integrate the system into our open standards based research infrastructure. Complementing the technical excellence of NeXtScale System, IBM has a long track record in creating high-performance computing solutions that gives us confidence in its capabilities..." —Paul R. Brenner, Associate Director, HPC, The University of Notre Dame, Indiana

"In my 20 years of working with supercomputers, I've never had so few failures out of the box. The NeXtScale nodes were solid from the first moment we turned them on." —Patricia Kovatch, Associate Dean, Scientific Computing, Icahn School of Medicine at Mount Sinai

"Hartree Centre needed a powerful, flexible server system that could drive research in energy efficiency as well as economic impact for its clients. By extending its IBM System x platform with IBM NeXtScale System, Hartree Centre can now move to exascale computing, support sustainable energy use and help its clients gain a competitive advantage." —Prof. Adrian Wander, Director of the Scientific Computing Department, Hartree Centre

This slide shows the agenda for today's discussion. We will cover a brief overview of NeXtScale basics and client benefits. Then we will introduce the new NeXtScale M5 product and the new features that matter for our target market segments. Next I will walk through the NeXtScale messaging and give examples of how we are delivering the stated benefits to our customers. We will then take a deeper dive into NeXtScale with Water Cool Technology, followed by the product timeline and a brief look at solutions.

3 Introducing IBM NeXtScale System M5

Modular, high-performance system for scale-out computing. Primary workloads: High Performance Computing. Building blocks: low-cost chassis that provides only power and cooling; dense, high-performance compute server; dense storage tray (8 x 3.5" HDDs)*; dense PCI tray (2 x 300W GPU/Phi); standard 19" racks with top-of-rack switching and choice of fabric; open-standards-based toolkit for deployment and management.

NeXtScale is an x86 offering that debuted last year, introducing a new category of dense computing designed for high performance, density, and scale-out for HPC and cloud environments. With a rapidly changing market and demands, our clients continue to look for new things from their IT hardware, and NeXtScale was designed with the innovation and flexibility to provide the optimal solution for a variety of use cases and workloads. The concept is simple but the result is very powerful: a 6U chassis designed to hold 12 half-wide bays. These bays are 1U tall like a traditional server but take up only half the rack width, allowing us to pack in twice the number of servers per U of rack space compared with 1U servers. This design not only supports dense, full-performance compute servers, it also supports Native Expansion, which lets you add storage or accelerators (such as GPUs or Xeon Phi) to the solution. This enables you to deliver a very simple and cost-effective base server that can be expanded to create very rich and dense storage or acceleration solutions. And all this power has been designed to fit into a standard rack. This architecture is expanding our market into some of the fastest growing x86 segments. Our first generation of NeXtScale carried forward many of the great benefits of iDataPlex and extended them beyond HPC with more flexibility, more performance, greater density, optimized networking, and installation in standard racks. Now, as we move to the next generation of NeXtScale, we are building on this very innovative design to reach even more users with a larger array of features and benefits.

* M5 support to be available with 12Gb version at Refresh 1

4 Deliver Insight Faster
Efficient. Reliable. Secure. NeXtScale System provides the scale, flexibility and simplicity to help clients solve problems faster.

Scale: smart delivery of scale yields better economics and greater impact per dollar, with significant CAPEX and OPEX savings while conserving energy. Flexible: create a system tailored to precisely meet your need now, with the ability to adapt rapidly to new needs and new technology. Simple: drive out complexity with a single architecture; rapid provisioning, easy management, seamless growth.

Our clients expect us to provide solutions that are Efficient, Reliable, and Secure. These are key attributes across the entire System x M5 product family. In addition, customers need to solve problems faster than ever before. NeXtScale makes this possible by providing a system with Scale, that is Flexible, and that is Simple to manage. With Scale, you can create a solution at any size, large or small, and achieve great economies of scale, efficiency, and impact per dollar. With Flexibility, you can build the product nearly any way you want it, tailored to client needs: from the choice of compute, storage or accelerators, to the choice of hot-swap or simple-swap drives, to the choice of IO on PCI or mezzanine, and now even the choice of air or water cooling. And you get all this scalability and flexibility while keeping everything very Simple, with a single architecture, optimized for many use cases, that is easy to manage and grow.

5 Introducing IBM NeXtScale System M5
© 2014 LENOVO INTERNAL. All rights reserved.

6 New Compute Node fits into existing NeXtScale infrastructure – One Architecture Optimized for Many Use Cases

Chassis: NeXtScale n1200 Enclosure, air or Water Cool Technology. New compute node: IBM NeXtScale nx360 M5 (dense compute, top performance, energy efficient, air or Water Cool Technology, investment protection). Storage NeX node*: nx360 M5 + Storage NeX (add RAID card + cable; dense 32TB in 1U; up to 8 x 3.5" HDDs; simple direct connect; mix and match). PCI NeX node (GPU/Phi): nx360 M5 + PCI NeX (add PCI riser + GPUs; 2 x 300W GPUs in 1U; full x16 Gen3 connect; mix and match).

The NeXtScale family includes a variety of functional components that fall under a single architecture but provide the ultimate in flexibility and performance. The family consists of the 6U, 12-bay chassis that supports a choice of compute node, Storage NeX node, or PCI NeX node (for GPUs or Xeon Phi), in mix-and-match combinations if desired. Native Expansion means that we can add function and capabilities seamlessly to the basic node; there is no need for exotic connectors, unique components, or high-speed back- or mid-planes. NeXtScale's Native Expansion capability adds HDDs to the node with a simple Storage NeX (tray) plus a standard RAID card, SAS cable, and HDDs. Adding GPUs to a node is just as simple: start with a PCI NeX, add a 2U PCI riser and a passively cooled GPU from NVIDIA or Xeon Phi from Intel, and you have a powerful acceleration solution for HPC, virtual desktop, or remote graphics. With the new M5 version, we are introducing a new compute node, the IBM NeXtScale nx360 M5: a half-wide, dual-socket server based on Intel's Grantley platform that also adds several highly valued features. The beauty of the NeXtScale design is that users get the benefits of the M5 (Grantley) platform simply by ordering the new compute node, while still leveraging the existing chassis and NeX nodes. The chassis and Native Expansion nodes built for the M4 server continue to work with the new M5 compute node, providing investment protection for existing users and future product stability for new users.

* M5 support to be available with 12Gb version at Refresh 1

7 NeXtScale System M5 Enhancements – Incorporates Broad Customer Requirements

What's New: 50% more cores and up to 39% faster compute performance¹ with Intel Xeon E5-2600 v3 processors (up to 18 cores); double the memory capacity with 16 DIMM slots (2133MHz DDR4, up to 32GB DIMMs); double the storage capacity with 4x 2.5" drives and a hot-swap HDD option; a new RAID slot in the rear provides greater PCI flexibility; an x16 Gen3 ML2 slot supports InfiniBand / Ethernet adapters for increased configuration flexibility at a lower price (up from x8); choice of air or water cooling; investment protection – the chassis supports both M4 and M5 nodes.

Key Market Segments: HPC, Technical Computing, Grid, Cloud, Analytics, Managed Service Providers, scale-out datacenters. Direct and Business Partner enabled solutions.

Callouts: 39% faster compute performance¹; 50% more cores²; 2X memory capacity³; 14% faster memory⁴; all-new hot-swap HDD⁵; 2X hard drives⁵; full Gen3 x16 ML2⁶; 50% more PCI slots⁷; choice of air or water cooling.

Here are the new key features and benefits for NeXtScale M5, driven by customer requirements from our target market segments. With Intel's new Grantley platform, we achieve 39% faster compute performance and 50% more cores with the E5-2600 v3 processors. We are doubling the memory capacity (up to 512GB per node) thanks to the increased number of DIMM slots, and we are supporting faster DDR4 memory. We are also adding an option for two front-access hot-swap drives, doubling the internal hard drive capacity. PCI and IO flexibility is increased through the addition of a front ML2 mezzanine and a dedicated PCI slot for RAID. And we now have the choice of air- and water-cooled versions too.

Footnotes: (1) Preliminary Intel benchmarks on Haswell. (2) nx360 M5 includes Intel Xeon E5-26xx v3 (up to 18C) versus current Intel Xeon E5-26xx v2 (up to 12C). (3) nx360 M5 includes 16 DIMM slots versus 8 DIMM slots in the current platform. (4) nx360 M5 includes DDR4 memory DIMMs running at 2133MHz versus 1866MHz in the current platform. (5) nx360 M5 includes an option for 2x hot-swap 2.5" HDDs in front, bringing total HDD capacity to 4x 2.5" HDDs per node. (6) nx360 M5 includes an x16 Gen3 ML2 mezzanine vs. an x8 mezzanine in the current platform. (7) nx360 M5 includes 3 PCI slots vs. 2 in the current platform.
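For readers who want to see how the headline percentages above fall out of the footnoted specs, here is a small illustrative calculation (nothing vendor-specific, just the ratios of the figures quoted in the footnotes):

```python
# Quick arithmetic behind the "what's new" percentages above.
# The core counts, DIMM counts and memory speeds come from the footnotes;
# everything else is a simple ratio.
cores_v2, cores_v3 = 12, 18            # E5-2600 v2 vs v3 maximum core counts
dimms_m4, dimms_m5 = 8, 16             # DIMM slots per node, M4 vs M5
mem_m4, mem_m5 = 1866, 2133            # memory speed in MHz, M4 vs M5

print(f"More cores:       {(cores_v3 / cores_v2 - 1) * 100:.0f}%")   # 50%
print(f"Memory capacity:  {dimms_m5 / dimms_m4:.0f}x")               # 2x
print(f"Faster memory:    {(mem_m5 / mem_m4 - 1) * 100:.0f}%")       # ~14%
```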

8 Target Segments – Key Requirements

High Performance Computing – key requirements: high-bin EP processors for maximum performance; high-performing memory; InfiniBand; 4-HDD capacity; GPU support.
Cloud Computing – key requirements: mid- to high-bin EP processors; lots of memory (>256GB/node) for virtualization; 1Gb / 10Gb Ethernet; 1-2 simple-swap drives for boot.
Data Center Infrastructure – key requirements: low-bin processors (low cost); smaller memory (low cost); 1Gb Ethernet; 2 hot-swap drives (reliability).
Data Analytics – key requirements: mid- to high-bin EP processors; lots of memory (>256GB per node); 1Gb / 10Gb Ethernet; 1-2 simple-swap drives for boot.
Virtual Desktop – key requirements: lots of memory (>256GB per node) for virtualization; GPU support.

Here are our target markets for NeXtScale. NeXtScale's design is centrally focused on High Performance Computing, which requires the highest performing processors, memory, and I/O to achieve maximum performance per dollar. However, many of NeXtScale's features provide key benefits to other segments as well, such as Cloud Computing, Data Center Infrastructure, Data Analytics, and Virtual Desktop. For instance, Cloud Computing and Virtual Desktop demand significantly more memory capacity for virtualization. Data centers, on the other hand, need less processing power and memory to save on costs, but require hot-swap drives for improved reliability. Data Analytics demands both strong processing power and memory capacity. With the M5 version, NeXtScale adds more features that meet these varied requirements, making it even more relevant for a broader range of segments and workloads. We'll return to this again later.

9 NeXtScale M5 addresses segment requirements
High Performance Computing – NeXtScale M5 provides: Intel EP (high bin); up to 36 cores per node; fast memory (2133MHz), 16 slots; FDR ML2 InfiniBand, future EDR; broad range of 3.5" HDDs; 4 internal 2.5" HDDs, 2 hot-swap; up to 2 GPUs per 1U.
Cloud Computing – NeXtScale M5 provides: Intel EP (mid- to high bin); up to 36 cores/node; up to 512GB memory/node; Ethernet (1/10 Gb), PCIe, ML2; broad range of 3.5", 2.5" HDDs; 2 front hot-swap drives.
Data Center Infrastructure – NeXtScale M5 provides: Intel EP (low bin); up to 36 cores/node; low-cost 4/8 GB memory; onboard Gb Ethernet standard; 2 front hot-swap drives (2.5"); integrated RAID slot.
Data Analytics – NeXtScale M5 provides: Intel EP (mid- to high bin); up to 36 cores/node; up to 512GB memory/node; Ethernet (1/10 Gb), PCIe, ML2; broad range of 3.5", 2.5" HDDs.
Virtual Desktop – NeXtScale M5 provides: choice of processors; up to 512GB memory; up to 2 GPUs per 1U.

Let's return to our target segments, where we can now see how the new NeXtScale M5 features align with the requirements of each. We are addressing the price/performance needs of HPC with higher core counts and processor performance, as well as staying time to market with InfiniBand and GPU support. The increased memory capacity benefits Cloud Computing, Data Analytics, and Virtual Desktop. And the front hot-swap drives and low-bin processor support address the reliability and cost concerns of the data centers. NeXtScale M5 now provides the optimal set of features to more effectively meet the requirements of High Performance Computing, while extending to adjacent markets as well.

10 IBM NeXtScale nx360 M5 – The Compute Node
IBM NeXtScale nx360 M5 Server: half-wide 1U, 2-socket server; Intel Xeon E5-2600 v3 processors (up to 18C); 16x DIMM slots (DDR4, 2133MHz); 2 front hot-swap HDD option (or a standard PCI slot); 4 internal HDD capacity; new embedded RAID PCI slot; ML2 mezzanine for x16 FDR and Ethernet; Native Expansion (NeX) support for storage and GPU/Phi. Layout: RAID slot, drive bay(s), x16 PCIe 3.0 slot, 16x DIMMs, dual-port ML2 x16 mezzanine card (IB/Ethernet), x24 PCIe 3.0 slot, KVM connector, 1 GbE ports, optional hot-swap HDD or PCIe adapter, power button and LEDs. Supported in the same chassis as the M4 version.

Now let's look closer at the new compute node. The NeXtScale nx360 M5 server is a half-wide, dual-socket server node designed for data centers that require high performance but are constrained by floor space. By taking up less physical space in the data center, the NeXtScale server significantly enhances density. The M5 version supports Intel Xeon E5-2600 v3 processors (up to 145W and 18 cores), providing more performance per server. It also includes new feature enhancements, including 16 DIMM slots, front hot-swap drives, a dedicated RAID slot, and an x16 ML2 mezzanine for InfiniBand or Ethernet. While very powerful, the nx360 M5 compute node contains only essential components in the base architecture, providing a cost-optimized platform.

11 IBM NeXtScale nx360 M5 Server
4x 2.5" drives supported per node. Front choice of: 2 hot-swap SFF HDDs or SSDs, or a standard full-high, half-length PCIe 3.0 slot. Also on the front panel: dual-port x16 ML2 mezzanine card (InfiniBand / Ethernet), KVM connector, 1 GbE ports (dedicated or shared management), power button and LEDs, and a labeling tag for system naming or asset tagging.

The NeXtScale nx360 M5 compute node provides a choice of two front hot-swap 2.5" hard drives or a PCIe slot, giving you the flexibility of additional easy-access storage or PCI capability, whichever you prefer. Also available from the front panel of the node are an optional dual-port x16 ML2 slot, KVM connector, dual GbE ports, power button, LEDs, and even a nifty pull-out tab at the bottom of the node for labeling or asset tagging.

12 Investment Protection - Chassis supports M4 or M5 Nodes
IBM NeXtScale n1200 Enclosure: 6U chassis, 12 bays; half-wide component support; up to 6x 900W or 1300W power supplies in N+N or N+1 configurations; up to 10 hot-swap fans; Fan and Power Controller; mix and match compute, storage, or GPU nodes; no built-in networking; no chassis management required; mix and match M4 and M5 air-cooled nodes¹. Rear view: Fan and Power Controller, 3x power supplies per side, 5x 80mm fans per side.

The chassis is not new for M5. The existing n1200 chassis (from the M4 generation) continues to be compatible with the nx360 M5 server, with nothing more needed than a firmware update for the Fan and Power Controller. As a quick review, the NeXtScale n1200 enclosure is an efficient 6U, 12-bay chassis with no built-in networking or switching capabilities, and it requires no chassis-level management. It is designed to provide shared, high-efficiency power and cooling. With a choice of six 900W or 1300W power supplies, three on each side, it is well suited to three-phase power. Ten hot-swap fans keep the system running cool, and a fan and power controller in the back of the chassis lets you manage power and fan speeds. The n1200 chassis is designed to scale with clients' business needs, with the flexibility to add computing, storage, or acceleration capability simply by adding the corresponding nodes to the chassis. Because each node is independent and self-sufficient, there is no contention for resources among nodes within the enclosure. The chassis is also very dense: while a typical rack holds only 42 1U systems, this chassis doubles the density to up to 84 compute nodes within the same footprint.
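To make the density claim concrete, here is a small illustrative calculation (not from the deck, just arithmetic on the figures above; it ignores rack space for switching and PDUs):

```python
# Illustrative density arithmetic for a 42U rack.
# Assumption: the rack is filled only with servers/chassis.
RACK_U = 42

# Traditional 1U servers: one node per U.
nodes_1u = RACK_U // 1                                   # 42 nodes

# NeXtScale: each n1200 enclosure is 6U and holds 12 half-wide nodes.
CHASSIS_U = 6
NODES_PER_CHASSIS = 12
chassis_per_rack = RACK_U // CHASSIS_U                   # 7 chassis
nodes_nextscale = chassis_per_rack * NODES_PER_CHASSIS   # 84 nodes

print(f"1U servers per rack:      {nodes_1u}")
print(f"NeXtScale nodes per rack: {nodes_nextscale}")
print(f"Density factor:           {nodes_nextscale / nodes_1u:.1f}x")
```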

13 NeXtScale - Choice of Air or Water Cooling
Your choice of cooling. Air cooled: internal fans; fits in any datacenter; maximum flexibility; broadest choice of configurable options; supports Native Expansion nodes (Storage NeX, PCI NeX for GPU/Phi). Water Cool Technology: innovative direct water cooling; no internal fans; extremely energy efficient; extremely quiet; lower power; dense, small footprint; lower operational cost and TCO; ideal for geographies with high electricity costs or space constraints.

One of the key differences with the new M5 version is the choice of air or water cooling. The air-cooled compute node has been updated from the M4 version with the enhancements previously discussed and is a good choice for configuration flexibility and the broadest choice of options and Native Expansion nodes. We are also announcing a new NeXtScale version based on innovative Water Cool Technology: direct water cooling to the server, with no internal fans. This solution is extremely efficient, quiet, low power, and dense. It is ideal for "green" environments and drives lower operational costs and total cost of ownership. Both solutions are based on the same core design and are very complementary; they can be included within the same cluster, depending on the customer's cooling requirements. We'll discuss each of these further in the next few slides.

14 IBM NeXtScale System with Water Cool Technology (WCT)
Water-cooled node and chassis: nx360 M5 WCT compute tray (2 nodes per full-wide tray); CPUs with liquid-cooled heatsinks; 6U chassis with 6 full-wide bays (12 nodes per chassis); manifolds deliver water directly to the nodes; water is circulated through cooling tubes for component-level cooling; Intel Xeon E5-2600 v3 CPUs; 16x DDR4 DIMM slots; InfiniBand FDR support (ML2 or PCIe); 6x 900W or 1300W PSUs; no fans except in the PSUs; drip sensor and error LEDs. Node front: 1 GbE ports, power button and LEDs, dual-port ML2 (IB/Ethernet), labeling tag, PCI slot for Connect-IB. Related components: n1200 WCT Enclosure, n1200 WCT Manifold.

IBM NeXtScale System with Water Cool Technology, a new addition to the System x family, uses an innovative direct water-cooling design to more efficiently cool system components such as processors, memory and I/O cards. Instead of using fans, water is delivered directly to the server and circulated through the system in cooling tubes, supporting water inlet temperatures of up to 45°C. Based on the same core design as the air-cooled version, it introduces three new components: a new compute tray, a new enclosure, and a manifold. The nx360 M5 WCT compute tray is a full-wide, 1U tray that holds two compute nodes. Based on the same planar technology as the air-cooled node, these nodes also support the E5-2600 v3 processors, 16 slots of DDR4 memory, and InfiniBand FDR. Rather than using fans, water is circulated through cooling tubes within the server to cool the processors, memory DIMMs, IO, and other heat-producing components. The n1200 WCT Enclosure is a 6U chassis containing six full-wide bays, each holding one of the full-wide compute trays. This allows 12 compute servers per 6U chassis, the same as for air cooling. It still includes the six power supplies, but no fans are required at all, making for a quieter and lower-power environment. The WCT manifold delivers water to each of the nodes from the CDU: each manifold section attaches to a chassis and connects directly to the water inlet and outlet connectors of each compute node in order to safely and reliably deliver water to and from each server. While other vendors are starting to introduce water-cooled options into the market, IBM was the first to provide this type of technology, starting with iDataPlex Direct Water Cooling, and we are now further improving it for NeXtScale.

15 NeXtScale – Key Messages
SCALE: even a small cluster can change the outcome; start at any size and grow as you want; efficient at any scale with choice of air or water cooling; maximum impact per dollar; optimized stacks for performance, acceleration, and cloud computing. FLEXIBLE: single architecture with Native Expansion; built on open standards; optimized for your data center today and tomorrow; channel and box-ship capable; one part number unlocks IBM's service and support; flexible storage and energy management. SIMPLE: the back is now the front, simplifying management and deployment; get into production faster with Intelligent Cluster; optimized shared infrastructure without compromising performance; "essentials only" design.

These are the three key messages for NeXtScale, and they were also the three guiding design principles for the product: help customers solve problems faster, reduce time and cost, and gain competitive advantage. They are Scale, Flexible, and Simple. The NeXtScale system scales extremely well. It allows you to start at any size you want and provides outstanding performance and efficiency at whatever size you choose. As you scale, you get results faster, insights quicker, and maximum impact per dollar. NeXtScale is also very flexible. It provides many choices in how to build and configure solutions tailored to customer needs. For example, you have a choice of compute, storage, and acceleration all within the same dense chassis. With the M5 version, you can now choose simple-swap or hot-swap drives, PCI or ML2 for IO, and air or water cooling. You also have the choice of how you want to receive your IT, whether in boxes or fully configured. And NeXtScale is simple. Everything you need to access is in the front, so management and deployment are very easy. With Intelligent Cluster, you can simplify deployment and get into production faster with fewer required resources. And NeXtScale has an "essentials only" design with shared infrastructure that eliminates unnecessary cost without compromising performance.

16 NeXtScale – Key Messages
SCALE: Even a small cluster can change the outcome. Start at any size and grow as you want. Efficient at any scale with choice of air or water cooling. Maximum impact per dollar. Optimized stacks for performance, acceleration, and cloud computing.

17 Scale: The Power of Scale Delivers Benefits at Any Size
Even a small cluster can change the outcome. Make better decisions by running larger, more sophisticated models. Spot trends faster and more effectively by reducing total time to results. Manage risk better by increasing the accuracy and visibility of models and datasets.

Let's talk about the power of scale. Scale-out computing refers to increasing performance by adding more systems or resources. It enables organizations to start small and scale their systems as needed. Even a small cluster can make a significant difference in achieving faster results compared with attempting the same work on a single workstation. Speeding time to results is only half of the story: as this example shows, work that used to take days or weeks on workstations now runs in hours or minutes thanks to scale, unlocking real-time information that was not previously available. This allows us to make better decisions, spot trends faster, and manage risk better. Scaling out to larger clusters makes even greater achievements possible, and NeXtScale does this extremely well. Data points are from an IBM study, "Highly Productive, Scalable Actuarial Modeling: Delivering High Performance for Insurance Computations Using Milliman's MG-ALFA", which looked at migration from independent islands of compute to a smart, shared cluster. In that study, a run of records that took 14 days on a single-core workstation took 14 hours on a 14-node BladeCenter-based cluster, and 1 million records that took 7.5 days on 600 single-core workstations could run on only 150 BladeCenter nodes installed in 3 racks.

Game-changing results (life insurance actuarial workbook): 1,700 records that took 14 hours on a single workstation now take 2.5 minutes on a small cluster; 1 million records that took 7.5 days on 600 workstations now take 2 hours on a 3-rack cluster with only 150 nodes.
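A quick sanity check on the speedup figures quoted above (illustrative arithmetic only; the record counts and runtimes are the ones on the slide):

```python
# Illustrative speedup arithmetic for the actuarial example above.
# Record counts and runtimes come from the slide; the rest is arithmetic.

def speedup(old_seconds: float, new_seconds: float) -> float:
    """Return how many times faster the new runtime is."""
    return old_seconds / new_seconds

# 1,700 records: 14 hours on one workstation vs 2.5 minutes on a small cluster.
small = speedup(14 * 3600, 2.5 * 60)
print(f"Small cluster speedup:  ~{small:.0f}x")                # ~336x

# 1M records: 7.5 days on 600 workstations vs 2 hours on a 150-node cluster.
large = speedup(7.5 * 24 * 3600, 2 * 3600)
print(f"Large cluster speedup:  ~{large:.0f}x wall-clock")     # ~90x

# Node-hours view: 600 workstations * 7.5 days vs 150 nodes * 2 hours.
old_node_hours = 600 * 7.5 * 24
new_node_hours = 150 * 2
print(f"Node-hours: {old_node_hours:.0f} -> {new_node_hours:.0f} "
      f"({old_node_hours / new_node_hours:.0f}x fewer)")
```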

18 Scale: Start at Any Size. Grow in Any Increment.

Single nodes and chassis – growing node by node? Available direct from IBM and optimized for availability through our partners; install the chassis today, grow into it tomorrow. Configured racks or chassis (departmental solutions) – want to speed how quickly you can grow? Shipped fully assembled; client-driven, choice-optimized; "starter packs" are appliance-easy yet CTO-flexible. Complete clusters and containers ("NeXtPods") – growing by leaps and bounds? NeXtScale can arrive ready to power on, with its 'personality' applied; racks at a time, or complete infrastructure-ready containers.

The key point here is that you can acquire the power of NeXtScale however you want. Do you prefer to receive it in single nodes and empty chassis? We are set up for that. Given that the chassis is so light and relatively low cost, we believe many clients will install chassis into the rack to be ready for future growth, simplifying and speeding additions to their IT infrastructure. Alternatively, we can ship fully assembled racks or chassis with your configuration in place. We can also help you get started with our intuitive and flexible starter kits, which are easy to purchase quickly while retaining the kind of flexibility you expect from a configurable solution. And for clients that want to add IT at the rack level or even at the container level, IBM NeXtScale is set up for that too: racks can be fully configured, cabled, labeled, and programmed before we ship, and with the help of IBM GTS we can take configured racks and assemble them into complete containerized solutions with power, cooling, and infrastructure delivered ready to go. So Scale is for everyone, at any size: from a handful of nodes to a complete mini data center. Your choice.

19 Scale: Achieve extreme scale with ultimate efficiency

NeXtScale System with Water Cool Technology: 40% more energy-efficient data center¹ – requires no auxiliary cooling, and no chillers are needed thanks to warm-water cooling (up to 45°C)³; 10% more power-efficient server² – no fans required for the compute elements, only small power-supply fans, for lower operational costs and a quieter environment; 85% heat recovery by water – re-use the warm water to heat other facilities; run processors at higher frequencies (Turbo mode).

NeXtScale System with Water Cool Technology drives extreme efficiency into the data center, making for a greener environment. Water Cool Technology enables up to 40% greater energy efficiency in the data center and 10% greater power efficiency at the server level compared with air-cooled solutions. This is largely due to the absence of fans for the compute nodes, which not only saves power but also makes for a much quieter environment. Over 85% of the heat in the system is recovered by the water and can be re-used for other purposes, such as heating facilities and buildings. There is also a significant advantage in that water inlet temperatures of up to 45°C are supported, which makes expensive water chillers unnecessary and saves significant capital expense. The operational cost savings associated with Water Cool Technology result in quick payback of the initial investment and continued savings for a lower total cost of ownership. This is particularly attractive in geographies with high electricity costs or a high cost of floor space. Going beyond the efficiency benefits, Water Cool Technology improves performance as well: it enables processors to run continuously in Turbo mode, allowing them to run at higher frequencies.

Footnotes: (1) Based on comparisons between air-cooled and water-cooled IBM iDataPlex M4 servers. (2) LRZ, a client in Germany, data center numbers. (3) Geography dependent.

20 Scale: Maximum impact per $. Per ft². Per rack.

Race-car design: performance and cost point ahead of features and functions. Maximize the capability of your data center floor with dense, essentials-only IT: top-bin E5-2600 v3 processors; fast memory running at 2133MHz; choice of SATA, SAS, or SSD on board; an open ecosystem of high-speed IO interconnects; more cores per floor tile; easy front-access serviceability; choice of rack infrastructure; light weight plus high performance can reduce floor loading.

Customer-benefit callouts (footnote markers as on the original slide; extracted figures: 40%, 2X, 50%, 50%, 355, 1.1 TFLOP, 2.7X, 14%, 80%, 15%): more high-frequency cores¹; memory runs faster²; one nx360 with SSDs delivers the same IO performance as 355 hard disks³; power savings with Platform LSF Energy Aware⁴; less weight per system⁵; fewer racks per solution⁶; more FLOPs per cycle than a Xeon E5 v2⁷; 1.1 TFLOPs performance achieved per server⁸; fewer servers required⁹; increase in FLOPs/Watt¹⁰.

Footnotes: (1) Intel Xeon E5-26xx v2 versus current Intel Xeon E5-26xx v1. (2) Memory DIMMs running at 1866MHz versus 1600MHz in current platforms. (3) nx360 M4 includes 4 x 1.8" SSDs, which run faster than traditional 2.5" SAS HDDs; SSDs deliver a peak of 40,000 IOPS versus a peak of 450 IOPS for HDDs, so 4 x 40,000 = 160,000 IOPS, roughly equivalent to 355 HDDs. (4) Based on a 1,000-node cluster with 2 x86 sockets, 8 cores at 2.7 GHz, consuming about 340 kW; no hardware changes, only optimization of power usage at rest and at peak utilization; Europe (0.15€ per kWh) = 441K€ per year, US ($0.10 per kWh) = US$295K per year, Asia ($0.20 per kWh) = US$590K per year. (5) As compared to a traditional 1U server such as an HP DL360 Gen8: maximum configuration (39 lbs), nominal configuration 13 kg for comparison, versus NeXtScale nx360 M4 (6 kg) plus apportioned chassis (39 kg/12, or 3.25 kg) = 9.25 kg; roughly 40% more weight using 1U servers for similar processing power. (6) Based on a comparison between an equivalently configured NeXtScale system with Intel Xeon E5-2600 v3 and the June 2014 #99 entry on the Top500, which uses Intel Xeon E5-2600 v2. (7) With AVX2 capabilities in the Xeon E5-2600 v3, FLOPs per cycle go to 16. (8) With AVX2 turned on, a 2-socket E5-2600 v3 system has an Rpeak value of about 1.1 TFLOPs. (9) Designing a solution configured similarly to the #99 entry on the June 2014 Top500 list, fewer Intel Xeon E5-2600 v3 cores are required to achieve the same teraflops of computational capability than with the E5-2600 v2; comparing the two, the v3 achieves roughly 2X as many FLOPs per 2-socket server, with the increase in cores from 12 in the v2 to 14 in the v3 as well as the doubling of FLOPs per cycle from 8 in the v2 to 16 in the v3.
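As a quick check on footnote 3 above, here is the IOPS equivalence arithmetic (illustrative only; the per-device peak IOPS figures are the ones quoted in the footnote):

```python
# Illustrative IOPS equivalence from footnote 3 above.
# Per-device peak IOPS figures come from the footnote; the rest is arithmetic.
SSD_PEAK_IOPS = 40_000   # 1.8" SSD, peak
HDD_PEAK_IOPS = 450      # traditional 2.5" SAS HDD, peak
SSDS_PER_NODE = 4        # nx360 with 4 x 1.8" SSDs

node_iops = SSDS_PER_NODE * SSD_PEAK_IOPS       # 160,000 IOPS
hdd_equivalent = node_iops / HDD_PEAK_IOPS      # ~355 HDDs

print(f"Node peak IOPS:   {node_iops:,}")
print(f"Equivalent HDDs:  ~{hdd_equivalent:.0f}")
```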

21 Scale: Platform Computing – complete, powerful, fully supported

Applications: simulation/modeling, Big Data/Hadoop, analytics, social and mobile. Workload and resource management: Platform LSF family – batch and MPI workloads with process management, monitoring, analytics, portal, license management; Platform HPC – simplified, integrated HPC management for batch and MPI workloads, integrated with systems; Platform Symphony family – high-throughput, near-real-time parallel compute and Big Data/MapReduce workloads. Data management: Elastic Storage, based on General Parallel File System (GPFS) – high-performance software-defined storage. Infrastructure management: Platform Cluster Manager family – provision and manage anything from a single cluster (Standard) to dynamic clouds (Advanced), across heterogeneous resources (compute, storage, network; virtual, physical, desktop, server, cloud).

There are four major product families in the IBM Platform Computing portfolio. The first is Platform LSF, our historical flagship product with a large install base. It is a scalable, comprehensive workload management suite for mission-critical heterogeneous environments. I want to emphasize the word suite, because it isn't just one product, and that is a big differentiator in the marketplace: it is a comprehensive offering of everything you need for workload management, plus monitoring and analytics. It has unmatched experience and a dominant market share in major industries, a powerful multi-policy scheduling engine, and it is used in mission-critical accounts as part of their everyday mainstream work processes; it is known in the industry for that reliability. Second is Platform HPC, a simplified, integrated HPC management software offering that is typically bundled with systems. Where Platform LSF has typically been targeted at larger accounts, Platform HPC is there for a cluster or departmental offering that is just getting started. You can then scale up and upgrade into the Platform LSF family, because it is based on the same LSF engine and software, but Platform HPC is an all-in-one integrated solution with a market-leading, very easy-to-use web interface; it can be used in even the smallest of clusters. Third is the Platform Symphony family, which is really more of an analytics infrastructure: a high-throughput, low-latency suite for compute- and data-intensive analytics. At the advanced edition, Platform Symphony also includes our Big Data offering around MapReduce. Platform Symphony has a leading pedigree, with over 50% of the major investment banks as customers, which demonstrates scalability and reliability that transfer to many other industries; another great example is how the Big Data problem shows up in areas such as government intelligence. It delivers higher scalability and performance than other solutions thanks to its fast, low-latency processing, and it is used as a grid infrastructure at the enterprise level and globally, across multiple geographies. Finally, there is the Platform Cluster Manager family, which is all about the provisioning and management of clusters.

It scales from the Standard edition, which focuses on simplifying the administration and management of a multi-cluster environment, to the Advanced edition, which supports a much more dynamic infrastructure, all the way to HPC clouds where people can create clusters on demand through an easy-to-use self-service portal. It also supports multi-tenant HPC clouds, so customers that are service providers can offer unique cluster environments to each of their end customers or groups.

Slide callouts: LSF – runs and manages batch workloads of varied complexity. Symphony – service-oriented, API-driven workloads; extreme throughput / low latency; agile resource sharing and multi-tenancy; Big Data and Hadoop requirements. Platform Cluster Manager – builds and manages clusters and HPC clouds; provisions and manages Platform LSF, Platform Symphony and third-party workload managers, including Hadoop clusters.
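For readers unfamiliar with LSF, batch workloads are typically handed to the scheduler with the bsub command. The snippet below is a minimal, hypothetical sketch of wrapping that submission from Python; it assumes an LSF installation with bsub on the PATH, and the queue name, core count, and job script are illustrative placeholders rather than anything from this deck.

```python
# Minimal, hypothetical sketch of submitting a batch job to Platform LSF.
# Assumes an LSF installation with `bsub` on the PATH; queue, core count
# and job script below are placeholders.
import subprocess

def submit_lsf_job(script: str, cores: int, queue: str = "normal") -> str:
    """Submit a job script to LSF via bsub and return bsub's confirmation text."""
    cmd = [
        "bsub",
        "-n", str(cores),      # number of job slots (cores) requested
        "-q", queue,           # target queue
        "-o", "job.%J.out",    # stdout file; %J expands to the LSF job ID
        script,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()   # e.g. "Job <12345> is submitted to queue <normal>."

if __name__ == "__main__":
    print(submit_lsf_job("./run_simulation.sh", cores=64))
```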

22 Scale: Performance Optimized Stack – From Hardware Up
Applications: simulation/modeling, analytics, risk analysis. Workload and resource management: Platform LSF, Platform HPC, Adaptive Computing Moab, Maui/Torque. Global/parallel filesystem: GPFS, Lustre, NFS. Application libraries: Intel® Cluster Studio, OpenMPI, MVAPICH2, Platform MPI. Operating systems: RHEL, SuSE, Windows, Ubuntu. Bare-metal management, provisioning and monitoring: Extreme Cluster Administration Toolkit (xCAT). Infrastructure: compute, storage, network; virtual, physical, desktop, server, cloud.

23 Scale: GPGPU Accelerator Optimized Stack – From Hardware Up
Applications: life sciences, oil and gas, finance, molecular dynamics. Workload and resource management: Platform LSF, Platform HPC, Adaptive Computing Moab, Maui/Torque. Global/parallel filesystem: GPFS, Lustre, NFS. Application libraries: Intel® Cluster Studio, CUDA, OpenCL, OpenGL. Operating systems: RHEL, SuSE, Windows. Bare-metal management and provisioning: Extreme Cluster Administration Toolkit (xCAT). Infrastructure: compute, storage, network; virtual, physical, desktop, server, cloud.

24 Scale: Cloud Compute Optimized Stack – From Hardware Up
Applications: public cloud providers, private cloud, MSP/CSP. Cloud management solutions: OpenStack CE – for customers looking to deploy complete open-source solutions with little to no enterprise features; IBM Cloud Manager with OpenStack – optimized with automation, security, resource sharing and monitoring on top of OpenStack CE; SmartCloud Orchestrator – for customers who require optimized utilization, multi-tenancy and enhanced security. Common cloud stack: the Common Cloud Management Platform provides server, storage and network integration and access to OpenStack APIs. Hypervisors: KVM, VMware, Xen, Hyper-V. Bare-metal management and onboarding: Puppet, xCAT, Chef, SmartCloud Provisioning. Infrastructure: compute, storage, network; virtual, physical, desktop, server, cloud.

The cloud stack is much like the HPC stack: it is founded on solid hardware and open standards. Choice is key for us when it comes to cloud; no one solution is right for everyone, so we aim to allow client choice. NeXtScale's goal is to blend into client environments and best practices without any changes: no changes to the data center and no changes to management, protocols, and tools.

25 NeXtScale – Key Messages
FLEXIBLE: Single architecture with Native Expansion. Built on open standards. Optimized for your data center today and tomorrow. Channel and box-ship capable. One part number unlocks IBM's service and support. Flexible storage and energy management.

26 Flexible: Native eXpansion – Adding Value, not Complexity

The base node delivers robust and dense raw compute capability. NeXtScale's Native Expansion allows seamless upgrades of the base node to add common functionality, all on a single architecture, with no need for exotic connectors or unique components. Storage: nx360 M5 + Storage NeX* + RAID card + SAS cable + HDDs = IBM NeXtScale nx360 M5 with Storage NeX. Graphics acceleration / co-processing: nx360 M5 + PCI NeX + GPU riser card + GPU/Phi = IBM NeXtScale nx360 M5 with Accelerator NeX.

Native Expansion means that we can add function and capabilities seamlessly to the basic node; there is no need for exotic connectors, unique components, or high-speed back- or mid-planes. To add storage, simply start with the base compute node, then add a Storage NeX tray with a standard RAID card, SAS cable and hard drives. For acceleration, start with the base compute node, then add a PCI NeX tray with a 2U PCI riser and a passively cooled GPU from NVIDIA or Xeon Phi from Intel, and you have a powerful acceleration solution for HPC, virtual desktop, or remote graphics.

* M5 support to be available with 12Gb version at Refresh 1

27 Flexible: Designed on Open Standards = Seamless Adoption
IBM ToolsCenter – consolidated, integrated suite of management tools; powerful bootable media creator and firmware updating. OpenStack ready – deploy OpenStack with Chef or Puppet; Mirantis Fuel, SuSE Cloud, IBM SmartCloud. UEFI and IMM – standards-based hardware that combines diagnostics and remote control with no embedded software; a richer management experience, and future-ready. IPMI 2.0 compliant – use any IPMI-compliant management software (Puppet, Avocent, IBM Director, iAMT, xCAT, etc.); compatible with OpenIPMI, ipmitool, ipmiutils, and FreeIPMI. System monitoring – friendly with open-source tools like Ganglia, Nagios, Zenoss, etc.; use with any RHEL/SuSE (and clones) or Windows based tools. xCAT – provides remote and unattended methods to assist with deploying, updating, configuring, and diagnosing. Platform Computing – workloads managed seamlessly with Platform LSF; deploy clusters easily with Platform HPC. SDN friendly – networking direct to the system, with no integrated proprietary switching; support for 1/10/40Gb Ethernet, InfiniBand, FCoE, and VFAs.

NeXtScale is also very flexible in its support of open standards. A client needs more than hardware to use IT, so we have designed NeXtScale to support an open stack of industry-standard tools, allowing clients with existing protocols and tools to adopt NeXtScale easily. We include things like our Integrated Management Module with its unique value, but we also support standard IPMI for use in a mixed environment. Whether you are running an HPC application that is optimized with Platform Computing or running on top of an industry-standard open-source tool, NeXtScale makes it easy. The same is true for cloud: you can use OpenStack and other industry-standard open stacks, or IBM SmartCloud; both are viable and bring value.
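As an illustration of the IPMI point above, the sketch below shows how a standard, vendor-neutral tool such as ipmitool could query a node's power state and sensors out of band. The BMC address and credentials are placeholders, and it assumes the node's IMM has its IPMI-over-LAN interface enabled; this is not a procedure taken from the deck.

```python
# Illustrative sketch: querying a node out of band with standard IPMI tooling.
# Assumes `ipmitool` is installed and the node's IMM/BMC has IPMI-over-LAN
# enabled; the address and credentials below are placeholders.
import subprocess

BMC_HOST = "10.0.0.101"      # placeholder BMC/IMM address
BMC_USER = "admin"           # placeholder credentials
BMC_PASS = "password"

def ipmi(*args: str) -> str:
    """Run an ipmitool command against the node's BMC over the lanplus interface."""
    cmd = ["ipmitool", "-I", "lanplus",
           "-H", BMC_HOST, "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(ipmi("chassis", "power", "status"))   # e.g. "Chassis Power is on"
    print(ipmi("sdr", "type", "Temperature"))   # temperature sensor readings
```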

28 Flexible: Optimized with your Data Center in Mind – today and tomorrow

The challenge: package more into the data center without breaking the client's standards; lower power costs all day long, at peak usage times and at slow times; maximize energy efficiency in the data center. The solution: NeXtScale + IBM innovation. An essentials-only design allows more servers to fit into the data center; the system is designed to consume less power and to lower energy costs at peak and at idle; smart power management can drive down power needs when systems are idle; choice of air- or water-cooled servers in either IBM racks or your existing racks; up to 40% greater energy efficiency for water-cooled solutions. Callouts: our rack or yours; 2X more servers per floor tile; lower energy usage during the peak; reduced power cost during slow times; lowest operational costs with water cooling.

NeXtScale is easy to install in existing data centers, and it is also designed to make the most of green-field data centers. Power consumption at peak load is reduced by several methods: (1) we selected the most energy-efficient parts (voltage regulation, Energy Star PSUs, 1.35V memory options); (2) we designed NeXtScale with only the essentials, so there are no extra parts sitting in the system consuming power; (3) we enable clients to control low-power sleep-state settings in the node that can trim processor power consumption by 40% (need to verify this with Luigi) when not at peak performance; (4) we go beyond the node by allowing chassis-level power capping for powerful energy management. Outside the hardware, we offer energy-aware scheduling software as part of Platform Computing LSF; this software allows jobs to be tagged with energy profiles, and the scheduler can then select how and where each job is run to achieve the lowest possible power consumption. No matter what you are trying to do (pack more performance into an aging data center, reduce power costs when systems are not highly utilized, or reduce total overall energy usage), NeXtScale was designed to deliver more impact per kW.

29 Flexible: How Do You Want Your IT to Arrive?

NeXtScale can ship fully configured and ready to power on: fully racked and cabled; labeled with user-supplied naming; pre-programmed IMMs and addresses; burn-in testing before shipment at no added cost. Prefer to receive systems in boxes? No problem. Customer benefits: 75% faster time from arrival to production readiness; one (1) part number needed for support of the entire solution, no matter the brand of component (IBM Intelligent Cluster); save 105 lbs of cardboard, 54.6 ft³ of styrofoam, 288 linear feet of wood, and 21,730 paper inserts¹.

Another area of flexibility is how you want your IT to arrive. With NeXtScale, there are many choices available. Some clients want everything in piece parts so they can mix and match to build what they want. Others want systems they have tested and approved to show up configured to their liking. Still others would like complete rack solutions to arrive ready to plug in. With NeXtScale, the choice is truly yours. Shipping servers in boxes is not too complex, so let's talk about the value of shipping full solutions. When IBM builds a solution for our clients, we don't just put things in a rack: we cable it, labeling each cable and each node; we program the IMMs (BMCs); and we can even provide a burn-in test. Before we ship the rack, we not only fully assemble it but also run a test called Linpack, which stresses every component in the solution for about two hours. This early burn-in testing helps us find any parts that are not installed properly or not functioning up to standard, and we can provide the test report to the client on request to show that each rack is performing as expected. Most failures of commodity components happen early in life; our aim is to find these at IBM and resolve them before our clients have to deal with them. So do you want to cut the time from arrival to production by 75%? We'll have it arrive ready to go. Want to treat your complex solution as if it were one machine? We can do that, because your service and support all come from one special part number on the rack. How much does this cost? Either nothing or very little over box shipments. How? Think of a single rack's worth of IT: it all comes in boxes, with plastic, styrofoam, and paper. We save the cost of these items and put it towards assembly and testing.

¹ Per rack

30 Flexible: Confidence it is high quality and functioning upon arrival
Comprehensive list of interoperability-proven components for building out solutions: IBM servers; IBM and third-party switching; IBM and third-party storage; IBM and third-party software; countless cables, cards, and add-ins. This best-recipe approach yields confident interoperability. Each rack is built from over 9,000 individual parts. A manufacturing LINPACK test provides a lengthy burn-in on all parts in the solution, giving confidence that the parts are installed and functioning properly; any failing parts are replaced prior to shipment, which reduces early-life part fall-out for our clients, and consistent performance and quality are confirmed before shipment. Is this one rack, or is it more than 9,000 parts? It's both.

IBM is not the only company that provides the ingredients or parts our clients want in these solutions today. Think about your most recent purchase: there were Intel parts, Cisco or Mellanox parts, someone's storage, and the list goes on and on. Through Intelligent Cluster we assemble a large selection of IBM and non-IBM parts that we know work together properly because we have tested them. We then deliver solutions built from these approved parts as one part number, with a single warranty that covers everything in the solution; no matter the function or brand, it all carries the same support. To give a sense of the total number of functioning components in a typical rack: server boards with dozens of independent components, 144 PCI/mezzanine cards, 144 processors, 576 DIMMs, 144 HDDs, 360 cables, 60 fans, 18 PSUs, external switches and PSUs, PDUs, and hundreds of server-level components; approximately 9,000 parts in all, each of which is stressed. When you grow rack by rack, you want assurance that what you add to your infrastructure is running right, and that the new equipment is in line with what your other racks are doing. Our testing showcases the predictability of the performance the rack yields, and that is valuable. Predictability can be a beautiful thing.

31 Flexible: Global Services & Support
IBM is a recognized leader in services and support, combining speed and quality: 57 call centers worldwide with regional and localized language support; 23,000 IT support specialists worldwide who know technology; 585 parts centers with 13 million IBM and non-IBM parts; 114 hardware and software development laboratories; rated #1 in technical support; a 94% first-call hardware success rate; a combined total of 6.8M hardware and software service requests; parts delivered within 4 hours for 99% of US customers; 75% of software calls resolved by the first point of contact. The benefits: prevent downtime with proactive, first-rate service; resolve outages faster if they do occur, to protect your brand; optimize IT and end-user productivity, and revenue, to enhance business results; protect your brand reputation and keep your customer base; simplify support to save time, resources, and cost.

Now let's take a closer look at IBM's global client support infrastructure for maintenance and technical support services. No matter where you need help, we can be there with the services you need to keep your IT infrastructure available and security-rich. There is strength in numbers: IBM has nearly 400,000 employees worldwide (197,000 of them in IBM Global Services), with a support presence in 209 countries and territories, speaking roughly 127 languages. We have both broad global reach and localized support, remote (by phone or electronic communication) and onsite when you need it. We also have a state-of-the-art, proven set of communications, problem-management and logistics processes and tools linking our teams, keeping things running efficiently and getting the right people, parts and information to you when and where you need them. Our worldwide network of remote technical support includes 70 support centers around the world: some are call centers, some are Level 1 support centers with technical specialists who speak your local language, and others are staffed by Level 2 technical specialists with deeper product knowledge and higher-level technical skills. There is also a global, last level of support available to help solve even the most challenging problems and answer the toughest questions: the IBM support infrastructure includes access to the people involved in the technology's research and development. We have more than 3,000 scientists and engineers in nine global IBM research laboratories, including Almaden (San Jose, CA, USA), Austin (TX, USA), China (Beijing), Haifa (Israel), India (Delhi), Tokyo (Yamato, Japan), Watson (Yorktown Heights, NY, USA), and Zurich (Rueschlikon, Switzerland). We also have 19 major hardware and software product development laboratories and access to another 97 product labs to ensure fast response to the most complex support issues.

Lenovo's service commitment: "After the deal closes, IBM will continue to provide maintenance delivery on Lenovo's behalf for an extended period pursuant to the terms of a five-year maintenance service agreement with IBM. Customers who originated contracts with IBM should not see a change in their maintenance support for the duration of the customer's contract."

32 Virtualized Storage Low-Cost Object Storage
Flexible: NeXtScale Mini-SAS Storage Expansion Natively expand beyond the node with the onboard mini-SAS port Simply choose available storage controllers Connect the nx360 node to a JBOD or storage controller of your choice. NFS Low Cost Block Storage Dense Object Storage Hadoop Analytics HPC Analytics Virtualized Storage Secure Encryption Compression Block Storage Low-Cost Object Storage Hadoop 2 3 4 V3700 JBOD 1 V3700 + + = V7000 nx360 M5 mini-SAS port RAID controller mini-SAS cable DCS3700 JBOD Ideal for dense storage requirements Click name once for solution to appear Click 2nd time to make disappear and then to select different choice

33 Flexible: Dense Storage Customer
NeXtScale Chassis and nodes 24 x 2-socket E v3 nodes per rack Dual port 10G Ethernet 2x 1G Management Port SAS HBA w/6Gb External Connector Storage JBODs 60 Hot Swap Drives in 4U 6 JBODs/Rack 4TB NL SAS Disks Pure JBOD, No Zoning Networking 1 x 64 Port 10G Ethernet (optionally 2 switches for redundancy) Uplinks Req’d 2 x 48 Port 1G Ethernet Switches Management (1x Dedicated + 1x Shared port) Connects to Nodes, JBODs, Chassis FPCs, and PDUs 1.44 Petabytes of Raw storage per rack! 33
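A quick sanity check of that capacity figure, written as a minimal Python sketch (the drive size and counts are taken from the configuration above; decimal units, 1 PB = 1,000 TB, are assumed):

```python
# Sanity check of the "1.44 PB raw per rack" figure for the dense storage rack.
# Counts come from the configuration above; decimal units (1 PB = 1000 TB) assumed.
jbods_per_rack = 6
drives_per_jbod = 60      # 60 hot-swap drives in each 4U JBOD
tb_per_drive = 4          # 4 TB NL SAS drives

raw_tb = jbods_per_rack * drives_per_jbod * tb_per_drive
print(f"Raw capacity per rack: {raw_tb} TB = {raw_tb / 1000} PB")  # 1440 TB = 1.44 PB
```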

34 Control beyond the server Powerful Energy Management
Flexible: Power efficiency designed into HW, SW and management Efficient Hardware Control beyond the server Powerful Energy Management 80 Plus Platinum power supplies at over 94% efficiency – saves 3-10% Extreme efficiency voltage regulation – saves 2-3% Larger, more efficient heat sinks require less air – saves 1-2% Smart sharing of fans and power supplies reduces power consumption – saves 2-3% Underutilized power supplies can be placed into a low power standby mode. Energy efficient turbo mode Fewer parts = less power Energy Star Version 2(1) Choice of air or water cooling No fans or auxiliary cooling required for water cooled version, saving power cost Pre-set operating modes - tune for efficiency, performance, or minimum power Chassis level power metering and control Power optimally designed for 1-phase or 3-phase power feeds Optional intelligent and highly efficient PDUs for monitoring and control Powerful sleep state(2) control reduces power and latency xCAT APIs allow for embedding HW control into management applications LSF Energy Aware features allow for energy tuning on the fly Platform software can target low-bin CPU applications to lower power on CPUs in mixed environments Platform Cluster Manager Adv. Edition can completely shut off nodes that are not in use Open source monitoring tool friendly, allowing for utilization reporting Autonomous power management for various subsystems within each node. Three pillars of power savings. Smart HW – HW that is designed to consume less power Platinum is an industry standard for power supply design that assures that the PSU operates at specific efficiency levels. On top of that we pick power delivery components like the voltage regulation that delivers the power more efficiently. Just like BladeCenter and Flex we selected a design with shared power and cooling – this topology reduces the total number of parts needed for the power/cooling solution – saving money in part cost and reducing the number of PSUs and fans, therefore reducing power draw. S3 allows systems to come back into full production from a low power state much quicker than a traditional power on. In fact cold boot normally takes about 270 seconds, with S3 that happens in only 45 seconds. When you know a system will not be used due to time of day or state of job flow – why not send it into a very low power state to save power and when needed bring it back online quickly. (1) Pending announcement of product (2) On select IBM configurations
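To make "embedding HW control into management applications" concrete, here is a minimal sketch that reads a node's instantaneous power draw over standard IPMI DCMI. The hostname and credentials are placeholders, and the output parsing assumes the common "Instantaneous power reading: NNN Watts" line, which can vary by firmware level:

```python
# Minimal sketch: poll a node's instantaneous power draw over IPMI DCMI.
# Host, user, and password are placeholders; the regex assumes the usual
# "Instantaneous power reading: NNN Watts" output and may need adjusting per firmware.
import re
import subprocess

def read_node_power(host: str, user: str, password: str) -> int:
    out = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password,
         "dcmi", "power", "reading"],
        capture_output=True, text=True, check=True
    ).stdout
    match = re.search(r"Instantaneous power reading:\s+(\d+)\s+Watts", out)
    if not match:
        raise RuntimeError(f"Unexpected ipmitool output:\n{out}")
    return int(match.group(1))

if __name__ == "__main__":
    watts = read_node_power("nx360-node01", "USERID", "PASSW0RD")  # placeholder values
    print(f"Node power draw: {watts} W")
```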

35 Flexible: Energy Aware Scheduling
Optimize Power Consumption with Platform LSF® On Idle Nodes On Active Nodes Policy Driven Power Saving Suspend the node to the S3 state (saves ~60W)** Idle for a configurable period of time. Policy windows (e.g., 10:00 PM – 7:00 AM) Site customizable to use other suspension methods Power Saving Aware Scheduling Schedule jobs to use idle nodes first (Power saved nodes as last resort) Aware of job request and wake up nodes precisely on demand Safe period before running job on resumed nodes Manual management Suspend, resume, history Power Saving Aware Scheduling Ability to set the node/core frequency for a specific job / application / user Set thresholds based on environmental factors – such as node temperature Energy Saving Policies** Minimize energy to solution Minimize time to solution by intelligently controlling CPU frequencies Collect the power usage for an application (AC and DC)** Make Intelligent Predictions Performance, power consumption and runtime of applications at different frequencies** ** Only available on IBM NeXtScale and iDataPlex
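The idle-node behavior described above amounts to "suspend after a configurable idle period, only inside a policy window, and wake nodes on demand, preferring already-awake nodes." The following is not the Platform LSF implementation or its API; it is only a hypothetical Python sketch of that policy logic to make the behavior concrete:

```python
# Hypothetical sketch of an idle-node power-saving policy (illustrative only,
# not the Platform LSF implementation): suspend a node to S3 after a configurable
# idle period, but only inside a policy window, and prefer awake nodes for new jobs.
from datetime import datetime, time, timedelta

IDLE_THRESHOLD = timedelta(minutes=30)       # configurable idle period (assumption)
POLICY_WINDOW = (time(22, 0), time(7, 0))    # e.g., 10:00 PM - 7:00 AM suspension window

def in_policy_window(now: datetime) -> bool:
    start, end = POLICY_WINDOW
    t = now.time()
    return t >= start or t < end             # window wraps past midnight

def should_suspend(idle_since: datetime, now: datetime) -> bool:
    return (now - idle_since) >= IDLE_THRESHOLD and in_policy_window(now)

def pick_node(awake_idle_nodes, suspended_nodes):
    # "Power saving aware" placement: use nodes that are already awake first,
    # and resume a suspended node only as a last resort (assumes one candidate exists).
    return awake_idle_nodes[0] if awake_idle_nodes else suspended_nodes[0]
```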

36 NeXtScale – Key Messages
SIMPLE The back is now the front—simplify management and deployment Get in production faster with Intelligent Cluster Optimized shared infrastructure without compromising performance “Essentials only” design

37 Simple: Making management and deployment simple
Competition NeXtScale Stay in front of the rack and see things better You don’t have to be in the dark It’s so dark in here Reduce service errors when maintaining systems Know what cable you are pulling Which cable do I pull? Quick access to servers One way we make NeXtScale simple is by making everything front access. Why front access? Well, it’s hard to see in the back of the rack where lighting is very dim. That makes it very difficult to locate a dense server in a row of dense racks – having everything in the front simplifies and reduces chances of mis-cabling or pulling the wrong server. Everything is right there in front of you – power button, cabling, alerts, node naming/tagging – very easy. Also, having the cables not clogging up the back of the rack is good for air flow and thus good for energy efficiency – the harder the fans work to move air through the server, the more power they consume. People that work in the data center will tell you that the front of the rack is a much more enjoyable environment to spend time in. It can easily be 30ºF cooler in front than in back. People can’t stay in the rear of the rack that long without getting uncomfortable. Also, the front of the rack is less noisy than the rear due to fan noise. NeXtScale servers and chassis will not need any tools to service the hardware. With the push of a button, a top cover can be removed, and servicing components such as hard drives, memory, and IO cards is as easy as simply pushing and pressing on levers to remove and replace components. This all makes for very simple management and deployment. Tool-less design speeds problem resolution Remove power in rear (from the right system) before pulling system out from the front Add/remove/power servers without touching the power cables 65-80ºF >100ºF!!! Which aisle do you want to work in? Cold aisle Hot aisle 37

38 Simple: In Production Faster - ENI Client Example
Intelligent Cluster significantly reduces setup time, getting clients into production at least 75%1 faster than non-Intelligent Cluster offerings Solution Overview 1500 server nodes in 36 racks 3000 NVIDIA K20x GPGPU accelerators FDR InfiniBand, Ethernet, GPFS Enclosed in cold-aisle containment cage #11 on June 2014 Top500 list Delivered fully integrated to the client’s center HW inside delivery and installation included at no additional cost TOP500 Linpack run successfully 10 days after first rack arrived All servers pre-configured with customer VPD in manufacturing Entire solution delivered, supported as 1 part number Full interoperability test and support One number to call for support regardless of component Included Interoperability Tested Yes HPL (Linpack) Stressed / Benchmarked in Mfg. IBM HW Break Fix Support Yes - All components Inside Delivery, HW Install. Bare Metal Customization Available at no charge Result Production Ready Actual client example. When installed it was the 6th largest supercomputer in the world, largest in Europe, and largest Intel TOP500 entry. Even today it still ranks in the top 10. Imagine getting a solution of this size up and running this quickly. The 75% reduction in time is a comparison to the time it would have taken them to receive boxed servers and components, install in racks, cable, program, and start up. With IBM and Intelligent Cluster the racks all arrived nearly ready to power on. 1 Comparison of install time for a complete roll-your-own installation versus IBM Intelligent Cluster delivery

39 w/o Intelligent Cluster
Simple: Save Time and Resources with Intelligent Cluster IBM fully integrates and tests your cluster, saving time and reducing complexity Step Intelligent Cluster w/o Intelligent Cluster Move Servers to DC 15min 40min Install Rail Kits 30min Install Servers 2hr Cable Ethernet Cable IB Rack to Rack 1hr Power on Test 10min Program IMMs Program TOR Collect MAC & VPD 30 Provision HW Verification TOTAL TIME: 2-1/2 hr 9-1/2 hr SAVE ~7 hrs per rack! Another facet of simplicity is the time and resources that can be saved with Intelligent Cluster. With Intelligent Cluster, IBM handles the integration and testing of your cluster saving significant setup time and complexity during installation---getting clients into production at least 75% faster. We already talked about many of the details of shipping full systems on a previous slide, which are handled through Intelligent Cluster. In this way, IBM handles many tasks like installing the rail kits, servers, cabling, conducting power on tests and required programming, saving clients an average of 7 hours per rack. This is an area of simplicity and savings that every customer will appreciate.

40 Simple: The Advantages of Shared Infrastructure without the Contention
Shared Power Supplies and Fans 90% reduction in fans1 75% reduction in PSUs1 Each system acts as an independent 1U/2U server Individually managed Individually serviced and swappable servers Use any Top of Rack (TOR) Switch No contention for resources within the chassis Direct access to network and storage resources No management contention Lightweight/Low Cost chassis Simple midplane with no active components No in-chassis IO switching No left or right specific nodes High efficiency PSU and fans No unique chassis management required IBM pioneered the idea of shared power and cooling in x86 with BladeCenter. That design allowed our engineers to pull fans and power supplies off the individual servers and share them in the chassis – reducing the total number of parts, reducing cost, and reducing power consumption. This was a great thing and we still love it today so NeXtScale shares these important assets. Where NeXtScale differs from BladeCenter is around integration of the rest of the critical components like systems management and networking. BladeCenter uses a very cool midplane and Advanced Management Module (AMM) to route IO from server to switch and for management of the entire chassis. For many clients this is perfect. NeXtScale went in a different direction – much like iDataPlex the chassis has no unique management and no IO flowing through it. Everything is done at top of rack. Top of rack essentially means that all cables are routed to an external switch (usually but not always in the top of the rack). TOR designs allow a great deal of flexibility in choosing brands and types of IO. But there is external cabling involved – which approach is best? The integrated blade/flex route or NeXtScale’s route – well that depends on the client, their skill level, existing tools, etc… Same idea with management – BladeCenter and Flex give you a single pane of glass for consolidated management and control – NeXtScale is designed to be used in existing management tools and protocols. A different approach – the best thing is that one of these two approaches should appeal to nearly all our clients. BladeCenter and Flex and NeXtScale are not competitive products – they complement one another very nicely – our job is to make sure clients see the full story and help them decide which is the best approach. 1 typical 1U server with 8 fans and 2 PSUs
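The reduction figures can be checked against the footnoted baseline of a typical 1U server with 8 fans and 2 PSUs. In the sketch below, the 6 shared power supplies follow from the 75% claim itself (and match the chassis description elsewhere in this deck), while the shared fan count of 10 is an assumption used only for illustration:

```python
# Quick check of the shared-infrastructure savings versus the footnoted baseline
# (a typical 1U server with 8 fans and 2 PSUs). The chassis fan count of 10 is an
# assumption for illustration; 6 shared PSUs is implied by the 75% reduction claim.
servers = 12                               # nodes per 6U chassis

baseline_fans = servers * 8                # 96 fans across 12 discrete 1U servers
baseline_psus = servers * 2                # 24 PSUs
chassis_fans = 10                          # assumed shared fan count for the chassis
chassis_psus = 6

fan_reduction = 1 - chassis_fans / baseline_fans
psu_reduction = 1 - chassis_psus / baseline_psus
print(f"Fan reduction: {fan_reduction:.0%}")   # ~90%
print(f"PSU reduction: {psu_reduction:.0%}")   # 75%
```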

41 Simple: “Essentials Only” Design
Only includes the essentials Two production 1Gb Intel NICs; dedicated or shared 1Gb for management Standard PCI card support Flexible ML2/Mezzanine for IO expansion Power, Basic LightPath, and KVM crash cart access Simple ‘pull out’ asset tag for naming or RFID Not painted black, just left as silver Clean, simple, and low cost Blade-like weight/size – rack-server-like individuality/control NeXtScale delivers basic, performance-centric IT “I can’t see my servers, don’t care what color they are” “I don’t use 24 DIMMs why pay for a system to hold them?” “I only need RAID mirror for OS don’t want extra HDD bays” “I only need a few basic PCI/IO options” This is a close up of the front of a NeXtScale nx360 M5. What’s here? All the essential items you need in a server – NICs, BMC access, Power, alerts, and IO access. It’s even got a nifty pull-out tab at the bottom of the node for labeling or asset tagging and a crash cart access port. You will notice this server is not black. Well most of these systems are not going to be out next to people’s desks, they will be locked away in the data center away from sight. Also going back to our race car design approach – how much more performance do we get by painting it black? None, right? So we chose to not paint it and leave the corrosion-resistant metal out there as proof this is a different kind of server. All that said you have to admit it still looks pretty cool doesn’t it? NeXtScale will not be for everyone. We took specific steps to take out function to improve efficiency or lower costs. It will be important to understand the workload to make sure that NeXtScale is the right choice. If not, don’t worry: the entire System x lineup is waiting for you. NeXtScale nx360 M5

42 Introducing IBM NeXtScale System with Water Cool Technology
2014 LENOVO INTERNAL. All rights reserved.

43 3 Ways to Cool your Datacenter
Air Cooled Rear Door Heat Exchangers Direct Water Cooled 100% water cooled No fans or moving parts in system Most energy efficient datacenter Most power efficient servers Lowest operational cost Quieter due to no fans Run processors in turbo mode for max performance Warm water cooling means no expensive chillers required Good for geographies with high electricity cost Standard air flow with internal fans Good for lower kW densities Less energy efficient Consumes more power – higher OPEX Typically used with raised floors which adds cost and limits airflow out of tiles Unpredictable cooling. Hot spots in one area, freezing in another Air cool, supplemented with RDHX door on rack Uses chilled water Works with all IBM servers and options Rack becomes thermally transparent to data center Enables extremely tight rack placement There are 3 basic ways to cool a datacenter. This can be done with standard Air Cooling, or with Rear Door Heat Exchangers, or as Direct Water Cooled. The Air Cooled method utilizes standard air flow with internal fans. While allowing for the broadest selection of supported server functions, it is not as energy efficient as the water cooling options resulting in higher operational expense and unpredictable cooling in the datacenter. Rear Door Heat Exchangers can be added to air cool racks to provide supplemental heat removal using chilled water. This allows more efficiency and density within the datacenter. Finally, Direct Water Cooled systems use 100% water cooling directly to the server with no internal fans. It is the most energy efficient of the cooling options, meaning even lower operational cost in the datacenter, especially important in geographies with high electricity cost. Other benefits include a quieter datacenter environment due to no fans, and improved performance with optimal and continuous support of Turbo mode. NeXtScale now offers all 3 of these options for customers. I’ve already described what’s new in our air cooled server. Next, let’s take a look at the new water cool version that is being introduced into the NeXtScale family this year. 43

44 More Power Efficient Server2
NeXtScale System with Water Cool Technology Achieve extreme scale with ultimate efficiency 40% More Energy Efficient Data Center1 Requires no auxiliary cooling³ No chillers required due to warm water cooling (up to 45°C)3 10% More Power Efficient Server2 No fans required for compute elements Small power supply fans only Lower operational costs, and quieter NeXtScale System with Water Cool Technology drives extreme efficiency into the datacenter, making for a greener environment. Water Cool Technology enables up to 40% greater energy efficiency in the data center, and 10% greater power efficiency at the server level compared to air-cool solutions. This is largely due to the absence of fans for the compute nodes, which not only saves power but also makes for a much quieter environment. Over 85% of the heat in the system is recovered by water cooling and can then be re-used for other purposes such as heating facilities and buildings. There is also a significant advantage in that water inlet temperatures up to 45 degrees Celsius are supported. This makes expensive water chillers unnecessary, saving significant capital expense. The operational cost savings associated with Water Cool Technology result in quick payback of initial investment and continued savings for lower Total Cost of Ownership. This is particularly essential in geographies with high electricity costs or high cost of floor space. Going beyond the efficiency benefits, Water Cool Technology improves performance as well. It enables processors to run continuously in Turbo mode, allowing them to run at higher frequencies. 85% Heat Recovery by Water Re-use warm water to heat other facilities Run processors at higher frequencies (Turbo mode) Water Cool Technology Based on comparisons between air-cooled IBM iDataPlex M4 server and water-cooled iDataPlex M4 servers LRZ, a client in Germany DC numbers Geography dependent 44

45 NeXtScale nx360 M5 WCT Dual Node Compute Tray
Water Cool Compute Node nx360 M5 WCT Compute Tray (2 nodes) 2 compute nodes per full-wide 1U tray Water circulated through cooling tubes for component level cooling Dual socket Intel E v3 processors (up to 18C) 16x DIMM slots (DDR4, 2133MHz) InfiniBand FDR support via choice of ConnectX-3 ML2 adapter or Connect IB PCIe adapter Onboard GbE NICs Labeling tag 1 GbE ports Power, LEDs PCI slot for InfiniBand Dual-port ML2 (IB/Ethernet) Simple architecture System infrastructure CPU with liquid cooled heatsink 16x DIMM slots Cooling tubes NeXtScale System with Water Cool Technology, a new addition to the System x family, uses an innovative direct water-cooling design to more efficiently cool system components such as processors, memory and I/O. This includes 3 new components---a new compute tray, a new enclosure, and a manifold assembly. The Water Cool Compute tray, called nx360 M5 WCT, is a full-wide, 1U tray that holds 2 compute nodes. Based on the same planar technology as air cool, these nodes also support the E v3 processors, 16 slots of DDR4 memory, and InfiniBand FDR. However, rather than using fans, water is circulated through cooling tubes within the server to cool the processors, memory DIMMs, IO, and other heat-producing components, supporting water inlet temperatures up to 45 degrees Celsius, making expensive water chillers unnecessary. x16 ML2 slot Water Outlet PCI slot for Connect IB x16 ML2 slot Water Inlet PCI slot for Connect IB PCI slot for Connect IB 45

46 nx360 M5 WCT Compute Tray (2 nodes) – Front Panel
2 compute nodes per tray  6 trays per 6U chassis (12 servers) Dual x16 ML2 slot supports InfiniBand FDR (optional) PCIe adapter support for Connect IB or Intel QDR (optional) GbE dedicated and GbE shared NIC Front Panel* Node #1 Node #2 Dual-port ML2 (IB/Ethernet) KVM Connector LEDs (Power, Location, Log, Error) PCI slot for Infiniband Dual-port ML2 (IB/Ethernet) PCI slot for Infiniband Here is a view of the front of the water cool compute tray. As you can see, this includes 2 identical side-by-side compute servers with the same front access features, such as dual x16 ML2 slots for FDR, GbE NIC ports, and LEDs. nx360 M5 WCT Compute Tray Specifications: Form Factor: Full-wide node with two planar CPU/Chipset: (Per planar) Dual CPU / Socket R3 Processors: Intel Haswell-EP 12/14/16/18 Core Chipset: Intel Wellsburg QPI (Quick Path Interconnect) TPM1.2 Chip down NIC: (Per planar) Broadcom 5717 NIC C-step 10Gbit support on ML2 slot Memory: (Per planar) 16x DIMM sockets – DDR4 2133MHZ 256GB assuming 16G DIMM Memory mirroring and sparing supported. Flash DIMM (supported in Ref models) BMC: (Per planar) IMMv2.1 with remote presence support option. SH7758 with video NM 3.0 Card Slots: (Per planar) One x16 ML2 slot (support 50mm height only) Options: ML2 Riser FoD on selected options ML2 FDR Adaptor System Management / Software IMM 2.1 Update Xpress, Dynamic System Analysis w/ Integrated RT Diags Server Guide. Power Energy Star 2.0 Supports power supplies for current Fei-hu 1GbE/ Shared NIC KVM Connector 1GbE/ Shared NIC Power, LEDs Rear View Water Inlet Water Outlet * Configuration dependent. Configuration includes Ml2 and PCI adapters.

47 NeXtScale n1200 WCT Enclosure – Water Cool Chassis
IBM NeXtScale n1200 WCT Enclosure Water Cool Chassis 6U Chassis, 6 bays Each bay houses a full-wide, 2-node tray (12 nodes per 6U chassis) Up to 6x 900W or 1300W power supplies N+N or N+1 configurations No fans except PSUs Fan and Power Controller Drip sensor and error LEDs for detecting water leaks No built-in networking No chassis management required Front View shown with 12 compute nodes installed (6 trays) Simple architecture System infrastructure The n1200 WCT Enclosure is a new 6U chassis for water cooling containing 6 full-wide bays, each holding one of the full-wide compute trays. This allows 12 compute servers per 6U chassis, same as we have for air cool. It also still includes the 6 power supplies, but no system fans are required (only the small fans inside the power supplies), making for a quieter and lower-power environment. Rear fillers/EMC shields Fan and Power Controller Rear fillers/EMC shields 3x power supplies 3x power supplies Rear View 47

48 nx360 M5 WCT Manifold Assembly
Manifolds deliver water directly to and from each of the compute nodes within the chassis via water inlet and outlet quick connects. Modular design enables multiple configurations via sub-assembly building blocks per chassis drop. 6 models: 1, 2, 3, 4, 5 or 6 chassis drops The WCT Manifold is what delivers the water to each of the nodes from the CDU. Each manifold section attaches to a chassis and connects directly to the water inlet and outlet connectors for each compute node in order to safely and reliably deliver water to and from each server. The manifold is very modular and comes in multiple configurations based on the number of chassis drops required in the rack. Anywhere from 1 to 6 chassis can be supported in a single rack. n1200 WCT Chassis 6 drop Manifold Single Manifold Drop (1 per chassis)

49 NeXtScale M5 Product Timeline
A Lot More Coming Refresh 1 GA: Jan 2015 Announce: Sept. 8, 2014 Shipments Begin: Sept. 30, 2014 GA: Nov. 19, 2014 More Storage More Accelerators More IO Options Next Gen Processors 4 GPU / 4 HS drive tray EDR support Broader SW Ecosystem OS Support 8 additional processors NVIDIA K80 support Storage NeX 12Gb Support Mini-SAS port -48VDC power supply Currently Shipping nx360 M5 compute node (air cool and water cool) Supports existing 6U chassis (air) New 6U chassis (water) 14 processor SKUs PCI NeX support (GPU/Phi) n1200 Chassis NeXtScale is an architecture that will continue to develop for many years. We are currently shipping a dense chassis with choice of compute, storage, and GPU/PCI nodes. In November, we will begin shipping the new M5 compute node as well as the water cool version of NeXtScale with new compute server and chassis. By the end of this year, we will add support for more processors, GPUs, storage tray, and -48 DC power supply. Next year and beyond, we are looking forward to a lot more coming. More storage, more accelerators, more IO innovations, and next generation processors will all come to market and build out an even more powerful and flexible ecosystem. nx360 M4 Storage NeX PCIe NeX

50 NeXtScale Solutions 2014 LENOVO INTERNAL. All rights reserved.

51 IBM Intelligent Cluster™
Application Ready Solutions simplify HPC, speed delivery Developed in partnership with leading ISVs, based on reference architectures IBM Platform™ LSF® Accelerators Grid Cloud Workload management platform, Intelligent policy-driven scheduling features Cluster Networking IBM Platform Symphony Run compute- and data-intensive distributed applications on a scalable, shared grid Storage IBM has a rich portfolio of hardware and management software products specifically designed for high performance computing. From these components, (1) we created solutions to help increase competitive advantages by accelerating product design. (2) The solutions are optimized for lower total cost of ownership (3) They are tailored in specific application areas to provide integrated and end-to-end experiences (4) These solutions are designed to maximize user productivity and minimize administration cost Backup: IBM is committed to making HPC available for broad adoption - without sacrificing performance or scalability. We help our clients by providing integrated solutions that lower costs by maximizing utilization and productivity in your environment. Applications IBM Platform HPC Compute nodes Out-of-the-box features reduce the complexity of the HPC environment IBM NeXtScale System™ IBM Platform Cluster Manager IBM Intelligent Cluster™ Quickly and simply provision, run, manage, monitor HPC clusters 51

52 Next Generation Sequencing
IBM Application Ready Solution for CLC bio Accelerate time to results for your genomics research “It has been a pleasure to work with IBM, optimizing our enterprise software running on the IBM Application Ready Solution for CLC bio platform. We are proud to offer this pre-configured, scalable high-performance infrastructure with integrated GPFS to all our clients with demanding computational workloads.” - Mikael Flensborg, Director of Global Partner Relations CLC bio, A QIAGEN® Company Easy to use, performance optimized solution architected for CLC bio Genomic Server and Workbench software Client support for increased demand for genomics sequencing Drive down cost, speed up the assembly, mapping and analysis involved in the sequencing process with an integrated solution Modular solution approach enables easy scalability as workloads increase Learn more: solution brief, reference architecture Use Case Next Generation Sequencing Workload size 15 human genome (37x) or 120 human exome (150x) per week 30 human genome (37x) or 240 human exome (150x) per week 60 human genome (37x) or 480 human exome (150x)/wk Head Node - x3550M4 Single Dual Compute – # x240 or nx360 nodes 3 6 12 Compute - Disk per Node 2 TB Compute - Memory per Node (GB) 128 Storwize V7000 Unified (TB) 20 55 90 10 Gigabit switch / adapters yes Management Software IBM Platform HPC, Elastic Storage (based on GPFS) These configurations / sizings are based on IBM Flex System x240 with Intel IVB processors (2 x Intel Xeon E5-2680v2 115W 2.8GHz). Currently the reference architecture is based on Sandy Bridge processors, but work is underway to update the reference architecture to match the IVB configs in this table and in the IBM CLC bio solution brief published Dec 2013. 52

53 IBM Application Ready Solution for Algorithmics Optimized high-performance solution for risk analytics Easy to use, performance optimized solution architected for IBM Algorithmics Algo One solution Supported software: Algo Scenario Engine Algo RiskWatch Easy to deploy integrated, high-performance cluster environment Based on “best practices” reference architecture, lowers risk User friendly portal provides easy access and control of resources “Many firms will benefit from the Application Ready Solution for Algorithmics to accelerate risk analytics and improve insight. This solution helps lower costs and mitigate IT risk by delivering an integrated infrastructure optimized for active risk management”. — Dr. Srini Chari, Managing Partner, Cabot Partners Algo Credit Manager Algo Risk Application Algo Aggregation Services (Fanfare) Read the analyst paper Use Case Algo One Risk Analysis Workload Small Medium Large Management Node for Algo / PCM 1 1 (can be shared) Management Node for Symphony 2 (shared) Compute Server – x240 or nx360 6 14 36 (or more) Compute Cores (total) 96 224 574 Compute – Total memory (GB) 768 1792 4608 Elastic Storage (GPFS) Servers None 2 Storage – V3700 SAS; shared stg GPFS 31 TB GPFS, 62 TB GPFS, 124 TB FDR IB switch / adapters Network 10 GbE Software IBM Platform Cluster Manager (PCM), IBM Platform Symphony, Elastic Storage, DB2 enterprise (opt.) This reference architecture is based on Flex System x240 or IBM NeXtScale System with Intel Ivy Bridge processors (E v2). As you can see there are three different configurations for this solution, for Algo risk scenarios in various levels of complexity (simple, medium, and high) as the performance/size of the cluster depends on the response time required along with the number of potential positions held and their dependencies on each other. This solution assumes Storwize V3700 SAS attached storage with GPFS file and data management clients and servers. 53

54 IBM Application Ready Solution for ANSYS Simplified, high-performance simulation environment
Computational Fluid Dynamics (Fluent, CFX) Structural Mechanical (Ansys) Workload size 1 job 15+M cells using all 120C 6 jobs 2.5+M cells each 1 job 25+M cells each using all 200C 10 jobs 2.5+M cells each 1 job 200+M cells using all 840C 20 jobs 10+M cells each 4 large jobs 2–5 MDOF each 10 large jobs 2–5 MDOF each 15 large jobs MDOF each Head Node – nx360 M4 Single Compute – nx360 M4 Quantity 6 10 42 Processor E v2 10C E v2 10C E v2 8C Memory 128 256 Disk Diskless 2 x 800 GB SSD 2x800 GB SSD GPU Node* – PCI NeX GPU 2 x NVIDIA K40 2 x K40 Visualization* - NVIDIA GRID K2 K2 2 x K2 File System* DS3524 yes Network Gigabit FDR IB no Management Software Platform HPC or Platform LSF; Elastic Storage (GPFS File System (opt)) * Optional Configuration shown is based on IBM NeXtScale System™. IBM Flex System™ x240 with E V2 Compute nodes is also available. Both systems are available to order as IBM Intelligent Cluster™. To Learn more: read the solution brief and reference architecture.

55 Call to Action Lead with NeXtScale on all x86 Technical Computing (HPC) opportunities Look for NeXtScale opportunities in Cloud Computing, Datacenter Infrastructure, Data Analytics, and Virtual Desktop Evaluate customer’s energy efficiency requirements to assess if Water Cooling is appropriate for them Utilize customer seed systems Learn more about NeXtScale from the links on the Resources page So here are a few points I want to make as a Call to Action: First is to lead with NeXtScale on all of your x86 HPC opportunities. Also look for opportunities where NeXtScale can be used in public and private cloud, as well as datacenter infrastructure, data analytics, and virtual desktop. In fact, NeXtScale is now our lead vehicle for 3D VDI opportunities. Make customers aware of our new water cool offering and learn more about their efficiency requirements. Water Cool Technology might be right for them. Next, please engage us for seed systems for NeXtScale M5. We do have a loaner pool established for NeXtScale M5 and we will be more than happy to send seeds as needed to your clients. If you have a unique bid that requires some analysis on Power and HVAC data, please engage Lab Services for help on these unique requirements. Also, don’t forget to leverage BidWinRoom; it’s a great forum that will help you take the cost out of your total solution. And again, learn more about NeXtScale from the links on the previous Resources page. There you will find a sales kit for IBMers and business partner sellers. There is a lot of useful content out there for you to know more about the NeXtScale offerings.

56 IBM NeXtScale M5 – Resources / Tools
Product Resources: Announcement Page Link Announcement Webcast (replay) Link Product Page Link Data Sheet Link Product Guide Link Virtual Tour Air Cool Link Water Cool Link Product Animation Link Infographic IBM Blog Link Benchmarks: SPEC_CPU NeXtScale nx360 M5 with E v3 Link SPEC_CPU NeXtScale nx360 M5 with E v3 Link Sales Tools: Sales Kit IBM PW Seller Presentation IBM PW Client Presentation IBM PW Sales Education IBM PW Technical Education IBM PW Seller Training Webcast: NeXtScale M5, GSS Link VITO Letters IBM PW Quick Proposal Resource Kit Link Client Videos: Caris Life Sciences Link Hartree Centre Link Univ of Notre Dame Link  Analyst Papers Cabot Partners: 3D Virtual Desktops by Perform Link Intersect360: Hyperscale Computing- No Frills Clusters at Scale Link 56


