Presentation transcript: "Step up, Scale out with NeXtScale System: Delivering Insight Faster"
1 Step up, Scale out with NeXtScale System: Delivering Insight Faster
System x WW Marketing, Hyper Scale Computing Solutions, October 2014
2014 LENOVO INTERNAL. All rights reserved.
2 Agenda
- NeXtScale Overview: NeXtScale Family, Client Benefits
- Introducing IBM NeXtScale System M5: M5 Enhancements, Target Market Segments
- Messaging: Scale, Flexible, Simple
- NeXtScale with Water Cool Technology
- Timeline
- Solutions

"IBM delivered server hardware of exceptional performance and provided superior support, allowing us to rapidly integrate the system into our open standards based research infrastructure. Complementing the technical excellence of NeXtScale System, IBM has a long track record in creating high-performance computing solutions that gives us confidence in its capabilities..." —Paul R Brenner, Associate Director, HPC, The University of Notre Dame, Indiana

"In my 20 years of working with supercomputers, I've never had so few failures out of the box. The NeXtScale nodes were solid from the first moment we turned them on." —Patricia Kovatch, Associate Dean, Scientific Computing, Icahn School of Medicine at Mount Sinai

"Hartree Centre needed a powerful, flexible server system that could drive research in energy efficiency as well as economic impact for its clients. By extending its IBM System x platform with IBM NeXtScale System, Hartree Centre can now move to exascale computing, support sustainable energy use and help its clients gain a competitive advantage." —Prof. Adrian Wander, Director of the Scientific Computing Department, Hartree Centre

This slide shows the agenda for today's discussion. We will cover a brief overview of NeXtScale basics and client benefits. Then we will introduce the new NeXtScale M5 product and the new features that matter for our target market segments. Then I will walk through the NeXtScale messaging and give examples of how we deliver the stated benefits to our customers. We will then take a deeper dive into NeXtScale with Water Cool Technology, followed by the product timeline and a little bit about solutions.
3 Introducing IBM NeXtScale System M5
Modular, high-performance system for scale-out computing:
- Chassis: low-cost chassis provides only power and cooling
- Compute: dense, high-performance server
- Storage*: dense storage tray (8 x 3.5" HDDs)
- Acceleration: dense PCI tray (2 x 300W GPU/Phi)
- Standard rack: standard 19" racks with top-of-rack switching and choice of fabric
- Primary workloads: High Performance Computing
- Open-standards-based tool kit for deployment and management
* M5 support to be available with 12Gb version at Refresh 1

NeXtScale is an x86 offering that debuted last year, introducing a new category of dense computing designed for high performance, density, and scale-out for HPC and Cloud environments. With a rapidly changing market and demands, our clients continue to look for new things from their IT hardware. NeXtScale was designed with the innovation and flexibility to provide the optimal solution for a variety of use cases and workloads.

The concept is pretty simple but the result is very powerful: a 6U chassis designed to hold 12 half-wide bays. These bays are 1U tall like a traditional server but take up only half the rack width, allowing us to pack in twice the number of servers per U of rack space compared to 1U servers. This design not only supports dense, full-performance compute servers; it also supports something called Native Expansion, which allows you to add storage or accelerators (like GPUs or Xeon Phi) to the solution. This enables you to deliver a very simple and cost-effective base server that can be expanded to create very rich and dense storage or acceleration solutions. And all this power of NeXtScale has been designed to fit into a standard rack.

This architecture is expanding our market into some of the fastest growing x86 segments. Our first generation of NeXtScale carried forward many of the great benefits of iDataPlex and extended them beyond HPC with more flexibility, more performance, greater density, optimized networking, and installation in standard racks. Now, as we move to the next generation of NeXtScale, we are building on this very innovative design to reach even more users with a larger array of features and benefits.
4 Deliver Insight Faster
NeXtScale System provides the scale, flexibility and simplicity to help clients solve problems faster. Efficient, Reliable, Secure.
- Scale: smart delivery of scale yields better economics and greater impact per dollar; significant CAPEX and OPEX savings while conserving energy
- Flexible: create a system tailored to precisely meet your need now; adapt rapidly to new needs and new technology
- Simple: drive out complexity with a single architecture; rapid provisioning, easy to manage, seamless growth

Our clients expect us to provide solutions that are Efficient, Reliable, and Secure. These are key attributes that hold true across the entire System x M5 product family. In addition, customers need to solve problems faster than ever before. NeXtScale makes this possible by providing a system with Scale, that is Flexible, and that is Simple to manage. With Scale, you can create a solution at any size, large or small, and achieve great economies of scale, efficiency, and impact per dollar. With Flexibility, you can build the product nearly any way you want, tailored to client needs: from the choice of compute, storage, or accelerators; to the choice of hot-swap or simple-swap drives; to the choice of IO on PCI or mezzanine; and now even the choice of air or water cooling. And you get all this scalability and flexibility while keeping everything very Simple, with a single architecture, optimized for many use cases, that is easy to manage and grow.
5 Introducing IBM NeXtScale System M5
2014 LENOVO INTERNAL. All rights reserved.
6 One Architecture Optimized for Many Use Cases
New compute node fits into existing NeXtScale infrastructure:
- Chassis: NeXtScale n1200 Enclosure, with Air or Water Cool Technology
- New compute node: IBM NeXtScale nx360 M5 (dense compute, top performance, energy efficient, air or water cool, investment protection)
- Storage NeX node*: nx360 M5 + Storage NeX; add RAID card + cable; dense 32TB in 1U; up to 8 x 3.5" HDDs; simple direct connect; mix and match
- PCI NeX node (GPU/Phi): nx360 M5 + PCI NeX; add PCI riser + GPUs; 2 x 300W GPU in 1U; full x16 Gen3 connect; mix and match
* M5 support to be available with 12Gb version at Refresh 1

The NeXtScale family includes a variety of functional components that fall under a single architecture but provide the ultimate in flexibility and performance. The NeXtScale family consists of the 6U, 12-bay chassis that supports a choice of Compute node, Storage NeX node, or PCI NeX node (for GPUs or Xeon Phi), in a mix-and-match combination if desired.

Native Expansion means that we can add function and capabilities seamlessly to the basic node, with no need for exotic connectors, unique components, or high-speed back- or mid-planes. NeXtScale's Native Expansion capability adds HDDs to the node with a simple Storage NeX (tray) plus a standard RAID card, SAS cable, and HDDs. Adding GPUs to a node is just as simple: start with a PCI NeX, add a 2U PCI riser and a passively cooled GPU from NVIDIA or a Xeon Phi from Intel, and you have a powerful acceleration solution for HPC, virtual desktop, or remote graphics.

With the new M5 version, we are introducing a new compute node, the IBM NeXtScale nx360 M5. This is a new half-wide, dual-socket server based on the Grantley architecture from Intel and includes several more highly valued features. The beauty of the NeXtScale design is that users can get the benefits of the M5 (Grantley) platform simply by ordering the new compute node while still leveraging the existing chassis and NeX nodes. The existing chassis and Native Expansion nodes that were built for the M4 server will continue to work with the new M5 compute node. This provides investment protection for existing users and future product stability for new users.
7 NeXtScale System M5 Enhancements: Incorporates Broad Customer Requirements
What's new:
- 50% more cores and up to 39% faster compute performance¹ with Intel Xeon E5-26xx v3 processors (up to 18 cores)
- Double the memory capacity with 16 DIMM slots (2133MHz DDR4, up to 32GB DIMMs)
- Double the storage capacity with 4x 2.5" drives; hot-swap HDD option
- New RAID slot in rear provides greater PCI flexibility
- x16 Gen3 ML2 slot supports InfiniBand / Ethernet adapters for increased configuration flexibility at lower price (increased from x8)
- Choice of air or water cooling
- Investment protection: chassis supports M4 and M5
Key market segments: HPC, Technical Computing, Grid, Cloud, Analytics, Managed Service Providers, scale-out datacenters. Direct and Business Partner enabled solutions.
Callouts: 39% faster compute performance¹; 50% more cores²; 2X memory capacity³; 14% faster memory⁴; all-new hot-swap HDD⁵; 2X hard drives⁵; full Gen3 x16 ML2⁶; 50% more PCI slots⁷; choice of air or water cool.

Here are the new key features and benefits for NeXtScale M5, which were driven by customer requirements from our target market segments. With Intel's new Grantley architecture, we are achieving 39% faster compute performance and 50% more cores with the E5 v3 processors. We are doubling the memory capacity (up to 512GB per node) due to an increased number of DIMM slots, and we are supporting faster DDR4 memory. We are also adding an option for 2 front-access hot-swap drives, doubling the internal hard drive capacity. PCI and IO flexibility is increased through the addition of a front ML2 mezzanine and a dedicated PCI slot for RAID. And we now have the choice of air-cooled and water-cooled versions too.

Footnotes:
1. Preliminary Intel benchmarks on Haswell.
2. nx360 M5 includes Intel Xeon E5-26xx v3 (up to 18C) versus current Intel Xeon E5-26xx v2 (up to 12C).
3. nx360 M5 includes 16 DIMM slots versus 8 DIMM slots in the current platform.
4. nx360 M5 includes DDR4 memory DIMMs running at 2133MHz versus 1866MHz in the current platform.
5. nx360 M5 includes an option for 2x hot-swap 2.5" HDDs in front, which brings total HDD capacity to 4x 2.5" HDDs per node.
6. nx360 M5 includes an x16 Gen3 ML2 mezzanine vs. an x8 mezzanine in the current platform.
7. nx360 M5 includes 3 PCI slots vs. 2 in the current platform.
8 Target Segments - Key Requirements
High Performance Computing. Key requirements:
- High-bin EP processors for maximum performance
- High-performing memory
- InfiniBand
- 4 HDD capacity
- GPU support
Cloud Computing. Key requirements:
- Mid-high bin EP processors
- Lots of memory (>256GB/node) for virtualization
- 1Gb / 10Gb Ethernet
- 1-2 simple-swap drives for boot
Data Center Infrastructure. Key requirements:
- Low-bin processors (low cost)
- Smaller memory (low cost)
- 1Gb Ethernet
- 2 hot-swap drives (reliability)
Data Analytics. Key requirements:
- Mid-high bin EP processors
- Lots of memory (>256GB per node)
- 1Gb / 10Gb Ethernet
- 1-2 simple-swap drives for boot
Virtual Desktop. Key requirements:
- Lots of memory (>256GB per node) for virtualization
- GPU support

Here are our target markets for NeXtScale. NeXtScale's design is centrally focused on High Performance Computing, which requires the highest performing processors, memory, and I/O to achieve maximum performance per dollar. However, many of NeXtScale's features provide key benefits to other segments as well, such as Cloud Computing, Data Center Infrastructure, Data Analytics, and Virtual Desktop. For instance, Cloud Computing and Virtual Desktop demand significantly more memory capacity for virtualization. On the other hand, data centers need less processing power and memory to save on costs, but require hot-swap drives for improved reliability. Data Analytics demands both strong processing power and memory capacity. With the M5 version, NeXtScale adds more features that meet these varied requirements, making it even more relevant for a broader range of segments and workloads. We'll return to this again later.
9 NeXtScale M5 addresses segment requirements
High Performance Computing. NeXtScale M5 provides:
- Intel EP (high bin); up to 36 cores per node
- Fast memory (2133MHz), 16 slots
- FDR ML2 InfiniBand, future EDR
- Broad range of 3.5" HDDs; 4 internal 2.5" HDDs, 2 hot-swap
- Up to 2 x GPUs per 1U
Cloud Computing. NeXtScale M5 provides:
- Intel EP (mid- to high bin); up to 36 cores / node
- Up to 512 GB memory / node
- Ethernet (1/10 Gb), PCIe, ML2
- Broad range of 3.5", 2.5" HDDs; 2 front hot-swap drives
Data Center Infrastructure. NeXtScale M5 provides:
- Intel EP (low bin); up to 36 cores / node
- Low-cost 4 / 8 GB memory
- Onboard Gb Ethernet standard
- 2 front hot-swap drives (2.5")
- Integrated RAID slot
Data Analytics. NeXtScale M5 provides:
- Intel EP (mid- to high bin); up to 36 cores / node
- Up to 512 GB memory / node
- Ethernet (1/10 Gb), PCIe, ML2
- Broad range of 3.5", 2.5" HDDs
Virtual Desktop. NeXtScale M5 provides:
- Choice of processors
- Up to 512 GB memory
- Up to 2 x GPUs per 1U

Let's return to our target segments, where we can now see how the new NeXtScale M5 features align with the requirements of each. We address the price/performance needs of HPC with higher core counts and processor performance, as well as staying time-to-market with InfiniBand and GPU support. The increased memory capacity benefits Cloud Computing, Data Analytics, and Virtual Desktop. And the front hot-swap drives and low-bin processor support address the reliability and cost concerns of data centers. NeXtScale M5 now provides the optimal set of features to meet the requirements of High Performance Computing while extending to adjacent markets as well.
10 IBM NeXtScale nx360 M5 – The Compute Node
IBM NeXtScale nx360 M5 Server:
- Half-wide 1U, 2-socket server
- Intel Xeon E5-26xx v3 processors (up to 18C)
- 16x DIMM slots (DDR4, 2133MHz)
- 2 front hot-swap HDD option (or standard PCI slot)
- 4 internal HDD capacity
- New, embedded RAID PCI slot
- ML2 mezzanine for x16 FDR and Ethernet
- Native expansion (NeX) support for storage and GPU/Phi
Simple architecture and system infrastructure, front to back: RAID slot, drive bay(s), x16 PCIe 3.0 slot, x24 PCIe 3.0 slot, 16x DIMMs, dual-port ML2 x16 mezzanine card (IB/Ethernet), KVM connector, 1 GbE ports, optional hot-swap HDD or PCIe adapter, power button and LEDs. Supported in the same chassis as the M4 version.

Now let's look closer at the new compute node. The NeXtScale nx360 M5 server is a half-wide, dual-socket server node designed for data centers that require high performance but are constrained by floor space. By taking up less physical space in the data center, the NeXtScale server significantly enhances density. The M5 version supports Intel Xeon E5 v3 processors (up to 145W and 18 cores), providing more performance per server. It also includes new feature enhancements: 16 DIMM slots, front hot-swap drives, a dedicated RAID slot, and an x16 ML2 mezzanine for InfiniBand or Ethernet. While very powerful, the nx360 M5 compute node contains only essential components in the base architecture, providing a cost-optimized platform.
11 IBM NeXtScale nx360 M5 Server
4x 2.5" drives supported per node. Front-bay choice of:
- Hot-swap HDD option: 2 hot-swap SFF HDDs or SSDs, or
- PCI slot option: standard full-height, half-length PCIe 3.0 slot
Also on the front panel: dual-port x16 ML2 mezzanine card (InfiniBand / Ethernet), KVM connector, 1 GbE ports (dedicated or shared management), power button and LEDs, and a labeling tag for system naming and asset tagging.

The NeXtScale nx360 M5 compute node provides a choice of 2 front hot-swap 2.5" hard drives or a PCIe slot. This provides the flexibility for additional easy-access storage or PCI capability, whichever you prefer. Also available from the front panel of the node are an optional dual-port x16 ML2 slot, a KVM connector, dual GbE ports, power button, LEDs, and even a nifty pull-out tab at the bottom of the node for labeling or asset tagging.
12 Investment Protection - Chassis supports M4 or M5 Nodes
IBM NeXtScale n1200 Enclosure:
- 6U chassis, 12 bays; half-wide component support
- Up to 6x 900W or 1300W power supplies; N+N or N+1 configurations
- Up to 10 hot-swap fans; Fan and Power Controller
- Mix and match compute, storage, or GPU nodes; mix and match M4 and M5 air-cooled nodes¹
- No built-in networking; no chassis management required
Front view: bays 1-12, two half-wide bays per U. Rear view: Fan and Power Controller, with 3x power supplies and 5x 80mm fans on each side. Optimized shared infrastructure.

The chassis is not new for M5. The existing n1200 chassis (from the M4 generation) will continue to be compatible with the nx360 M5 server, with nothing more needed than a firmware update for the Fan and Power Controller. As a quick review, the NeXtScale n1200 enclosure is an efficient 6U, 12-bay chassis with no built-in networking or switching capabilities and no chassis-level management required. It is designed to provide shared, high-efficiency power and cooling. With a choice of six 900W or 1300W power supplies, three on each side, it is optimal for three-phase power. There are 10 hot-swap fans to keep the system running cool, plus a fan and power controller in the back of the chassis so you can manage power and fan speeds of the systems. The n1200 chassis is designed to scale with a client's business needs, with the flexibility to add computing, storage, or acceleration capability, which is as simple as adding the specific nodes to the chassis. Because each node is independent and self-sufficient, there is no contention for resources among nodes within the enclosure. This chassis is also very dense: while a typical rack holds only 42 1U systems, this chassis doubles that density, up to 84 compute nodes within the same footprint.
13 NeXtScale - Choice of Air or Water Cooling
Your choice with IBM NeXtScale System:
Air cool:
- Air cooled, internal fans; fits in any datacenter
- Maximum flexibility; broadest choice of configurable options supported
- Supports Native Expansion nodes: Storage NeX, PCI NeX (GPU, Phi)
Water Cool Technology:
- Innovative direct water cooling; no internal fans
- Extremely energy efficient, extremely quiet, lower power
- Dense, small footprint; lower operational cost and TCO
- Ideal for geographies with high electricity costs or space constraints

One of the key differences with the new M5 version is the choice of air or water cooling. The air-cooled compute node has been updated from the M4 version with the enhancements previously discussed and is a good choice for configuration flexibility and a broad choice of options and Native Expansion nodes. We are also announcing a new NeXtScale version based on innovative Water Cool Technology. This is direct water cooling, delivered to the server, with no internal fans. This solution is extremely efficient, quiet, low power, and dense. It is ideal for "green" environments and for driving lower operational costs and Total Cost of Ownership. Both solutions are based on the same core design and are very complementary; they can even be included within the same cluster, depending on the customer's cooling requirements. We'll discuss each of these further in the next few slides.
14 IBM NeXtScale System with Water Cool Technology (WCT)
nx360 M5 WCT Compute Tray (2 nodes):
- Full-wide, 2-node compute tray; CPUs with liquid-cooled heatsinks
- Manifolds deliver water directly to the nodes; water is circulated through cooling tubes for component-level cooling
- Intel Xeon E5-26xx v3 CPUs; 16x DDR4 DIMM slots
- InfiniBand FDR support (ML2, or PCI slot for Connect-IB)
- 1 GbE ports, power button and LEDs, labeling tag
n1200 WCT Enclosure and Manifold:
- 6U chassis, 6 full-wide bays (12 compute nodes per chassis)
- 6x 900W or 1300W PSUs; no fans except in the PSUs
- Drip sensor / error LEDs

IBM NeXtScale System with Water Cool Technology, a new addition to the System x family, uses an innovative direct water-cooling design to more efficiently cool system components such as processors, memory, and I/O cards. Instead of using fans, water is delivered directly to the server and circulated through the system in cooling tubes, supporting water inlet temperatures up to 45 degrees Celsius. Based on the same core design as the air-cooled version, it adds three new components: a new compute tray, a new enclosure, and a manifold.

The nx360 M5 WCT compute tray is a full-wide, 1U tray that holds 2 compute nodes. Based on the same planar technology as the air-cooled node, these nodes also support the E5 v3 processors, 16 slots of DDR4 memory, and InfiniBand FDR. Rather than using fans, water is circulated through cooling tubes within the server to cool the processors, memory DIMMs, IO, and other heat-producing components. The n1200 WCT Enclosure is a 6U chassis containing 6 full-wide bays, each holding one of the full-wide compute trays. This allows 12 compute servers per 6U chassis, the same as for air cool. It still includes the 6 power supplies, but no fans are required at all, making for a quieter and lower-power environment. The WCT Manifold delivers the water to each of the nodes from the CDU. Each manifold section attaches to a chassis and connects directly to the water inlet and outlet connectors of each compute node to safely and reliably deliver water to and from each server.

While other vendors are starting to introduce water-cooled options into the market, IBM was the first to provide this type of technology, starting with iDataPlex Direct Water Cooling, and we are now further improving this technology for NeXtScale.
15 NeXtScale – Key Messages
SCALE:
- Even a small cluster can change the outcome
- Start at any size and grow as you want
- Efficient at any scale with choice of air or water cooling
- Maximum impact/$
- Optimized stacks for performance, acceleration, and cloud computing
FLEXIBLE:
- Single architecture with Native Expansion
- Built on open standards
- Optimized for your data center today and tomorrow
- Channel and box-ship capable
- One part number unlocks IBM's service and support
- Flexible storage and energy management
SIMPLE:
- The back is now the front: simplify management and deployment
- Get in production faster with Intelligent Cluster
- Optimized shared infrastructure without compromising performance
- "Essentials only" design

These are the 3 key messages for NeXtScale, which were also the 3 guiding design principles for the product, helping customers solve problems faster, reduce time and cost, and gain competitive advantage: Scale, Flexible, and Simple. The NeXtScale system scales extremely well. It allows you to start at any size you want and provides outstanding performance and efficiency at whatever size you choose. As you scale, you get results faster, insights quicker, and maximum impact per dollar. NeXtScale is also very flexible. It provides many choices in how to build and configure solutions tailored to customer needs. For example, you have a choice of compute, storage, and acceleration, all within the same dense chassis. With the M5 version, you can now choose simple-swap or hot-swap drives; PCI or ML2 for IO; and air or water for cooling. You also have the choice of how you want to receive your IT, whether in boxes or fully configured. And NeXtScale is simple. Everything you need to access is in the front, so management and deployment are very easy. With Intelligent Cluster, you can simplify deployment and get into production faster with fewer required resources. And NeXtScale has an "essentials only" design with shared infrastructure that saves unnecessary cost without compromising performance.
16 NeXtScale – Key Messages: Scale
- Even a small cluster can change the outcome
- Start at any size and grow as you want
- Efficient at any scale with choice of air or water cooling
- Maximum impact/$
- Optimized stacks for performance, acceleration, and cloud computing
17 Scale: The Power of Scale Delivers Benefits at Any Size
Even a small cluster can change the outcome. Moving from workstation to nodes:
- Make better decisions by running larger, more sophisticated models
- Spot trends faster and more effectively by reducing total time to results
- Manage risk better by increasing accuracy and visibility of models and datasets
Game-changing results (life insurance actuarial workbook):
- 1,700 records that took 14 hours on a single workstation now take 2.5 minutes on a small cluster
- 1 million records that took 7.5 days on 600 workstations now take 2 hours on a 3-rack cluster with only 150 nodes

Let's talk about the power of Scale. Scale-out computing refers to increasing performance by adding more systems or resources. It enables organizations to start small and scale their systems as needed. Even a small cluster can make a significant difference in achieving faster results compared to attempting the same work on a single workstation. And speeding time to results is only half of the story. As this example shows, what used to take days or weeks on workstations now runs in hours or minutes thanks to scale, unlocking real-time information that was not previously available. This allows us to make better decisions, spot trends faster, and manage risk better. Scaling out to larger clusters makes even greater achievements possible, and NeXtScale does this extremely well.

Data points are from an IBM study, "Highly Productive, Scalable Actuarial Modeling: Delivering High Performance for Insurance Computations Using Milliman's MG-ALFA," which looked at migration from independent islands of compute to a smart shared system/cluster: records that took 14 days on a single-core workstation compared to 14 hours on a 14-node BladeCenter-based cluster, and 1 million records that took 7.5 days on 600 single-core workstations could run on only 150 BladeCenter nodes installed in 3 racks.
18 Scale: Start at Any Size. Grow in Any Increment.
Single nodes and chassis. Growing node by node?
- Available direct from IBM; optimized for availability through our partners
- Install the chassis today, grow into it tomorrow
Configured racks, chassis, or departmental solutions. Want to speed how quickly you can grow?
- Shipped fully assembled; client driven, choice optimized
- "Starter Packs" are appliance-easy, CTO-flexible
Complete clusters and containers ('NeXtPods'). Growing by leaps and bounds?
- NeXtScale can arrive ready to power on, with its 'personality' applied
- Racks at a time, or complete infrastructure-ready containers

The key point here is that you can acquire the power of NeXtScale however you want. Do you prefer to receive it in single nodes and empty chassis? We are set up for that. Given the chassis is so light and relatively low cost, we believe many clients will install chassis into the rack to ready them for future growth, simplifying and speeding additions to their IT infrastructure. Alternatively, we can ship fully assembled racks or chassis with your configuration in place. We can also help you get started with our intuitive and flexible starter kits, which are very easy to purchase quickly while keeping the flexibility you expect from a configurable solution. And for clients that want to add IT at the rack level or even at the container level, IBM NeXtScale is set up for that too. Racks can be fully configured, cabled, labeled, and programmed before we ship. With the help of IBM GTS we can take configured racks and assemble them into complete containerized solutions with power, cooling, and infrastructure delivered ready to go. So Scale is for everyone, at any size, from a handful of nodes to a complete mini data center. Your choice.
19 Scale: Achieve extreme scale with ultimate efficiency
NeXtScale System with Water Cool Technology:
- 40% more energy-efficient data center¹: requires no auxiliary cooling; no chillers required thanks to warm-water cooling (up to 45 C)³
- 10% more power-efficient server²: no fans required for the compute elements, small power-supply fans only; lower operational costs, and quieter
- 85% heat recovery by water: re-use warm water to heat other facilities
- Run processors at higher frequencies (Turbo mode)
Footnotes: 1. Based on comparisons between the air-cooled IBM iDataPlex M4 server and water-cooled iDataPlex M4 servers. 2. LRZ, a client in Germany (data center numbers). 3. Geography dependent.

NeXtScale System with Water Cool Technology drives extreme efficiency into the datacenter, making for a greener environment. Water Cool Technology enables up to 40% greater energy efficiency in the data center and 10% greater power efficiency at the server level compared to air-cooled solutions. This is largely due to the absence of fans for the compute nodes, which not only saves power but also makes for a much quieter environment. Over 85% of the heat in the system is recovered by the water cooling and can be re-used for other purposes, such as heating facilities and buildings. There is also a significant advantage in that water inlet temperatures up to 45 degrees Celsius are supported. This makes expensive water chillers unnecessary, saving significant capital expense. The operational cost savings associated with Water Cool Technology result in quick payback of the initial investment and continued savings for a lower Total Cost of Ownership. This is particularly compelling in geographies with high electricity costs or a high cost of floor space. Going beyond the efficiency benefits, Water Cool Technology improves performance as well: it enables processors to run continuously in Turbo mode, allowing them to run at higher frequencies.
20 Scale: Maximum impact per $. Per ft². Per rack.
Race-car design: performance and cost point ahead of features / functions. Maximize the capability of your data center floor with dense and essential IT:
- Top-bin E5 v3 processors; more cores per floor tile
- Fast memory running at 2133MHz
- Choice of SATA, SAS, or SSD on board
- Open ecosystem of high-speed IO interconnects
- Easy front-access serviceability
- Choice of rack infrastructure; light weight + high performance can reduce floor loading
Customer benefits: 40% more high-frequency cores¹; memory runs 14% faster²; one nx360 with SSDs delivers the same IO performance as hundreds of hard disks³; power savings with Platform LSF Energy Aware⁴; less weight per system⁵; 80% less racks per solution⁶; 2X more FLOPs per cycle than a Xeon E5 v2⁷; 1.1 TFLOP performance achieved per server⁸; fewer servers required⁹; 15% increase in FLOPs/Watt¹⁰.

Footnotes:
1. Intel Xeon E5-26xx v2 versus Intel Xeon E5-26xx v1.
2. Memory DIMMs running at 1866MHz versus 1600MHz in current platforms.
3. nx360 M4 includes 4 x 1.8" SSDs, which run faster than traditional 2.5" SAS HDDs: SSDs deliver peak performance of roughly 50,000 IOPS versus HDDs at a peak of 450 IOPS, and 4 x 50,000 = 200,000 IOPS, equivalent to hundreds of 2.5" HDDs.
4. Based on a 1000-node cluster with 2 x86 sockets, 8 cores at 2.7 GHz, consuming about 340 KW; with no hardware changes, optimizing power usage at rest and at peak utilization saves: Europe (0.15€ per KWh) = 441K€ per year; US (0.10$ per KWh) = US$295K per year; Asia (0.20$ per KWh) = US$590K per year.
5. As compared to a traditional 1U server like the HP DL360 Gen8 (maximum config 39 lbs; nominal configuration 13 kg for comparison) versus NeXtScale nx360 M4 at 6 kg plus apportioned chassis weight (39 kg / 12, or 3.25 kg) = 9.25 kg: roughly 40% more weight using 1U servers than NeXtScale nx360 M4 for similar processing power.
6-10. Based on a comparison between an equivalently configured NeXtScale system with Intel Xeon E5 v3 and the June 2014 #99 entry on the Top500 list, which uses Intel Xeon E5 v2. With AVX2 in the Xeon E5 v3, FLOPs per cycle go to 16; with AVX2 turned on, a 2-socket system has an Rpeak value of 1.1 TFLOP. Designing a solution configured like the #99 entry, fewer Intel Xeon E5 v3 cores are required to achieve the same teraflops of computational capability than with the E5 v2: the v3 achieves 2X as many FLOPs per 2-socket server, with the increase in cores from 12 in the v2 to 14 in the v3 as well as the doubling of FLOPs per cycle from 8 in the v2 to 16 in the v3.
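The footnoted throughput and FLOPs claims reduce to simple arithmetic. Below is a back-of-the-envelope check in Python; the IOPS and FLOPs-per-cycle inputs come from the footnotes themselves, while the 2-socket, 14-core, 2.6 GHz node used for the Rpeak line is only an illustrative assumption (the slide does not name the exact SKU).

```python
# Back-of-the-envelope check of the slide's footnoted claims.

# Footnote 3: SSD vs. HDD IOPS equivalence.
ssd_iops = 50_000          # peak IOPS per 1.8" SSD (slide figure)
hdd_iops = 450             # peak IOPS per traditional 2.5" SAS HDD
node_ssds = 4
total_iops = node_ssds * ssd_iops          # 200,000 IOPS per node
hdd_equivalent = total_iops / hdd_iops     # how many HDDs of random-IO capability
print(f"{total_iops:,} IOPS is roughly {hdd_equivalent:.0f} HDDs worth of random IO")

# Footnotes 6-10: AVX2 doubles double-precision FLOPs per cycle (8 -> 16),
# so peak FLOPs per node scale as:
#   Rpeak = sockets * cores * clock_GHz * flops_per_cycle
flops_per_cycle_v3 = 16                    # E5 v3 with AVX2 (v2 was 8)
rpeak_gflops = 2 * 14 * 2.6 * flops_per_cycle_v3   # assumed illustrative SKU
print(f"Rpeak is about {rpeak_gflops / 1000:.1f} TFLOP")   # ~1.2 TFLOP, in line with the ~1.1 TFLOP claim
```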
21 Scale: Platform Computing – complete, powerful, fully supported
Applications: Simulation / Modeling, Big Data / Hadoop, Analytics, Social & Mobile
Workload and resource management:
- Platform LSF Family: batch and MPI workloads with process management, monitoring, analytics, portal, license management
- Platform HPC: simplified, integrated HPC management for batch and MPI workloads, integrated with systems
- Platform Symphony Family: high-throughput, near-real-time parallel compute and Big Data / MapReduce workloads
Data management: Elastic Storage, based on the General Parallel File System (GPFS); high-performance software-defined storage

There are four major product families in the IBM Platform Computing portfolio. The first is Platform LSF, our historical flagship product with a large install base. It is a scalable, comprehensive workload management suite for mission-critical heterogeneous environments. I want to emphasize the concept of a suite, because it isn't just one product, and this is a big differentiator in the marketplace: it is a comprehensive offering of everything you need for workload management, plus things like monitoring and analytics. It has unmatched experience, with a dominant market share in major industries, a powerful multi-policy scheduling engine, and a reputation for reliability in mission-critical accounts, where it is part of everyday mainstream work processes. That is a key differentiator for us.

Second is Platform HPC, a simplified, integrated HPC management software offering that is typically bundled with systems. Where Platform LSF has typically been targeted at larger accounts, Platform HPC is ready for a cluster or departmental offering that is just getting started. You can then scale up and upgrade into the Platform LSF family, because it is based on the same LSF engine and software, but Platform HPC is an all-in-one integrated solution with a market-leading, very easy-to-use web interface. It can be used in even the smallest of clusters.

Third is the Platform Symphony Family, which is more of an analytics infrastructure: a high-throughput, low-latency, compute- and data-intensive analytics suite of offerings. Across our offerings we have gone through the process of making them into IBM editions. In its advanced edition, Platform Symphony also includes our Big Data offering around MapReduce. Platform Symphony has a leading track record, with over 50% of the major investment banks as customers, which demonstrates scalability and reliability that transfer to many other industries; another great example is how the Big Data problem shows up in areas such as government intelligence. It has higher scalability and performance than other solutions thanks to its fast, low-latency processing capability, and it is used as a grid infrastructure by many of our customers at the enterprise and global level, across multiple geographies.

Finally, there is the Platform Cluster Manager family, which is all about the provisioning and management of clusters.
It scales from our standard edition, which is all about simplifying the administration and management of a multi-cluster environment, to the advanced edition, which supports a much more dynamic infrastructure, all the way to HPC clouds where people can create clusters on demand through an easy-to-use self-service portal. It also supports multi-tenant HPC clouds, for customers that are service providers and deliver unique cluster environments to each of their end customers or groups.

In summary:
- LSF: runs and manages batch workloads of varied complexity
- Symphony: service-oriented, API-driven workloads; extreme throughput / low latency; agile resource sharing and multi-tenancy; Big Data and Hadoop requirements
- Platform Cluster Manager: builds and manages clusters and HPC clouds; provisions and manages Platform LSF, Platform Symphony, and third-party workload managers, including Hadoop clusters
- Platform Cluster Manager Family: provision and manage from a single cluster (Standard) to dynamic clouds (Advanced)
Infrastructure management: compute, storage, network; virtual, physical, desktop, server, cloud; heterogeneous resources
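Since Platform LSF's batch workflow anchors this portfolio, here is a minimal sketch of submitting a job through its standard bsub command, wrapped in Python. The queue name, job name, and binary are placeholders for a real site's configuration; bsub and its -n/-q/-J/-o flags are standard LSF CLI options.

```python
# Minimal sketch: submitting an MPI batch job to Platform LSF via bsub.
import subprocess

def submit_lsf_job(cores: int, name: str, command: str, queue: str = "normal") -> str:
    """Submit a job and return bsub's confirmation line (contains the job ID)."""
    result = subprocess.run(
        ["bsub",
         "-n", str(cores),          # number of job slots requested
         "-q", queue,               # placeholder queue name
         "-J", name,                # job name
         "-o", f"{name}.%J.out",    # stdout file; %J expands to the job ID
         command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

print(submit_lsf_job(16, "actuarial_run", "./my_mpi_job"))
# Follow up with `bjobs` to watch the scheduler dispatch the job across the cluster.
```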
22 Scale: Performance Optimized Stack – From Hardware Up
- Applications: Simulation / Modeling, Analytics, Risk Analysis
- Workload and resource management: Platform LSF, Platform HPC, Adaptive Computing Moab, Maui/Torque
- Global/parallel filesystem: GPFS, Lustre, NFS
- Application libraries: Intel® Cluster Studio, OpenMPI, MVAPICH2, Platform MPI
- Operating systems: RHEL, SuSE, Windows, Ubuntu
- Bare-metal management / provisioning / monitoring: Extreme Cluster Administration Toolkit (xCAT)
- Infrastructure: compute, storage, network; virtual, physical, desktop, server, cloud
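For the xCAT layer named in this stack, the sketch below shows a typical bare-metal provisioning flow using xCAT's standard nodeset, rpower, and nodestat commands. The node range and osimage name are hypothetical and would come from a real site's xCAT tables.

```python
# Hedged sketch of bare-metal provisioning with xCAT.
import subprocess

def xcat(*args: str) -> str:
    """Run an xCAT CLI command and return its stdout."""
    return subprocess.run(list(args), capture_output=True, text=True, check=True).stdout

# Point the nodes at an OS image, then PXE-boot them to install it.
# "node01-node12" and the osimage name are placeholders.
xcat("nodeset", "node01-node12", "osimage=rhels6.5-x86_64-install-compute")
xcat("rpower", "node01-node12", "boot")

# Poll installation progress across the node range.
print(xcat("nodestat", "node01-node12"))
```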
23 Scale: GPGPU Accelerator Optimized Stack – From Hardware Up
- Applications: Life Sciences, Oil and Gas, Finance, Molecular Dynamics
- Workload and resource management: Platform LSF, Platform HPC, Adaptive Computing Moab, Maui/Torque
- Global/parallel filesystem: GPFS, Lustre, NFS
- Application libraries: Intel® Cluster Studio, CUDA, OpenCL, OpenGL
- Operating systems: RHEL, SuSE, Windows
- Bare-metal management and provisioning: Extreme Cluster Administration Toolkit (xCAT)
- Infrastructure: compute, storage, network; virtual, physical, desktop, server, cloud
24 Scale: Cloud Compute Optimized Stack – From Hardware Up
- Applications: public cloud providers, private cloud, MSP/CSP
- Cloud management solutions:
  - OpenStack CE: for customers looking to deploy complete open-source solutions with little to no enterprise features
  - IBM Cloud Manager with OpenStack: optimized with automation, security, resource sharing, and monitoring over OpenStack CE
  - SmartCloud Orchestrator: for customers who require optimized utilization, multi-tenancy, and enhanced security
- Common cloud management platform / common cloud stack: provides server, storage, and network integration, plus access to OpenStack APIs
- Hypervisors: KVM, VMware, Xen, Hyper-V
- Bare-metal management and onboarding: Puppet, xCAT, Chef, SmartCloud Provisioning
- Infrastructure: compute, storage, network; virtual, physical, desktop, server, cloud

The cloud stack is much like the HPC stack: it is founded on solid hardware and open standards. Choice is key for us when it comes to cloud; no one solution is right for everyone, so we aim to allow client choice. NeXtScale's goal is to blend into client environments and best practices without any changes: no changes to the data center and no changes to management, protocols, and tools.
25 NeXtScale – Key Messages: Flexible
- Single architecture with Native Expansion
- Built on open standards
- Optimized for your data center today and tomorrow
- Channel and box-ship capable
- One part number unlocks IBM's service and support
- Flexible storage and energy management
26 Flexible: Native eXpansion – Adding Value, not Complexity
The base node delivers robust and dense raw compute capabilities. NeXtScale's Native Expansion allows seamless upgrades of the base node to add common functionality, all on a single architecture, with no need for exotic connectors or unique components.
- Storage: nx360 M5 + Storage NeX* + RAID card, SAS cable, and HDDs = IBM NeXtScale nx360 M5 with Storage NeX
- Graphics acceleration / co-processing: nx360 M5 + PCI NeX + GPU riser card and GPU/Phi = IBM NeXtScale nx360 M5 with Accelerator NeX
* M5 support to be available with 12Gb version at Refresh 1

Native Expansion means that we can add function and capabilities seamlessly to the basic node, with no need for exotic connectors, unique components, or high-speed back- or mid-planes. To add storage, simply start with the base compute node, then add a Storage NeX tray with a standard RAID card, SAS cable, and hard drives. For acceleration, start with the base compute node, then add a PCI NeX tray with a 2U PCI riser and a passively cooled GPU from NVIDIA or Xeon Phi from Intel, and you have a powerful acceleration solution for HPC, virtual desktop, or remote graphics.
27 Flexible: Designed on Open Standards = Seamless Adoption
- IBM ToolsCenter: consolidated, integrated suite of management tools; powerful bootable media creator and firmware updating
- UEFI and IMM: standards-based hardware that combines diagnostics and remote control, with no embedded software; richer management experience and future-ready
- IPMI 2.0 compliant: use any IPMI-compliant management software (Puppet, Avocent, IBM Director, iAMT, xCAT, etc.); OpenIPMI, ipmitool, ipmiutils, and FreeIPMI compatible
- System monitoring: friendly with open-source tools like Ganglia, Nagios, Zenoss, etc.; use with any RHEL/SuSE (and clones) or Windows based tools
- xCAT: provides remote and unattended methods to assist with deploying, updating, configuring, and diagnosing
- OpenStack ready: deploy OpenStack with Chef or Puppet; Mirantis Fuel, SuSE Cloud, IBM SmartCloud
- Platform Computing: workloads managed seamlessly with Platform LSF; deploy clusters easily with Platform HPC
- SDN friendly: networking direct to the system, with no integrated proprietary switching; support for 1/10/40Gb Ethernet, InfiniBand, FCoE, and VFAs

NeXtScale is also very flexible in its support of open standards. A client needs more than hardware to use IT, so we have designed NeXtScale to support an open stack of industry-standard tools, allowing clients with existing protocols and tools to migrate easily to NeXtScale. We do include things like our Integrated Management Module with its unique value delivery, but we also support standard IPMI for use in a mixed environment. Whether you are running an HPC application that is optimized with Platform Computing or running on top of an industry-standard open-source tool, NeXtScale makes it easy. The same goes for cloud: you can use OpenStack and other industry-standard open stacks, or IBM SmartCloud; both are viable and bring value.
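To make the IPMI 2.0 point concrete, here is a small example of out-of-band management of a node's IMM using ipmitool over the lanplus interface; any IPMI-compliant tool would work the same way. The address and credentials are placeholders for a real deployment.

```python
# Illustrative out-of-band node management via standard IPMI 2.0 using ipmitool.
import subprocess

def ipmi(host: str, user: str, password: str, *cmd: str) -> str:
    """Run an ipmitool command against a remote BMC/IMM over lanplus."""
    return subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password, *cmd],
        capture_output=True, text=True, check=True,
    ).stdout

node = "10.0.0.101"                                           # placeholder IMM address
print(ipmi(node, "USERID", "PASSW0RD", "chassis", "power", "status"))  # query power state
print(ipmi(node, "USERID", "PASSW0RD", "sensor", "list"))              # temperatures, fans, voltages
ipmi(node, "USERID", "PASSW0RD", "chassis", "power", "cycle")          # remote power cycle
```

Because the IMM speaks standard IPMI, the same calls work from Puppet, xCAT, or any other IPMI-aware management layer without vendor-specific agents.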
28 Flexible: Optimized with your Data Center in Mind – today and tomorrow
The challenge: package more into the data center without breaking the client's standards; lower power costs all day long, at peak usage times and slow times; maximize energy efficiency in the data center.
The solution: NeXtScale + IBM innovation.
- Essentials-only design allows more servers to fit into the data center (2X more servers per floor tile, in our rack or yours)
- Designed to consume less power and lower energy costs, at peak and at idle
- Smart power management can drive down power needs when systems are idle, reducing power costs during slow times
- Choice of air-cooled or water-cooled servers, in either IBM racks or existing racks
- Up to 40% more energy efficient, with the lowest operational costs, for the water-cooled solution

NeXtScale is easy to install in existing data centers, and it is also designed to make the most of green-field data centers. Power consumption at peak load is reduced by several methods: (1) we have selected the most energy-efficient parts (voltage regulation, Energy Star PSUs, 1.35V memory options); (2) we design NeXtScale with only the essentials, so there are no extra parts sitting in the system consuming power; (3) we enable clients to control very low-power sleep-state settings in the node that can trim processor power consumption by 40% (figure pending verification) when not at peak performance; (4) we go beyond the node by allowing chassis-level power capping for powerful energy management. Outside the hardware, we offer energy-aware scheduling software as part of Platform Computing LSF; this software allows jobs to be 'tagged' with energy profiles, and the scheduler can then select how and where each job runs to yield the lowest possible power consumption. No matter what you are trying to do, whether packing more performance into an aging data center, reducing power costs when systems are not highly utilized, or reducing total overall energy usage, NeXtScale was designed to deliver more impact per KW.
29 Flexible: How Do You Want Your IT to Arrive?
NeXtScale can ship fully configured, ready to power on:
- Fully racked and cabled
- Labeled with user-supplied naming
- Pre-programmed IMMs and addresses
- Burn-in testing before shipment at no added cost
- Prefer to receive systems in boxes? No problem.
Customer benefits with IBM Intelligent Cluster:
- 75% faster time from arrival to production readiness¹
- One part number needed for entire-solution support, no matter the brand of component
- Save 105 lbs of cardboard, 154.6 ft³ of styrofoam, 288 linear feet of wood, and 21,730 paper inserts¹
¹ Per rack

Another area of flexibility is how you want your IT to arrive. With NeXtScale, there are many choices available. Some clients want everything in piece parts so they can mix and match to build what they want. Others want systems that they have tested and approved to show up configured to their liking. Still others would like complete solutions of racks to arrive ready to plug in. With NeXtScale, the choice is truly yours.

Shipping servers in boxes is not too complex, so let's talk about the value of shipping full solutions. When IBM builds a solution for our clients, we don't just put things in a rack; we cable it, labeling each cable and each node. We program the IMMs (BMCs) and can even provide a burn-in test. Before we ship the rack, we not only fully assemble it, we run a test called Linpack, which stresses every component in the solution for about 2 hours. This early burn-in testing helps us find any parts that are not installed properly or not functioning up to standard. We can provide the test report to the client on request, showing the results for individual racks and confirming they all perform as expected. Most failures of commodity components happen early in life; our aim is to find these at IBM and resolve them before our clients have to deal with them.

So do you want to cut the time from arrival to production by 75%? We'll have it arrive ready to go. Want to treat your complex solution as if it were one machine? We can do that, because your service and support all come from one special part number on the rack. How much does this cost? Either nothing or very little over box shipments. How? Think of a single rack's worth of IT: it all comes in boxes, with plastic, styrofoam, and paper inserts. We save the cost of those materials and put it toward assembly and testing.
30 Flexible: Confidence it is high quality and functioning upon arrival
Is this one rack, or is it more than 9,000 parts? It's both.
- Comprehensive list of interoperability-proven components for building out solutions: IBM servers; IBM and third-party switching; IBM and third-party storage; IBM and third-party software; countless cables, cards, and add-ins
- Best-recipe approach yields confident interoperability; each rack is built from over 9,000 individual parts
- Manufacturing LINPACK test provides a lengthy burn-in on all parts in the solution: confidence the parts are installed and functioning properly; any failing parts are replaced prior to shipment; reduces early-life part fall-out for our clients; consistent performance and quality are confirmed before shipment

IBM is not the only company that provides ingredients or parts that our clients want in these solutions today. Think about your most recent purchase: there were Intel parts, Cisco or Mellanox parts, someone's storage, and the list goes on and on. Through the Intelligent Cluster we assemble a large selection of IBM and non-IBM parts that we know work together properly, because we have tested them. We then deliver solutions built with these approved parts as one part number, with a single warranty that covers everything in the solution. No matter the function or brand, it all carries the same support.

Counting the functioning components in a typical rack: server boards (dozens of independent components each), PCI cards/mezzanines (144), processors (144), DIMMs (576), HDDs (144), cables (360), fans (60), PSUs (18), external switches and their PSUs, PDUs, and hundreds of other server-level components: approximately 9,000 parts in all are stressed.

When you grow rack by rack, you want assurance that what you add to your infrastructure is running right, and that the new equipment is in line with what your other racks are doing. Our testing showcases the predictability of the performance each rack yields, and that is valuable. Predictability can be a beautiful thing.
31 Flexible: Global Services & Support
IBM is a recognized leader in services and support. Speed + quality:
- 57 call centers worldwide with regional and localized language support
- 23,000 IT support specialists worldwide who know technology
- 585 parts centers with 13 million IBM and non-IBM parts
- 94% first-call hardware success rate
- A combined total of 6.8M hardware and software service requests
- 114 hardware and software development laboratories
- Rated #1 in technical support
- Parts delivered within 4 hours for 99% of US customers
- 75% of software calls resolved by first point of contact
Benefits: prevent downtime with proactive, first-rate service; resolve outages faster if they do occur; optimize IT and end-user productivity, and revenue, to enhance business results; protect your brand reputation and keep your customer base; simplify support to save time, resources, and cost.

Now, let's take a closer look at IBM's global client support infrastructure for maintenance and technical support services. No matter where you need help, we can be there with the services you need to help keep your IT infrastructure available and security-rich. There is strength in numbers: IBM has nearly 400,000 employees worldwide (197,000 of those are IBM Global Services employees). We have a support presence in 209 countries/nations, speaking ~127 languages, with both broad global reach and localized support: remote support (by phone or electronic communications) and onsite support when you need it. Plus, we have a state-of-the-art, proven set of communications, problem-management, and logistics processes and tools linking our teams and keeping things running efficiently, to get the right people, parts, and information to you when and where you need them.

Let's begin with our worldwide network of remote technical support centers and technical experts. Today we have 70 support centers around the world. Some are call centers; some are "Level 1" support centers with technical specialists who speak your local language; others are staffed by "Level 2" technical specialists with deeper product knowledge and higher-level technical skills. I also want to highlight that we have a global "last level" of support available to help solve even the most challenging problems and answer tough questions: the IBM support infrastructure includes access to the people who know the technology like no one else, the people involved in its research and development.

We have more than 3,000 scientists and engineers in nine global IBM research laboratories (shown on the map by orange dots), including Almaden (San Jose, CA, USA), Austin (TX, USA), China (Beijing), Haifa (Israel), India (Delhi), Tokyo (Yamato, Japan), Watson (Yorktown Heights, NY, USA), and Zurich (Rueschlikon, Switzerland). And we have 19 major hardware and software product development laboratories (shown by blue dots on the map), plus access to another 97 product labs, to ensure fast response to the most complex support issues.

Lenovo's service commitment: "After the deal closes, IBM will continue to provide maintenance delivery on Lenovo's behalf for an extended period pursuant to the terms of a five-year maintenance service agreement with IBM. Customers who originated contracts with IBM should not see a change in their maintenance support for the duration of the customer's contract."
32 Flexible: NeXtScale Mini-SAS Storage Expansion
Natively expand beyond the node with the onboard mini-SAS port. Simply choose an available storage controller and connect the nx360 node to a JBOD or storage controller of your choice: nx360 M5 (mini-SAS port) + RAID controller + mini-SAS cable + one of V3700 JBOD¹, V3700², V7000³, or DCS3700 JBOD⁴ (ideal for dense storage requirements). The resulting solutions span NFS, low-cost block storage, block storage with secure encryption and compression, virtualized storage, dense and low-cost object storage, Hadoop, analytics, and HPC.
(Presenter note: click a name once for the solution to appear; click a second time to make it disappear, then select a different choice.)
33 Flexible: Dense Storage Customer
- NeXtScale chassis and nodes: 24 x 2-socket E5 v3 nodes; dual-port 10G Ethernet; 2x 1G management ports; SAS HBA with 6Gb external connector
- Storage JBODs: 60 hot-swap drives in 4U; 6 JBODs per rack; 4TB NL SAS disks; pure JBOD, no zoning
- Networking: 1 x 64-port 10G Ethernet switch (optionally 2 switches for redundancy) plus required uplinks; 2 x 48-port 1G Ethernet switches for management (1x dedicated + 1x shared port), connecting to nodes, JBODs, chassis FPCs, and PDUs
- 1.44 petabytes of raw storage per rack!
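The rack-level capacity figure is easy to verify from the numbers on this slide:

```python
# Sanity check of the quoted raw capacity per rack.
drives_per_jbod = 60
jbods_per_rack = 6
tb_per_drive = 4                      # 4TB NL SAS drives
raw_tb = drives_per_jbod * jbods_per_rack * tb_per_drive
print(f"{raw_tb} TB = {raw_tb / 1000} PB raw per rack")   # 1440 TB = 1.44 PB
```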
34 Flexible: Power efficiency designed into HW, SW, and management
Efficient hardware: 80 PLUS Platinum power supplies at over 94% efficiency (saves 3-10%); extreme-efficiency voltage regulation (saves 2-3%); larger, more efficient heat sinks require less air (saves 1-2%); smart sharing of fans and power supplies reduces power consumption (saves 2-3%); underutilized power supplies can be placed into a low-power standby mode; energy-efficient turbo mode; fewer parts = less power; Energy Star Version 2(1); choice of air or water cooling (no fans or auxiliary cooling required for the water-cooled version, saving power cost)
Control beyond the server: pre-set operating modes to tune for efficiency, performance, or minimum power; chassis-level power metering and control; power optimally designed for 1-phase or 3-phase power feeds; optional intelligent and highly efficient PDUs for monitoring and control; powerful sleep-state(2) control reduces power and latency
Powerful energy management: xCAT APIs allow HW control to be embedded into management applications; LSF Energy Aware features allow energy tuning on the fly; Platform software can target low-bin CPU applications to lower power on CPUs in mixed environments; Platform Cluster Manager Advanced Edition can completely shut off nodes that are not in use; friendly to open-source monitoring tools, allowing utilization reporting; autonomous power management for various subsystems within each node
Three pillars of power savings. Smart HW is HW designed to consume less power: Platinum is an industry standard for power supply design that assures the PSU operates at specific efficiency levels, and on top of that we pick power delivery components, like the voltage regulation, that deliver power more efficiently. Just like BladeCenter and Flex, we selected a design with shared power and cooling; this topology reduces the total number of parts needed for the power/cooling solution, saving money in part cost and reducing the number of PSUs and fans, therefore reducing power draw.
S3 allows systems to come back into full production from a low-power state much more quickly than a traditional power-on. In fact, a cold boot normally takes about 270 seconds; with S3 that happens in only 45 seconds. When you know a system will not be used due to time of day or the state of job flow, why not send it into a very low-power state to save power, and bring it back online quickly when needed?
(1) Pending announcement of product (2) On select IBM configurations
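Chassis- and node-level power metering means the utilization reporting mentioned above can be scripted against standard out-of-band interfaces. As a hedged sketch (a generic DCMI query via ipmitool, not the FPC's specific API; host and credentials are placeholders):

```python
# Hedged sketch: poll a node's DCMI power reading via ipmitool.
# Assumes the BMC (IMM) is reachable out-of-band and supports DCMI;
# the host, user, and password below are placeholders.
import subprocess

def read_power_watts(host: str, user: str, password: str) -> int:
    out = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password,
         "dcmi", "power", "reading"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if "Instantaneous power reading" in line:
            # Line looks like: "Instantaneous power reading:   212 Watts"
            return int(line.split(":")[1].strip().split()[0])
    raise RuntimeError("no power reading in ipmitool output")

print(read_power_watts("10.0.0.41", "USERID", "changeme"))
```

Readings collected this way can feed whatever open-source monitoring stack a site already runs.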
35 Flexible: Energy Aware Scheduling
Optimize power consumption with Platform LSF®.
On idle nodes, policy-driven power saving: suspend the node to the S3 state (saves ~60W)**; trigger after the node has been idle for a configurable period of time; policy windows (e.g., 10:00 PM - 7:00 AM); site-customizable to use other suspension methods
On idle nodes, power-saving-aware scheduling: schedule jobs to use idle nodes first (power-saved nodes as a last resort); aware of job requests, waking nodes precisely on demand; safe period before running jobs on resumed nodes; manual management (suspend, resume, history)
On active nodes: ability to set the node/core frequency for a specific job, application, or user; set thresholds based on environmental factors, such as node temperature
Energy-saving policies**: minimize energy to solution, or minimize time to solution, by intelligently controlling CPU frequencies; collect the power usage for an application (AC and DC)**; make intelligent predictions of the performance, power consumption, and runtime of applications at different frequencies**
** Only available on IBM NeXtScale and iDataPlex
A sketch of the idle-node policy concept follows below.
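To make the idle-node policy concrete, here is a minimal, self-contained Python sketch of the concept. This is not LSF's actual interface; the Node class and its fields are hypothetical stand-ins for a resource manager's state:

```python
# Minimal sketch of a policy-driven idle-suspend loop, illustrating the
# concept behind energy-aware scheduling. Node and its fields are
# hypothetical stand-ins, not LSF or xCAT APIs.
from dataclasses import dataclass
from datetime import datetime

IDLE_THRESHOLD_MIN = 30          # suspend after this much idle time
POLICY_WINDOW = (22, 7)          # e.g., 10:00 PM - 7:00 AM

@dataclass
class Node:
    name: str
    idle_minutes: int = 0
    suspended: bool = False

def in_policy_window(now: datetime) -> bool:
    start_hour, end_hour = POLICY_WINDOW
    return now.hour >= start_hour or now.hour < end_hour

def reap(nodes: list[Node]) -> None:
    """Suspend idle nodes to S3: ~60 W saved each while suspended."""
    now = datetime.now()
    for node in nodes:
        if not node.suspended and (
            node.idle_minutes >= IDLE_THRESHOLD_MIN or in_policy_window(now)
        ):
            node.suspended = True  # a real implementation issues the S3 request here

def pick_nodes(nodes: list[Node], needed: int) -> list[Node]:
    """Power-saving-aware placement: awake nodes first, wake suspended ones last."""
    ordered = sorted(nodes, key=lambda n: n.suspended)  # awake (False) sorts first
    chosen = ordered[:needed]
    for node in chosen:
        node.suspended = False     # wake on demand, just in time for the job
    return chosen

nodes = [Node("n01", idle_minutes=45), Node("n02"), Node("n03", suspended=True)]
reap(nodes)
print([n.name for n in pick_nodes(nodes, 2)])
```

The payoff of waking on demand comes from the S3 resume times on the previous slide: roughly 45 seconds from S3 versus about 270 seconds for a cold boot.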
36 NeXtScale – Key Messages
SIMPLE
The back is now the front: simplify management and deployment
Get in production faster with Intelligent Cluster
Optimized shared infrastructure without compromising performance
"Essentials only" design
37 Simple: Making management and deployment simple
Competition: "It's so dark in here." "Which cable do I pull?" NeXtScale: "You don't have to be in the dark." "Know what cable you are pulling."
Stay in front of the rack and see things better; reduce service errors when maintaining systems; quick access to servers; tool-less design speeds problem resolution; add, remove, and power servers without touching the power cables (remove power in the rear, from the right system, before pulling a system out from the front).
One way we make NeXtScale simple is by making everything front access. Why front access? It's hard to see in the back of the rack, where lighting is very dim. That makes it very difficult to locate a dense server in a row of dense racks; having everything in the front simplifies things and reduces the chance of mis-cabling or pulling the wrong server. Everything is right there in front of you: power button, cabling, alerts, node naming and tagging. Very easy. Also, keeping cables from clogging up the back of the rack is good for airflow and thus good for energy efficiency; the harder the fans work to move air through the server, the more power they consume.
People who work in the data center will tell you that the front of the rack is a much more enjoyable environment to spend time in. It can easily be 30°F cooler in front than in back (cold aisle: 65-80°F; hot aisle: >100°F). Which aisle do you want to work in? People can't stay in the rear of the rack very long without getting uncomfortable, and the front of the rack is also less noisy than the rear due to fan noise.
NeXtScale servers and chassis need no tools to service the hardware. With the push of a button, the top cover can be removed, and servicing components such as hard drives, memory, and IO cards is as easy as pushing and pressing on levers to remove and replace them. This all makes for very simple management and deployment.
38 Simple: In Production Faster - ENI Client Example
Intelligent Cluster significantly reduces setup time, getting clients into production at least 75%1 faster than non-Intelligent Cluster offerings.
Solution overview: 1500 server nodes in 36 racks; 3000 NVIDIA K20x GPGPU accelerators; FDR InfiniBand, Ethernet, GPFS; enclosed in a cold-aisle containment cage; #11 on the June 2014 TOP500 list; delivered fully integrated to the client's center; HW inside delivery and installation included at no additional cost; TOP500 Linpack run completed successfully 10 days after the first rack arrived; all servers pre-configured with customer VPD in manufacturing; entire solution delivered and supported as 1 part number; full interoperability test and support; one number to call for support regardless of component.
Included: interoperability tested (yes); HPL (Linpack) stressed and benchmarked in manufacturing; IBM HW break-fix support (yes, all components); inside delivery and HW installation; bare-metal customization available at no charge. Result: production ready.
Actual client example. When installed it was the 6th largest supercomputer in the world, the largest in Europe, and the largest Intel TOP500 entry. Even today it still ranks near the top of the list. Imagine getting a solution of this size up and running this quickly. The 75% reduction in time is a comparison to the time it would have taken to receive boxed servers and components, install them in racks, cable, program, and start up. With IBM and Intelligent Cluster, the racks all arrived nearly ready to power on.
1 Comparison of install time for a complete roll-your-own installation versus IBM Intelligent Cluster delivery
39 Simple: Save Time and Resources with Intelligent Cluster
IBM fully integrates and tests your cluster, saving time and reducing complexity.
Installation steps compared per rack (Intelligent Cluster vs. without): move servers to DC (15 min vs. 40 min); install rail kits (30 min); install servers (2 hr); cable Ethernet; cable InfiniBand; rack-to-rack cabling (1 hr); power-on test (10 min); program IMMs; program TOR switches; collect MAC addresses & VPD (30 min); provision; HW verification. With Intelligent Cluster, most of these steps are completed in manufacturing before delivery.
TOTAL TIME: 2-1/2 hr with Intelligent Cluster vs. 9-1/2 hr without. SAVE ~7 hrs per rack!
Another facet of simplicity is the time and resources that can be saved with Intelligent Cluster. With Intelligent Cluster, IBM handles the integration and testing of your cluster, saving significant setup time and complexity during installation and getting clients into production at least 75% faster. We already talked about many of the details of shipping full systems on a previous slide, all handled through Intelligent Cluster. IBM takes care of tasks like installing the rail kits and servers, cabling, conducting power-on tests, and the required programming, saving clients an average of 7 hours per rack. This is an area of simplicity and savings that every customer will appreciate.
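For a sense of scale, a back-of-the-envelope calculation using the 36-rack ENI deployment from the previous slide, assuming the ~7 hr/rack figure applies uniformly:

```python
# Back-of-the-envelope: on-site integration time saved by Intelligent Cluster.
hours_without = 9.5   # per rack, roll-your-own installation
hours_with = 2.5      # per rack, Intelligent Cluster delivery
racks = 36            # e.g., the ENI deployment on the previous slide

saved_per_rack = hours_without - hours_with          # ~7 hours
print(f"{saved_per_rack:.1f} h/rack, {saved_per_rack * racks:.0f} h total")
# -> 7.0 h/rack, 252 h total (roughly six work-weeks of rack-and-stack labor)
```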
40 Simple: The Advantages of Shared Infrastructure without the Contention
Shared power supplies and fans: 90% reduction in fans1; 75% reduction in PSUs1 (see the worked numbers below)
Each system acts as an independent 1U/2U server: individually managed; individually serviced and swappable
Use any top-of-rack (TOR) switch: no contention for resources within the chassis; direct access to network and storage resources; no management contention
Lightweight, low-cost chassis: simple midplane with no active components; no in-chassis IO switching; no left- or right-specific nodes; high-efficiency PSUs and fans; no unique chassis management required
IBM pioneered the idea of shared power and cooling in x86 with BladeCenter in 2002. That design allowed our engineers to pull fans and power supplies off the individual servers and share them in the chassis, reducing the total number of parts, reducing cost, and reducing power consumption. This was a great thing and we still love it today, so NeXtScale shares these important assets. Where NeXtScale differs from BladeCenter is in the integration of the other critical components, like systems management and networking. BladeCenter uses a very cool midplane and Advanced Management Module (AMM) to route IO from server to switch and to manage the entire chassis; for many clients this is perfect. NeXtScale went in a different direction: much like iDataPlex, the chassis has no unique management and no IO flowing through it. Everything is done at top of rack, which essentially means that all cables are routed to an external switch (usually, but not always, at the top of the rack). TOR designs allow a great deal of flexibility in choosing brands and types of IO, but there is external cabling involved. Which approach is best, the integrated Blade/Flex route or NeXtScale's route? That depends on the client, their skill level, existing tools, and so on. The same idea applies to management: BladeCenter and Flex give you a single pane of glass for consolidated management and control, while NeXtScale is designed to be used with existing management tools and protocols. A different approach, and the best thing is that one of these two approaches should appeal to nearly all our clients. BladeCenter, Flex, and NeXtScale are not competitive products; they complement one another very nicely. Our job is to make sure clients see the full story and help them decide which approach is best.
1 Compared with a typical 1U server with 8 fans and 2 PSUs
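Where do the reduction figures come from? A quick check, using the footnote's baseline of 8 fans and 2 PSUs per typical 1U server and the chassis' 6 shared PSUs; the shared fan count of 10 is an assumption here, not stated on this slide:

```python
# Where the reduction figures come from (per 6U chassis of 12 nodes).
# Baseline per the footnote: a typical 1U server has 8 fans and 2 PSUs.
# The shared chassis has 6 PSUs; 10 shared fans is an assumption.
nodes = 12
baseline_fans, baseline_psus = nodes * 8, nodes * 2   # 96 fans, 24 PSUs
chassis_fans, chassis_psus = 10, 6

print(f"fans: {1 - chassis_fans / baseline_fans:.0%} reduction")   # ~90%
print(f"PSUs: {1 - chassis_psus / baseline_psus:.0%} reduction")   # 75%
```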
41 Simple: "Essentials Only" Design
Only includes the essentials: two production 1Gb Intel NICs, with dedicated or shared 1Gb for management; standard PCI card support; flexible ML2/mezzanine for IO expansion; power, basic light path diagnostics, and KVM crash-cart access; simple pull-out asset tag for naming or RFID; not painted black, just left as silver; clean, simple, and low cost; blade-like weight and size with rack-server-like individuality and control.
NeXtScale delivers basic, performance-centric IT:
"I can't see my servers; I don't care what color they are."
"I don't use 24 DIMMs, so why pay for a system that holds them?"
"I only need a RAID mirror for the OS; I don't want extra HDD bays."
"I only need a few basic PCI/IO options."
This is a close-up of the front of a NeXtScale nx360 M5. What's here? All the essential items you need in a server: NICs, BMC access, power, alerts, and IO access. It's even got a nifty pull-out tab at the bottom of the node for labeling or asset tagging, and a crash-cart access port.
You will notice this server is not black. Most of these systems are not going to sit next to people's desks; they will be locked away in the data center, out of sight. And going back to our race-car design approach: how much more performance do we get by painting it black? None, right? So we chose not to paint it and to leave the corrosion-resistant metal exposed as proof that this is a different kind of server. All that said, you have to admit it still looks pretty cool, doesn't it?
NeXtScale will not be for everyone. We took specific steps to remove function to improve efficiency or lower costs, so it will be important to understand the workload to make sure that NeXtScale is the right choice. If not, don't worry: the entire System x lineup is waiting for you.
NeXtScale nx360 M5
42 Introducing IBM NeXtScale System with Water Cool Technology 2014 LENOVO INTERNAL. All rights reserved.
43 3 Ways to Cool your Datacenter
Air Cooled: standard air flow with internal fans; good for lower kW densities; less energy efficient; consumes more power (higher OPEX); typically used with raised floors, which add cost and limit airflow out of tiles; unpredictable cooling (hot spots in one area, freezing in another)
Rear Door Heat Exchangers: air cooling, supplemented with an RDHX door on the rack; uses chilled water; works with all IBM servers and options; rack becomes thermally transparent to the data center; enables extremely tight rack placement
Direct Water Cooled: 100% water cooled; no fans or moving parts in the system; most energy-efficient datacenter; most power-efficient servers; lowest operational cost; quieter due to no fans; run processors in turbo mode for max performance; warm-water cooling means no expensive chillers required; good for geographies with high electricity cost
There are 3 basic ways to cool a datacenter: standard air cooling, Rear Door Heat Exchangers, or direct water cooling. The air-cooled method uses standard air flow with internal fans. While it allows the broadest selection of supported server functions, it is not as energy efficient as the water-cooling options, resulting in higher operational expense and unpredictable cooling in the datacenter. Rear Door Heat Exchangers can be added to air-cooled racks to provide supplemental heat removal using chilled water, allowing more efficiency and density within the datacenter. Finally, direct water-cooled systems bring 100% water cooling directly to the server with no internal fans. It is the most energy efficient of the cooling options, meaning even lower operational cost in the datacenter, which is especially important in geographies with high electricity cost. Other benefits include a quieter datacenter environment due to no fans, and improved performance with optimal, continuous support of Turbo mode.
NeXtScale now offers all 3 of these options for customers. I've already described what's new in our air-cooled server. Next, let's take a look at the new water-cooled version being introduced into the NeXtScale family this year.
44 NeXtScale System with Water Cool Technology
Achieve extreme scale with ultimate efficiency.
40% more energy-efficient data center1: requires no auxiliary cooling3; no chillers required due to warm-water cooling (up to 45°C)3
10% more power-efficient server2: no fans required for compute elements (small power-supply fans only); lower operational costs, and quieter
85% heat recovery by water: re-use the warm water to heat other facilities; run processors at higher frequencies (Turbo mode)
NeXtScale System with Water Cool Technology drives extreme efficiency into the datacenter, making for a greener environment. Water Cool Technology enables up to 40% greater energy efficiency in the data center and 10% greater power efficiency at the server level compared to air-cooled solutions. This is largely due to the absence of fans for the compute nodes, which not only saves power but also makes for a much quieter environment. Over 85% of the heat in the system is recovered by the water cooling and can then be re-used for other purposes, such as heating facilities and buildings. There is also a significant advantage in that water inlet temperatures up to 45 degrees Celsius are supported, which makes expensive water chillers unnecessary and saves significant capital expense. The operational cost savings associated with Water Cool Technology result in quick payback of the initial investment and continued savings for a lower total cost of ownership. This is particularly essential in geographies with high electricity costs or a high cost of floor space. Going beyond the efficiency benefits, Water Cool Technology improves performance as well: it enables processors to run continuously in Turbo mode, allowing them to run at higher frequencies.
1 Based on comparisons between air-cooled and water-cooled IBM iDataPlex M4 servers 2 LRZ, a client in Germany; DC numbers 3 Geography dependent
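To make the 85% heat-recovery figure concrete, a rough energy-balance sketch; the node count and per-node draw below are purely illustrative assumptions, not figures from this deck:

```python
# Rough energy balance for heat recovery with direct water cooling.
# 85% of system heat is captured by the water loop (per the slide);
# the node count and per-node draw are illustrative assumptions.
nodes = 72                 # e.g., six 12-node chassis in one rack
watts_per_node = 350       # assumed average draw under load

it_load_kw = nodes * watts_per_node / 1000
recovered_kw = 0.85 * it_load_kw
print(f"IT load ~{it_load_kw:.1f} kW; ~{recovered_kw:.1f} kW recoverable as warm water")
# -> IT load ~25.2 kW; ~21.4 kW recoverable for heating facilities
```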
45 NeXtScale nx360 M5 WCT Dual-Node Compute Tray
nx360 M5 WCT Compute Tray (2 nodes): 2 compute nodes per full-wide 1U tray; water circulated through cooling tubes for component-level cooling; dual-socket Intel Xeon E5-2600 v3 processors (up to 18 cores); 16x DIMM slots (DDR4, 2133MHz); InfiniBand FDR support via choice of ConnectX-3 ML2 adapter or Connect-IB PCIe adapter; onboard GbE NICs
Callouts: labeling tag; 1 GbE ports; power and LEDs; PCI slot for InfiniBand (Connect-IB); dual-port ML2 (IB/Ethernet); x16 ML2 slot; CPUs with liquid-cooled heatsinks; 16x DIMM slots; cooling tubes; water inlet and outlet
Simple architecture, system infrastructure.
NeXtScale System with Water Cool Technology, a new addition to the System x family, uses an innovative direct water-cooling design to more efficiently cool system components such as processors, memory, and I/O. It adds 3 new components: a new compute tray, a new enclosure, and a manifold assembly. The water-cooled compute tray, called nx360 M5 WCT, is a full-wide 1U tray that holds 2 compute nodes. Based on the same planar technology as the air-cooled node, these nodes also support Intel Xeon E5-2600 v3 processors, 16x slots of DDR4 memory, and InfiniBand FDR. However, rather than using fans, water is circulated through cooling tubes within the server to cool the processors, memory DIMMs, IO, and other heat-producing components, supporting water inlet temperatures up to 45 degrees Celsius and making expensive water chillers unnecessary.
46 nx360 M5 WCT Compute Tray (2 nodes) – Front Panel
2 compute nodes per tray; 6 trays per 6U chassis (12 servers); dual x16 ML2 slot supports InfiniBand FDR (optional); PCIe adapter support for Connect-IB or Intel QDR (optional); GbE dedicated and GbE shared NIC
Front panel* (node #1 and node #2): dual-port ML2 (IB/Ethernet); KVM connector; LEDs (power, location, log, error); PCI slot for InfiniBand; 1GbE / shared NIC; power and LEDs. Rear view: water inlet, water outlet.
Here is a view of the front of the water-cooled compute tray. As you can see, it includes 2 identical side-by-side compute servers with the same front-access features, such as dual x16 ML2 slots for FDR, GbE NIC ports, and LEDs.
nx360 M5 WCT Compute Tray specifications:
Form factor: full-wide tray with two planars
CPU/chipset (per planar): dual CPU, Socket R3; Intel Haswell-EP 12/14/16/18-core processors; Intel Wellsburg chipset; QPI (QuickPath Interconnect); TPM 1.2 chip down
NIC (per planar): Broadcom 5717 NIC, C-step; 10Gbit support on ML2 slot
Memory (per planar): 16x DIMM sockets, DDR4 2133MHz; 256GB assuming 16GB DIMMs; memory mirroring and sparing supported; Flash DIMM (supported in Ref models)
BMC (per planar): IMM v2.1 with remote presence support option; SH7758 with video; NM 3.0
Card slots (per planar): one x16 ML2 slot (supports 50mm height only)
Options: ML2 riser; FoD on selected options; ML2 FDR adapter
Systems management / software: IMM 2.1; UpdateXpress; Dynamic System Analysis with integrated real-time diagnostics; ServerGuide
Power: Energy Star 2.0; supports power supplies for current Fei-hu
* Configuration dependent. Configuration includes ML2 and PCI adapters.
47 NeXtScale n1200 WCT Enclosure – Water-Cooled Chassis
IBM NeXtScale n1200 WCT Enclosure: 6U chassis with 6 bays; each bay houses a full-wide, 2-node tray (12 nodes per 6U chassis); up to 6x 900W or 1300W power supplies in N+N or N+1 configurations; no fans except in the PSUs; Fan and Power Controller; drip sensor and error LEDs for detecting water leaks; no built-in networking; no chassis management required
Front view: shown with 12 compute nodes installed (6 trays). Rear view: Fan and Power Controller; rear fillers/EMC shields; 3x power supplies on each side.
Simple architecture, system infrastructure.
The n1200 WCT Enclosure is a new 6U chassis for water cooling containing 6 full-wide bays, each holding one of the full-wide compute trays. This allows 12 compute servers per 6U chassis, the same as for air cooling. It still includes the 6 power supplies, but no fans are required at all (other than in the PSUs), making for a quieter and lower-power environment.
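As a rough sketch of what the two redundancy options mean for usable power, using the PSU count and ratings from this slide (real power budgeting also involves derating and policy settings, which are omitted here):

```python
# Simplified view of chassis power redundancy with six 1300 W supplies.
# N+N tolerates the loss of a full feed; N+1 tolerates one PSU failure.
psu_watts = 1300
total_psus = 6

n_plus_n = (total_psus // 2) * psu_watts   # 3+3: 3900 W guaranteed per feed
n_plus_1 = (total_psus - 1) * psu_watts    # 5+1: 6500 W with one PSU spare

print(f"N+N usable: {n_plus_n} W, N+1 usable: {n_plus_1} W")
```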
48 nx360 M5 WCT Manifold Assembly
Manifolds deliver water directly to and from each of the compute nodes within the chassis via water inlet and outlet quick-connects. The modular design enables multiple configurations via a sub-assembly building block per chassis drop. 6 models: 1, 2, 3, 4, 5, or 6 chassis drops.
The WCT manifold is what delivers the water from the CDU to each of the nodes. Each manifold section attaches to a chassis and connects directly to the water inlet and outlet connectors of each compute node in order to safely and reliably deliver water to and from each server. The manifold is very modular and comes in multiple configurations based on the number of chassis drops required in the rack; anywhere from 1 to 6 chassis can be supported in a single rack.
Callouts: n1200 WCT chassis; 6-drop manifold; single manifold drop (1 per chassis)
49 NeXtScale M5 Product Timeline
Currently shipping (announce: Sept. 8, 2014; shipments begin: Sept. 30, 2014; GA: Nov. 19, 2014): nx360 M5 compute node (air cooled and water cooled); supports existing 6U chassis (air) and new 6U chassis (water); 14 processor SKUs; PCI NeX support (GPU/Phi); n1200 chassis; nx360 M4; Storage NeX; PCIe NeX
Refresh 1 (GA: Jan 2015): 8 additional processors; NVIDIA K80 support; Storage NeX 12Gb support; mini-SAS port; -48V DC power supply
A lot more coming: more storage; more accelerators; more IO options; next-generation processors; 4-GPU / 4 hot-swap-drive tray; EDR support; broader SW ecosystem; OS support
NeXtScale is an architecture that will continue to develop for many years. We are currently shipping a dense chassis with a choice of compute, storage, and GPU/PCI nodes. In November, we will begin shipping the new M5 compute node as well as the water-cooled version of NeXtScale with the new compute server and chassis. By the end of this year, we will add support for more processors, GPUs, the storage tray, and the -48V DC power supply. Next year and beyond, we are looking forward to a lot more: more storage, more accelerators, more IO innovations, and next-generation processors will all come to market and build out an even more powerful and flexible ecosystem.
50 NeXtScale Solutions 2014 LENOVO INTERNAL. All rights reserved.
51 IBM Intelligent Cluster™ Application Ready Solutions simplify HPC and speed delivery
Developed in partnership with leading ISVs, based on reference architectures.
IBM Platform™ LSF®: workload management platform with intelligent, policy-driven scheduling features
IBM Platform Symphony: run compute- and data-intensive distributed applications on a scalable, shared grid
IBM Platform HPC: out-of-the-box features reduce the complexity of the HPC environment
IBM Platform Cluster Manager: quickly and simply provision, run, manage, and monitor HPC clusters
Built on IBM NeXtScale System™ compute nodes and delivered as IBM Intelligent Cluster™, spanning applications, accelerators, grid, cloud, cluster, networking, and storage.
IBM has a rich portfolio of hardware and management software products specifically designed for high-performance computing. From these components, (1) we created solutions to help increase competitive advantage by accelerating product design; (2) the solutions are optimized for a lower total cost of ownership; (3) they are tailored to specific application areas to provide integrated, end-to-end experiences; and (4) they are designed to maximize user productivity and minimize administration cost.
Backup: IBM is committed to making HPC available for broad adoption, without sacrificing performance or scalability. We help our clients by providing integrated solutions that lower costs by maximizing utilization and productivity in your environment.
52 IBM Application Ready Solution for CLC bio
Next Generation Sequencing: accelerate time to results for your genomics research.
"It has been a pleasure to work with IBM, optimizing our enterprise software running on the IBM Application Ready Solution for CLC bio platform. We are proud to offer this pre-configured, scalable high-performance infrastructure with integrated GPFS to all our clients with demanding computational workloads." - Mikael Flensborg, Director of Global Partner Relations, CLC bio, a QIAGEN® company
Easy-to-use, performance-optimized solution architected for CLC bio Genomics Server and Workbench software; client support for the increased demand for genomics sequencing; drives down cost and speeds up the assembly, mapping, and analysis involved in the sequencing process with an integrated solution; the modular solution approach enables easy scalability as workloads increase. Learn more: solution brief, reference architecture.
Use case: Next Generation Sequencing. Three configurations by weekly workload:
Workload size: 15 human genomes (37x) or 120 human exomes (150x) per week | 30 human genomes (37x) or 240 human exomes (150x) per week | 60 human genomes (37x) or 480 human exomes (150x) per week
Head node (x3550 M4): Single | Single | Dual
Compute (# of x240 or nx360 nodes): 3 | 6 | 12
Compute, disk per node: 2 TB
Compute, memory per node: 128 GB
Storwize V7000 Unified (TB): 20 | 55 | 90
10 Gigabit switch / adapters: yes
Management software: IBM Platform HPC, Elastic Storage (based on GPFS)
These configurations and sizings are based on IBM Flex System x240 with Intel Ivy Bridge processors (2x Intel Xeon E5-2680 v2, 115W, 2.8GHz). Currently the reference architecture is based on Sandy Bridge processors, but work is underway to update it to match the Ivy Bridge configs in this table and in the IBM CLC bio solution brief published Dec 2013.
53 IBM Application Ready Solution for Algorithmics
Optimized high-performance solution for risk analytics.
Easy-to-use, performance-optimized solution architected for the IBM Algorithmics Algo One solution. Supported software: Algo Scenario Engine; Algo RiskWatch; Algo Credit Manager; Algo Risk Application; Algo Aggregation Services (Fanfare). Easy-to-deploy, integrated, high-performance cluster environment; based on a "best practices" reference architecture, which lowers risk; a user-friendly portal provides easy access to, and control of, resources. Read the analyst paper.
"Many firms will benefit from the Application Ready Solution for Algorithmics to accelerate risk analytics and improve insight. This solution helps lower costs and mitigate IT risk by delivering an integrated infrastructure optimized for active risk management." - Dr. Srini Chari, Managing Partner, Cabot Partners
Use case: Algo One risk analysis. Three configurations (Small | Medium | Large):
Management node for Algo / PCM: 1 | 1 (can be shared) | 1 (can be shared)
Management node for Symphony: 2 (shared)
Compute servers (x240 or nx360): 6 | 14 | 36 (or more)
Compute cores (total): 96 | 224 | 574
Compute, total memory (GB): 768 | 1792 | 4608
Elastic Storage (GPFS) servers: None | 2 | 2
Storage (V3700 SAS, shared storage): GPFS, 31 TB | GPFS, 62 TB | GPFS, 124 TB
Network: FDR IB switch / adapters; 10 GbE
Software: IBM Platform Cluster Manager (PCM), IBM Platform Symphony, Elastic Storage, DB2 Enterprise (opt.)
This reference architecture is based on Flex System x240 or IBM NeXtScale System with Intel Ivy Bridge processors (Xeon E5-2600 v2). As you can see, there are three configurations for this solution, for Algo risk scenarios at various levels of complexity (simple, medium, and high), as the performance and size of the cluster depend on the response time required along with the number of potential positions held and their dependencies on each other. This solution assumes Storwize V3700 SAS-attached storage with GPFS file and data management clients and servers. A quick consistency check of the sizing follows below.
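The table's totals are consistent with dual-socket nodes using 8-core Ivy Bridge CPUs (16 cores per node) and 128 GB per node; those per-node figures are inferred here, not stated on the slide:

```python
# Consistency check of the Algo One sizing table above.
# Assumed (inferred, not stated): 16 cores/node (2 x 8-core E5-2600 v2)
# and 128 GB of memory per node.
configs = [("Small", 6, 96, 768), ("Medium", 14, 224, 1792), ("Large", 36, 574, 4608)]
for name, nodes, cores, mem_gb in configs:
    print(name, nodes * 16 == cores, nodes * 128 == mem_gb)
# Small and Medium match exactly; Large computes to 576 cores vs. the
# published 574 (figure kept as printed). Memory matches throughout.
```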
54 IBM Application Ready Solution for ANSYS
Simplified, high-performance simulation environment for Computational Fluid Dynamics (Fluent, CFX) and Structural Mechanics (ANSYS Mechanical). Three configurations:
Workload size (CFD): 1 job of 15+M cells using all 120 cores, or 6 jobs of 2.5+M cells each | 1 job of 25+M cells using all 200 cores, or 10 jobs of 2.5+M cells each | 1 job of 200+M cells using all 840 cores, or 20 jobs of 10+M cells each
Workload size (structural): 4 large jobs of 2–5 MDOF each | 10 large jobs of 2–5 MDOF each | 15 large jobs (MDOF each)
Head node (nx360 M4): Single
Compute (nx360 M4), quantity: 6 | 10 | 42
Processor: Xeon E5-2600 v2 10C | E5-2600 v2 10C | E5-2600 v2 8C
Memory (GB): 128 | 256 | 256
Disk: diskless | 2x 800 GB SSD | 2x 800 GB SSD
GPU node* (PCI NeX): 2x NVIDIA K40 | 2x K40
Visualization*: NVIDIA GRID K2 | K2 | 2x K2
File system*: DS3524 - yes
Network: Gigabit | FDR IB | no
Management software: Platform HPC or Platform LSF; Elastic Storage (GPFS file system, optional)
* Optional
The configuration shown is based on IBM NeXtScale System™. IBM Flex System™ x240 with Xeon E5-2600 v2 compute nodes is also available. Both systems are available to order as IBM Intelligent Cluster™. To learn more, read the solution brief and reference architecture.
55 Call to Action
Lead with NeXtScale on all x86 Technical Computing (HPC) opportunities. Look for NeXtScale opportunities in cloud computing, datacenter infrastructure, data analytics, and virtual desktop. Evaluate customers' energy-efficiency requirements to assess whether water cooling is appropriate for them. Utilize customer seed systems. Learn more about NeXtScale from the links on the Resources page.
So here are a few points I want to make as a call to action. First, lead with NeXtScale on all of your x86 HPC opportunities. Also look for opportunities where NeXtScale can be used in public and private cloud, as well as datacenter infrastructure, data analytics, and virtual desktop; in fact, NeXtScale is now our lead vehicle for 3D VDI opportunities. Make customers aware of our new water-cooled offering and learn more about their efficiency requirements; Water Cool Technology might be right for them. Next, please engage us for seed systems for NeXtScale M5. We have a loaner pool established for NeXtScale M5 and will be more than happy to send seeds to your clients as needed. If you have a unique bid that requires some analysis of power and HVAC data, please engage Lab Services for help with these requirements. Also, don't forget to leverage the BidWinRoom; it's a great forum that will help you take cost out of your total solution. And again, learn more about NeXtScale from the links on the previous Resources page. There you will find a sales kit for IBMers and Business Partner sellers, and a lot of useful content to help you know more about the NeXtScale offerings.
56 IBM NeXtScale M5 – Resources / Tools
Product resources: Announcement Page (Link); Announcement Webcast replay (Link); Product Page (Link); Data Sheet (Link); Product Guide (Link); Virtual Tour, Air Cool (Link); Virtual Tour, Water Cool (Link); Product Animation (Link); Infographic (IBM); Blog (Link)
Benchmarks: SPEC_CPU, NeXtScale nx360 M5 with Xeon E5-2600 v3 (Link); SPEC_CPU, NeXtScale nx360 M5 with Xeon E5-2600 v3 (Link)
Sales tools: Sales Kit (IBM PW); Seller Presentation (IBM PW); Client Presentation (IBM PW); Sales Education (IBM PW); Technical Education (IBM PW); Seller Training Webcast: NeXtScale M5, GSS (Link); VITO Letters (IBM PW); Quick Proposal Resource Kit (Link)
Client videos: Caris Life Sciences (Link); Hartree Centre (Link); University of Notre Dame (Link)
Analyst papers: Cabot Partners: 3D Virtual Desktops by Perform (Link); Intersect360: Hyperscale Computing, No-Frills Clusters at Scale (Link)