Beyond Marvel & Superdome: HP’s Next-Generation Server


1 Beyond Marvel & Superdome: HP’s Next-Generation Server
HP World 2004. Presented by Terry C. Shannon, Publisher of Shannon Knows HPC, to the HP Marketing & BeLux User Group.

2 The Fine Print
The following text reflects legal requirements:
I am not, nor have I ever been, an HP or Intel employee
I am not being compensated to deliver a marketing pitch
This presentation reflects my opinion, not HP's opinion
No NDA material is contained in this presentation
I strive for accuracy, but offer no guarantees
Trust but verify: always get a second opinion
Make no purchasing decisions based on this session
Always check with HP before planning your purchases
Above all, enjoy the presentation
Please excuse my "American English"
Comments?

3 Session Agenda
Where HP is today
Current Superdome, PA-RISC, and IPF technology
Anticipated improvements in the next several years
What HP must do to remain competitive in ~2008
Gotchas… HP can't reinvent the wheel
Critical technology rules of thumb
Design criteria
Getting from 2004 to 2008
SKHPC's speculation about the HP server of 2008
And some thoughts on what to do with all that power!

4 Future Server, Current Progeny
Today's BCS enterprise systems portend the future: Alpha and Superdome
Integrity Superdome in more detail
Superdome in the next three years
Superdome's role in the future server
And some things you can do with all that performance!

5 HP Superdome Family… Today
16-way: 2 to 16 CPUs, 32 to 128 DIMM slots, 24 to 48 PCI slots, 1 to 4 nPartitions
32-way: 4 to 32 CPUs, 32 to 256 DIMM slots, 24 to 48 PCI slots, 1 to 4 nPartitions
64-way: 8 to 64 CPUs, 64 to 512 DIMM slots, 48 to 96 PCI slots, 1 to 8 nPartitions
64-way with expanded I/O: 8 to 64 CPUs, 64 to 512 DIMM slots, 48 to 192 PCI slots, 1 to 16 nPartitions

6 HP Superdome: Built for the Long Haul
2H 2001: PA-8700, a 98 percent performance increase to 389k tpmC
Mid-2002: PA-8700+ at 875 MHz; 64 CPUs, 256 GB RAM, new I/O cards
Today: Itanium®*; 64 CPUs, 512 GB RAM; HP-UX, Windows, Linux; PCI-X; online addition of cell boards; cell-local memory
1H 2004: PA-8800; 128 CPUs, 1 TB RAM; HP-UX; online replacement and deletion of cell boards
Soon: Itanium; 128 CPUs, 1 TB RAM; HP-UX, with Windows, Linux, and VMS in the future
*based on the Itanium processor available at that point in time
Given our combined strengths in the UNIX market, you will be able to take advantage of leadership functionality and performance as well as support by a very large base of market-leading ISVs and applications. We plan to continue to offer the best UNIX in the industry as we integrate leading features of Tru64 UNIX into HP-UX.

7 Superdome system view
[System diagram: cabinet blowers, cell backplane, backplane power boards, cables, utilities, I/O chassis (2 or 4), PDCA, cable groomer, I/O fans, I/O chassis power supplies, and leveling feet]

8 An Inside Look at Superdome

9 SD Interconnect Fabric: Crossbar Mesh
Fully connected crossbar mesh: four crossbars, four cells per crossbar
All links have equal bandwidth and latency, which minimizes latency and maximizes usable bandwidth
Implements a point-to-point packet filtering and routing network, allowing hardware isolation of all faults
Interconnects 16 cells with three latency domains:
cell local, ~200 ns
crossbar local, ~300 ns
remote crossbar, ~350 ns
[Diagram: two Superdome cabinets, each containing cells, I/O, and backplane crossbars, linked crossbar to crossbar]
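The three latency domains above invite a quick back-of-envelope model. The sketch below is mine, not HP's: the latencies come from the slide, but the access-distribution mix is invented purely for illustration.

```python
# Rough NUMA latency model for the Superdome crossbar mesh, using the
# approximate per-domain figures from the slide.
LATENCY_NS = {"cell_local": 200, "xbar_local": 300, "xbar_remote": 350}

def avg_latency(mix):
    """Weighted average memory latency for a given access-distribution mix."""
    assert abs(sum(mix.values()) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(LATENCY_NS[domain] * frac for domain, frac in mix.items())

# Hypothetical workload: most accesses are satisfied on the local cell.
mix = {"cell_local": 0.7, "xbar_local": 0.2, "xbar_remote": 0.1}
print(avg_latency(mix))  # about 235 ns
```

The point of the slide's "flat" design shows up here: because the worst-case domain is only ~1.75x the best case, the weighted average stays close to cell-local latency even for partition-spanning workloads.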

10 Chipsets Count in Cellular Architectures
[Diagram: a 16-cell system. Each cell board carries four 1.5 GHz Intel Itanium 2 processors, memory, and an HP sx1000 chipset cell controller; each cell connects to a 12-slot PCI-X I/O card cage and, through the crossbar interconnect switch, to the other cells (2 through 16).]

11 Inside Superdome: New sx1000 chipset
Supports the PA-8800, PA-8900, and all Itanium CPUs
Cell controller, cell board, and CPUs change
Memory DIMMs, I/O connection, and crossbar connection remain the same
All other system infrastructure (frame, backplane, I/O chassis, etc.) is preserved
Optional PCI-X I/O slots can also be used
[Diagram: cell board with processors, Cell Controller V2, memory, and board-dependent hardware, connected through the crossbar backplane and I/O controllers to six-slot PCI chassis]

12 The role of the sx1000 in Superdome
The heart of all HP cellular systems, linking:
CPUs
Memory
I/O
Crossbar interconnect to other cell boards
One chipset per cell board, 4 CPUs per cell board
The sx1000 supports 8 CPUs per cell board, enabling 128-way Superdome systems
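The cell arithmetic above can be sanity-checked in a few lines. The counts come from the slides (16 cells per Superdome, one sx1000 and four CPU sockets per cell board); the cores-per-socket framing is my own way of expressing how dual-core parts reach the 8-CPUs-per-cell figure.

```python
# Back-of-envelope check of Superdome's CPU-count arithmetic.
CELLS = 16            # cells per fully populated Superdome
SOCKETS_PER_CELL = 4  # one sx1000 chipset serves four CPU sockets

def max_cpus(cpus_per_socket):
    """Total processor count for a given CPUs-per-socket packaging."""
    return CELLS * SOCKETS_PER_CELL * cpus_per_socket

print(max_cpus(1))  # single-core Madison: 64-way
print(max_cpus(2))  # dual-core PA-8800 or mx2 dual-CPU module: 128-way
```

This is also why the talk keeps repeating "count sockets, not CPUs": the socket count never changes, only the packaging inside it.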

13 HP Delivers Dual-Core Before Intel
PA-RISC: the PA-8800
2 cores, one chip
Each core has its own L1 cache
32 MB shared L2 cache
Uses the high-bandwidth Itanium system bus
Uses the same socket and HP chipsets as Itanium 2
Itanium: HP invents a double-density Madison
The Hondo project, now known as mx2
Two Madison CPUs plus HP technology
The result: 2 Itanium processors in a single socket
Count sockets, not CPUs

14 How Did HP Upstage Intel?
Itanium is a joint venture between Intel and HP
HP co-owns certain Itanium intellectual property
HP's High Performance Systems Lab is truly brilliant
Richardson, TX: if it's been invented, then make it better
Investigate, innovate, and improve to achieve leadership
How HP's 3-I approach rendered "Hondo" the mx2:
Itanium cartridge packaging was not at maximum density
The CPU was packaged on a carrier board, with power at one end
The Intel CPU and package could occupy less space
HP designed a cache, controller, and carrier board
HP placed the power supply atop the module and…

15 Field-stripping an mx2 module
Mixing is allowed between Madison, the HP mx2 module, and Madison 9M
[Photo callouts: Intel CPU "carrier" board; industry-standard Itanium 2 power pod (DC-to-DC power conversion); Intel CPU chip inside package; HP CPU "carrier" board; HP external cache and controller chip; HP power solution (goes on top rather than on the end)]

16 HP Integrity Cellular System Progress
Focus is on cellular systems; CPU count reflects the mx2 CPU
[Roadmap diagram spanning 2002 through late 2005. CPU generations: Itanium 2 McKinley 1 GHz (2002), Madison 1.5 GHz (2003), Madison 9M 1.6 GHz (2004), Montecito dual core (late 2005). Systems: 4-way, 8-way, 16-way rx7620, 16-way and 32-way rx8620, 64-way Superdome, and 128-way scalable (cell-based) servers]

17 Superdome Performance Projections
Integrity enhancements:
CPUs follow Intel roadmaps
Dual-CPU motherboard doubles IPF count in '04
Mixed Madison CPUs supported
In-cab interconnect upgrades to sustain scalability
In-cab upgrades from PA-RISC
[Chart: estimated relative system OLTP performance, 2003 through 2006; Superdome performance modeled in example]

18 How Superdome will maintain its leadership in the next 3-4 years
Double overall system performance
Increase system availability: eliminate single points of failure, increase reliability, add resiliency features
Enhance system scalability: more memory, improved performance scaling
Improve system manageability
What kind of attributes will allow Superdome to maintain its leadership position in the marketplace? First, system performance will need to double, at a minimum, to maintain our competitive advantage. Second, we will increase system availability by eliminating single points of failure, increasing the reliability of system components, and adding new resiliency features. Third, we will enhance scalability by increasing memory depth and by improving performance scaling through reduced access latency, improved interconnect bandwidth, and new processor features. Lastly, it is vitally important to improve the existing platform management infrastructure so that a complex machine like HP's Superdome remains easy to administer, configure, and maintain.

19 IPF and PA-RISC processor roadmap
Itanium Processor Family: Itanium 800 MHz, Itanium 2 1.0 GHz, Madison 1.5 GHz, HP mx2 dual CPU (HP exclusive), Madison "9M", Montecito dual core, Montvale dual core
PA-RISC Family: PA-8700 750 MHz, PA-8700+ 875 MHz, PA-8800 dual core, PA-8900 dual core
(The roadmap spans 2001 through 2006 and is subject to change without notice. "Dual core" means two CPU cores per silicon chip; "dual CPU" means two packaged CPUs per processor module.)
Doubling system performance represents a substantial hurdle. First, HP must start with a powerful processor. Shown here is the evolutionary path of the Itanium Processor Family along with HP's PA-RISC processor roadmap. With the sx1000-based Integrity Superdome offering Madison, the mx2 dual-CPU module, and Madison 9M in its lifetime, and the sx1000-based HP 9000 Superdome offering the powerful dual-core PA-8800 processor, the next-generation platform needs to aim higher on both the IPF and PA-RISC processor fronts to meet our customers' needs. The newer processor technology from Intel's Itanium Processor Family will be a dual-core IPF CPU code-named Montecito, featuring a 50% higher core frequency and hyper-threading; the final member of the venerable PA-RISC family, the dual-core PA-8900, will also be available for our next-generation design. Beyond Montecito lies yet another IPF product, code-named Montvale, which will continue to provide leadership CPU performance well beyond 2006.

20 Future “Arches” chipset vs existing sx1000
After picking our target processors, HP also needs to design a chipset that can deliver the reduced latency and increased bandwidth necessary to unleash system performance:
Increase the CPU bus interface frequency from 400 MT/s to 533 MT/s to improve CPU bandwidth and reduce access latency.
Change the memory technology to DDR-II SDRAM. This doubles the memory bandwidth and reduces memory access latency, thereby reducing CPU idle time. The newer technology also lets HP quadruple the system's memory capacity, which will minimize I/O thrashing.
Improve the I/O system, too, to maintain system balance. The aggregate I/O bandwidth is more than tripled by using emerging high-speed serial link technology to connect the cell to the I/O system. Individual card slot bandwidth is doubled by designing a PCI-X 266 MHz local bus adapter chip to support 10 Gb Ethernet cards, multi-port 4 Gb Fibre Channel cards, and potentially others. The Arches chipset even provides direct hardware support for IPF Write Coalescing instructions, reducing the I/O bus overhead of small data transfers and ensuring higher I/O bus utilization.
Use the same high-speed serial link technology in the crossbar to quadruple the system crossbar interconnect bandwidth and further reduce crossbar latency. The crossbar provides the fabric through which the cells are interconnected, and the greatly increased bandwidth particularly benefits large partitions.
[Diagram comparing an sx1000 cell board with an "Arches" cell board: four PA-8800 or Itanium 2 CPUs become four PA-8900 or Itanium 2 CPUs on a faster CPU bus; memory, I/O, and crossbar bandwidth all increase; the BX PCI-X host bridge becomes the BX2 PCI-X 2.0 host bridge, doubling per-slot bandwidth (PCI-X 2.0 vs. PCI-X or PCI)]
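Most of the Arches gains are stated as multipliers; only the CPU bus change is given as raw transfer rates. The sketch below is mine and simply restates the slide's claims as numbers so they can be compared side by side; it derives nothing beyond the 533/400 ratio.

```python
# Relative bandwidth gains claimed for "Arches" over sx1000.
# Only the CPU bus ratio is computed from stated speeds (533 vs 400 MT/s);
# the rest restate the slide's doubled/tripled/quadrupled claims.
gains = {
    "CPU bus (533 vs 400 MT/s)": 533 / 400,
    "memory (DDR-II)": 2.0,
    "aggregate I/O (serial links)": 3.0,   # "more than tripled"
    "per-slot I/O (PCI-X 2.0)": 2.0,
    "crossbar (serial links)": 4.0,
}
for name, factor in gains.items():
    print(f"{name}: {factor:.2f}x")
```

Notice that the CPU bus is the smallest multiplier in the list (about 1.33x), which is consistent with the slide's emphasis on memory, I/O, and crossbar changes as the bigger levers.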

21 “Arches” system summary
Increased single-system HA:
Double DRAM sparing (tolerates the failure of 2 DRAMs on a DIMM)
Multi-path crossbar topology
Link-level retry with "self-healing" on crossbar and I/O links
Greater fault protection within the CPU (cache repair on the fly)
Redundant DC/DC converters and system clocks
One-person removal and replacement of a FRU in under 15 minutes, after the failed FRU is identified and before the boot sequence begins
Performance and scalability:
More powerful processors
Bandwidth increases in the CPU bus, memory, I/O, I/O slots, and crossbar
Memory capacity increase
Latency reductions
Investment protection:
Field upgradeable into existing customer systems; no chassis swap
Fits within the available power and thermal windows of existing Superdome systems
So, a quick summary of all the changes we are making for the "Arches" generation of Superdome: better performance and scalability are assured by even more powerful processors and huge increases in bandwidth, along with substantial reductions in latency across the entire system design. System resiliency and reliability will improve because we have not only learned from our past but also aggressively adopted new technologies that further enhance the system's design. In the end, HP has made a serious effort to protect your valuable IT investments. Our strategy of "in-box" upgrades, as opposed to the "fork-lift" variety favored by our competitors, offers customers truly affordable information technology. Not only are the upgrades less expensive than new system designs; fitting the new designs into the available power and thermal windows of the chassis assures Superdome customers that they can build out their data centers' power and cooling infrastructures and expect them to last.

22 Three generations of Superdome…
(All three generations use the same frame.)
The first generation of Superdome, released in late 2000, was a 64-way-capable, HP-UX-only machine that supported up to 256 GB of memory and 64-bit 66 MHz PCI slots, and it introduced the concept of true hard partitions to the marketplace. The chipset for this generation, code-named Yosemite, supported three consecutive PA-RISC microprocessors: the PA-8600 at 552 MHz, the PA-8700 at 750 MHz, and the PA-8700+ at 875 MHz. In late 2003, the sx1000 chipset brought IPF processor compatibility to Superdome, along with support for multiple operating systems (HP-UX is now joined by 64-bit Windows, Linux, and OpenVMS). Two denser DIMM sizes have brought the maximum main memory size up to 1 TB, and a new set of I/O bus adapter chips provides PCI-X compatibility and doubles the peak per-slot bandwidth to 1 GB/sec. The Itanium Processor Family members supported by the sx1000 include the 1.5 GHz, 6 MB Madison, the 1.7 GHz, 9 MB Madison 9M, and HP's exclusive IPF offering, the mx2 dual-Madison module. The PA-RISC processors supported by the sx1000 are the PA-8800 and the future PA-8900, both dual-core offerings which, like the mx2, allow the system processor count to reach 128. Targeted for late 2005, the third-generation "Arches" chipset will, like its predecessor, support up to 128 PA-RISC or IPF processors with the same four operating systems, along with DDR-II memory for a maximum system capacity of 4 TB (limited only by memory device availability). PCI-X 266 MHz slots will let the system take advantage of 10 Gb Ethernet and multi-port 4 Gb Fibre Channel cards, and high-speed serial link technology provides the framework for large increases in aggregate I/O and crossbar bandwidth.
New features will improve the reliability and resiliency of the platform over Superdome's already high standard of Single-System High-Availability.
Yosemite (4Q 2000): PA-RISC only; 64 CPUs, 256 GB memory; nPars; PCI OL*; HP-UX only
sx1000 (4Q 2003): IPF and PA-RISC; 128 CPUs, 1 TB memory; PCI-X (133); HP-UX, Windows, Linux, and OpenVMS
Arches (4Q 2005): IPF and PA-RISC; 128 CPUs, 4 TB memory; DDR-II memory, PCI-X (266); increased bandwidth for memory, I/O, and crossbar; enhanced resiliency and SSHA; HP-UX, Windows, Linux, and OpenVMS

23 Performance boosts with in-box upgrades
The first-generation Superdome chipset, Yosemite, allowed three PA-RISC processors to deliver successive TPC-C results: first 197,024 tpmC with the 552 MHz PA-8600, then 389,434 tpmC with the 750 MHz PA-8700, and then an impressive 541,674 tpmC with the 875 MHz PA-8700+. The sx1000-based Superdome populated with "Madison" Itanium 2 processors running at 1.5 GHz with a 6 MB third-level cache has already demonstrated leadership results on both Windows/SQL Server and HP-UX/Oracle. With sx1000 performance disclosures for the 1.7 GHz, 9 MB version of Madison, the mx2 dual-processor module, and the dual-core PA-8800 still pending, several more opportunities remain for HP to further distance itself from the competition, especially once the operating systems catch up with the hardware's support for 128-processor configurations. A preliminary estimate of the "Arches"-based system's 64-way performance comes in at approximately 2.2 million tpmC with Montecito processors, and given that the system can run coherently as a single 128-way image, a tpmC result in excess of 3.5 million should be achievable. The hidden beauty of this performance increase is that it will all have been achieved through "in-box" upgrades: the infrastructure released with Superdome in 2000 will still support a machine with more than 20 times the total OLTP processing power in 2006.
[Chart: tpmC results and estimates for HP-UX and Windows, rising to over 2 million]

24 Increase single system high-availability
Keep it running; N+1 features (hot swappable):
Cabinet blowers, I/O fans
Bulk DC power supplies
All cell, backplane, and I/O power converters
Redundant AC input power (optional)
Multiple crossbar switch fabrics
Redundant global clock source
Fix it fast; diagnostic features:
Test station with ASIC-level scan tools, remotely accessible via LAN
Enhanced predictive support
High Availability Observatory
EMS monitoring system
Dynamic processor resilience (DPR)
Dynamic memory resilience (DMR)
Hot-swap service processor
Error correction:
ECC on CPU cache and FSB; cache repair on the fly
ECC-protected PCI-X 2.0
Link-level retry with "self-healing" crossbar and I/O pathways
ECC on all fabric and memory paths
Double chip-spare memory
Fault isolation technologies
Online addition and replacement of cell assemblies and PCI I/O cards (note: OS version dependent)
Increasing Superdome's availability required that we not only take advantage of every opportunity offered by changes in technology, but also look back at our own field replacement data. One item, DC-DC voltage converters, has been a cause of service calls, and this reveals an industry-wide challenge: while DC-DC converters evolve to provide more capacity and greater efficiency in smaller packages, their inherent reliability has not improved at the same rate. By designing in N+1 DC-DC converter redundancy on all the voltage levels required in our system, we keep the hardware running even in the event of a brick failure. The multi-path design of our new system crossbar fabric lets us route traffic around failed crossbar ASICs. We designed the global clock source to be not only fully redundant but hot-swappable as well. Our processors can perform internal cache repairs without reboots. Our I/O subsystem becomes more resilient through ECC protection on the PCI-X 2.0 mode 2 slots. The high-speed serial link technology we employ for I/O and crossbar routing lets us build in redundant channels and provides automatic retries of failed packet transfers. This gives our links the ability to "heal" themselves when they detect persistent errors. The sheer number of DIMMs, 512 in a maximum configuration, makes DIMMs a common service point. Superdome already delivered DRAM chip-sparing, which tolerated the failure of an entire DRAM device per DIMM. HP Labs has produced new ways to enhance our ECC encoding so that we can now lose two DRAMs per DIMM and still correct the resulting data errors. We call this feature "double chip-sparing"; it works like memory RAID at the DRAM level, only without requiring the customer to buy 25% additional memory capacity to enable it. In addition, where Superdome's DIMM design provided address and control signal parity checking, the new system adds redundant signal contacts to further enhance memory resiliency.
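Double chip-sparing is, at heart, two-erasure correction: the positions of the failed DRAMs are known, so two independent check symbols are enough to rebuild their contents. The toy below is my illustration over small integers; HP's actual implementation is a hardware ECC code, not this scheme, but the recovery principle is the same.

```python
# Toy two-erasure correction: a plain sum and an index-weighted sum let
# us reconstruct any two symbols whose positions are known to have failed.
def encode(data):
    """Return the two check symbols for a list of per-chip data symbols."""
    s = sum(data)
    w = sum(i * d for i, d in enumerate(data))
    return s, w

def recover(data, s, w, failed):
    """Rebuild the two erased symbols at the known positions in `failed`."""
    j, k = failed
    rest_s = sum(d for i, d in enumerate(data) if i not in failed)
    rest_w = sum(i * d for i, d in enumerate(data) if i not in failed)
    a = s - rest_s          # d_j + d_k
    b = w - rest_w          # j*d_j + k*d_k
    dk = (b - j * a) // (k - j)   # solve the 2x2 linear system
    dj = a - dk
    fixed = list(data)
    fixed[j], fixed[k] = dj, dk
    return fixed

chips = [7, 3, 9, 1, 4, 8]           # data read from six DRAM chips
s, w = encode(chips)
corrupted = list(chips)
corrupted[1] = corrupted[4] = 0      # two chips fail; positions are known
print(recover(corrupted, s, w, (1, 4)))  # reconstructs the original values
```

The "positions are known" assumption is what makes this cheap relative to memory RAID: erasure correction needs far less redundancy than correcting errors at unknown locations, which is why the slide can claim the feature without a 25% capacity surcharge.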

25 Enhance system scalability
Dual-core processors double the CPU count per socket: Hondo today, Montecito at the end of 2005
Hyper-threading may allow more scheduling options
4x increase in memory capacity
Across-the-board latency improvements reduce and "flatten" the access-time profile for large partitions
OS and application improvements are inevitable
Enhancing the scalability of our system is primarily a function of the performance features I have already mentioned. The dual-core nature of our target processors doubles the CPU count on a per-socket basis. Montecito introduces the capability to execute two threads, or hyper-threading: the CPU execution pipeline can operate on an additional instruction thread while it waits for cache requests from another to be serviced, rather than sitting idle. While we cannot accurately predict what effect this capability will have on our systems, it may indeed give us the opportunity to schedule processes more efficiently. The quadrupling of memory capacity will obviously reduce I/O thrashing, and the focus on improving latency reveals its real benefit by reducing and "flattening" the access-time profile across the entire system. Hardware folks, like me, continue to underestimate our OS and application partners' ability to extract performance from our system designs, but such improvements in machine utilization are inevitable.
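The hyper-threading point can be made concrete with a toy utilization model. This is my illustration, not anything the slide claims quantitatively: it assumes a second hardware thread can always issue while the first is stalled, so it gives an upper bound, and real gains depend heavily on workload.

```python
# Toy model: while one thread waits on a cache miss, another hardware
# thread can issue instructions, lifting pipeline utilization toward 1.0
# instead of tracking the single thread's busy fraction.
def pipeline_utilization(stall_fraction, hw_threads):
    """Upper-bound utilization assuming threads' stalls overlap perfectly."""
    busy = 1.0 - stall_fraction
    return min(1.0, hw_threads * busy)

print(pipeline_utilization(0.4, 1))  # one thread stalled 40% of the time: 0.6
print(pipeline_utilization(0.4, 2))  # a second thread hides the stalls: 1.0
```

The model also shows why the benefit is workload-dependent: if a thread rarely stalls (small stall fraction), a second hardware thread has little idle pipeline to reclaim.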

26 Improve system manageability
Integrate the existing rich feature set into a Web-based server management framework (Insight Manager)
Add the capability to update firmware without shutting down the OS (new firmware activated on next boot)
Add I/O card slot doorbells and slot power latches
Enhance security features:
Secure Shell (SSH) management console
Authenticated firmware update
Trusted Platform Module (TPM) device to store encryption keys
Three of our core values for maintaining Superdome's leadership position in the marketplace, doubling performance, increasing availability, and enhancing scalability, have been addressed. The fourth ensures that we can deliver a system of this complexity and still provide a suite of platform management tools that let our customers effectively manage, configure, and maintain their systems. We are integrating our platform management software into the Web-based Insight Manager framework, allowing Superdome to be managed and monitored with the same tools available for our ProLiant servers. We will add the capability to update system firmware while the box is still up and running, with a reboot at a later, more convenient time putting the new firmware into effect. We will add I/O card slot doorbells and power latches to make the look and feel of Superdome consistent with our low-end and mid-range offerings; these capabilities were previously available only at the management console. We even plan to enhance the security features embedded in the system itself: a Secure Shell (SSH) management console, firmware updates that require full authentication, and a TPM encryption-key device to enable future secure features.

27 Superdome: built for the future, investment protection today
1H 2004: 128 CPUs (two 64-CPU partitions); PA-8800 and mx2 CPUs; 1 TB RAM; PCI-X; HP-UX
2H 2004: 128-way scalability with PA-8800 and mx2 CPUs; Madison 9M; HP-UX, Windows, Linux, OpenVMS
2H 2005: "Arches" chipset with higher bandwidths and lower latencies; 128 CPUs; Montecito and PA-8900; up to 2 TB memory; HP-UX, Windows, Linux, OpenVMS
2H 2006: upgrade to Montvale (performance boost); up to 4 TB memory; PCI Express; HP-UX, Windows, Linux, OpenVMS
2H 2007: next-generation high-end system; IPF only (no PA-RISC); leadership performance, HA, and partitioning
Superdome customers really do enjoy the best of both worlds: record-breaking performance, unprecedented high-end server resiliency and reliability, and multi-OS flexibility all come with true affordability and IT investment protection, because the machine was designed with the capacity to grow. Superdome's modular design has, from the very beginning, allowed HP to offer its customers a machine truly intended to evolve along with their ever-changing business needs. As you can see here, Superdome's future is very bright indeed, so feel free to go ahead and buy as many as you like: we will make more.

28 And Now It’s Time for Something New…
Alpha, PA-RISC, and MIPS will run out of gas in the timeframe and become obsolescent also-rans
The NED already has plans for a future system in 2005
HP's High Performance Group must follow suit
The Alpha abdication killed Compaq's next-generation "Avalanche" and "Snowball" high-performance efforts, so the ball is in HP's court now
Reliance on Itanium technology takes processor performance out of the picture; hence, HP will differentiate above the CPU level

29 Itanium, Integrity, and the Future
64-bit computing imposes new demands: performance, memory, scalability, reliability
HP Integrity systems meet these demands today, and will even more so in 2008

30 Integrity Matters… Omit Nothing, Improve Everything!
Keep the system running 24 x 7:
Redundancy, N+x components
Hot-swappable fans, blowers, power supplies, backplanes, etc.
Emphasize error correction:
CPU cache, chipkill
Parity protection for CPUs and I/O
ECC for all fabric and I/O paths
Redundant memory and AC power
If it breaks, fix it fast:
Diagnostics
Fault isolation

31 Focusing Innovation Beyond the CPU
[Chart: R&D investment over time. As PA-RISC, Alpha, MIPS, and IA-32 processor investment winds down in favor of Itanium, HP's R&D investment shifts beyond the box.]

32 Future Design Considerations
Standardize on Itanium
Utilize standard fabrics and interconnects
Leverage component scalability to yield truly balanced systems
Incorporate the latest standard technologies to cut costs
Ensure customer investment protection
Deliver value-added differentiation
[Diagram: CPUs, a memory controller, and a host I/O controller linked through a system interconnect to the I/O chassis]

33 Gating Factors in Future System Design
Key customer concerns:
Economics: price and price/performance, low TCO
Investment protection
Interoperability
Reliability
Scalability to handle any workload
Manifestation of the AE strategy: Grid and AE enabled
Scale-out vs. scale-up issues

34 Gating factors in future server design
Management issues: HP must provide an attractive alternative to competitive management products and plans
IBM's eLiza and e-Services offerings
Sun's nascent N1 strategy
HP's alternatives include OpenView and SIM, virtualization technology, and AE and UDC management tools
And more to come, from HP Labs and acquisitions

35 Gating factors in future server design
Rely on standards and existing parts where possible
Heterogeneous OS support is critical to server consolidation: it is essential to AE plans, and most customers run multiple OSes and platforms
HP must accommodate UNIX, Linux, OpenVMS, NSK, and Windows

36 HP’s future server game plan
Note: the following information is based on SKHPC's analysis of current products, announced roadmaps, and public data. As such, this material is conjectural. For now…
HP's goal: design and develop The Mother of All Enterprise Servers
SKHPC's assessment of product development and attributes:
Produced by a ~50-person IPF development section within the High Performance Systems Lab (HPSL) in the Hardware Systems Technology Division (HSTD)
Distributed across SHR, MRO, Colorado, and Richardson
Two teams are addressing specific issues

37 HP’s future server game plan, ctd..
The two teams:
An advanced development team focused on roadmap linkage, definition, architecture, topology, and performance work, as well as base technology assessment and proofs of concept
An implementation team to deliver an SPU (system processing unit) consistent with current HSTD development practices, including boards, signal integrity, mechanical, power, thermal, utilities, etc.
The teams will leverage existing HP ASICs, CPUs, diagnostics, I/O options, designs, and components wherever possible.

38 HP Superdome successor, 2007-2008
SKHPC's vision:
CPU counts beyond 128 processors, allowing HP to deliver SC-like systems in a box, partitioned all the way down to individual processes using the continuum of partitioning technologies
An order-of-magnitude improvement in availability, with proactive diagnosis tools that anticipate possible failures and automatically correct them through reconfiguration or early replacement of failing components
Incredibly large cache and memory sizes that let you run almost any app in memory at the fastest possible speed
Easily clustered, interoperable, and a key HP AE component, built on inexpensive industry-standard components and able to run all HP OSes and the full set of industry applications

39 Aims and goals of “Son of Superdome”
Strategy: innovate and expand existing technology
Likely physical characteristics:
Appearance: Superdome-like but much larger
Design: Superdome-centric with Alpha attributes
Results and goals:
A Superdome successor with substantially higher performance and scalability than the incumbent
The first true open-systems alternative to proprietary mainframe and parallel cluster technology
And a new ability to open a large can of industry-standard whoop-ass on IBM

40 More aims and goals: power, fault resilience, and more
Power up once, no reboot, non-NUMA architecture
Passive backplane, no performance compromise
Online memory, CPU, I/O, and fabric replacement
Fault-tolerant fabric, smaller cell boards
No multi-partition single point of failure
Low latency, multi-OS support
Dynamic workload balancing
Change hard partitions on the fly
Grow/shrink and add/delete partitions, firewall security
Reconfiguration only by trusted firmware
No denial-of-service attacks… and more

41 Managing the server of the future
How do you manage what does not yet exist? Most of the tools and building blocks already exist:
HP can run a UDC today
The Adaptive Enterprise suite is being fleshed out
OpenView is being extended
HP's partition magicians have a big bag of tricks today
Probable management environment:
Take the OpenView foundation available now
Add partitioning, COD, and AE tools available in
The result: the next-generation system "control panel"
Thanks, and have fun visualizing the future!

42 The magic that makes it happen
Virtualization… the enabler for AE and UDC
The HP virtualization continuum: AE, and UDC (now downsized for BladeServers)

43 A Few Words About AE and UDC
HP's Adaptive Enterprise is a key differentiator. AE is an adaptable, standards-based IT framework that lets users automatically and dynamically allocate and reallocate their IT infrastructure in response to changing utilization patterns and changing business needs, eliminating the need to "overprovision" by buying standby equipment for "spikes." It's a work in progress.
HP's Utility Data Center went beyond AE and was to deliver IT resources on demand. UDC is reality at HP today (3 locations) and is far more comprehensive than rival "IT as a utility" efforts such as Sun's N1 and IBM's e-Services offerings. It's also far too complex.
HP has downsized UDC to BladeServers to meet user needs.
All offerings rely on virtualization and advanced management tools.
Last week: a virtualization agreement with VERITAS.

44 A Quick Look at Virtualization
An approach to IT that pools and shares resources so utilization is optimized and supply automatically meets demand.
Virtualization innovation, today and tomorrow, in order of increasing strategic importance and business value:
Element virtualization (servers, storage, network, software)
Integrated virtualization (Adaptive Enterprise)
Complete IT utility (Utility Data Center)
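The "supply automatically meets demand" idea can be shown in a minimal sketch. The class, the threshold arithmetic, and the numbers are invented for illustration; they do not describe any HP product's behavior.

```python
# Illustrative sketch of a virtualized server pool that scales supply
# to demand; names and thresholds are invented, not an HP product API.

class ServerPool:
    def __init__(self, capacity):
        self.capacity = capacity      # physical servers available
        self.active = 1               # servers currently assigned

    def rebalance(self, load_per_server_target, current_load):
        # Activate just enough servers to keep each one at its target
        # load; -(-a // b) is ceiling division.
        needed = max(1, -(-current_load // load_per_server_target))
        self.active = min(self.capacity, needed)
        return self.active

pool = ServerPool(capacity=16)
spike = pool.rebalance(load_per_server_target=100, current_load=850)  # demand spike
calm = pool.rebalance(load_per_server_target=100, current_load=120)   # demand falls
```

After the spike, nine servers are active; when demand falls, the pool shrinks back to two, returning the rest to the shared supply instead of leaving them overprovisioned.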

45 HP Virtualization Solutions
Element virtualization:
New: Pay-per-use (PPU), now available for imaging and printing and for HP Integrity Superdome
New: several new offerings in the HP Serviceguard family of high-availability clustering software
Enhanced: HP-UX Workload Manager (WLM), the intelligent policy engine for the Virtual Server Environment
Integrated virtualization:
New: Blade PC powers the HP Consolidated Client Infrastructure
Enhanced: BEA and Oracle integrated with the HP Virtual Server Environment
New: a Serviceguard extension virtualizes resources across data centers up to 100 km apart with a single Oracle RAC database
New: iCOD for Blades activates servers as the customer needs them
Complete IT utility (now blade-based):
New: Tiered Messaging on Demand, part of the HP On Demand Solutions portfolio
New: Automated Service Usage, part of HP Integrated Service Management

46 What WAS HP’s UDC? Too Complex!
The HP Utility Data Center: virtualized pools of resources (server, storage, NAS, load-balancer, firewall, and switching pools behind server, storage, and network virtualization layers, coordinated by a utility controller) for instant ignition; failover protection and data replication to protect servers, storage, and network; a wire-once fabric; and utility controller software for service definition and creation.
HP's Utility Data Center offers a set of highly available, virtualized resources, which enables:
Speedy definition, configuration, and implementation of IT services
Better utilization of servers, storage, and networking elements, because resources are shared more effectively across multiple applications, with less need for unused capacity
Resources that self-optimize to meet the demands of dynamic workloads
Automation of most operations, reducing errors and improving the efficiency of your IT administrators
New applications and systems can be ignited within minutes
Server, storage, and network utilization approaches 100%
Resources are 'virtualized' and optimize themselves to meet your service-level objectives
Administrative and operational overhead is minimized
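The utility-controller idea of "igniting" a service from virtualized pools can be sketched as follows. The pool names echo the slide, but the code, its all-or-nothing allocation rule, and the capacities are my own illustration, not UDC's actual controller software.

```python
# Hypothetical sketch of a utility-controller-style service template
# drawing from virtualized resource pools; invented for illustration.

pools = {"server": 20, "storage_tb": 50, "load_balancer": 4, "firewall": 4}

def ignite(template):
    """Allocate a service from the shared pools, all or nothing."""
    if any(pools[r] < n for r, n in template.items()):
        return False              # insufficient capacity: no partial grabs
    for r, n in template.items():
        pools[r] -= n
    return True

def retire(template):
    """Return a retired service's resources to the pools for reuse."""
    for r, n in template.items():
        pools[r] += n

web_farm = {"server": 8, "storage_tb": 10, "load_balancer": 2}
ok = ignite(web_farm)             # service "ignited" from the shared pools
retire(web_farm)                  # resources flow back for the next service
```

The all-or-nothing check is the interesting design point: a template either gets its full complement of servers, storage, and network elements, or nothing is deducted, so the pools never end up holding a half-built service.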

47 Original HP UDC Components
HP consulting and integration services
Virtual server pools: heterogeneous server environments; HP servers optimized for UDC; protect your current investments
Virtual network pools: standards-based VLANs; a flexible and robust network infrastructure
Virtual storage pools: HP XP and EVA storage offer flexible 'network-based' virtualization; integration with OpenView for storage management; EMC Symmetrix also supported
Utility Controller software: manages service templates; integrates with HP software for resource, workload, and failure management
[Architecture diagram: utility controller software over processing, storage, and networking elements (HP-UX, OpenVMS, Windows, and Linux servers; XP, EVA, and EMC storage; ProCurve and Cisco switching) connected by a utility fabric and managed with OpenView]
A more detailed architectural view of the UDC. Note the heterogeneous nature of the solution:
UDC can support HP-UX, Solaris, Windows, and Linux servers (even AIX, with customization and scripting)
Cisco Catalyst and ProCurve switches, as well as Linux- and Windows-based firewalls, caches, proxies, SSL, load balancers, etc.
SAN-based storage from HP and EMC is supported today, with NAS solutions in the future
The UDC is a 'wire-once' solution and supports both fiber- and copper-based data-center wiring
The Utility Controller and rack comprise the controller hardware and software; we'll look at the controller software in more detail shortly
UDC integrates most effectively into an OpenView management environment
The key to a successful UDC implementation is a focus on data-center processes and skills training, as well as technology. HP offers a variety of data-center services (Business Value Analysis, service-level management, capacity management, and cost management) to ensure that the UDC is implemented in the most cost-effective fashion and delivers on the promised benefits.

48 UDC Plan: Improve Asset Utilization
Adaptive management enables virtual provisioning of application environments to optimize assets.
Wire it up just once: all network, storage, and server components were to be wired once
Virtualize the asset pool: all components could be allocated and reallocated on the fly
Easy to reconfigure: a simple user interface allowed admins to architect and activate new systems from available resources
Looked good on PowerPoint slides, but won little customer buy-in
[Diagram: a utility controller mediating between the internet and virtualized server, NAS, storage, firewall, load-balancer, and switching pools]

49 Thank you very much for your time and attention!
Many of the topics I have mentioned briefly have been covered in detail by HP in recent announcements. If you have specific questions, I can be reached by e-mail.

50 HP World 2004
