
1 Beyond Marvel & Superdome: HP’s Next-Generation Server
Presented by Terry C. Shannon to HP Marketing & BeLux User Group, 07-12-04
Publisher, Shannon Knows HPC
www.shannonknowshpc.com | terry@shannonknowshpc.com

2 The Fine Print
• The following text reflects legal requirements
  – I am not, nor have I ever been, an HP or Intel employee
  – I am not being compensated to deliver a marketing pitch
  – This presentation reflects my opinion, not HP’s opinion
  – No NDA material is contained in this presentation
  – I strive for accuracy, but offer no guarantees
  – Trust but verify: always get a second opinion
  – Make no purchasing decisions based on this session
  – Always check with HP before planning your purchases
  – Above all, enjoy the presentation
  – Please excuse my “American English”
  – Comments? Email terry@shannonknowshpc.com

3 Session Agenda
• Where HP is today
  – Current Superdome, PA-RISC and IPF technology
  – Anticipated improvements in the next several years
• What HP must do to remain competitive in ~2008
• Gotchas…
  – HP can’t reinvent the wheel
  – Critical technology rules of thumb
  – Design criteria
• Getting from 2004 to 2008
• SKHPC’s speculation about the HP server of 2008
• And some thoughts on what to do with all that power!

4 Future Server, Current Progeny
• Today’s BCS enterprise systems portend the future
  – Alpha
  – Superdome
  – Integrity
• Superdome in more detail
• Superdome in the next three years
• Superdome’s role in the future server
• And some things you can do with all that performance!

5 HP Superdome Family… Today
• 16-way: 2 to 16 CPUs, 32 to 128 DIMM slots, 24 to 48 PCI slots, 1 to 4 nPartitions
• 32-way: 4 to 32 CPUs, 32 to 256 DIMM slots, 24 to 48 PCI slots, 1 to 4 nPartitions
• 64-way: 8 to 64 CPUs, 64 to 512 DIMM slots, 48 to 96 PCI slots, 1 to 8 nPartitions
• 64-way with expanded I/O: 8 to 64 CPUs, 64 to 512 DIMM slots, 48 to 192 PCI slots, 1 to 16 nPartitions

6 HP Superdome: Built for the Long Haul
• 2H 2001: PA-8700; 98 percent performance increase to 389k TPC-C
• Mid-2002: PA-8700+ 875 MHz; 64 CPUs; 256 GB RAM; new I/O cards
• 1H 2004: PA-8800; 128 CPUs; 1 TB RAM; HP-UX; online replacement and deletion of cell boards
• Today: Itanium®*; 64 CPUs; 512 GB RAM; HP-UX, Windows, Linux; PCI-X; online addition of cell boards; cell local memory
• Soon: Itanium; 128 CPUs; 1 TB RAM; HP-UX, Windows, Linux; VMS in the future
*based on the Itanium processor available at that point in time

7 Superdome system view
[Diagram: cabinet components including blowers, cell boards, backplane, backplane power board, I/O fans, I/O chassis (2/4), power supplies, PDCA, cables, cable groomer, utilities, and leveling feet]

8 An Inside Look at Superdome
[Photos: PA-RISC cell board and IPF cell board]

9 SD Interconnect Fabric: Crossbar Mesh
• Fully connected crossbar mesh
  – four crossbars
  – four cells per crossbar
• All links have equal bandwidth and latency
  – minimizes latency
  – maximizes usable bandwidth
• Implements a point-to-point packet filtering and routing network
  – allows hardware isolation of all faults
• Interconnects 16 cells with 3 latency domains
  – cell local: ~200 ns
  – crossbar local: ~300 ns
  – remote crossbar: ~350 ns
[Diagram: Superdome cabinet with cell boards, backplane crossbar, and I/O]
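
To put the three latency domains in perspective, here is a small back-of-the-envelope sketch (my own Python illustration, not an HP tool). Only the nanosecond figures come from the slide above; the access mix is an invented assumption.

```python
# Approximate sx1000 Superdome latency domains, taken from the slide above.
LATENCY_NS = {
    "cell_local": 200,       # memory on the requesting cell board
    "crossbar_local": 300,   # memory on another cell behind the same crossbar
    "remote_crossbar": 350,  # memory on a cell behind a different crossbar
}

def average_latency(access_mix):
    """Weighted-average access latency for a given access distribution.

    access_mix maps each latency domain to the fraction of accesses that land
    there; the fractions are assumed to sum to 1.0.
    """
    return sum(LATENCY_NS[domain] * share for domain, share in access_mix.items())

# Hypothetical mix: cell-local memory placement keeps most accesses on-board.
mix = {"cell_local": 0.70, "crossbar_local": 0.20, "remote_crossbar": 0.10}
print(f"estimated average latency: {average_latency(mix):.0f} ns")  # about 235 ns
```

The point of the flat, fully connected mesh is visible in the numbers: even the worst case is only about 1.75x the local case, which is why large partitions scale better than on deeper NUMA topologies.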

10 Chipsets Count in Cellular Architectures
[Diagram: each cell pairs 1.5 GHz Intel Itanium 2 processors and memory with an HP sx1000 chipset cell controller and a 12-slot PCI-X I/O card cage; a crossbar interconnect switch links cell 1 through cell 16]

11 Inside Superdome: New sx1000 chipset
• Supports PA-8800, PA-8900, and all Itanium CPUs
• Cell controller, cell board, and CPUs change
• Memory DIMMs, I/O connection, and crossbar connection remain the same
• All other system infrastructure (frame, backplane, I/O chassis, etc.) is preserved
• Optional PCI-X I/O slots can also be used
[Diagram: cell board with six IPF slots, IPF Cell Controller V2, processor-dependent hardware, and memory, connected to the crossbar and the PCI I/O controller on the backplane]

12 The role of the sx1000 in Superdome
• The heart of all HP cellular systems, linking
  – CPUs
  – Memory
  – I/O
  – Crossbar interconnect to other cell boards
• One chipset per cell board
  – 4 CPUs per cell board
  – sx1000 supports 8 CPUs per cell board
  – Enables 128-way Superdome systems

13 HP Delivers Dual-Core Before Intel
• PA-RISC PA-8800
  – 2 cores, one chip
  – Each core has its own L1 cache
  – 32 MB shared L2 cache
  – Uses high-bandwidth Itanium system bus
  – Uses same socket and HP chipsets as Itanium 2
• Itanium: HP invents a double-density Madison
  – Hondo project, now known as mx2
  – Two Madison CPUs and HP technology
  – Result: 2 Itanium processors in a single socket
• Count sockets, not CPUs
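
The "count sockets, not CPUs" arithmetic is easy to check. The snippet below is my own back-of-the-envelope illustration (cell and socket counts taken from the sx1000 slide above); it shows how the same 64 sockets yield either a 64-way or a 128-way system depending on what is plugged into them.

```python
# Back-of-the-envelope Superdome sizing: the socket count is fixed by the
# cell design, but the processor count depends on what fills each socket.
CELLS = 16            # maximum cells in a Superdome
SOCKETS_PER_CELL = 4  # CPU sockets per cell board

def processor_count(cpus_per_socket):
    """Total processors in a fully populated 16-cell system."""
    return CELLS * SOCKETS_PER_CELL * cpus_per_socket

print(processor_count(1))  # 64-way with single-core Madison parts
print(processor_count(2))  # 128-way with mx2 modules or dual-core PA-8800s
```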

14 How Did HP Upstage Intel?
• Itanium is a joint venture between Intel and HP
• HP co-owns certain Itanium intellectual property
  – HP’s High Performance Systems Lab is truly brilliant
  – Richardson, TX: if it’s been invented, make it better
  – Investigate, innovate, and improve to achieve leadership
• How HP’s 3-I approach turned “Hondo” into the mx2
  – Itanium cartridge packaging was not at maximum density
  – CPU packaged on a carrier board, power at one end
  – The Intel CPU and package could occupy less space
  – HP designed a cache, controller and carrier board
  – HP placed the power supply atop the module and…

15 Field-stripping an mx2 module
• Mixing is allowed between Madison, the HP mx2 module, and Madison 9M
[Photo: exploded mx2 module with the industry-standard Itanium 2 power pod (DC-to-DC power conversion), Intel CPU chip inside its package, Intel CPU “carrier” board, HP CPU “carrier” board, HP external cache and controller chip, and the HP power solution (goes on top rather than on the end)]

16 HP Integrity Cellular System Progress
[Roadmap chart, 2002 to late 2005: scalable (cell-based) servers (rx7620, rx8620, and Superdome) scaling from 4-way up to 128-way as CPUs advance from Itanium 2 McKinley 1 GHz (2002) to Madison 1.5 GHz (2003), Madison 9M 1.6 GHz (2004), and dual-core Montecito (late 2005)]
Focus is on cellular systems; CPU count reflects mx2 CPUs

17 Superdome Performance Projections
[Chart: estimated relative system OLTP performance, 2003 through 2006; Superdome performance modeled in the example]
Integrity enhancements:
• CPUs follow Intel roadmaps
• Dual-CPU motherboard doubles IPF count in ’04
• Mixed Madison CPUs supported
• In-cab interconnect upgrades to sustain scalability
• In-cab upgrades from PA-RISC

18 How Superdome will maintain its leadership in the next 3–4 years
• Double overall system performance
• Increase system availability
• Eliminate single points of failure
• Increase reliability
• Add resiliency features
• Enhance system scalability
• More memory
• Improved performance scaling
• Improve system manageability

19 IPF and PA-RISC processor roadmap
[Roadmap chart, 2001–2006]
• PA-RISC family: PA-8700 750 MHz (2001), PA-8700+ 875 MHz (2002), PA-8800 dual core (2004), PA-8900 dual core (2005)
• Itanium family: Itanium 800 MHz (2001), Itanium 2 1.0 GHz (2002), Madison 1.5 GHz (2003), Madison “9M” (2004), Montecito dual core (2005), Montvale dual core (2006)
• Itanium family (HP exclusive): HP mx2 dual-CPU module (2004)
This roadmap is subject to change without notice. Dual core means two CPU cores per silicon chip; dual CPU means two packaged CPUs per processor module.

20 Future “Arches” chipset vs existing sx1000
[Block diagrams of the two cell boards]
• sx1000 cell board: four PA-8800 or Itanium® 2 CPUs, cell controller, memory, crossbar switch connections, and a system bus adapter feeding PCI-X or PCI slots through B X host bridges (B X = PCI-X host bridge)
• “Arches” cell board: four PA-8900 or Itanium® 2 CPUs, cell controller, memory, and crossbar switch connections, with CPU bus, memory, and crossbar bandwidth increases; a system bus adapter feeds PCI-X 2.0 plus PCI-X or PCI slots through B X2 host bridges (B X2 = PCI-X 2.0 host bridge) for an I/O bandwidth increase and 2x slot bandwidth

21 “Arches” system summary
Investment protection
• Field upgradeable into existing customer systems, no chassis swap
• Fits within the available power/thermal windows of existing Superdome systems
Performance & scalability
• More powerful processors
• Bandwidth increases in: CPU bus, memory, I/O, I/O slot, crossbar
• Memory capacity increase
• Latency reductions
Increased single-system HA
• Double DRAM sparing (tolerate failure of 2 DRAMs on a DIMM)
• Multi-path crossbar topology
• Link-level retry with ‘self-healing’ on crossbar and I/O links (see the sketch below)
• Greater fault protection within the CPU (cache repair-on-the-fly)
• Redundant DC/DC converters and system clocks
• Less than 15 minutes for one person to remove and replace a FRU, after the failed FRU is identified and before the boot sequence begins
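
To illustrate what "link level retry with self-healing" means in principle, here is a toy software sketch of the general technique (my own illustration, not HP's crossbar logic): the receiver's integrity check catches a corrupted transfer and the packet is simply resent, so a transient link error is absorbed instead of surfacing as a system fault.

```python
import random
import zlib

# Toy model of link-level retry: a corrupted transfer is caught by the
# receiver's CRC check and resent from the sender's retry buffer, so a
# transient link error never becomes a fatal machine check.

def transmit(payload, error_rate):
    """Deliver payload, occasionally flipping a bit to model a transient fault."""
    data = bytearray(payload)
    if data and random.random() < error_rate:
        data[0] ^= 0x01
    return bytes(data)

def send_with_retry(payload, error_rate=0.3, max_tries=8):
    """Send one packet, retrying until the receiver's CRC check passes.

    Returns the number of attempts used; raises if the link never recovers.
    """
    expected_crc = zlib.crc32(payload)
    for attempt in range(1, max_tries + 1):
        received = transmit(payload, error_rate)
        if zlib.crc32(received) == expected_crc:  # receiver-side integrity check
            return attempt
    raise RuntimeError("link did not recover within the retry limit")

print("delivered after", send_with_retry(b"cache line 0x40"), "attempt(s)")
```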

22 Three generations of Superdome… same frame
• Yosemite (4Q 2000): PA-RISC only; 64 CPUs, 256 GB memory; nPars; PCI OL*; HP-UX only
• sx1000 (4Q 2003): IPF and PA-RISC; 128 CPUs, 1 TB memory; PCI-X (133); HP-UX, Windows, Linux and OpenVMS
• Arches (4Q 2005): IPF and PA-RISC; 128 CPUs, 4 TB memory; DDR-II memory, PCI-X (266); increased bandwidth for memory, I/O, and crossbar; enhanced resiliency, SSHA; HP-UX, Windows, Linux and OpenVMS

23 Performance boosts with in-box upgrades
[Chart: estimated Windows and HP-UX performance, topping 2M]

24 Increase single-system high availability
Keep it running
• N+1 features, hot swappable (see the availability sketch below):
  1. Cabinet blowers, I/O fans
  2. Bulk DC power supplies
  3. All cell, backplane, and I/O power converters
  4. Redundant AC input power (optional)
  5. Multiple crossbar switch fabrics
  6. Redundant global clock source
• Error correction:
  1. ECC on CPU cache and FSB; cache repair on the fly
  2. ECC-protected PCI-X 2.0
  3. Link-level retry with “self-healing” crossbar and I/O pathways
  4. ECC on all fabric and memory paths
  5. Double chip-spare memory
Fix it fast
• Diagnostic and fault isolation features:
  1. Test station: ASIC-level scan tools, remotely accessible via LAN
  2. Enhanced predictive support
  3. High Availability Observatory
  4. EMS monitoring system
  5. Dynamic processor resilience (DPR)
  6. Dynamic memory resilience (DMR)
  7. Hot-swap service processor
• Online addition, replacement:
  1. Cell assemblies*
  2. PCI I/O cards
*note: OS version dependent
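
As a rough illustration of why the N+1 approach pays off, the sketch below works the availability arithmetic for a hypothetical blower pool. The per-component availability and the number of blowers required are invented for the example, not HP figures.

```python
from math import comb

# Toy N+1 availability arithmetic with assumed numbers: each blower is
# independently available with probability p, and the cabinet stays cooled
# as long as at least `needed` blowers are running.

def prob_enough(installed, needed, p):
    """P(at least `needed` of `installed` independent components are up)."""
    return sum(
        comb(installed, k) * p**k * (1 - p) ** (installed - k)
        for k in range(needed, installed + 1)
    )

p = 0.99      # assumed per-blower availability (illustrative only)
needed = 4    # hypothetical: four blowers required to cool the cabinet
print(f"exactly N blowers : {prob_enough(needed, needed, p):.6f}")      # ~0.960596
print(f"N+1 blowers       : {prob_enough(needed + 1, needed, p):.6f}")  # ~0.999020
```

With these made-up numbers, one spare blower cuts the chance of a cooling shortfall from roughly 4% to roughly 0.1%, which is the whole argument for N+1 everywhere in the cabinet.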

25 Enhance system scalability
• Dual-core processors double CPU count per “socket”
  – Hondo today, Montecito end of 2005
• Hyper-threading may allow more scheduling options
• 4x increase in memory capacity
• Across-the-board latency improvements reduce and “flatten” the access time profile for large partitions
• OS and application improvements inevitable

26 Improve system manageability
• Integrate the existing rich feature set into a Web-based server management framework (Insight Manager)
• Add the capability to update firmware without shutting down the OS (new firmware activated on next boot)
• Add I/O card slot doorbells and slot power latches
• Enhanced security features:
  – Secure Shell (SSH) management console
  – Authenticated firmware update
  – Trusted Platform Module (TPM) device to store encryption keys

27 Superdome: built for the future, investment protection today
• 1H 2004: 128-way scalability with PA-8800 and mx2 CPUs; 128 CPUs (2 64-CPU partitions); 1 TB RAM; HP-UX; PCI-X
• 2H 2004: Madison 9M; HP-UX, Windows, Linux, OpenVMS
• 2H 2005: Arches chipset with higher bandwidths and lower latencies; 128 CPUs; Montecito and PA-8900; up to 2 TB memory; HP-UX, Windows, Linux, OpenVMS
• 2H 2006: upgrade to Montvale (performance boost); up to 4 TB memory; PCI Express; HP-UX, Windows, Linux, OpenVMS
• 2H 2007: next-generation high-end system; IPF only (no PA); leadership performance, leadership HA, leadership partitioning

28 And Now It’s Time for Something New…
• Alpha, PA-RISC, and MIPS will run out of gas in the 2007–2008 timeframe and become obsolescent also-rans
• The NED already has plans for a future system in 2005
• HP’s High Performance Group must follow suit
• The Alpha Abdication killed Compaq’s next-generation “Avalanche” and “Snowball” high performance efforts
• So the ball is in HP’s court now…
• Reliance on Itanium technology takes processor performance out of the picture
• Hence, HP will differentiate above the CPU level…

29 Itanium, Integrity, and the Future
• 64-bit computing imposes new demands
  – Performance
  – Memory
  – Scalability
  – Reliability
• HP Integrity systems meet these demands
  – Today
  – And even more so in 2008

30 Integrity Matters… Omit Nothing, Improve Everything!
• Keep the system running 24 x 7
  – Redundancy, N+x components
  – Hot-swappable fans, blowers, power supplies, backplanes, etc.
• Emphasize error correction
  – CPU cache, chipkill
  – Parity protection for CPUs and I/O
  – ECC for all fabric and I/O paths
  – Redundant memory and AC power
• If it breaks, fix it fast
  – Diagnostics
  – Fault isolation

31 Focusing Innovation Beyond the CPU
[Chart: R&D investment over time shifting away from PA-RISC, Alpha, MIPS, and IA-32 processor designs toward Itanium and beyond-the-box R&D investment]

32 Future Design Considerations
• Standardize on Itanium
• Utilize standard fabrics and interconnects
• Leverage component scalability to yield truly balanced systems
• Incorporate the latest standard technologies to cut costs
• Ensure customer investment protection
• Deliver value-added differentiation
[Diagram: CPUs, memory controller, host I/O controller, and I/O chassis linked by the system interconnect]

33 Gating Factors in Future System Design
• Key customer concerns
  – Economics
  – Price and price/performance
  – Low TCO
  – Investment protection
  – Interoperability
  – Reliability
  – Scalability to handle any workload
  – Manifestation of AE strategy
  – Grid and AE enabled
  – Scale-out vs scale-up issues

34 Gating factors in future server design
• Management issues
• HP must provide an attractive alternative to competitive management products and plans
  – IBM’s eLiza and e-Services offerings
  – Sun’s nascent N1 strategy
• HP’s alternatives include
  – OpenView and SIM
  – Virtualization technology
  – AE and UDC management tools
  – And more to come, from HP Labs and acquisitions

35 Gating factors in future server design
• Rely on standards and existing parts where possible
• Heterogeneous OS support is critical to server consolidation
• Essential to AE plans
• Most customers run multiple OSes and platforms
• HP must accommodate
  – UNIX
  – Linux
  – OpenVMS
  – NSK
  – Windows

36 HP’s future server game plan
• Note: the following information is based on SKHPC’s analysis of current products, announced roadmaps, and public data. As such, this material is conjectural. For now…
• HP’s goal: design and develop The Mother of All Enterprise Servers
• SKHPC assessment of product development and attributes
  – Produced by a ~50-person IPF development section within the High Performance Systems Lab (HPSL) in the Hardware Systems Technology Division (HSTD); distributed across SHR, MRO, Colorado, and Richardson
  – Two teams are addressing specific issues

37 HP’s future server game plan, cont’d
• The two teams
  – An advanced development team focused on roadmap linkage, definition, architecture, topology, and performance work, as well as base technology assessment and proofs of concept
  – An implementation team to deliver an SPU (system processing unit) consistent with current HSTD development practices, including boards, signal integrity, mechanical, power, thermal, utilities, etc. The team will leverage existing HP ASICs, CPUs, diagnostics, I/O options, designs, and components wherever possible.

38 HP Superdome successor, 2007–2008
• SKHPC’s vision
  – CPU counts beyond 128 processors, allowing HP to deliver SC-like systems in a box, partitioned all the way down to individual processes using the continuum of partitioning technologies
  – An order-of-magnitude improvement in availability, with proactive diagnosis tools that anticipate possible failures and automatically correct them through reconfiguration or early replacement of failing components
  – Incredibly large cache and memory sizes allow you to run almost any app in memory at the fastest possible speed
  – Easily clustered, interoperable, and a key HP AE component (built on inexpensive industry-standard components and able to run all HP OSes and the full set of industry applications)

39 Aims and goals of “Son of Superdome”
• Strategy: innovate and expand existing technology
• Result: a next-generation system whose design and appearance are Superdome-centric with Alpha attributes
• Likely physical characteristics
  – Appearance: Superdome-like but much larger
  – Design: Superdome-centric with Alpha attributes
• Results and goals
  – A Superdome successor with substantially higher performance and scalability than the incumbent
  – Deliver the first true open-systems alternative to proprietary mainframe and parallel cluster technology, and a new ability to open a large can of industry-standard whoop-ass on IBM

40 More aims and goals…
• Power and fault-resilience and more
  – Power up once, no reboot, non-NUMA architecture
  – Passive backplane, no performance compromise
  – Online memory, CPU, I/O, and fabric replacement
  – Fault-tolerant fabric, smaller cell boards
  – No multipartition single point of failure
  – Low latency, multi-OS support
  – Dynamic workload balancing
  – Change hard partitions on the fly
  – Grow/shrink, add/delete partitions, firewall security
  – Reconfigure only by trusted firmware
  – No denial-of-service attacks… and more

41 Managing the server of the future
• How do you manage what does not yet exist?
  – Most of the tools and building blocks already exist
  – HP can run a UDC today
  – The Adaptive Enterprise suite is being fleshed out
  – OpenView is being extended
  – HP’s partition magicians have a big bag of tricks today
• Probable management environment
  – Take the OpenView foundation available now
  – Add the partitioning, COD, and AE tools available in 2007–08
  – The result: the next-generation system “control panel”
• Thanks, and have fun visualizing the future!

42 The magic that makes it happen
• Virtualization… the enabler for AE and UDC
• HP virtualization continuum
• AE
• UDC… now downsized for BladeServers

43 A Few Words About AE and UDC
• HP’s Adaptive Enterprise is a key differentiator
  – AE is an adaptable, standards-based IT framework that lets users automatically and dynamically allocate and reallocate their IT infrastructure in response to changing patterns of utilization and changing business needs, while eliminating the need to “overprovision” by buying standby equipment for “spikes”. It’s a work in progress.
• HP’s Utility Data Center went beyond AE and was to deliver IT resources on demand
• HP’s UDC is reality at HP today (3 locations), and is far more comprehensive than rival “IT as a Utility” efforts from Sun (N1) and IBM’s e-Services offerings. It’s also far too complex.
• HP has downsized UDC to BladeServers to meet user needs
• All offerings rely on virtualization and advanced management tools
• Last week: virtualization agreement with VERITAS

44 A Quick Look at Virtualization
Virtualization: an approach to IT that pools and shares resources so utilization is optimized and supply automatically meets demand
[Chart: virtualization innovation today and tomorrow; business value and strategic importance rise from element virtualization (servers, storage, network, software) to integrated virtualization (Adaptive Enterprise) to the complete IT utility (Utility Data Center)]

45 HP Virtualization Solutions
Element virtualization
• New: Pay-per-use (PPU) now available for imaging and printing and HP Integrity Superdome
• New: the family of HP Serviceguard high-availability clustering software adds several new offerings
• Enhanced: HP-UX Workload Manager (WLM), the intelligent policy engine for the Virtual Server Environment
Integrated virtualization
• New: Blade PC powers the HP Consolidated Client Infrastructure
• Enhanced: BEA and Oracle integrated with the HP Virtual Server Environment
• New: Serviceguard extension virtualizes resources across data centers up to 100 km apart with a single Oracle RAC database
• New: iCOD for blades activates servers as needed by the customer
Complete IT utility (now blade-based)
• New: Tiered Messaging on Demand, part of the HP On Demand Solutions portfolio
• New: Automated Service Usage, part of HP Integrated Service Management

46 What WAS HP’s UDC? Too Complex!
HP Utility Data Center:
• Virtualized pools of resources for instant ignition
• Failover protection and data replication to protect servers, storage, and network
• Wire-once fabric
• Utility controller software for service definition and creation
Benefits:
• New applications and systems can be ignited within minutes
• Server, storage, and network utilization approaches 100%
• Resources are ‘virtualized’ and optimize themselves to meet your service level objectives
• Administrative and operational overhead is minimized
[Diagram: utility controller software driving server, storage, and network virtualization across server, storage, NAS, load balancer, firewall, and switching pools connected to the internet]

47 Original HP UDC Components
• Virtual server pools
  – Heterogeneous server environments
  – HP servers optimized for UDC
  – Protect your current investments
• Virtual network pools
  – Standards-based VLANs
  – Flexible and robust network infrastructure
• Virtual storage pools
  – HP XP and EVA storage offer flexible ‘network-based’ virtualization
  – Integration with OpenView for storage management
  – EMC Symmetrix
• Utility controller software
  – Manages service templates
  – Integrates with HP software: resource, workload and failure management
[Diagram: a utility fabric linking processing elements (HP-UX, OpenVMS, Windows, Linux), networking elements (Cisco, ProCurve), and storage elements (HP XP, EVA, EMC Symmetrix), with OpenView, HP utility controller software, and HP consulting and integration services]

48 UDC Plan: Improve Asset Utilization
Adaptive management enabling virtual provisioning of application environments to optimize assets:
• Wire it up just once
  – All network, storage, and server components were to be wired once
• Virtualize the asset pool
  – All components could be allocated and reallocated on the fly (see the sketch below)
• Easy to reconfigure
  – A simple user interface allowed admins to architect and activate new systems using available resources
• Looked good on PPT slides, but little customer buy-in
[Diagram: utility controller with server, storage, and network virtualization over server, storage, NAS, load balancer, firewall, and switching pools connected to the internet]
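
A minimal sketch of the pooled-allocation idea behind UDC (my own illustration, not HP's utility controller software): servers sit in one shared pool and are handed to, and reclaimed from, application environments on demand, which is what lets utilization climb without overprovisioning. The server names, environment names, and counts below are invented for the example.

```python
# A toy "utility controller": servers live in one shared pool, are assigned to
# application environments on demand, and return to the pool afterwards.
# Server names, environment names, and counts are invented for illustration.

class ServerPool:
    def __init__(self, servers):
        self.free = list(servers)
        self.assigned = {}  # environment name -> list of servers

    def allocate(self, environment, count):
        """Hand `count` free servers to an environment."""
        if count > len(self.free):
            raise RuntimeError("pool exhausted; time for capacity planning")
        handed_out = [self.free.pop() for _ in range(count)]
        self.assigned.setdefault(environment, []).extend(handed_out)
        return handed_out

    def release(self, environment):
        """Return all of an environment's servers to the free pool."""
        self.free.extend(self.assigned.pop(environment, []))

pool = ServerPool([f"blade{i:02d}" for i in range(8)])
pool.allocate("web-frontend", 3)       # steady-state workload
pool.allocate("quarter-end-batch", 4)  # temporary spike, no overprovisioning
pool.release("quarter-end-batch")      # capacity flows back to the pool
print(len(pool.free), "servers free")  # 5
```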

49 Thank you very much for your time and attention!
• Many of the topics I have mentioned briefly have been covered in detail by HP in recent announcements.
• If you have specific questions, I can be reached by email at terry@shannonknowshpc.com.
