HP 3PAR StoreServ 20000 Storage The Enterprise Flash Array for Virtual and Cloud Data Centers Accelerate Business. Dominate Competitors.


1 HP 3PAR StoreServ 20000 Storage The Enterprise Flash Array for Virtual and Cloud Data Centers. Accelerate Business. Dominate Competitors.

2 Confidential Disclosure Agreement
Center for Sales & Marketing Excellence Training Express Webinars
The information contained in this presentation is proprietary to Hewlett-Packard Company and is offered in confidence, subject to the terms and conditions of a binding Confidential Disclosure Agreement (CDA). HP requires customers and partners to have signed a CDA in order to view this training. The information contained in this training is HP Confidential, but will become HP Restricted after June 2, 2015. This information may only be shared verbally with HP-external customers and/or partners under NDA, and only with HP Storage director-level approval. Do not remove any classification labels, warnings, or disclaimers on any slide, do not modify this presentation to change the classification level, and do not remove this slide from the presentation. HP does not warrant or represent that it will introduce any product to which the information relates. The information contained herein is subject to change without notice, and HP makes no warranties regarding the accuracy of this information. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services; nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Strict adherence to the HP standards of business conduct regarding this classification level is critical. This presentation is NOT to be used as a 'leave behind' and therefore should not be distributed or released in either hard-copy or electronic format.

3 Today’s storage decisions must consider 2020 and beyond!
[Chart: growth of the digital universe from 0.1ZB in 2005 to a projected 40ZB by 2020 (IDC)]
Data is constantly growing, but the number of people available to handle it and the budget to handle it have not grown with it. Meanwhile, the environment in which customers operate is changing fast: data center space is at a premium, and there is a big push to be more eco-friendly and reduce power consumption. New offerings in the industry need to address this.
Sources: (1) IDC Digital Universe Report; (2) IDC Worldwide Internet of Things (IoT) Forecast, October; (3) IDC Directions 2013: Why the Datacenter of the Future Will Leverage a Converged Infrastructure, March 2013, Matt Eastwood; (4) IDC Predictions 2012: Competing for 2020, December 2011, Frank Gens.

4 Traditional Tier-1 Storage can’t keep up!
[Slide graphics: "Complex" traditional storage vs. "Flash Supremacy: performance without compromise, 500% more transactions per second, 90% fewer disks." By 2020(1), will you have 10X the budget, 10X the people, 10X the data center space and power?]
Handling this kind of data explosion can be done with both a traditional storage architecture and a flash-first architecture. Traditional architectures, however, are not equipped or architected to handle this kind of capacity growth at high performance levels. Using these complex architectures would mean many more people to handle and manage the systems, a lot more data center space, a far less eco-friendly setup and, most importantly, a lot more money to be invested. Do customers have all of this as they move toward leaner, more eco-friendly operations? The resounding answer is no. Scaling to high performance while remaining dense is a challenge that only flash can meet. With 3PAR flash you can deliver five times more transactions per second with 90% fewer disks. This, we believe, is the secret sauce for addressing the challenge on the previous slide.
© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. HP Confidential

5 Traditional Storage: Living on borrowed time… for too long
17 September 2018
Consider these facts: a high-end HDD/tiered solution vs. the HP all-flash solution gives 10X more speed, less space, and less power and cooling.
What is the proof point for stating that flash can solve the problem we stated? Consider this: on the left side of the screen you see a total of 144 drives that deliver a given level of performance in a tiered setup on a traditional high-end platform. The same performance and capacity can today be delivered with only six SSDs. These six drives not only give greater back-end performance than all 144 drives put together, they pack in more capacity and place a far smaller burden on data center power and cooling. That, in effect, is what you get with flash.

6 The traditional way of thinking must give way!
Traditional Tier-1 vs. 3PAR Modern Tier-1:
- Inefficient: dedicated drives, less than 30-40% utilization. 3PAR answer: fine-grained virtualization delivers high capacity utilization and high performance levels.
- Flash-limited: response times around 5ms, lack of flash optimization, inconsistent performance when reading or writing from disk. 3PAR answer: ASIC-driven Thin Deduplication gives optimal efficiency and performance under multi-tenant workloads and cuts capacity requirements by 75%*; Adaptive Sparing gives more capacity and endurance on flash.
- Rigid: bolt-on data services and "chubby" storage with hardware-defined QoS. 3PAR answer: Priority Optimization assures QoS for all applications and tenants.
50% thin savings is as per the "HP 3PAR Get Thin Guarantee" program.
Traditional systems require dedicated drives for all storage purposes, which leads to a loss of efficiency. In the 3PAR world, fine-grained virtualization and system-wide striping achieve high capacity utilization and performance at the same time.
Traditional systems lack flash optimizations: fixed mechanisms for handling cache make the controller a bottleneck, and these systems typically mask the issue with enormous cache sizes instead of an optimized approach such as 3PAR's adaptive reads/writes and autonomic cache offload.
Due to a lack of special-purpose compute elements, the CPU is burdened with all functions: control functions, data functions, and other data services. This means such systems work well only under certain specific workloads. 3PAR, on the other hand, uses a custom ASIC to offload all data functions, including Thin capabilities, from the CPU, resulting in optimal efficiency as well as performance under multi-tenant workloads.
Traditional systems have no way of offering denser, less expensive flash. Because they work in discrete silos within the architecture there are no opportunities to do more, so they are stuck with what vendors give them. 3PAR instead has technologies like Adaptive Sparing that maximize the capacity offered to the customer while at the same time improving endurance.
The operating systems these platforms use are decades old; although vendors claim to improve them, at heart they are simply legacy. They are inflexible, which explains why they cannot offer the more versatile capabilities the newer storage industry needs, such as granular QoS or compaction.
* 2:1 thin savings and 2:1 dedupe savings, including RAID and sparing overheads (25%)

7 Considerations for Next Gen Storage
Choosing the right architecture:
- High performance: IOPS/bandwidth, low latency
- Continued cost decline: raw $/GB, usable $/GB
- Tier-1: availability, integrity, QoS
- Higher density: TB/U, PB/rack
- Ease of use and management: more in less time
- Converge and integrate: block, file, object, backup…
And finally: is the architecture future-ready, prepared to take on changes in technology such as NVMe, RDMA, etc.?

8 Next Gen 3PAR StoreServ Portfolio
ONE Operating System, ONE Interface, ONE Feature Set: block, file, and object access.
- When performance matters: 7450c / 20850 (new)
- When scale matters: 7440c / 20800 (new)
- When value matters: 7200c / 7400c

9 HP 3PAR StoreServ 20000 Hardware Overview

10 HP 3PAR StoreServ 20000 Storage
Enterprise flash for the next-gen ITaaS-enabled datacenter, powered by the HP 3PAR Gen5 Thin Express ASIC.
- Meet growing enterprise scalability requirements: 6PB raw, 15PB usable capacity, 1920 drives, 160 host ports, $1.50 per GB usable.
- Respond instantly to unpredictable business demands: flash-optimized architecture with over 3.2M IOPS at <1ms latency*, 75GB/s bandwidth, and a 12Gb SAS back end; more than 3x the scale of traditional high-end arrays.
- Consolidate with confidence: enterprise data protection including new Async Streaming replication and Persistent Checksum; engineered for always-on availability.
- Futureproof with true convergence and data mobility: block, file, and object access; one-click data mobility across all 3PAR StoreServ systems; converged management via SSMC, RMC, OneView, and OpenStack; federated systems.
*Node local, 8K, RAID 5, 100% random reads.
15PB usable capacity assumes 1024 x 3.84TB cMLC SSDs at a 4:1 compaction ratio plus 345 x 6TB 7.2K NL SAS drives at 2:1 thin savings. A 4-node system with 200TB of raw capacity, made up of 52 x 3.84TB cMLC SSDs with the OS only, comes in at <$1.50/GB usable at a 4:1 compaction ratio. Compare 4PB of usable capacity on a VMAX 400K to 15PB of usable capacity on the 20800.
20850: All-Flash. 20800: Converged Flash.

11 New: 3.84TB SSD – A game changer enabled by HP 3PAR StoreServ
[Chart: usable density across SSD generations, Dec '12 to Jun '15: 920GB SSD at 17, 1.92TB SSD at 35, 3.84TB SSD at 69 usable TB per enclosure, rising to ~140TB/U with 3PAR deduplication]
3.84TB SSD: 3X better density, 20% more capacity per drive with Adaptive Sparing, price parity with 10K RPM HDDs ($1.50/GB), and full warranty. 24 x 3.84TB cMLC drives at a 4:1 compaction ratio = 280TB in 2U, or ~140TB/U.

12 Hyper-Scale. Hyper-Dense.
1 drive enclosure: 280TB usable from 24 SFF drives. 1 rack: 5.5PB usable from 480 SFF drives. 1 system: 15PB usable from 1920 SFF drives.
Assumes 24 x 3.84TB cMLC SSDs per enclosure at a 4:1 compaction ratio with 25% RAID + sparing overhead; an expansion rack holds 20 x 24-drive SFF enclosures (280TB x 20).
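The capacity arithmetic behind these figures can be sketched as follows; a minimal illustration assuming the slide's 4:1 compaction ratio and 25% RAID + sparing overhead (the function name and defaults are ours, not an HP tool):

```python
def usable_tb(drive_count, drive_tb, compaction=4.0, overhead=0.25):
    """Usable TB: raw capacity times the compaction ratio, minus
    RAID + sparing overhead (25% per the slide's assumptions)."""
    raw_tb = drive_count * drive_tb
    return raw_tb * compaction * (1.0 - overhead)

# One 2U SFF enclosure: 24 x 3.84TB cMLC SSDs -> ~276TB (the slide rounds to 280TB)
enclosure_tb = usable_tb(24, 3.84)

# One expansion rack: 20 enclosures of 24 drives -> ~5.5PB
rack_tb = usable_tb(20 * 24, 3.84)
```

The same function reproduces the rack figure: 480 drives yield about 5,530TB, which the slide rounds to 5.5PB.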

13 HP 3PAR StoreServ 20000 Storage
HP Enterprise Business Technology and Infrastructure Fundamentals — Day 3
Technical specifications (20800 / 20850):
- Controller nodes: 2, 4, 6, or 8
- CPU per node: 2 x 2.5GHz 6-core (20800) / 2 x 2.5GHz 8-core (20850)
- ASIC per node: 2 x Gen5
- 16Gb/s FC host ports: 0-160
- Optional 10Gb/s iSCSI/FCoE host ports: 0-80
- Optional 10Gb/s Ethernet ports: 0-48
- Built-in remote copy ports: 2-8
- Total cache: 0.45TB-33.8TB (20800) / 0.9TB-3.6TB (20850)
- Total on-board cache: 0.45TB-1.8TB (20800) / 0.9TB-3.6TB (20850)
- Total Flash Cache: 0TB-32TB (20800)
- Disk drives: 8-1920 (20800) / 8-1024 (20850)
- Drive types: SFF 10K SAS, 15K SAS; LFF 7.2K NL SAS; SFF MLC and cMLC SSD; FIPS-encrypted drives (20800) / MLC SSD, cMLC SSD, FIPS-encrypted drives (20850)
- Maximum raw capacity: 6PiB (20800) / 4PiB (20850)
- Maximum usable capacity: 15PiB** (20800) / 12PiB** (20850)
- Maximum usable file capacity: 256TB
- Throughput*: 75GB/s
- Performance (random 100% read)***: 2.5 million IOPS (20800) / 3.2 million IOPS (20850)
- Per-node cache: 96GB control + 128GB data (20800); 192GB control + 256GB data (20850)
*Throughput figures represent 256K RAID 1 sequential reads, node-specific volume layout. **Using 3PAR data reduction technologies. ***8K RAID 5 100% random reads, node-specific volume layout.
HP Restricted For HP and Channel Partner internal use

14 HP 3PAR StoreServ 20000 Software Object Scalability
Specifications (20800 Converged Flash / 20850 All Flash):
- Base VVs: 64,000
- Total VVs: 128,000
- Max VLUNs: 256,000
- FC 16Gb/s port initiators: 256
- iSCSI/FCoE port initiators: at parity with FC
- FC system initiators: 8,192
- iSCSI/FCoE system initiators: at parity with FC
- Remote Copy max VVs (sync): 800 with 2 nodes, 2,400 with 4+ nodes
- Remote Copy max VVs (async periodic): 2,400 with 2 nodes, 6,000 with 4 nodes
- Peer Persistence max VVs: 600
Port initiator limits are pre-port-failover; when port failover is initiated, a port shall support double the number of initiators. The higher volume limits are driven by VVol requirements. iSCSI and FCoE are at parity with FC.

15 HP 3PAR StoreServ 20000 Storage – Key Performance Metrics
Platform, workload, and performance metric:
- 20850, 8K 100% random read, RAID 1, CPVV, node local: 3.2M IOPS at <1ms
- 20850, 8K 100% random read, RAID 5, CPVV, node local: >3M IOPS at <1ms
- 20850, 8K 100% random write, RAID 1, CPVV, node local: 1M IOPS at <1ms
- 20850, 256K 100% sequential read, RAID 1, CPVV, node local: 75GB/s
- 20800, 8K 100% random read: 2.5M IOPS at <1ms
Configuration: 8 nodes per array, 48 MLC SSDs per node pair, 3 SAS HBAs and 3 FC HBAs per node, with all drive enclosures and hosts balanced across all ports.
These are preliminary numbers; final numbers will come out of NINJAstars on June 1. We expect the NINJAstars numbers to be very close to what is described here and in the following slides.

16 What’s Changed between 10000 and 20000?
HP 3PAR StoreServ 10800 vs. HP 3PAR StoreServ 20800, and what you get:
- Bus architecture: PCIe Gen2 vs. PCIe Gen3 (next-gen CPUs)
- CPUs: 2 x quad-core per node vs. 2 x hexa-core per node (50% more compute)
- ASIC: 2 x Gen4 vs. 2 x Gen5 per node
- Data services: block only vs. file/block/object (converged)
- Control cache: 32GB per node vs. 96GB per node (3x)
- Data cache: 64GB per node vs. 128GB per node (2x)
- I/O slots: 9 vs. 7 (lower cost)
- FC host ports: … Gb/s or 96 16Gb/s vs. … 16Gb/s (66% more)
- iSCSI host ports: … vs. … (2.5x)
- Rack options: 2m HP 3PAR rack vs. HP i-Series G3 42U standard rack (standardized option)
- 3rd-party rack mount: no vs. yes (flexible)
- Drives: 16-1920 (in magazines of 4) vs. 8-1920 (lower minimum)
- Drive carrier: magazine of 4 drives vs. single drive (lower overhead per drive; better serviceability)
- Base rack drives: zero vs. 144 (higher base-rack density)
- Expansion rack: 320 drives vs. 480 drives (50% more density)
- Max capacity: 3.2PB vs. 6PB (88% more)
- Bandwidth: 13GB/s vs. 75GB/s (~6x; ideal for data-intensive apps like SAP HANA)
- Performance: 450,212 SPC-1 IOPS vs. 2.5M 100% random-read IOPS

17 HP 3PAR StoreServ 20000 Hardware Building Blocks
Building blocks: service processor, adapters, drive enclosures, base enclosure, drives, and rack.
- Adapters: 4-port 12Gb/s SAS HBA, 4-port 16Gb/s FC HBA, 2-port 10Gb/s iSCSI/FCoE CNA, 2-port 10Gb/s NIC card
- Drive enclosures: 2U 24-drive SFF (2.5in) SAS and 2U 12-LFF (3.5in) SAS
- Drives: 2.5in SFF and 3.5in LFF SAS HDDs/SSDs
- Racks: HP Intelligent Series rack, or a customer-supplied rack (4-post, square hole, EIA-standard 19in rack from HP or other suppliers)
- Controller: HP 3PAR StoreServ controller chassis with node backplane (high-speed node interconnect); starts with 2 controllers and can scale up to 8
- Service processor: physical 1U SP with dual power supplies
You start with the node chassis, the node enclosure that takes a minimum of 2 nodes and can go all the way up to 8. The illustration shows an 8-node enclosure with 2 nodes in it. The other building block is the 8-way backplane, which provides the passive, low-latency mesh-active cluster interconnect between the participating nodes in the storage array. Once the nodes are in place, the next step is to add adapters. Back-end connectivity is done through 4-port 12Gb/s SAS adapters that connect to the drive enclosures. Host connectivity options include the 4-port 16Gb/s FC adapter, the 2-port 10Gb CNA for iSCSI/FCoE, and the 10Gb Ethernet card for File Persona. Storage configuration begins with the drive enclosures: an SFF enclosure and an LFF enclosure, both 2U, housing drives of the respective form factors in single-drive carriers. The array can be factory-installed in HP G3 racks or field-installed into customer-supplied racks as long as they fulfil certain standard requirements. A physical service processor is also necessary on all arrays. It is a refreshed product compared with the one on existing 3PAR arrays: a 1U DL120 Gen9 server with redundant power supplies.

18 HP 3PAR StoreServ 20000 System Architecture
Same fundamentals as prior 3PAR StoreServ high-end arrays: 2 ASICs per node, 2 CPUs per node, a mesh-active cluster over a completely passive backplane, and a maximum rack separation of 100m.
20000 full-mesh backplane: low latency and high bandwidth, with 8-node backplane bandwidth of 224GB/s (4.0GB/s ASIC to ASIC). Host ports, cache, and disk ports are distributed across the mesh.
Maximum configuration (20850 with 8 nodes): 8 controller nodes, 16 Gen5 ASICs (2 per node), 16 Intel octa-core CPUs, 1536GB control cache, and 2048GB data cache.

19 HP 3PAR StoreServ 20000 Controller Node
Powerful controller node: 2 x Intel Ivy Bridge-EN 2-socket 8/6-core processors; 2 x Gen5 ASICs; 448GB maximum cache; 16Gb/s FC, 10Gb/s FCoE/iSCSI, or 10Gb/s NIC adapters; internal dual SSD boot drives (256GB and 512GB); on-board 10GbE Remote Copy port; 1 service port; 1 management port.
Node layout: control cache (96 or 192GB) attached to the two Intel multi-core processors (SCSI command path); data cache (128 or 256GB) attached to the two 3PAR Gen5 ASICs (data path); PCIe switches feed the I/O slots.
The controller nodes come with nearly 3 times the cache of the earlier array. The 20800 node comes with 6-core Intel Ivy Bridge processors and the 20850 node with 8-core ones. Control functions are handled by the processors, and each node has 2 x Gen5 ASICs for executing all data functions. SAS back-end ports or host ports can be added using any of the 7 PCIe slots available on the node. The nodes also have SSD boot drives; the key change is that there are now 2 boot drives instead of one, for redundancy (256GB and 512GB SSDs). In addition, each node features a 10Gb RCIP port, 1 service port for console access, and one management port, which is the management interface for the CLI and GUI (MC/SSMC).

20 Backplane Enhancement for Node Rescue
Ethernet interfaces connected through the backplane between nodes help perform node rescue from another node without the need for the SP, either automatically or manually (CLI).
Benefits: reliable, with minimal service intervention; does not require the use of the customer's network; simpler setup and lower node downtime.
A key change to the backplane that greatly improves serviceability must be mentioned here: Ethernet interfaces have been added so the nodes can connect through the backplane network. Today, node rescue requires the Service Processor to hold the OS bits and to be available on the customer network so that the failed node can be rescued from it. This enhancement removes that need: a failed node can now be rescued from the OS instance on another node, automatically or manually as required. The benefits are very real. Services can perform the node rescue without bringing another device into the setup, making the operation more reliable. The SP does not have to come up on the customer's network, which customers do not always like. The setup is simpler because the interfaces are already in place, and since rescue can run automatically, downtime is minimal.

21 HP 3PAR Gen5 Thin Express ASIC
HP 3PAR Gen5 Thin Express ASIC:
- Thin and SHA256-based dedupe built in: zero and duplicate block detection for block and file
- Fast RAID 5/6 and rapid RAID rebuild: integrated XOR engine
- Tightly-coupled cluster: 2x inter-ASIC bandwidth; low-latency interconnect achieves write latency <200us
- Mixed workload: independent metadata and data processing
- Data integrity: T10 PI persistent checksums
The ASIC is in charge of performing all data operations in a 3PAR storage array, including but not limited to RAID calculations, RAID rebuild operations, zero-detect, and Thin Deduplication. Additionally, the ASIC is the chief data mover in the mesh-active cluster, moving data between the nodes to fulfil IO requests. By offloading all data functions from the CPU, the ASIC also helps provide consistent mixed-workload performance and is a key element in offering robust multi-tenancy on 3PAR arrays. The ASIC also adds protection information to the data it handles, helping the data maintain its integrity throughout its presence within the storage array and outside it as well.

22 HP 3PAR Gen5 Thin Express ASIC: Enhancements
Gen5 Thin Express ASIC enhancements:
- Faster communication across the cluster: 4.0GB/s ASIC-to-ASIC bandwidth in each direction; 224GB/s total backplane bandwidth
- Higher memory bandwidth: 2 memory channels per ASIC (2 ASICs per node); 42GB/s peak data cache bandwidth per node, 2.5x the Gen4 ASIC
- Modular, balanced scalability: node-to-node links, PCIe links, and memory channels scale with the number of Gen5 ASICs on a node; PCIe Gen2 links at 5Gbps
- More parallel XOR RAID calculations: 14 DMA engines per node (7 per ASIC), 16% more than the Gen4 ASIC
ASIC-enabled system features: dedupe, with a SHA256 engine in each DMA engine; end-to-end DIF, with DMA engines able to generate, strip, and check T10-DIF on 8B-aligned transactions.
The inter-ASIC bandwidth has doubled to 4GB/s, which raises the overall backplane bandwidth to 224GB/s, up from 112GB/s on the earlier arrays. The data cache bandwidth has increased 2.5x to 42GB/s, allowing the ASIC to move larger amounts of data in shorter periods of time, contributing directly to the back-end bandwidth of the array. The inter-node connectivity links have been upgraded from PCIe Gen1 to Gen2. There is one additional DMA engine per ASIC for faster RAID calculations. The new ASIC is capable of inline deduplication based on SHA256 hashing, enabling much more robust deduplication, although support for this hashing is at present not exposed through the OS. The ASIC also extends the T10-DIF protection information to cluster memory in addition to the drives.
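The SHA256-based duplicate-block detection that the ASIC performs in hardware can be illustrated in software. This is only a conceptual sketch of the principle (the block size and function names are illustrative, not 3PAR's implementation):

```python
import hashlib

def dedupe(blocks):
    """Conceptual duplicate-block detection: SHA-256 each incoming block
    and physically store only blocks whose digest has not been seen.
    The Gen5 ASIC does this per DMA engine in hardware; this sketch only
    shows the idea of hash-indexed storage with per-block references."""
    store = {}   # digest -> stored block (unique physical blocks)
    refs = []    # logical view: one digest reference per incoming block
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)
        refs.append(digest)
    return store, refs

# Three 16KiB blocks, two of them identical: only two unique blocks stored.
store, refs = dedupe([b"A" * 16384, b"B" * 16384, b"A" * 16384])
```

Production deduplication engines add verification and garbage collection on top of this; the sketch shows only the hash-and-reference core.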

23 HP 3PAR StoreServ 20000 Controller
Front: node LED indicators, power supplies, battery backup units, and fans. Rear: built-in ports (1 management, 1 10GbE RCIP, 1 service port) and adapters (12Gb/s SAS for drives; 16Gb FC, 10Gb iSCSI/FCoE, and 10Gb NIC for hosts).

24 HP 3PAR StoreServ 20000 Drive Enclosure
High-density drive enclosures. There are two types, both 2U: one with 24 SFF (2.5in) drives and one with 12 LFF (3.5in) drives, each with 12Gb/s SAS ports for node connectivity.
One enclosure can take up to 24 small-form-factor drives while the other packs in 12 large-form-factor drives. You may notice that the SFF drive enclosure in fact has 25 slots; since 3PAR always deals with drives in even numbers, this 25th slot has been closed down and will not be used. When the system ships from the factory the slot is sealed, and the software is also designed not to use more than 24 drives from a drive enclosure. On the rear, each drive enclosure has two fans for cooling, two power units on the right side for redundant power supply, and two IO modules used for connecting to the nodes.

25 HP 3PAR StoreServ 20000 Drive Enclosure
High-density drive enclosures are daisy-chained two levels deep for maximum scalability and density.
Cabling (N = node, DE = drive enclosure, IOM = I/O module, DP = device port):
- N0 DP1 -> DE1 IOM0 DP1
- DE1 IOM0 DP2 -> DE2 IOM0 DP1
- N1 DP1 -> DE2 IOM1 DP1
- DE2 IOM1 DP2 -> DE1 IOM1 DP1
Node-to-enclosure runs use a 2m Cu SAS cable within the base rack, or AOC cables (10m, 25m, 100m) between racks; 0.5m Cu SAS cables link the chained enclosures.
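The chain above can be written out as a small cable map and walked programmatically; a sketch using the slide's N/DE/IOM/DP notation (the dictionary layout is our own illustration, not an HP data format):

```python
# Cable map for the two-deep daisy chain: (endpoint, port) -> (endpoint, port).
cabling = {
    ("N0", "DP1"):       ("DE1-IOM0", "DP1"),  # node 0 into enclosure 1
    ("DE1-IOM0", "DP2"): ("DE2-IOM0", "DP1"),  # chain onward to enclosure 2
    ("N1", "DP1"):       ("DE2-IOM1", "DP1"),  # node 1 enters at the far end
    ("DE2-IOM1", "DP2"): ("DE1-IOM1", "DP1"),  # chain back to enclosure 1
}

def reachable(start):
    """Follow the chain from a starting port and collect enclosures reached."""
    seen = []
    hop = cabling.get(start)
    while hop is not None:
        endpoint, _ = hop
        seen.append(endpoint.split("-")[0])   # "DE1-IOM0" -> "DE1"
        hop = cabling.get((endpoint, "DP2"))  # leave via the next device port
    return seen
```

Walking from either node port reaches both enclosures, which is exactly the point of the topology: every enclosure keeps a path through each node of the pair.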

26 Base Enclosure with SFF and LFF Drive Enclosures
You can mix and match SFF and LFF drive enclosures to meet requirements. Available drive types per enclosure:
- SFF Fast Class: 300GB 15K, 600GB 15K, 600GB 10K, 1.2TB 10K, 1.8TB 10K, 600GB 15K FE, 1.2TB 10K FE
- SFF NL: 2TB 7.2K
- SFF SSD: 480GB cMLC, 1.92TB cMLC, 920GB MLC FE, 1.92TB cMLC FE, 3.84TB cMLC
- LFF Fast Class: N/A
- LFF NL: 4TB 7.2K, 6TB 7.2K, 6TB 7.2K FE
- LFF SSD: 480GB MLC
FE = FIPS Encrypted. FIPS-encrypted and non-FIPS-encrypted drives can be mixed within an array only if it is not licensed for encryption; an array licensed for encryption can have only FIPS-encrypted drives.

27 HP 3PAR StoreServ 20000 Ports Interface Ports/Card Counts Max Purpose
- 16Gb FC ports: 4 ports/card, 0-20 ports/node, max 160; FC host connectivity
- 10GbE iSCSI/FCoE ports (CNA): 2 ports/card, 0-10 ports/node, max 80; iSCSI host connectivity
- 10GbE FBO ports: 0-6 ports/node, max 48; host connectivity for File services
- 12Gb SAS ports: 4-12 ports/node; back-end drive enclosure connectivity
- 10GbE RC port: 1 port/node, max 8; for RCIP links
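The system-wide maximums in this table are simply the per-node maximum times the 8-node limit; a sketch (the FBO ports-per-card value is an assumption based on the 2-port NIC card mentioned elsewhere in the deck):

```python
MAX_NODES = 8

# interface: (ports per card, max ports per node), from the table above.
PORT_LIMITS = {
    "16Gb FC":          (4, 20),
    "10GbE iSCSI/FCoE": (2, 10),
    "10GbE FBO":        (2, 6),   # ports/card assumed: 2-port NIC card
    "10GbE RCIP":       (1, 1),   # built in, one per node
}

def system_max_ports(interface):
    """System-wide port maximum: per-node maximum times the 8-node limit."""
    _, per_node = PORT_LIMITS[interface]
    return per_node * MAX_NODES
```

This reproduces the table's maxima: 160 FC, 80 iSCSI/FCoE, 48 FBO, and 8 RCIP ports.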

28 HP 3PAR StoreServ 20000 Controller
Scalability of a single node pair: even a single node pair can consume all possible data services the array offers.
Maximums per node pair: 480 drives; 40 FC host ports; 20 iSCSI/FCoE host ports; 8 File Persona Ethernet ports; 4 federation ports; 8 RCFC ports.
Example configuration on one node pair: 16 SAS drive ports (= 384 SFF drives), 16 FC host ports (5,120 port initiators!), 4 iSCSI/FCoE ports, 4 FBO NIC ports for File Persona, 4 RCFC ports for replication, and 2 peer plus 2 host ports for federation.
Not many people are impressed by connectivity options, but connectivity is important when it comes to scaling: the number of hosts you can connect, the number of data services you consume, and the number of peer arrays you need to connect with for various purposes. This slide illustrates how a single node pair is capable of consuming every possible data service while at the same time connecting to a large number of hosts.

29 New HP 3PAR Service Processor
New physical SP based on the DL120 Gen9 server: single socket 1.6GHz 6-core Intel Xeon E5-2603 v3, 4GB RAM, 500GB SATA HDD. Dual power supplies for the 20000; single power supply for the 7000 (although, if specifically required, a dual power supply option can be ordered through exception). 2 Ethernet ports that support iLO. Console access is through the iLO port and NOT a physical serial port; there is no physical serial port!
Process to access the DL120 console through iLO:
1. Connect to the SP Ethernet port (using a crossover cable).
2. Configure the laptop network to be in the same subnet as the SP private IP address (a fixed address set up in the HP factory).
3. Use the iLO interface to manage the SP (ssh). Telnet is also available.
Available on the 7000, and on the 20000 from 8/27. SP OS version 4.4 with 3.2.2; the SP will support upgrades from 3.1.3 MU1 through 3.2.1 MU3. The specific Ethernet port number is still being decided, and IP address details are as yet unavailable.

30 HP 3PAR StoreServ 20000 Rack 42U, HP Standard rack
Locking front and back doors, 100% cabling in the rear, enhanced serviceability, and front-to-back cooling. New: the 20000 is also supported in 3rd-party racks, even for 8-node systems.
The array comes supported in a 42U HP i-Series G3 rack. What is new is that the array is also available for third-party rack mounting; this was not supported on the previous generation. With some creative thinking, it has been made possible on the 20000: a pallet with holders attached secures the node enclosure and is shipped to the customer. At the customer site, the pallet is moved to the customer's rack and the node enclosure is slid in to mount it. All of the cabling, power and IO, is at the rear of the array: the black cables are the copper SAS cables for intra-rack connectivity, and the blue cables are the SAS Active Optical Cables for inter-rack connectivity. Redundant PDUs (4).

31 HP 3PAR StoreServ 20000 8-node Racking
The base rack of an 8-node system (20800/20850) contains: the 8-way backplane; 2, 4, 6, or 8 controller nodes; the service processor; up to 6 drive enclosures; 4 PDUs with power redundancy; and a 2U cable management tray. Power sockets, PDM sticks, and all power cabling run on the right side of the rack; all IO cabling, including back-end and host connectivity, is routed through the left side. Space is reserved for chassis labels, PDU serviceability, and controller node serviceability.

32 HP 3PAR StoreServ 20000 Expansion Racking
Monolithic PDUs were selected for expansion racks only, because breaker boxes interfere with node chassis power supply removal. Power input cables are placed in the center of the rack so that power can be brought in from overhead or through the floor, as the customer's datacenter design requires. Power cabling is labeled with label kits.

33 HP 3PAR StoreServ 20800 Fully Configured Array
A fully configured array: 1920 drives, 80 drive enclosures, 80 SAS ports (20 SAS adapters), 1 base rack plus 4 expansion racks, and 196U of rack space, with racks up to 100m apart. The previous generation needed 1 base rack and 6 expansion racks (294U of rack space), so this is 33% better density.
Slotting 3 SAS cards in each node takes the total to 96 SAS ports. To reach 1920 drives, 80 SAS ports are required, so only 80 SAS ports are qualified on this array in this release; these can be any combination of the 96 available ports, but more than 80 SAS ports cannot be supported on the array.
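The SAS port count falls out of the enclosure and chaining rules; a sketch assuming an all-SFF configuration and the two-deep daisy chain described earlier (helper names are ours):

```python
DRIVES_PER_SFF_ENCLOSURE = 24
ENCLOSURES_PER_CHAIN = 2      # drive enclosures daisy-chain two levels deep
PORTS_PER_SAS_CARD = 4

def sas_ports_for(drives):
    """Back-end SAS ports for an all-SFF configuration: each two-enclosure
    chain is cabled from both nodes of a pair, so it consumes two ports."""
    enclosures = -(-drives // DRIVES_PER_SFF_ENCLOSURE)  # ceiling division
    chains = -(-enclosures // ENCLOSURES_PER_CHAIN)
    return chains * 2

ports = sas_ports_for(1920)                  # 80 enclosures -> 40 chains -> 80 ports
adapters = -(-ports // PORTS_PER_SAS_CARD)   # 20 four-port SAS adapters
```

With 1920 drives this gives 80 ports across 20 four-port adapters, matching the figures on the slide.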

34 HP 3PAR StoreServ 20000 Power Supply
Supports both single-phase and 3-phase power, with redundancy, using HP standard power distribution units for power delivery; 4 PDUs are required per rack.
PDU SKU table (electrical standard / rack / phase / SKU / quantity / dimensions), partially recoverable from the slide:
- NEMA, base rack, 1-phase: D74…, quantity 4, 1U horizontal mount
- NEMA, expansion rack, 1-phase: H5M58A, 0U half-height vertical mount
- NEMA, 3-phase: AF511A / H5M61A
- IEC, B33…: H5M68A / AF518A, quantity 2, 1U horizontal mount; H5M72A, 0U mid-height vertical mount
PDU form factors: 1U horizontal mount and 0U vertical mount (half, mid, and full height).

35 New Drives 3.84TB cMLC SSD 1.92TB cMLC FIPS Encrypted SSDs
New drives: 3.84TB cMLC SSD and 1.92TB cMLC FIPS-encrypted SSDs.
Pricing (platform / drive / list price per drive / $/GB difference):
- 20000, 1.92TB cMLC: $22,470 (reference)
- 20000, 3.84TB cMLC: $39,200, -15% $/GB
- 10000: $26,483, -35% $/GB
3.84TB cMLC SSD: 35% lower cost than the 1.92TB on the 10000, and 2.4x the size of the next largest drive in the market; lower footprint, higher density. 1.92TB cMLC FIPS-encrypted SSDs: a larger, denser encryption option, joined by 1.2TB 10K and 6TB 7.2K FIPS drives.
General availability: 3.84TB on the 7000 from 6/15, the 20000 from 8/27, and the 10000 from 8/31; 1.92TB and 1.2TB 10K FIPS from 6/15; 1.2TB 10K and 6TB FIPS from 6/30.

36 HP 3PAR StoreServ 20000 Configuration Rules
Card slotting: back-end SAS ports (mini-SAS to drive enclosures) and front-end customer host ports (Fibre Channel or Ethernet); empty slots get blank inserts.
Slot load order:
- Slot 0: SAS-3 or HBA 5*
- Slot 1: SAS-2
- Slot 2: SAS-1
- Slot 3: HBA 4
- Slot 4: HBA 2
- Slot 5: HBA 1
- Slot 6: HBA 3
7 slots are available, of which 2 are exclusively for back-end and 4 exclusively for host connectivity; the remaining slot can be used for either purpose. The leftmost slots are used for back-end connectivity and the rightmost for host connectivity. Each node ships from the factory with a minimum of 2 SAS cards, so the 2 mandatory slots, 1 and 2, are already filled. The first mandatory host adapter goes into slot 5, followed by slot 4 and then slot 6, followed by slot 3 and finally slot 0 if required. Slot 0 can alternatively be used for a third SAS back-end card. This ordering ensures that the back-end drives and the front-end connections are evenly balanced across all the components within the node.
*SAS recommended; optional 5th HBA.
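The load order can be expressed as a small allocation routine; an illustrative sketch, not an HP tool (the conflict rule for the shared slot 0 follows from the table):

```python
SAS_ORDER = [2, 1, 0]        # SAS-1, SAS-2, then SAS-3 in the shared slot 0
HBA_ORDER = [5, 4, 6, 3, 0]  # HBA 1..4, then the optional 5th HBA in slot 0

def assign_slots(n_sas, n_hba):
    """Map cards to slots per the documented load order. Slot 0 is shared,
    so a 3rd SAS card and a 5th HBA cannot coexist on one node."""
    layout = {}
    for slot in SAS_ORDER[:n_sas]:
        layout[slot] = "SAS"
    for slot in HBA_ORDER[:n_hba]:
        if slot in layout:
            raise ValueError("slot 0 conflict: choose a 3rd SAS card or a 5th HBA")
        layout[slot] = "HBA"
    return layout

# Typical node: 2 mandatory SAS cards plus 3 host HBAs.
layout = assign_slots(2, 3)
```

The minimum factory configuration (2 SAS cards) leaves slots 1 and 2 occupied, and host adapters then fill 5, 4, 6 in order, matching the slide.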

37 HP 3PAR StoreServ 20000 Configuration Rules
Drive Slotting
Drives are slotted from left to right starting with the lowest row, and from bottom to top as rows fill.
[Diagram: example enclosure layout mixing 15K HDDs and SSDs]
Drive types must be present in even numbers and slotted next to each other. A minimum of 2 drives is required per drive enclosure.
Load the drives in order of performance, with the fastest drives slotted numerically first. Within the same performance tier, load drives in order of capacity with the lowest capacity slotted numerically first. Example: 600GB 10K SAS HDDs first, followed by 1.2TB 10K SAS HDDs.
All drive types in a drive enclosure should be in even numbers and should be slotted next to each other. A minimum of 2 drives is required per enclosure; empty enclosures cannot be purchased. Drives should be ordered by performance, fastest first, and within the same performance category the smallest drives go in first, followed by the larger ones.
This order applies only to the set of drives being slotted at that point in time; it does not consider drives that may already be in the enclosure in the case of upgrades. For example, if a drive enclosure already contains 10K SAS drives and SSDs are added as an upgrade, the slotting order applies only to the SSDs being added; they do not have to be slotted ahead of the 10K SAS drives already in place.
HP Restricted For HP and Channel Partner internal use 37
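The loading order above (fastest tier first, then smallest capacity first within a tier) is just a two-level sort. A minimal sketch, where the tier ranking is an assumption drawn from the drive types mentioned in this deck:

```python
# Illustrative sort key for the drive-loading order described above.
TIER_RANK = {"SSD": 0, "15K": 1, "10K": 2, "7.2K NL": 3}  # fastest -> slowest (assumed ranking)

def load_order(drives):
    """drives: list of (tier, capacity_tb) tuples for the batch being slotted now."""
    return sorted(drives, key=lambda d: (TIER_RANK[d[0]], d[1]))

batch = [("10K", 1.2), ("SSD", 1.92), ("10K", 0.6), ("SSD", 3.84)]
# SSDs first (1.92TB before 3.84TB), then 600GB 10K before 1.2TB 10K
print(load_order(batch))
```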

38 HP 3PAR StoreServ 20000 Configuration Rules (2)
Minimums & Maximums
Component | Minimum | Maximum | Best Practice
NL SAS HDDs | 12 | 960 LFF or 1920 SFF | Multiples of 12
SAS HDDs | 8 | 1920 | Multiples of 8
SSDs | 4 | 1024 |
Drive Enclosures | 2 per form factor | 48 (20850) / 80 (20800) | Multiples of 4 per form factor (SFF or LFF)
Nodes | 2 | | Multiples of 2
Service Processor | 1 | | Mandatory
SAS HBA | 2 per node | 3 per node |
FC/iSCSI/FCoE HBAs | 1 per node | 5 per node |
10GbE NIC | - | |
Adaptive Flash Cache can be configured with a minimum of 2 drives per node pair. 2 x 3.84TB = 7.6TB ~ max AFC per node pair!
Smallest configuration: 2 nodes, 2 drive enclosures, 1 Service Processor and 4 SSDs. Not available as a default configuration in Watson; can be built as an exception. An upgrade to this configuration will take the total number of drive enclosures to a multiple of 4.
HP Restricted For HP and Channel Partner internal use 38
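The min/max/multiple rules in the table lend themselves to a simple configuration check. A hedged sketch covering only the drive rows; the rule table and the "order in multiples of the minimum" interpretation are taken from the slide, and the SFF maximum is used for NL SAS:

```python
# Illustrative validator for a few of the drive-count rules above.
RULES = {  # component: (minimum, maximum, best-practice multiple)
    "nl_sas_hdd": (12, 1920, 12),   # SFF max shown; LFF max is 960
    "sas_hdd":    (8, 1920, 8),
    "ssd":        (4, 1024, 4),
}

def check(component, count):
    """True if the count respects the min/max range and the best-practice multiple."""
    lo, hi, mult = RULES[component]
    if count == 0:
        return True               # component simply not configured
    if not lo <= count <= hi:
        return False
    return count % mult == 0

assert check("ssd", 8)
assert not check("sas_hdd", 10)   # above the minimum but not a multiple of 8
```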

39 HP 3PAR StoreServ 20000 Configuration Rules (3)
Best Practices Always add 4 drive enclosures at a time for HA Cage availability Always maintain a balanced configuration across node pairs in terms of cards, drive enclosures, drives etc. Drive enclosure form factors should be evenly spread across all node pairs When new node pairs are added to an array Add at least half the number of drive enclosures as available under the existing node pairs (subject to minimums) Add at least 60% of the number of drives (based on type and capacity) already present on the existing drive enclosures HP Restricted For HP and Channel Partner internal use 39
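The node-pair upgrade best practice above (at least half the enclosures and at least 60% of the drives of the existing pairs) can be sketched as arithmetic. The function name and the way the enclosure minimum is folded in are assumptions:

```python
# Sketch of the "new node pair" sizing rule described above. Illustrative only.
import math

def min_upgrade(existing_enclosures, existing_drives, enclosure_minimum=2):
    """Minimum enclosures and drives to order for a newly added node pair."""
    enclosures = max(math.ceil(existing_enclosures / 2), enclosure_minimum)
    drives = math.ceil(existing_drives * 0.60)
    return enclosures, drives

# 8 enclosures and 96 drives on the existing pairs -> at least 4 enclosures and 58 drives
assert min_upgrade(8, 96) == (4, 58)
```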

40 Raw capacity limits will be expressed in TiB
Starting from the June launch, capacity limit calculations will be done in TiB/PiB.
What's changing: Raw capacity limits will be expressed in TiB/PiB rather than TB/PB. 1 PiB = 1024 TiB = 1,048,576 GiB; 1 TiB ≈ 1.0995 TB.
Rationale/Benefit: Collateral and the Quoting Tool (Watson/SBW) will be aligned with the Sizing Tool (NinjaSTARS) with regard to raw capacity limits.
Effect: The Quoting Tool (Watson/SBW) will show capacities in GiB/TiB and will be aligned with the Sizing Tool.
Who is affected: All 3PAR models: 7000, 10000, 20000.
Model | Max capacity (TiB) | Max capacity (TB)
20800 | 6000 TiB | 6597 TB
20850 | 3932 TiB | 4323 TB
This actually means that our raw capacity limits are higher than what we have shown the world so far. Take the example of the 6PB of raw capacity we call out: since this is really 6 PiB, the raw capacity we offer is actually 6597 TB, or 6.6PB.
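The TiB/TB figures in the table follow directly from the binary/decimal unit definitions and can be checked in a couple of lines:

```python
# Verifying the TiB -> TB conversion used in the table above.
TB_PER_TIB = 1024**4 / 1e12          # 1 TiB = 1.0995... TB

assert round(6000 * TB_PER_TIB) == 6597   # 20800: 6000 TiB ~ 6597 TB
assert round(3932 * TB_PER_TIB) == 4323   # 20850: 3932 TiB ~ 4323 TB
assert 1024**5 / 1024**4 == 1024          # 1 PiB = 1024 TiB
```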

41 HP 3PAR StoreServ 20000 Software Brief

42 3PAR StoreServ 20000 Software Suites
HP 3PAR StoreServ Software Suites
Suites: Replication Suite; Data Optimization Suite v2; Security Suite; Application Suite for Oracle; Application Suite for MS SQL; Application Suite for MS Exchange; Application Suite for MS Hyper-V
Optional Integration Solutions: Management Plug-in for MS SCOM, StoreFront, VMware vCOPS Integration and OpenStack Integration
Drive Caps for Software Licensing: 20800 – 480 drives; 20850 – 320 drives
3PAR Operating System Suite: Online Import license, Host Explorer, Multipath IO SW, VSS Provider, Rapid Provisioning, Autonomic Groups, Autonomic Replication Groups, Autonomic Rebalance, System Reporter (new), 3PARInfo, Full Copy, Persistent Cache, Persistent Ports, SMI-S, 3PAR OS Administration Tools (CLI client, SNMP), Web Services API, StoreServ Management Console, Adaptive Flash Cache, Thin Copy Reclamation, Thin Provisioning, Thin Persistence, Thin Conversion, Thin Deduplication, Access Guard, LDAP Support
Replication Suite: Virtual Copy (VC), Remote Copy (RC), CLX (new), Peer Persistence
Security Suite: Virtual Domains, Virtual Lock, Data Encryption
Data Optimization Suite v2: Dynamic Optimization, Adaptive Optimization, Priority Optimization, Peer Motion
Also: Recovery Manager Central, File Persona Suite, Policy Manager, Storage Plug-in for SAP LVM
Two key changes here: System Reporter will now be bundled as part of the base OS suite, and we are adding the Cluster Extension (CLX) software, which helps provide transparent failover capability for Windows environments, as part of the Replication Suite. The 20800 will have a licensing cap at 480 drives; the 20850, however, will have a lower drive cap at 320 drives. These drive limits are subject to change and you will be duly notified if there are changes relating to these drive caps.

43 Ensuring Data Integrity with HP 3PAR Persistent Checksum
3PAR Approach
Challenge: Media and transmission errors can be caused by any component in the I/O stack
HP Solution: Industry-standard end-to-end data protection via the T10 Data Integrity Field (DIF)
Benefit: End-to-end protection for all data stored on 3PAR StoreServ systems
Completely agentless and application agnostic
Supported on all next-gen HP 3PAR StoreServ systems
Read/write path: Host HBA – Switch – 3PAR front end – 3PAR back end
Completely transparent to applications/host OSes
T10 DIF is a SCSI standard that adds a layer of data protection to data storage by appending an eight-byte record called the Data Integrity Field to each disk sector. Each DIF record contains:
A 2-byte CRC of the 512 bytes of disk block data
A 2-byte "application tag"
A 4-byte "block tag" or "reference tag", which is the lower 32 bits of the disk block address
Without DIF, HP 3PAR systems would be vulnerable to silent data corruption within the disk drive: the data received from a successful disk read is assumed to be correct and is passed on to the requesting host. DIF provides a method to validate the data returned by the disk drive before sending the data to the host.
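The 8-byte DIF record layout described above (guard CRC, application tag, reference tag) can be illustrated directly. This is a standalone sketch of the T10 DIF format, not 3PAR's implementation; the function names are assumptions:

```python
# Illustrative construction of the 8-byte T10 DIF record described above.
import struct

def crc16_t10dif(data: bytes) -> int:
    """CRC-16/T10-DIF guard tag: polynomial 0x8BB7, init 0, no reflection."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x8BB7) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def dif_record(sector: bytes, lba: int, app_tag: int = 0) -> bytes:
    """2-byte guard CRC + 2-byte application tag + 4-byte reference tag."""
    assert len(sector) == 512
    guard = crc16_t10dif(sector)
    ref_tag = lba & 0xFFFFFFFF        # lower 32 bits of the disk block address
    return struct.pack(">HHI", guard, app_tag, ref_tag)

dif = dif_record(b"\x00" * 512, lba=0x1_2345_6789)
assert len(dif) == 8
assert dif[4:] == struct.pack(">I", 0x23456789)  # ref tag keeps only the low 32 bits
```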

44 NEW: Asynchronous Streaming with RPOs measured in seconds
No compromises on performance vs. data protection
33% lower minimum RPO than EMC SRDF/A
Data flow: (1) data write at Site A, (2) transfer over network links to Site B, (3) data acknowledgement
Simple and intuitive setup
Protect against disasters that affect hundreds/thousands of miles
In a flash world:
Performance is measured in sub-milliseconds
Keeping an exact copy decreases performance by 5X
You need a solution that keeps a "near-exact" copy without impacting SLAs
HP 3PAR Async Streaming Remote Copy:
No compromises on performance vs. data protection
Protect against disasters that affect hundreds/thousands of miles
Get your business up and running fast with minimal RPO
Reduce risk from unpredictable events and disasters

45 HP 3PAR Storage Federation
Run workloads at the right cost and SLA via Peer Motion
7200 / 7400: Dev & Test, Production – Cost SLA
20850 / 20800: Production – Performance SLA, App Acceleration
Traditional Storage: Legacy Production
1 click
With just a single click from the StoreServ Management Console, HP 3PAR Peer Motion allows you to create a federation of up to 4 participating 3PAR arrays that can move data bi-directionally online. Over and above this, 4 more arrays, either 3PAR or non-3PAR, can participate in a uni-directional relationship with any of the participating arrays. This can result in a federation of up to 8 systems. This feature is the next evolution in the 3PAR federation story: it allows you to load-balance between multiple 3PAR arrays based on value, scale or performance, whichever the customer needs at that point in time. The slide shows a typical setup where the production arrays are the larger, more scalable arrays while the development and test environment runs on lower-cost, lower-scale arrays. Such a federation remains undisturbed even when data from a third-party array has to be ingested into it. 4 x 20800s in a federation can provide 60PB of usable capacity and 10M IOPS.
Dev w/QoS, Prod w/QoS

46 The Autonomic Experience (SSMC = IMC)
Dashboard: Health, performance and capacity at a glance; aggregation of information from up to 16 HP 3PAR StoreServ arrays
Search: Instantly find what you want; find objects in seconds with intelligent search that remembers previous queries
System Reporter: Historical performance and capacity reports; one-click templates to assess system performance health; available only via SSMC
Federation: Run workloads at the right cost and SLA; one-click data mobility via Storage Federation
The entire HP 3PAR StoreServ portfolio is managed through one single console (SSMC). SSMC, along with OneView, RMC and OpenStack, provides the industry's first truly converged management stack.
Note: IMC 4.7 will support HP 3PAR StoreServ and Storage

47 Pricing With the array, we have been able to achieve much lower costs than the platform. This has helped us to lower pricing to a great extent and thereby achieve two things: Bridge the gap between the pricing of our midrange and enterprise arrays Become more competitive in the market space

48 Pricing Compare with EMC VMAX 200K
Config: 1.2TB 10K and 1.92TB SSD, 70/30 capacity split
OS Suite, Replication Suite, Data Optimization Suite
VMAX 200K with 512GB cache, up to 366TB
TS pricing not included
The compare is for raw capacity, not usable capacity.

49 Pricing Compare with EMC VMAX 200K
Config: 1.92TB SSD
OS Suite, Replication Suite, Data Optimization Suite
22% price advantage at 360TB raw
TS pricing not included
This chart shows the comparison for a configuration that uses only the 1.92TB SSD on our side and the 1.6TB SSD on the VMAX side. You can clearly see the gap that emerges even with the lower cache option of the VMAX 200K. The difference is steeper if you consider a higher cache option or a VMAX 400K.

50 Pricing Compare with EMC VMAX 200K
Config: 3.84TB SSD
OS Suite, Replication Suite, Data Optimization Suite
35% price advantage at 360TB raw
Key message: Always lead with flash. It blows away the competition!
TS pricing not included

51 Pricing Compare with EMC XtremIO (20TB X-Bricks)
HP sweet-spot price positions
Competitive at nearly any scale above 200TB
Above 400TB, blows the competition away
Considers claimed compaction ratios of both platforms
HP discount assumptions: Street 70%; Total 75%. EMC/NetApp discount assumptions: Street 70%
Drive mix: HP: 3.84TB SSD only
Assumed compaction/overhead: HP 4:1 / 22%; EMC 6:1 / 17%
SW included/mix: OS only
The comparison is based not only on capacity points but also on performance points: one configuration is compared against 4 X-Bricks and the larger against the 6 X-Brick XtremIO configuration, so the curve does not assume a fixed number of nodes; it changes as we compare at various capacity points. Based on this chart you can see that we are competitive against XtremIO at nearly every price point above the 200TB capacity point. This is relevant since we will ideally be competing with XtremIO using the 7450 below those capacity points. In truly scaled enterprise configurations, you can see that we totally blow away the competition. So educate your customers about total cost of ownership and not just the initial purchase price: the curve shows them what happens to their overall price point when they scale over time. It is a powerful story.
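The comparison above combines three factors: discount off list, capacity overhead, and claimed compaction. A minimal sketch of that arithmetic, using the discount/overhead/compaction assumptions stated on the slide; the list price and raw capacity below are purely hypothetical:

```python
# Sketch of street-price-per-usable-GB math using the slide's stated assumptions.
def usable_tb(raw_tb, overhead, compaction):
    """Usable capacity after overhead, scaled by the claimed compaction ratio."""
    return raw_tb * (1 - overhead) * compaction

def street_per_usable_gb(list_price, discount, raw_tb, overhead, compaction):
    """Street price divided by effective usable capacity in GB."""
    return list_price * (1 - discount) / (usable_tb(raw_tb, overhead, compaction) * 1000)

# HP-side assumptions from the slide: 70% street discount, 22% overhead, 4:1 compaction.
# The $2M list price and 400TB raw are hypothetical placeholders.
hp = street_per_usable_gb(list_price=2_000_000, discount=0.70,
                          raw_tb=400, overhead=0.22, compaction=4.0)
```

The same function with EMC's assumed 17% overhead and 6:1 compaction (and XtremIO list pricing) gives the other curve; the crossover of the two curves is the "sweet-spot" capacity point.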

52 Preliminary Pricing Compare with 7440c
Just 9-17% premium vs. 7440c
What do you get? 5x usable capacity, ~3x IOPS, 8x bandwidth, virtually no port limitations
20000 Upsell Zone
[Chart: premium vs. 7440c at capacity points 50.2, 100.3, 150.5, 200.6, 250.8, 301.0, 351.1, 401.3, 451.4, 501.6 TiB; one model's premium falls from 16% to 6% and the other's from 34% to 10% as capacity grows]
Configurations used: 4 nodes, 30% SSD (1.92TB cMLC SSD) and 70% 10K SAS (1.2TB 10K SAS); OS Suite, Replication Suite, Data Optimization Software Suite v2

53 Positioning, Availability and Competition

54 HP 3PAR StoreServ 20000 Triggers: The 3 C’s
Workload: 3.2M IOPS, 75GB/s
Capacity: 6PB raw, 4PB flash
Applications: Consolidate massively; Virtualization; Best-in-class Quality of Service; File Servers; Databases
Business Transformation: Next-gen DC, e.g. ERP roll-out, Private Cloud; latest infrastructure, density, power & cooling; critical projects
Competition: EMC VMAX3, XtremIO 4+ Bricks; HDS VSP, G1000
Massive consolidation – When customers are looking at consolidation as the key reason for a storage purchase, the 20000 is the option. Consolidation here means not just consolidating the capacity of multiple applications; it is also about providing the highest level of performance to each participating application and the ability to granularly allocate performance to each of those. With 6PB of raw capacity (4PB flash), 3.2 million IOPS at <1ms latency, and 75GB/s of bandwidth, the platform fits the bill perfectly.
Critical, massive and innovative datacenter IT projects – When we talk to customers, we hear of their new high-budget IT projects: revamping their datacenter or setting up a next-generation data center with high density and eco-friendly, low power and cooling requirements. These are optimal positions for the 20000 Storage. The second position is when they are looking to expand or refresh hardware for their critical applications; think of a tech refresh of a platform that is running their core ERP application.
Thirdly, position in terms of competition. Position the 20800 when EMC positions the VMAX3 200K or 400K; go high on flash to blow them away. Position the 20850 when EMC positions XtremIO with 4 X-Bricks or more: not only can we offer more than enough performance, we can comfortably beat them on price there. The 20000 Storage should also be considered when competing against the HDS VSP G1000 in deals.
20850 All-Flash / 20800 Converged Flash

55 Availability Timelines
Q3 June | Q4 August | Q4 September
June 1, 2015 – Live CPL; 20800 orderable on Watson
June 2, 2015 – HP Discover announcement; HP 3PAR OS and SSMC 2.2 General Availability
August 27, 2015 – Clear to Ramp; factory ready to process orders for shipping
Second half of September 2015 – Shipments; customers expected to receive shipments starting in this timeframe

56 Competitive Advantages of 20000 Storage
One software stack spanning from midrange to enterprise, including flash
All-flash array with tier-1 resiliency
Industry-leading density
Most scalable all-flash array on the market, by a big margin
Predictable response time in a multi-tenant environment
Super fast – >3.2 million IOPS at <1ms latency
Complete set of enterprise-class data services
Midrange ease of management on an enterprise-class platform
No vendor can claim this
Not all vendors can perform well in a multi-workload, multi-tenant environment; $/IOP would also be a key comparison
Fulfilled by the traditional tier-1 vendors but unfulfilled by flash start-ups
Fulfilled by the flash start-ups but unfulfilled by traditional tier-1 vendors

57 Traditional Competition: EMC
VMAX3 – The big refresh with nothing changed underneath. A monster system that on paper is capable of scaling to a great extent. Proven brand in the traditional tier-1 market; large set of data services; large and effective sales force.
Weaknesses:
More cache, less flash
XtremIO vs. VMAX positioning confusion – why XtremIO, if VMAX were flash optimized?
Upgrade nightmare – cannot add engines, drive enclosures, adapters or NAS after the initial system is set up
HyperMAX OS – following the 3PAR way
Performance metrics?
Same old data center limitations – a 7-rack system that cannot be dispersed more than 25m
Behind on SSD technology; has 1.6TB MLC. 5760 drives to support 4PB usable capacity?!
No innovation around flash optimization…still
Spreadsheet-based planning and complex management
How the 20000 wins:
Common architecture, resiliency and features across the portfolio
Flash optimized. Flash scalable. Community-appreciated continuous flash innovations
Super fast – 2.5 million to 3.2 million IOPS
Ease of use & converged management

58 Traditional Competition: HDS
VSP G1000 – The latest refresh of the HDS VSP high-end storage platform. Fastest SPC-1 numbers on the planet (2M); highest read IOPS reported so far (4M). Highly resilient arrays with comprehensive replication capabilities.
Weaknesses:
Lack of compaction technologies – expensive flash
Still behind on SSD density – 2.4 times less than 3PAR (12 x 3.2TB vs. 24 x 3.84TB = 2.4)
Density issues – 192-drive DEs, 6 racks for a full config
Management complexity
Lack of native file services
How the 20000 wins:
Common architecture and features across the portfolio
Affordable flash option – $1.5/GB usable
Flash optimized. Flash scalable. Community-appreciated continuous flash innovations
Millions of IOPS at compelling $/IOP numbers
Ease of use & converged management
They do not possess any compaction technologies as of today: no inline dedupe or compression, which means their efficiencies are not that great. In the flash world, this means money has to be spent on every flash dollar with no increase in effective capacity, making flash unaffordable. They also lack any native convergence in the form of file, block or object services. Management complexity is likewise true, as for any traditional tier-1 array.

59 All Flash Competition: EMC
XtremIO – Purpose-built flash and a key focus for EMC. Top-selling all-flash platform in the world; effective install-base mining; large and effective salesforce; extremely good performance; ease of use.
Weaknesses:
More cash, for flash – price points not competitive
Questionable resiliency – a shelf failure brings the system down
Lacks robust tier-1 data services – literally every deal is sold with VPLEX
One more architectural silo in an EMC datacenter – all XtremIO deals include VNX or VMAX; why?
Cannot scale – data controlled in memory
How the 20000 wins:
All-flash but with tier-1 resiliency
Affordable flash option – $1.5/GB usable
Advanced tier-1 data services
Flash optimized for performance and cost
Immensely scalable yet dense all-flash offering
Community-appreciated continuous flash innovations
XtremIO, just like all other flash startups, keeps all data in memory and works with it there. This automatically means they cannot scale: you cannot have PBs of data and then work with them within memory. This is a big drawback for a customer who wants good performance but also the ability to scale in terms of capacity and connectivity.

60 Competitive Summary: HP 3PAR StoreServ 20000
VMAX3 200K/400K | HDS VSP G1000 | IBM DS8870 | 3PAR StoreServ 20000
Inline deduplication; thin efficiency
Adaptive Flash Cache
Flash optimized – 5-year warranty on SSDs
VVols support
Native metro-cluster support for Windows and Hyper-V without an appliance
ESKM support & FIPS certification
Express Writes
Converged file, block, object access
Single architecture & OS across the portfolio
Doing encryption on the drives is useful because it offloads the encryption work from the CPU. This allows you to churn out more IOPS.

61 Competitive Summary: HP 3PAR StoreServ 20000
VMAX3 200k/400k HDS VSP G1000 IBM DS 8870 3PAR StoreServ 20000 Synchronous Replication Asynchronous Periodic Replication (Bandwidth efficient delta transmission only) Asynchronous Streaming Replication Quality of Service with Latency Goals External Storage Virtualization Native Federation Support without an appliance 16Gb/s FC Support Ease of use & management

62 Consolidate Traditional Tier-1 Storage
3 EMC VMAX 400K arrays: 7x the sprawl, 8x the power, 8x the heat – 12 PB usable, 21 racks, 90% HDD / 10% SSD
vs. 1 3PAR StoreServ 20800 – 12 PB usable, 3 racks, 100% SSD
Scalable. Flexible. Resilient. Futureproof.
HP 3PAR StoreServ Converged Flash Storage: up to 15 PB usable; up to 2.5M IOPS at <1ms* latency; tier-1 resiliency; federated data mobility

63 Limited Scale of Flash Vendors
XtremIO 8 X-Brick cluster: 8x the systems, 2.5x the racks, 2.5x the power, limited tier-1 resiliency – 12 PB usable, 8 racks, 100% SSD
vs. 4 3PAR StoreServ 20850s – 12 PB usable, 3 racks, 100% SSD
Scalable. Flexible. Resilient. Futureproof.
HP 3PAR StoreServ All-Flash Storage: up to 12 PB usable; up to 3.2M IOPS at <1ms* latency; tier-1 resiliency; federated data mobility
Complexity of managing 8 systems  Simplicity of managing just one 

64 Sales Enablement Tools & Other Resources
Availability
NinjaSTARS: simulation data – June 1; empirical data – August 27
HP Sizer
Configuration Guide: end of May 2015
QuickSpecs: June 1
SF Remote: available
SPOCK
Service Access Workbench, Installation Guide, Site Planning Guide, Service & Upgrade Guide, Drive Service Guide: available by August 13
Drawer statement on availability of 20000
CFA Battlecard, All Flash Battlecard: 3PAR Internal Sales Portal (What's New)
Customer presentation, Technical presentation, Data Sheet, 3PAR StoreServ Architecture, Modern Tier-1 Storage Whitepaper
Coming post launch: solution whitepapers on Oracle & SQL; SAP HANA TDI certification; SPC-1; third-party whitepaper updates
NinjaSTARS will be available with model data to support the June 1 launch; the updated NinjaSTARS with actual empirical data will be available only at the time of factory CTR, which is August 28, 2015. The same is true for the HP Sizer. The configuration guide for the platform should be available by the end of May, and the QuickSpecs on the date of launch. SF Remote will be available; I suggest it be used effectively to identify install-base opportunities. On the marketing front, all the hardware-related manuals and guides will be available by August 13, 2015. The drawer statement on the availability of the 20000 will be available as soon as possible. Please refer to the internal sales portal "What's New" section for all related marketing collateral. Also coming post launch: solution whitepapers on SQL and Oracle, SAP HANA TDI certification, SPC-1 benchmarks, and third-party whitepaper updates for the new platform.

65 One Architecture that delivers it ALL
Converged data services for massive consolidation High Performance Massively parallelized architecture with system wide striping Multi Tenant QoS controls for IOPs, Bandwidth, Latency and sub ms rules Flash-Optimized Flash-Optimized Architecture with true extension of Flash Disaster Recovery Data Protection between midrange, all-flash, and high-end systems Application Integration VMware VVols support (Reference architecture) Drive Efficiency Thin Technologies including inline deduplication for flash and Adaptive Sparing Ease of Use Self configuring, optimizing, and tuning Federated Non-disruptive native data mobility between midrange, all-flash, and high-end systems Enterprise customers cannot afford to compromise on data protection, performance, or scalability in their multi-tenant, mixed workload environments. New 3PAR StoreServ Storage features solve the following problems: Arrays that don’t scale to meet the needs of cloud-based data centers Complicated and time-consuming disaster recovery that leaves too long a gap in recovered data Performance inefficiencies with high-intensity read workloads (for example, OLTP workloads generated by SAP and Oracle environments) Inadequate performance of mission-critical applications has business impact HP 3PAR StoreServ Storage delivers Tier 1 capabilities and enhanced QoS for the most demanding cloud and virtualized data centers via high availability, autonomic performance optimization, and assured performance for your most important applications while delivering long-term investment protection. 
HP 3PAR StoreServ Storage platform enhancements to strengthen Tier 1 storage and Disaster Recovery capabilities for today's hybrid private cloud environments:
Increased scale-out capabilities suited to the most demanding data centers (3.1.3 enhancements plus support for 30% more system capacity)
Remote Copy Asynchronous Streaming provides disaster recovery that supports mission-critical data availability and business continuity with nearly instantaneous Recovery Point Objectives (RPOs)
Persistent Ports controller node connection status detection added. Supports high availability in virtualized environments by automatically failing over a front-end controller node port that experiences physical connection loss, such as a cable or switch failure. Detects laser loss on FC, FCoE and iSCSI controller node ports. According to the Liverpool BRT deck, slide 7, this feature provides unique differentiation in the industry.
[Not for PR outreach] Enhanced security for Tier 1 data centers including Common Criteria Certification for HP 3PAR Operating System software and Federal Information Processing Standards (FIPS)-certified drives, both of which are important in Health Care and Federal accounts
HP 3PAR StoreServ Storage enhancements to meet enterprise QoS performance requirements:
HP 3PAR Adaptive Flash Cache software accelerates performance by autonomically detecting high-intensity read workloads and moving most-accessed data onto flash-extended cache
Adaptive Optimization separation of read and write workloads (complements Adaptive Flash Cache by providing greater accuracy and performance in managing long-term shifts in workload patterns)
Priority Optimization now delivers a minimum performance threshold that ensures priority QoS for your most mission-critical apps
Support for 400-GB and 800-GB next-gen multi-level cell (MLC) drives that lower the cost of SSD performance by 20%-XX% (coming in December); a new class of "Enterprise Value SSDs" (coming in February) that includes 920-GB and 1.9-TB options for serving read-intensive workloads at XX% the cost of single-layer cell (SLC) SSDs
[Not for PR outreach] Peer Motion support for single volume migration and online Windows cluster migration
What we will offer as of December/January:
"Enterprise Value SSDs" = new 920 GB and 1.9 TB MLCs offering a more affordable class of SSDs for read-intensive application workloads
"Enterprise Mainstream SSDs" = currently available 400 GB MLCs, being replaced in December with a choice of next-gen 400 GB or 800 GB MLCs that deliver a minimum 20% cost savings for customers with lower to moderately write-intensive application workloads
Currently supported 100 GB and 200 GB SLCs to address the needs of extremely write-intensive applications
Autonomic scaling of storage capacity to as much as 3.2PB (a 30% platform increase) as customer data grows, more virtual servers are added, and workloads become more diverse, versus 2.9PB for VNX, 4PB for VNX, and up to 16PB for VNX-2
Fast and painless recovery in the event of a disaster with Remote Copy Asynchronous Streaming, which delivers near-synchronous recovery point objectives (RPOs)
Business continuity protection in virtualized environments with Persistent Ports detection of physical line loss due to cable and/or switch failure, which triggers autonomic port failover
Accelerated tiered storage performance in response to high-intensity read workloads (for example, in Oracle and SAP environments) with lower latency using new Adaptive Flash Cache software
Greater accuracy and performance in managing long-term shifts in workload patterns with the addition of read and write workload separation to Adaptive Optimization software
Assured QoS with Priority Optimization software, which supports key business objectives by making sure critical applications get all the performance they require
Cost savings for customers who require solid state drive (SSD) performance through support for 400-GB and 800-GB next-gen multi-level cell (MLC) drives that lower the cost of SSD performance by 20%-XX% (coming in December)
Support for a new class of Enterprise Value SSDs that will initially include 920GB and 1.9TB options for serving read-intensive workloads at XX% the cost of single-layer cell (SLC) SSDs (coming in January)
More granular data mobility due to Peer Motion support for single volume migration and online Windows cluster migration, to balance performance and cost (note: this is a catch-up feature so we will not promote it via outbound marketing efforts)
Requires sufficient network bandwidth.

66 Backup

67 Tier 1 Enterprise Flash Storage
3PAR OS Manchester release Applications File Persona support on Converged Flash and All Flash systems 128TB per node pair up to total 256TB max usable file capacity 1500 users per node pair up to 6000 users max Up to 6 x 10GbE NICs; 12 x 10GbE ports per node pair Network bonding mode 1 or 6 across all ports in a node Virtualization SMB Best in class Quality of Service NFS File Servers Databases REST 20850 All-Flash 20800 Converged Flash

68 Sales Enablement Tools - Today
SF Remote, SFMM, NinjaSTARS for 3PAR and HP Storage Sizer
1. Upgrade and refresh older 3PARs (SF Remote): Transform both the customer's environment and the upgrade-and-refresh business. Understand array utilization and trends, and prioritize customers to pitch upgrades and refreshes.
2. Assess EVA, XP and EMC environments (SFMM): Understand the customer's environment better before proposing a solution. Save time through automated compatibility analysis and generation of EMC-to-3PAR OIU scripts.
3. Size and build the right solution (NinjaSTARS for 3PAR, HP Storage Sizer): Size based on the capacity, performance and configuration desired.

69 End-to-end: Flash-optimized data integrity, availability, and protection
1 Issue: ‘Silent corruption’ – media and transmission errors can be caused by any component in the I/O stack.
Solution: 3PAR Persistent Checksum powered by the 3PAR Gen5 ASIC
End-to-end data integrity via T10 Protection Information. Transparent to hosts/apps
Enabled via integration with host HBAs and HP drivers
Data checks from host to HP 3PAR StoreServ back end.
Now that we have talked about flash affordability and our new Enterprise Flash models, let's transition a little and cover how we are optimizing the entire I/O path for flash. It starts with data integrity: specifically, how do we ensure that the data that gets written to disk is the same data that customers intended to write, when media and transmission errors can be caused by any component in the I/O stack? Traditionally, protecting the integrity of customers' data has been done with multiple discrete solutions, and there have always been gaps across the I/O path from the operating system to the storage. Today, we are the only ones who can ensure end-to-end data integrity for any workload AND be completely transparent to apps/hosts. We do this by ensuring that data is validated as it moves through the data path, from the application, to the HBA, to storage, enabling seamless end-to-end integrity.

70 End-to-end: Flash-optimized data integrity, availability, and protection
2 Issue: The 10 milliseconds of HDD design doesn't work in a sub-millisecond flash world. You need a low-RPO ‘near-exact’ copy without performance loss.
Solution: 3PAR Asynchronous Streaming Replication
No compromises on performance vs. low-RPO data protection
Protect from disasters that affect hundreds/thousands of miles
Get your business up and running fast with minimal data loss
Now that you have ensured that the data that got written to disk was the same data you intended to write, how do you take the next step and protect that data in case of a disaster? In a traditional spinning-media world, where performance was measured in tens of milliseconds, adding some latency for an exact copy of data over extended distances was acceptable for SLAs. But this doesn't work in our new flash world, where performance is now measured in sub-milliseconds: keeping an exact copy is going to impact your performance. Now you need a solution that keeps a "near-exact" copy without impacting SLAs. This is where 3PAR Remote Copy with new support for Asynchronous Streaming comes in. No compromises on performance vs. data protection: you are now able to protect against disasters that affect hundreds/thousands of miles and get your business up and running fast with an RPO measured in seconds. Another thing to keep in mind is that HP 3PAR offers the most complete set of Disaster Recovery solutions in the industry. This is fact, not opinion. First, with one license of 3PAR Remote Copy, our customers can choose between 4 different replication options depending on their requirements. Second, we are the only storage vendor that offers native Disaster Recovery across midrange, enterprise flash and all-flash. As an example, this means you can run all-flash in your production environment and place cost-effective midrange-based systems at your backup site. So the key takeaway is: take advantage of this complete DR portfolio and use it to drive more flash adoption.
Async streaming with RPOs measured in seconds. No compromises on performance vs. data protection. 33% lower minimum RPO than EMC SRDF/A. Protect across hundreds to thousands of miles. Replication modes: Sync, Async Streaming, and Async Periodic.

How it works: micro delta sets and the broadcast capability of the ASIC maintain crash consistency. Shipping roughly 300 milliseconds' worth of data at a time enables RPOs from milliseconds to a few seconds. Because each LUN is completely distributed across all controllers, the load is automatically balanced, so there is no complex tuning needed for replication to maintain local performance. This also assures that application performance is not degraded when handling write bursts: the data is put on the wire instantly. As long as there is adequate bandwidth to absorb data from the primary, you will meet the SLA. If bandwidth drops or the links cannot keep up, the array can switch from async streaming to async periodic using snapshot logs, in 5-minute increments.
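The streaming-to-periodic fallback described above can be sketched as a toy model. This is an illustrative simulation only (the class name, one-second ticks, and the 300 MB backlog threshold are my assumptions, not HP's implementation); it shows how buffered-but-unshipped data determines the effective RPO and when a mode switch would trigger.

```python
class AsyncStreamReplicator:
    """Toy model of async streaming replication with a periodic fallback.
    Names and thresholds are illustrative, not HP's implementation."""

    def __init__(self, link_mb_per_s, buffer_limit_mb=300):
        self.link = link_mb_per_s          # WAN drain rate, MB/s
        self.limit = buffer_limit_mb       # tolerated backlog of deltas
        self.buffered_mb = 0.0             # written but not yet shipped
        self.mode = "streaming"

    def tick(self, incoming_mb):
        """One-second tick: buffer new writes, drain what the link absorbs,
        and fall back to periodic (snapshot-based) mode if we fall behind."""
        self.buffered_mb = max(0.0, self.buffered_mb + incoming_mb - self.link)
        if self.buffered_mb > self.limit:
            self.mode = "periodic"
        return self.mode

    def rpo_seconds(self, write_rate_mb_s):
        """Approximate RPO: age of the oldest write not yet on the wire."""
        if write_rate_mb_s == 0:
            return 0.0
        return self.buffered_mb / write_rate_mb_s
```

With a 100 MB/s link, an 80 MB/s write rate stays in streaming mode; a sustained burst well above link speed pushes the backlog past the limit and drops the model into periodic mode, mirroring the behavior described above.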

71 End-to-end: Flash-optimized data integrity, availability, and protection
Express Protect 3 Issue: Snapshots alone do not provide a resilient copy, and legacy hierarchical backup requires added network traffic through hosts and added ISV costs. Solution: 3PAR protection with StoreOnce RMC-V flat backup. One-click protection direct from applications, with rapid recovery. Up to 17x faster, with changes copied directly to StoreOnce Backup. No added ISV or host traffic required, which simplifies and lowers costs.

And finally, what about backup? In a flash world with ever-increasing amounts of data, will you be able to continue to meet your backup windows? The problem with legacy hierarchical backup processes is that they require added network traffic through hosts and added ISV costs. Our solution is HP StoreOnce Recovery Manager Central (RMC). This software integrates 3PAR StoreServ with StoreOnce Backup systems. We call this a flat backup: essentially, you bypass traditional backup-server-based processes. As a result, you get simplicity with one-click protection, speed with up to 17x faster backup, and savings by not having to purchase an ISV backup application.
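The "flat backup" idea, copying only the blocks changed between snapshots directly to the backup target rather than routing full data through a host, can be illustrated with a short sketch. The function names and block representation are hypothetical; RMC's actual changed-block tracking is internal to the arrays.

```python
def changed_blocks(prev_snap, curr_snap):
    """Indices of blocks that differ between two point-in-time snapshots.
    A toy stand-in for changed-block tracking (not HP's RMC code)."""
    return [i for i, (a, b) in enumerate(zip(prev_snap, curr_snap)) if a != b]

def flat_backup(prev_snap, curr_snap, backup_target):
    """Copy only the changed blocks straight to the backup target,
    modeling the array-to-StoreOnce path that bypasses the host."""
    for i in changed_blocks(prev_snap, curr_snap):
        backup_target[i] = curr_snap[i]
    return backup_target
```

Shipping only the delta, rather than re-reading the whole volume through a media server, is what makes the claimed speedup and the removal of host-side backup traffic plausible.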

72 NEW: An elastic resource pool with one-step workload balancing
Up to 60PB and 10M IOPS of federated storage with zero added overhead.

3PAR with Federation vs. EMC VPLEX virtualization:
- Deployment: native and built-in vs. a separate device in the data path
- Management: one step, one interface vs. many clicks, many interfaces
- Efficiency: thin-aware vs. fat
- Performance: no impact vs. added latency and limited bandwidth
- Cost: $ vs. $$$

Online Import supported platforms: HDS TagmaStore NSC, HDS TagmaStore USP, HDS USP V, HDS USP VM, HDS VSP, VMAX, VNX, VNX2.

Our Storage Federation vision is to create agile data centers that can be managed as aggregated, elastic pools of capacity. In June we will make this much more of a reality when we announce bi-directional Peer Motion capabilities. This is a huge differentiator for us, particularly in enterprise environments where it is common for customers to have multiple storage systems within their data center. Now they have a simple and efficient way to optimize cost by moving data back and forth based on their SLAs.

Here is just one of the many ways your customers could take advantage of this feature:
- They could run most of their dev and test on their most cost-effective platform, a 2-node 7200
- For production workloads where they want to optimize cost, go with a 4-node 7400
- For production workloads where they want to optimize performance, go enterprise flash with a 20800
- When they need application acceleration for development or production workloads, go with a 20850

And with the 20850, I'm also showing the added flexibility of virtually carving up the system by IOPS, latency, and bandwidth with Priority Optimization for Quality of Service, to assure SLAs for production workloads. Of course, Priority Optimization can be deployed on any 3PAR system. We are also adding HDS support to our Online Import capabilities: we now support simple and efficient migration from EMC VMAX, EMC VNX, and HDS.
All this can be done without the complexity and risk associated with adding a hardware appliance. Instead, deploy a software-defined strategy and map workloads with one click. Simply the easiest way to modernize your EMC VMAX, EMC VNX, and now HDS storage infrastructure.
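The SLA-driven placement walk-through above can be sketched as a simple lookup. The tier names and the mapping are illustrative examples drawn from the bullets, not an HP-published policy; in practice Peer Motion moves the data between arrays online.

```python
# Illustrative SLA-tier-to-platform mapping from the example above.
PLACEMENT = {
    "dev/test":                 "2-node 3PAR 7200",
    "production, cost":         "4-node 3PAR 7400",
    "production, performance":  "3PAR 20800",
    "acceleration":             "3PAR 20850 (all-flash)",
}

def rebalance(workloads):
    """Map each workload to a target array by its SLA tier (illustrative).
    In a real federation, Peer Motion performs the actual data movement."""
    return {name: PLACEMENT[sla] for name, sla in workloads.items()}
```

For example, a CI workload tagged "dev/test" lands on the 7200, while an ERP workload tagged "production, performance" lands on the 20800.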

73 Limited Scale of Flash Startups
Scalable. Flexible. Resilient. Future-proof. HP 3PAR StoreServ all-flash storage: up to 12 PB usable, up to 3.2M IOPS at <1 ms latency*, Tier-1 resiliency, federated data mobility.

Flash startups offer limited scale and limited Tier-1 resiliency. To deliver the same 12 PB usable at 100% SSD as a 3PAR StoreServ 20850 configuration, Pure Storage FA-450 requires 48x the systems, 8x the racks (24 racks vs. 3), and 5x the power.
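One way to sanity-check the scale claims is simple arithmetic, using the 12 PB target from this slide and the 250 TB maximum usable per Pure FA-450 quoted in the AFA comparison table later in the deck:

```python
import math

def systems_needed(target_usable_tb, per_system_usable_tb):
    """Arrays required to reach a usable-capacity target (round up)."""
    return math.ceil(target_usable_tb / per_system_usable_tb)

# Figures from the deck: a 12 PB (12,000 TB) all-flash target and
# ~250 TB max usable per Pure FA-450.
pure_systems = systems_needed(12_000, 250)  # 48 systems
rack_ratio = 24 // 3                        # 24 racks vs. 3 racks -> 8x
```

The 48-system figure matches the "48x the systems" claim on the slide, and 24 racks against 3 gives the "8x the racks" ratio.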

74 How does the HP 3PAR StoreServ 20000 compete?
The Competition: How does the HP 3PAR StoreServ 20000 compete?

Columns: EMC VMAX3 200K | EMC VMAX3 400K | HDS VSP G1000 | IBM DS8870 | HP 3PAR StoreServ 20800
- Compute element: Engine (1-4) | Engine (1-8) | Virtual Storage Director pair (1-8) | Controller node (2; 2 to 16 cores) | Controller node (2-8)
- Total cache: 8TB | 16TB | 2TB | 1TB | 1.8TB on-node, plus 32TB Flash Cache
- cMLC SSD support: VMAX3: no (1.6TB MLC); VSP G1000: no (3.2TB FMD); 3PAR 20800: yes (3.84TB cMLC)
- Max drives: 2880 | 5760 | 2304 | 1536 | 1920
- Ports: 128x 16Gb/s FC, 64x 10GbE, 24x 10GbE NAS | 256x 16Gb/s FC, 128x 10GbE | 96x 16Gb/s FC, 88x 10GbE | 128x 8Gb/s FC | 160x 16Gb/s FC, 80x 10GbE, 48x 10GbE FBO
- Max usable capacity: 2PB | 4PB | 3PB | (not listed) | 15PB
- Max distance between racks: 25m / 100m
- Max internal bandwidth: 700 GB/s | 1,400 GB/s | 896 GB/s | NA | 2,048 GB/s

75 The Competition: Comparing AFAs

Columns: Pure FA-450 | XtremIO | HP 3PAR StoreServ 20850
- Backend interface: 6 Gb/s | InfiniBand, 40 Gb/s | 12 Gb/s
- Max drives: 140 | 200 | 1024
- Largest SSD available: 512GB | 1.6TB | 3840GB
- 16Gb/s FC ports: 12 | not available | 160
- 8Gb/s FC ports: - | 32 | (not listed)
- 10GbE iSCSI/FCoE ports: (not listed) | (not listed) | 80 (FCoE also)
- Max usable capacity: 250 TB | 1612 TB | 12 PB
- Max IOPS: 200K (100% read, 32K) | 1200K (70% read, 30% write, 8K) | 3.2 million (100% random read, RAID 5, 8K)
- Max backend bandwidth: 7 GB/s | 24 GB/s | 75 GB/s
- File Persona: No | (not listed) | On-node

76 HP all-flash innovations collapse desktop management costs
HP 3PAR StoreServ Flash Storage with proven, tested client virtualization solutions from VMware Horizon View and Citrix XenDesktop delivers compelling benefits, such as:
- Improved return on investment (ROI) with reduced total cost of ownership (TCO) for desktop infrastructure
- Reduced IT admin time for distributed scans, patches, configuration, and deployment of end-user devices
- Improved performance through significantly higher IOPS
- Tier-1 resiliency features that deliver capacity and quality for complex client virtualization workloads without compromising performance
- Seamless integration features and enterprise-class services that let you leverage existing storage investments and streamline client virtualization deployments
- An improved end-user experience, with lower latencies than traditional storage solutions for static PCs

77 Accelerate Oracle databases
Oracle databases sit at the heart of your business and demand the highest levels of availability, data services, and performance. But many Oracle database environments are aging and overwhelmed by increasing SLA demands. The result is storage sprawl, overuse of space and power, and database engines that are unable to keep pace with business needs. HP 3PAR StoreServ Flash Storage boosts Oracle performance and efficiency without disruption. Enterprises around the world are realizing the benefits of HP 3PAR StoreServ 7450c Flash Storage as a foundation storage platform for Oracle databases. Benefits include:
- Performance: more than 3.2 million IOPS at sub-millisecond latencies
- Power and space savings: HP all-flash solutions take up less rack space and use less power and cooling, leading to significant reductions in energy and space costs
- High availability: HP all-flash solutions deliver 6-nines+ mission-critical availability, including disaster recovery and business continuity
- Tier-1 data services: rich data services for mission-critical environments
- Minimally disruptive migration: using HP's converged storage architecture, migration tools, and common data services

78 Extend the reach and cost-savings of server virtualization
For virtualized server environments, storage can be a performance bottleneck, creating the need for complex layers of I/O accelerators or workarounds, especially for critical applications. As a result, some of your most demanding workloads have been "off limits" for server virtualization because of the steep storage investments required to meet acceptable performance levels. HP 3PAR 7450c All-Flash offers mission-critical performance and availability for virtualized workloads, eliminating storage bottlenecks for impressive benefits:
- Enjoy unique HP data mobility and flexibility features, allowing you to adjust storage tiers to changing business needs and move data seamlessly in virtualized environments without disruption
- Extend virtualization cost savings to mission-critical workloads by using HP 3PAR StoreServ Flash Storage to meet the performance and reliability SLAs of your most demanding workloads, without over-investing in capacity
- Increase the efficiency of virtualized environments and boost virtual machine (VM) density by eliminating storage performance limitations, using unique HP features such as wide striping
- Reduce complexity and admin time by speeding up VMware management tasks such as cloning and virtual machine migration

79 When should you, and when shouldn't you, sell SW I&S services?
You should sell SW I&S when:
- It is the first 3PAR StoreServ in the customer's environment
- It is a first-time purchase of the software title
- It is a new version of the software title with significant enhancements over the previous version

You may not need to sell SW I&S when:
- The customer already has HP 3PAR StoreServ with the software title and is comfortable deploying the software themselves on new arrays
- The customer is buying multiple arrays and will buy I&S for the software title on the first array, deploying the software themselves on the additional arrays

When should you sell HW I&S services?
- With all new 3PAR arrays
- With all 3PAR HW upgrades
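The guidance above reduces to a small decision rule. This helper is a paraphrase for illustration only (the function name and signature are mine); actual quoting is handled automatically in Watson/SBW.

```python
def quote_sw_ins(first_3par_in_environment, first_purchase_of_title,
                 major_new_version, customer_self_deploys=False):
    """Decision rule from this slide, simplified to booleans.
    Illustrative only; not an HP tool."""
    if customer_self_deploys:
        # Customer already runs the title and deploys it themselves,
        # or buys I&S on the first array only.
        return False
    return (first_3par_in_environment or first_purchase_of_title
            or major_new_version)
```

Any one of the three "should sell" conditions is enough to quote the service, unless the customer is known to self-deploy.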

80 HP 3PAR StoreServ Hardware: New Array Deployment Services Overview

Columns: Category | Product SKU | Product Description | I&S Service SKU (Factory Racked / Field Racked) | Comments

Base (I&S service 5WZ / 5X3; one instance of the service is added for each Config Base quoted):
- C8S83A | HP 3PAR StoreServ N Config Base

Enclosures (I&S service 5X1 / 5X5):
- E7Y20A | HP 3PAR d 2U SFF Drive Enclosure
- E7Y21A | HP 3PAR d 2U LFF Drive Enclosure
1) For factory integrated, one instance of I&S service 5X1 is added for each expansion rack
2) For field integrated, one instance of I&S service 5X5 is added for each drive enclosure

Nodes (always factory integrated; deployment included in the Config Base I&S service):
- C8S84A | HP 3PAR GB CC/128GB DC Node
- C8S87A | HP 3PAR GB CC/256GB DC Node

Adapters:
- C8S92A | HP 3PAR p 16Gbps FC HBA
- C8S93A | HP 3PAR p 12Gbps SAS HBA
- C8S94A | HP 3PAR p 10Gbps iSCSI/FCoE CNA
- C8S95A | HP 3PAR p 10Gbps FBO HBA

Drives:
- J8S05A | HP 3PAR GB SAS 15K SFF Drive
- J8S06A | HP 3PAR GB SAS 15K SFF Drive
- J8S07A | HP 3PAR GB SAS 10K SFF Drive
- J8S08A | HP 3PAR TB SAS 10K SFF Drive
- J8S09A | HP 3PAR TB SAS 10K SFF Drive
- J8S10A | HP 3PAR TB 12G SAS 7.2K LFF Drive
- J8S11A | HP 3PAR TB 12G SAS 7.2K LFF Drive
- J8S12A | HP 3PAR GB 12G SAS cMLC SFF SSD
- J8S13A | HP 3PAR TB 12G SAS cMLC SFF SSD
- J8S14A | HP 3PAR GB SAS cMLC SFF FE SSD
- J8S16A | HP 3PAR GB SAS 15K SFF FE Drive
- K2Q38A | HP 3PAR TB SAS 7.2K LFF HDD
- K2Q50A | HP 3PAR GB SAS MLC SFF SSD
- K2Q54A | HP 3PAR GB SAS cMLC LFF SSD
- K2R25A | HP 3PAR TB cMLC SFF SSD
- M0S93A | HP 3PAR TB SAS 7.2K SFF HDD
- J8S17A | HP 3PAR TB SAS 10K SFF FE Drive
- J8S18A | HP 3PAR TB SAS 7.2K SFF FE Drive

SP:
- K2R29A | HP 3PAR StoreServ RPS Service Processor | SP deployment included in the Config Base I&S service

Watson/SBW automatically adds the appropriate deployment services to a quote based on the product SKUs quoted.
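The SKU-to-service mapping above works like a lookup keyed on the product SKU and the integration type. Here is a small illustrative excerpt covering only the Base and enclosure rows; `service_sku` is a hypothetical helper, since in practice Watson/SBW adds these services to the quote automatically.

```python
# Illustrative excerpt of this slide's SKU-to-I&S-service mapping.
INS_SERVICE = {
    "C8S83A": ("5WZ", "5X3"),  # Config Base: factory / field service SKU
    "E7Y20A": ("5X1", "5X5"),  # SFF drive enclosure
    "E7Y21A": ("5X1", "5X5"),  # LFF drive enclosure
}

def service_sku(product_sku, factory_racked=True):
    """Return the I&S service SKU for a product, or None when deployment
    is already included in the Config Base service (e.g. nodes, SP)."""
    pair = INS_SERVICE.get(product_sku)
    if pair is None:
        return None
    return pair[0] if factory_racked else pair[1]
```

For example, a field-racked SFF enclosure (E7Y20A) maps to service 5X5, while a node SKU returns None because its deployment is covered by the Config Base service.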

81 HP 3PAR StoreServ 20000 Storage Array upgrades deployment service overview
Columns: Product SKU | Product Description | Upgrade I&S Unit of Service (UoS) SKU: 5X7 | Comments

Drive enclosures:
- E7Y22A | HP 3PAR U SFF Upgr Drv Enclosure
- E7Y23A | HP 3PAR U LFF Upgr Drv Enclosure
1) Qty ten of the UoS Upgrade I&S service is added for each field-integrated upgrade expansion rack
2) Qty three of the UoS Upgrade I&S service is added for each field-integrated upgrade drive enclosure

Nodes (qty twelve of the UoS Upgrade I&S service added for each field-integrated upgrade node):
- C8S85A | HP 3PAR GB CC/128GB DC Upgr Node
- C8S88A | HP 3PAR GBCC/256GB DC Upgr Node

Adapters (qty three of the UoS Upgrade I&S service added for each field-integrated upgrade HBA/CNA):
- C8S96A | HP 3PAR p 16Gbps FC Upgrade HBA
- C8S97A | HP 3PAR p 12Gbps SAS Upgrade HBA
- C8S98A | HP 3PAR p 10G iSCSI/FCoE Upgr CNA
- C8S99A | HP 3PAR p 10Gbps FBO Upgr HBA

Drives (qty one of the UoS Upgrade I&S service added for each 12 field-integrated upgrade drives):
- J8S19A | HP 3PAR GB SAS 15K SFF Upg Drv
- J8S20A | HP 3PAR GB SAS 15K SFF Upg Drv
- J8S21A | HP 3PAR GB SAS 10K SFF Upg Drv
- J8S22A | HP 3PAR TB SAS 10K SFF Upg Drv
- J8S23A | HP 3PAR TB SAS 10K SFF Upg Drv
- J8S24A | HP 3PAR TB SAS 7.2K LFF Upg Drv
- J8S27A | HP 3PAR TB SAS 7.2K LFF Upg Drv
- J8S28A | HP 3PAR GB SAS cMLC SFF Upg SSD
- J8S29A | HP 3PAR TB SAS cMLC SFF Upg SSD
- J8S30A | HP 3PAR GB cMLC SFF FE Upg SSD
- J8S32A | HP 3PAR GB SAS 15K FE Upg Drive
- K2Q39A | HP 3PAR TB SAS 7.2K LFF Upg HDD
- K2Q51A | HP 3PAR GB SAS MLC SFF Upg SSD
- K2Q55A | HP 3PAR GB SAS cMLC LFF Upg SSD
- K2R24A | HP 3PAR TB cMLC SFF Upg SSD
- M0S94A | HP 3PAR TB SAS 7.2K SFF Upg HDD
- J8S33A | HP 3PAR TB SAS 10K FE Upg Drive
- J8S34A | HP 3PAR TB SAS 7.2K FE Upg Drive

Watson/SBW automatically adds the appropriate deployment services to a quote based on the product SKUs quoted. The minimum UoS Upgrade I&S service quantity added to a quote is 14.
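The per-component quantities and the minimum of 14 make the total UoS computable. A sketch of that arithmetic, with one assumption labeled in the comment: the slide says one UoS per 12 upgrade drives, and I assume partial dozens round up.

```python
import math

def upgrade_uos_qty(racks=0, enclosures=0, nodes=0, hbas=0, drives=0):
    """Total Units of Service for a field-integrated upgrade, per this
    slide: 10 per expansion rack, 3 per drive enclosure, 12 per node,
    3 per HBA/CNA, 1 per 12 drives, with a quote minimum of 14.
    Assumption: partial dozens of drives round up (the slide does not say)."""
    qty = (10 * racks + 3 * enclosures + 12 * nodes + 3 * hbas
           + math.ceil(drives / 12))
    return max(qty, 14)
```

For example, a single upgrade node computes to 12 UoS but is raised to the quote minimum of 14, while two nodes yield 24.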

82 HP 3PAR StoreServ 20000 SW Deployment Services
Columns: Product Description | License Type | I&S Service Package and Band | PS Service Package, Band, and Qty | Comments

- HP 3PAR OS Suite | Base + Drive | HA124A1 5XC | System Reporter SW I&S service is added when 3PAR OS Base is quoted
- HP 3PAR Adaptive Optimization | 5X8 | PS: HK696A1 2BT, qty 2 | PS service provides AO Policy Design and Implementation
- HP 3PAR Data Optimization Suite v2 | 5XA
- HP 3PAR Dynamic Optimization | 5XB
- HP 3PAR Peer Motion | 5XD
- HP 3PAR Priority Optimization | 5XE
- HP 3PAR Replication Suite | 5XF | I&S service provides Level 1 deployment of VC, RC, and PP only; Level 2 and 3 services are available for advanced implementations of VC, RC, and PP; for CLX deployment, the custom implementation service HA115A#57M is required
- HP 3PAR Virtual Copy | 5QW | I&S provides Level 1 deployment of VC; Level 2 and 3 services are available for advanced VC implementations
- HP 3PAR Remote Copy | 5QV | I&S provides Level 1 deployment of RC; Level 2 and 3 services are available for advanced RC implementations
- HP 3PAR Business Continuity Suite | Frame | HA115A1 5X9 | Provides custom-quote implementation of CLX and Peer Persistence
- HP 3PAR App Suite for MS Exchange | 5XG
- HP 3PAR App Suite for MS Hyper-V | 5XH
- HP 3PAR App Suite for Oracle | 5XJ
- HP 3PAR App Suite for MS SQL | 5XK
- HP 3PAR App Suite for VMware | 5XL
- HP 3PAR File Persona Suite | Capacity | 5XM | Quote one instance of the I&S service per array when File Persona is quoted

83 HP 3PAR StoreServ 20000 deployment services
Additional resources:
- HP 3PAR Software Installation and Startup Service datasheet
- HP 3PAR Replication Software Suite Installation and Startup Service datasheet
- HP 3PAR Business Continuity Software Suite Implementation Service datasheet
- HP Data Replication Solution Service for HP 3PAR Virtual Copy datasheet
- HP Data Replication Solution Service for HP 3PAR Remote Copy datasheet
- HP 3PAR StoreServ Storage Installation and Startup Service datasheet

