1
HP 3PAR OS 3.2.2 HP 3PAR Product Management HP 3PAR Tech Marketing July 2015 HP Restricted – for HP and HP Channel Partner Internal Use Only
2
Confidential Disclosure Agreement
Center for Sales & Marketing Excellence Training Express Webinars Confidential Disclosure Agreement The information contained in this presentation is proprietary to Hewlett-Packard Company and is offered in confidence, subject to the terms and conditions of a binding Confidential Disclosure Agreement (CDA) HP requires customers and partners to have signed a CDA in order to view this training The information contained in this training is HP confidential, but will become HP restricted after August 24th, 2015 This information may only be shared verbally with HP- external customers and/or partners under NDA, and only with HP Storage director-level approval. Do not remove any classification labels, warnings or disclaimers on any slide or modify this presentation to change the classification level. Do not remove this slide from the presentation HP does not warrant or represent that it will introduce any product to which the information relates The information contained herein is subject to change without notice HP makes no warranties regarding the accuracy of this information The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as constituting an additional warranty HP shall not be liable for technical or editorial errors or omissions contained herein Strict adherence to the HP standards of business conduct regarding this classification level is critical. This presentation is NOT to be used as a ‘leave behind’, and therefore, should not be distributed/released in either hard-copy or electronic format.
3
HP 3PAR OS 3.2.2
4
HP 3PAR – High level OS Evolution
HP 3PAR OS 3.2.2: HP 3PAR StoreServ 8000/20000 Storage Systems; HP 3PAR StoreServ Management Console 2.2; support for higher scalability; Persistent Checksum; Remote Copy Asynchronous Streaming; Peer Persistence for RHEL; HP StoreOnce Recovery Manager Central 1.1; Storage Federation (4x4 multi-directional); Online Import for HDS; Priority Optimization sub-millisecond latency goal; Adaptive Flash Cache enhancements; iSCSI VLAN tagging; VMware VVols higher scalability; Autonomic Rebalance enhancements; on-node System Reporter changes; Adaptive Optimization new options; LDAP improvements; Smart SAN support
HP 3PAR OS 3.2.1: 7000 Converged models; HP 3PAR StoreServ 7440; Adaptive Flash Cache; Express Writes; FIPS EKM; AO on VV sets; Peer Persistence for MSFT; VMware VVols; Tunesys fixes; resiliency improvements; Thin Deduplication; File Persona; 1.92/3.84TB cMLC SSDs
HP 3PAR OS 3.1.3: max limits increase; performance optimizations; Priority Optimization latency goal; MxN replication; Adaptive Sparing; 480GB/920GB SSD with 5-year warranty; 1.2TB 10K and 4TB 7.2K HDDs; upgrade automation (SW/drives); Peer Motion load balancing and cluster support; resiliency improvements; SR-on-Node; performance alerts; Online Import for EMC
HP 3PAR OS 3.1.2: HP 3PAR StoreServ 7000; thick-to-thin conversion; EVA to 3PAR Online Import; Persistent Ports; Peer Persistence; Windows 2012 ODX support; online VV copy; RESTful API; HP 3PAR StoreServ 7450; flash-optimized architecture; DAR encryption; Peer Persistence with ATF; Priority Optimization max limit; Remote Copy enhancements; FCoE; OpenVMS support
(Timeline spans 2013-2015. ATF = Automatic Transparent Failover, with vMSC certification.)
5
HP 3PAR OS – Release Plan
HP 3PAR OS 3.2.2 is the SW enabler for the GEN5 platforms. 7000 and 10000 systems will continue to ship on 3.2.1 MU3.
Customers who choose to stay on 7K/10K after the release of the 8K/20K are looking for consistency across their environment and/or do not like to be early adopters of OS releases. Those who want to can upgrade to 3.2.2 via the controlled release process.
As the 3PAR OS 3.2.2 code is released to manufacturing we will start the controlled release process. Existing customers, POCs, and demo units can be upgraded to 3.2.2.
We will communicate dates and remind people of the process in the following weeks.
6
3.2.2 Milestones (Activity / Date / Comment)
HP 3PAR 8000/20450 TekTalk / 8/3/15 / Learn more about the new HP 3PAR StoreServ platforms
Controlled Release (CR) registration / 8/6/15
HP 3PAR SSMC 2.2 TekTalk / Deep dive into the all-new SSMC 2.2
HP 3PAR Async Streaming Replication TekTalk / Green Zone and Async Controlled Release details
Controlled Release (CR) – SW upgrades / 8/13/15 / Existing systems can upgrade to 3.2.2; pre-release Release Notes and Upgrade Guides available on the CR site
Documents/SPOCK updates / 8/24/15
HP 3PAR Storage Federation TekTalk / 8/27/15 / How to sell the value of Storage Federation and its key differentiators
3.2.2 FA Matrix / Sept 2015
Performance/Sizing/Best Practices TekTalk / 9/3/15 / Performance, sizing, configuration, and pricing best practices for the new HP 3PAR StoreServ 20000 and StoreServ 8000 storage systems
iSCSI TekTalk / 9/17/15 / All you ever wanted to know about iSCSI and its best practices
7
What’s new in 3.2.2 Feature Descriptions Collateral
HP 3PAR 20000/8000 support: Support for the new GEN5 ASIC capable systems. (QuickSpecs/Datasheet)
Higher scalability: Increased scalability for base volumes, snapshots, and initiators across all systems. (FA/Support Matrix)
FC 16Gb/s: Expanded interop matrix; support for RCFC and Peer Motion (no direct-connect support). (SPOCK updates)
Persistent Checksums: Support for T10-PI on the GEN5 ASIC.
System Reporter: More control over the .srdata volume (grow/snap) and compare-by (top x items over time). (Updated System Reporter white paper)
Remote Copy enhancements: Asynchronous streaming replication (new async mode); Sync over FCIP; Sync RTT up to 5ms (includes all transports and PP). (Updated RC white paper)
Peer Persistence new OS support: Linux RHEL 6.x; Oracle RAC on RHEL; Oracle RAC on Windows. (Updated PP white papers)
Peer Motion: 4x4 multi-directional data mobility; SSMC orchestration; Online Import for HDS and VNX2. (New Storage Federation white paper)
File Persona: 128TB node-pair max capacity; support for using the on-board Ethernet card for File (aka the RCIP port).
Adaptive Flash Cache: Improved to also accelerate 64KB random reads; can now be created using an R0 layout. (Updated AFC white paper)
8
What’s new in 3.2.2 Feature Descriptions Collateral iSCSI
iSCSI: Support for Persistent Ports "loss_sync" recovery (automatic failover in case of a failure due to loss of signal or link down between the array and the switch); support for SendTargets discovery; up to 256 iSCSI initiators per port on HP 3PAR 20000/8000; iSCSI VLAN tagging on HP 3PAR StoreServ 20000/8000; Enterprise iSCSI (iSCSI over DCB/lossless Ethernet) on HP 3PAR StoreServ 20000/8000; iSCSI IPv6 on HP 3PAR StoreServ 20000/8000. (Updated Persistent Ports white paper; new iSCSI best practices white paper)
VMware VVols: Support for 3,000 guests that use VVols; support for multi-tenancy (same array with multiple vCenters). (SPOCK updates)
Autonomic Rebalance: Automatic tunesys upon drive upgrades on 7200 and 8200.
Autonomic Layout: Improved out-of-the-box CPG creation on all-flash systems.
Adaptive Optimization: Min/Max space utilization per tier (force more or less data to SSD). (Updated AO white paper)
LDAP: Support for multiple LDAP servers (an unlimited number can now be defined); support for multiple domains (any user defined in a domain that falls under the same root domain); ability to look up servers in DNS; load balancer support via a new ldap-type option.
Smart SAN: Zoning automation.
Priority Optimization: Sub-millisecond latency goals.
9
Manageability
SSMC 2.2: Will be the default management console and the default reporting tool. Support for GEN5 ASIC systems, Federation, Asynchronous Streaming, QoS, Flash Cache, and SR compare by/with.
IMC 4.7: This release will support the 20000/8000 and the transition from IMC to SSMC for existing IMC customers. Last IMC version; IMC EOL planned for 1H2016.
SP 4.4: Support for the new physical SP HW (redundant power supplies).
External System Reporter 3.1 MU4: Support for existing 10K and 7K systems running HP 3PAR OS 3.2.2. No public support for 20000/8000, but a process is in place to support exceptions.
10
Host Based products
The following host-based products will align with the controlled release and be available on the various download portals on the CTR date (Software / Media Kit):
VSS Provider 2.5, IMC 4.7, SSMC 2.2, CLX 4.0.2, Quorum Witness, Host Explorer 3.1.1 / Mgmt code media kit
RMC-V 1.1 / Recovery Manager Central media kit
Recovery Manager for Exchange 4.7, Recovery Manager for SQL 4.7, Recovery Manager for Hyper-V 2.2, Recovery Manager for Oracle 4.7 / Recovery Manager media kits
SP 4.4 and SmartStart 1.5 / OS media kit
Note: RMV customers MUST migrate to RMC-V when upgrading to 3.2.2.
11
What’s not covered in this TekTalk
Asynchronous Streaming Replication
Storage Federation
Online Import
SSMC 2.2
iSCSI deep dive
SmartSAN
12
Scalability
13
7000/10000 Scalability (Object / 7200/c / 7400/c / 7450/c / 10400/7440c / 10800)
Base VVs: 16k / 32k / 64k (model dependent)
Total VVs: 128k
Max VLUNs: 256k
FC 8Gb/s port initiators: 128
FC 16Gb/s port initiators: 256
iSCSI/FCoE port initiators: 64 (iSCSI), 64 (FCoE)
FC system initiators: 1,024 / 3,072 (2,048 can be 16Gb/s ports) / 3,072 on 7440 (2,048 can be 16Gb/s ports), 4,096 on 10400 / 8,192
iSCSI/FCoE system initiators: 256 (FCoE), 256 (iSCSI) / 512 (FCoE), 512 (iSCSI) / 512 (FCoE, iSCSI) on 7440, 1,024 (FCoE, iSCSI) on 10400 / 2,048 (iSCSI, FCoE)
Remote Copy max VVs, sync: 800 / 800 with 2 nodes, 2,400 with 4 nodes / 2,400 with 4+ nodes
Remote Copy max VVs, async periodic: 2,400 / 2,400 with 2 nodes, 6,000 with 4 nodes / 6,000 with 4+ nodes
Peer Persistence max VVs: 600
NOTE: All these numbers are still goals and not final. The FA Matrix will have the final and supported numbers.
14
8000/20000 Scalability (Object / 8200 / 8400 / 8440 / 8450 / 20450 / 20800 / 20850)
Base VVs: 32k / 64k (model dependent)
Total VVs: 128k
Max VLUNs: 256k
FC 16Gb/s port initiators: 256
iSCSI/FCoE port initiators
FC system initiators: 2,048 / 4,096 / 8,192 (model dependent)
iSCSI/FCoE system initiators: 1,024
Remote Copy max VVs, sync: 800 / 800 with 2 nodes, 2,400 with 4 nodes / 2,400 with 4+ nodes
Remote Copy max VVs, async periodic: 2,400 / 2,400 with 2 nodes, 6,000 with 4 nodes / 6,000 with 4+ nodes
Peer Persistence max VVs: 600
NOTE: All these numbers are still goals and not final. The FA Matrix will have the final and supported numbers.
15
Thin Deduplication
16
HP 3PAR OS 3.2.1 MU3 and 3.2.2 improvements
3.2.1 MU3 Improvements
Resiliency fixes: trapped space not released on upgrade; GC abandons during snapshot, periodic Remote Copy, and rebalance (tunesys) scenarios; a zero-detect issue that triggered host I/O to hang.
Space Efficiency / Accounting: 10x more-frequent refreshes of showcpg and dedup ratios; a user command to force a refresh.
3.2.2 Improvements
Space Efficiency / Accounting:
DDS defrag on GEN5 ASIC capable systems. Non-contiguous free 16KB pages will be defragged and can be compacted/released back to the system to be used by other CPGs or TPVVs in the same CPG.
Dedup Estimator support for CPVVs.
New option to disable dedup globally: setsys DisableDedup. New write requests to TDVVs serviced by the system will not be deduplicated if this parameter is set to "yes". Setting this parameter to "no" will enable writes to TDVVs to be deduplicated. The default value is "no". (See the sketch below.)
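A minimal CLI sketch of the global dedup switch described above (the parameter name is taken from the text; verify the exact spelling on your array before use):
cli% setsys DisableDedup yes     (new writes to TDVVs are no longer deduplicated)
cli% showsys -param              (check the current value)
cli% setsys DisableDedup no      (restore the default behavior)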
17
Example of TDVV performance improvements
Code / Workload / Performance:
MU2 / 4KB 100% RW / CPU 80-90% busy, 53k IOPS
MU3 and 3.2.2 / 4KB 100% RW / CPU 20-30% busy, 65k IOPS
Mixed workload: 16 VMs (over 4 ESX servers): Web server, 4K 100% random read (outstanding IO 8); Exchange 2010 data, 32K 73% read, 100% random (outstanding IO 4); Backup reader, 63K 100% read, 80% sequential (outstanding IO 2); OLTP, 4K 67% read, 100% random (outstanding IO 64).
Result: 101K IOPS at 1.5ms (MU2) versus 190K IOPS at 0.79ms (MU3/3.2.2).
18
3PAR Deduplication – June 2016
> 1,000 systems are reporting back with TDVVs configured. This includes POC systems, internal systems, and systems with only a few TDVVs.
The data used for the graphs includes systems that meet the following criteria: more than 7 TDVVs; dedup ratios >0 and <20; no HP systems (demo/internal/POC).
Average dedup ratio: 2.1. Median: 1.5. (Chart: distribution by number of systems.)
19
Block Size - considerations
Deduplication block size is not a major factor when looking at dedup efficiency.
Real production environments show a wide range of I/O sizes, with the bandwidth driven by block sizes of 16KB or larger.
Example: on a system where the majority of IOPS are 4KB, the 4KB block size still contributes less than 5% of the overall write bandwidth.
20
Best Practices, they make a huge difference
Before (no best practices) vs. after (best practices followed):
Linux ext3 filesystem, dedup ratio 1.7:1; migrated to XFS, dedup ratio 4.2:1.
Hyper-V migration with no sdelete and the default 4KB allocation unit, dedup ratio 1.1:1 and compaction ratio 2.7:1; migrated to 16KB allocation-unit volumes and ran sdelete, dedup ratio 1.6:1 and compaction ratio 4:1.
XenServer, dedup ratio 1.8:1; upgraded to MU3 and retuned, dedup ratio 2.7:1.
21
Summary
When looking at deduplication data reduction there are three buckets:
VDI: deduplication produces great savings for both persistent and non-persistent desktops.
Non-deduplicable data sets: it is the nature of the data set; it is all unique data, period.
Mixed workload: where most customers fall if the two previous categories do not apply; it will produce savings within the range shown in the previous data.
The average data reduction is aligned with what is achieved by Pure Storage, which is perceived as the industry leader when it comes to deduplication. It also aligns with data from a recent IDC flash study.
The main challenge is to explain and invest cycles in tweaking the solution (follow the best practices); when that is done, deduplication performs great (given the data set is deduplicable).
22
Adaptive Flash Cache
23
Adaptive Flash Cache
AFC max per system (across 7200c / 7400c / 7440c / 10400 / 10800 / 8200 / 8400 / 8440 / 20800): 768GB / 1.5TB / 3TB / 4TB / 8TB / 32TB (model dependent)
AFC max per node pair: 2TB
Min SSDs per node pair for AFC: 4 (2xDMAG)
In 3.2.2, AFC creation will use an R0 layout by default. R1 will still be possible and will assure a consistent flash cache size and access even in case of an SSD or node reboot.
Existing AFC customers will have to remove and re-create Flash Cache if they want to change their layout (see the sketch below).
When R0 is used, 'spare' capacity can be used (create flash cache within 24h of installing the drives).
HP 3PAR StoreServ 10000 can only support R1 for Flash Cache due to the use of DMAGs.
AFC in 3.2.2 will accelerate random-read misses from 0-64KB (3.2.1 was 0-63KB).
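A hedged sketch of switching an existing Flash Cache to the new default layout (the size is an example; option names and defaults may differ by release, so treat this as an illustration rather than a procedure):
cli% setflashcache disable sys:all
cli% removeflashcache
cli% createflashcache 768g
cli% setflashcache enable sys:all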
24
AFC on 8440 with R0 8440 with 4xSSD per node pair
SSD type / Available chunklets per SSD / Raw capacity / Usable Flash Cache / Overhead (unused flash):
4x480GB eMLC / 446 / 1.82 TiB / 1.6 TiB / none
4x1.92TB cMLC / 1787 / 7.3 TiB / 4 TiB / 2.5 TiB
4x3.84TB cMLC / 3574 / 14 TiB / 4 TiB / 10 TiB
AFC with 4xSSD and R0: with 4 SSDs the drives will be used for flash cache only. When R0 is used we will automatically reclaim spare chunklets to be used as Flash Cache space, up to the max available chunklets minus 10% (which is reserved for adaptive sparing / cMLC handling).
The formula for max flash cache with R0 is [(total chunklets x 1024) x 4] minus 10% (worked example below).
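A worked example of the formula above for the 4x480GB eMLC case: 446 chunklets x 1024 MiB x 4 drives = 1,826,816 MiB, or roughly 1.74 TiB; subtracting 10% leaves about 1.57 TiB, which matches the 1.6 TiB of usable Flash Cache shown in the table.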
25
Gotcha : AFC on 3.84TB and 1.92TB FIPS/SED
The SSD drives that were introduced in June via MU3 and the HP 3PAR OS MU2 Patch 13 were not added to the AFC check, so Flash Cache cannot be created on these devices. Creating Flash Cache on these drives will result in a generic out-of-space message: "no disks with free space in the system". This shortcoming is addressed in 3.2.2, and we are investigating a patch for MU3.
AFC cMLC drive support (Drive / Model / AFC working on MU3 / Comment):
480GB cMLC / DOPE0480S5xnNMRI / No / AFC not supported on 480GB cMLC
1.92TB cMLC / DOPE1920S5xnNMRI / Yes / AFC supported
3.84TB cMLC / DOPM3840S5xnNMRI / / AFC supported, upgrade to 3.2.2
1.92TB cMLC SED / HSCP1920S5xnFMRI /
26
VVols and Adaptive Flash Cache
Exposed as a capability
27
AFC – Clarification on write support
The purpose of write caching is to lower write latency and increase write throughput. The goal of Flash Cache write support is to improve write throughput.
There are three different types of write cache techniques:
Write-through: writes are committed to Flash Cache directly; no DRAM is used.
Write-back: writes are committed to DRAM and then de-staged to Flash Cache.
Write-around: writes bypass the cache and go to permanent storage.
Does 3PAR Adaptive Flash Cache have write support? YES. Fewer random read requests to the HDDs result in more available resources for write caching, resulting in higher write throughput. Random write requests in DRAM are de-staged atomically to Flash Cache and HDD. Random read requests in DRAM are written to Flash Cache.
Does 3PAR Adaptive Flash Cache support write-through/write-back? NO.
Write-through cache directs write I/O onto cache and through to underlying permanent storage before confirming I/O completion to the host. This ensures data updates are safely stored on, for example, a shared storage array, but has the disadvantage that I/O still experiences latency based on writing to that storage. Write-through cache is good for applications that write and then re-read data frequently, as data is stored in cache and results in low read latency.
Write-around cache is a similar technique to write-through cache, but write I/O is written directly to permanent storage, bypassing the cache. This can reduce the cache being flooded with write I/O that will not subsequently be re-read, but has the disadvantage that a read request for recently written data will create a "cache miss", have to be read from slower bulk storage, and experience higher latency.
Write-back cache is where write I/O is directed to cache and completion is immediately confirmed to the host. This results in low latency and high throughput for write-intensive applications, but there is a data availability exposure risk because the only copy of the written data is in cache. As we will discuss later, suppliers have added resiliency with products that duplicate writes. Users need to consider whether write-back cache solutions offer enough protection, as data is exposed until it is staged to external storage. Write-back cache is the best performing solution for mixed workloads as both read and write I/O have similar response time levels.
28
Latency improvements : Production Systems
Even with a low FMP hit rate (~10%) we see a significant impact in lowering front-end service times. The systems analyzed show between 35% and 40% lower latency. All the data analyzed shows consistently lower latency with less jitter on read service times.
29
AFC Performance benefits
Random read acceleration and increased write throughput in mixed workloads: fewer random read requests to the HDDs result in more available resources for write caching.
Test system: 7200 with 8x10k SAS drives and 668GB of AFC on 4 SSDs; 32KB blocks, 60/40 read/write, 60/40 sequential/random.
30
File Persona
31
New: File Persona in 3PAR OS 3.2.2
Support for 8000 and 20000 series Converged Flash and All Flash systems.
Higher scalability: double the usable file capacity and increased users; up to 256TB total capacity and 15,000 concurrent users per system.
More connectivity: up to 48 10GbE ports on 20000; on-board RCIP port. Increased bandwidth to support file I/O and to allow add-on HBAs for iSCSI or FC.
Increased efficiency: Thin Deduplication and reduced housekeeping space. More raw capacity with dedupe, and the housekeeping volume reduced to ~225GB per node.
Backup efficiency: snapshot backup of an FPG. Improved backup efficiency via Virtual Copy snapshot backup of an entire FPG.
Expanded antivirus scanning: support for Trend Micro ServerProtect. Industry-leading antivirus vendor coverage along with Symantec and McAfee.
32
Scalability: File Persona in 3PAR OS 3.2.2
128TB per node pair, up to a max of 256TB aggregate usable file capacity. Up to 15,000 concurrent users on an 8-node system.
Max usable file capacity: 128TiB per node pair, 256TiB aggregate.
Concurrent users per node pair: 1,500 / 2,250 / 3,750 / 3,000 depending on platform (7200c/8200, 7400c, 7440c/7450c, 8440, and up).
Max concurrent users: 4,500 / 7,500 / 12,000 / 15,000 depending on platform and node count.
33
Connectivity: File Persona in 3PAR OS 3.2.2
Network options per platform (7000c series, 8000 series, 20000 series): 10GbE NIC, 1GbE NIC, on-board RCIP (1GbE), on-board RCIP (10GbE). Ports per node pair: 0/4, 0/8, 2, or 0/4/8/12 depending on the option. Network bond*: mode 1 or 6, none, or N/A depending on the option.
Important points:
Network config in a node pair must be symmetric, e.g., 10GbE in both nodes or RCIP used in both nodes.
Only one type of network port is allowed in a bond, e.g., either 1GbE only or 10GbE only.
RCIP ports and add-on NIC ports can't be enabled together for File Persona.
RCIP port to be used with caution: low performance and no port HA (node HA only).
The RCIP port can be used for either File Persona or Remote Copy at a time.
* Mode 6 is the default setting.
34
New: File Persona in 3PAR OS 3.2.2
Additional enhancements
Enhanced networking: network bond mode 6 (load balancing) for 10GbE on all supported platforms.
Enhanced DR support: N-1 Remote Copy support; 'growfpg' automatically adds new VVs into the existing Remote Copy group (see the sketch below).
Cross-protocol enhancements: simplified share folder ACL management in the restricted cross-protocol scenario.
Config backup enhancements: MD5 checksum for config backup data integrity.
Quota enhancements: 'setfsquota' options to remove a quota.
Upgrade and revert: revert to 3.2.1 is supported except if File Persona was enabled in 3.2.2 and remains enabled (due to the lack of support for reduced-size housekeeping volumes in 3.2.1), if dedup volumes are in use for FPGs, or if 10GbE NICs are configured for bond mode 6.
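A minimal sketch of the growfpg behavior mentioned above (the FPG name and size are hypothetical):
cli% growfpg myfpg 16t     (the VVs added by the grow are automatically placed into the FPG's existing Remote Copy group)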
35
New: File Persona limits
Highlighted text is new in 3PAR OS 3.2.2
Capacity: 128TB per node pair, up to 256TB max
File System (FPG): 32TB per FPG; 16 FPGs per node pair; 32 VVs allowed per FPG (min. 1TB size)
VFS: 16 VFS per node pair (1 VFS per FPG); 4 VLANs per VFS; 4 IP addresses per VFS
File Store: 256 per node pair
Snapshots: 262,144 per node pair
Users: up to 3,750 users per node pair (platform dependent)
File shares: 4,000 SMB shares per node pair; 1,024 NFS shares per node pair
File: 2TB file size; 128K files per directory; 100 million files and directories per FPG
Quotas: 2,000 user/group quotas per node pair; 256 capacity quotas per node pair
36
Updated: HP 3PAR File Persona opportunity zones
Where to position 3PAR File Persona: enterprise file sync and share; home directory consolidation up to 15,000 concurrent users; group/departmental shares; corporate shares; select custom cloud apps; up to 256 TB aggregate file storage capacity.
Everything else, including virtualization, database, applications, HPC, and video editing and media streaming, is outside the File Persona opportunity zone.
37
Updated: File Persona feature summary as of 3PAR OS 3.2.2
Supports 3PAR StoreServ 7000c, 8000, and 20000 series Converged Flash and All Flash systems.
File Persona is a licensed feature of the 3PAR OS, productized as a 1TB usable capacity SW Suite LTU.
Supports up to 256TB aggregate file capacity with user and capacity quota policies.
Rich protocol support: SMB 3.0, 2.1, 2.0, 1.0; NFSv4, v3; Object Access API.
File Data Services including user authentication, quota management, File Store snapshots, and antivirus scanning.
Data optimization via Thin Provisioning, Thin Deduplication, Adaptive Optimization, Dynamic Optimization, and Adaptive Flash Cache.
Data protection via Remote Copy replication, network share and NDMP-based backup, and Data at Rest Encryption.
Solutions enablement for Sync and Share and OpenStack Manila.
(Diagram: cloud file sharing via SMB, NFS, and REST alongside FC, FCoE, and iSCSI block access, managed by the 3PAR StoreServ Management Console.)
38
Call to Action
Take advantage of all the sales tools and deliverables in the HP 3PAR File Persona Power Pack.
Look for opportunities for low-profile, block-adjacent file use cases.
Be careful about positioning against NetApp.
Promote File Persona sales in the Green Zones.
39
Persistent Checksums
40
HP 3PAR Persistent Checksum
The industry term for Persistent Checksum is T10 Protection Information (PI), which allows a checksum to be transmitted from the application and HBA all the way to the disk drive.
Why is this important? Traditionally, protecting the integrity of customers' data has been done with multiple discrete solutions, including Error Correcting Code (ECC) and Cyclic Redundancy Check (CRC), but there have been gaps across the I/O path from the operating system to the storage. The implementation of the T10-PI standard ensures that data is validated as it moves through the data path, from the application, to the HBA, to storage, enabling seamless end-to-end integrity.
How likely is a bit error? Fibre Channel was originally developed at 25 MB/s; today it runs at 1600 MB/s. The channel error rate is 1 in 10E12 bits, i.e. roughly an error every 10,000,000,000,000 (ten trillion) bits transmitted. This kind of inconsistency is also known as silent corruption.
What does T10-PI enable? T10-PI enables the detection of an inconsistent data transmission, forcing the application to retransmit the stream of data and eliminating the occurrence of a silent corruption condition. It is a data integrity protection mechanism that spans transport and device boundaries to protect data as it is transmitted from host to storage and back. It improves fault isolation by allowing each device in the data path to recognize the protection mechanism, and provides enhanced robustness and improved data integrity protection between standardized initiator and target devices.
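A rough illustration using the numbers above: at 1600 MB/s a link carries about 1.28 x 10^10 bits per second, so one error per ten trillion (10^13) bits works out to roughly one error every 780 seconds, i.e. about one potential silent corruption every 13 minutes of sustained transfer.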
41
HP 3PAR Persistent Checksum
Ensuring E2E data integrity from the host HBA, for both reads and writes.
Challenge: media and transmission errors can be caused by any component in the I/O stack.
HP solution: industry-standard end-to-end data protection via T10-PI (Protection Information). Currently supported on FC only (1).
Benefit: end-to-end protection for all data stored on 3PAR StoreServ systems; completely agentless, OS and application agnostic; supported on all GEN5 ASIC based HP 3PAR StoreServ systems.
(Diagram: data plus T10-PI flows from the host HBA through the SAN switch, the 3PAR front-end adapter, and the 3PAR back-end adapter to the 3PAR drives.)
T10 DIF is a SCSI standard that adds a layer of data protection by appending an eight-byte record, called the Data Integrity Field, to each disk sector. Each DIF record contains: a 2-byte CRC of the 512 bytes of disk block data; a 2-byte "application tag"; and a 4-byte "block tag" or "reference tag", which is the lower 32 bits of the disk block address.
Without DIF, systems are vulnerable to silent data corruption within the disk drive: the data received from a successful disk read is assumed to be correct and is passed on to the requesting host. DIF provides a method to validate the data returned by the disk drive before sending the data to the host.
1) For a list of supported T10-PI HBAs see HP SPOCK.
42
HP 3PAR Persistent Checksum
An extra 8-byte Data Integrity Field (DIF) is added to the standard 512-byte disk block:
2-byte Guard field: CRC of the data block
2-byte App field: application-specific field
4-byte Ref field: least significant 32 bits of the Logical Block Address (LBA)
(Diagram: User Data (512 bytes) followed by the DIF field: Guard (2 bytes), App (2 bytes), Ref (4 bytes).)
43
HP 3PAR Persistent Checksum
Supported environments (diagram: host HBA, SAN, 3PAR nodes, and drives, showing where back-end, array E2E, and host E2E T10-PI apply):
Back-end T10-PI (aka T10-DIF) is available and always on for 7000, 10000, 20000, and 8000.
Array E2E T10-PI is available and always on for 20000/8000.
Host E2E T10-PI is available for 20000/8000 and is on if a supported HBA and driver are installed.
Initial support is for FC only.
Find the supported HBAs, drivers, and OSes in SPOCK.
44
Host HBA Supported at 3.2.2 GA
HBA vendor: QLogic.
Supported OS / protocol / HBA combinations at GA (FC only): ESXi 5.5 and ESXi 6.0, RHEL 6.6/6.7, RHEL 7.0/7.1, and Windows 2012/2012 R2, with the HP SN1000Q and QMH2672-HP adapters. QLogic drivers are provided by ISS as part of SNAP releases.
If the host side is not enabled for Persistent Checksum (the HBA does not support it, the host driver does not support it, or it is turned off), the customer will still have Persistent Checksum spanning the array target ports all the way to its back-end drives.
As long as the right host driver and HBA are installed, Persistent Checksum will work even if the host server is not an HP server.
45
Peer Persistence and Remote Copy
46
Replication Transport
Peer Persistence HP 3PAR OS support matrix
Front-end transports: FC, FCoE, iSCSI. Replication transports: RCFC, RCIP.
Host OSes: Windows Server 2008 R2 and Windows Server 2012 R2 (supported), Windows Cluster, Windows Hyper-V, VMware (vMSC), RHEL 6.x, HP-UX 11iv3 (CR+90 days; RCIP N/A), Oracle RAC 11g R2 (Linux, Windows), Solaris 9/10/11 (future), AIX 6.x/7.x.
Standalone and clustering are supported. RHEL 6: no KVM support.
47
New Functionality for Manchester: Additional OS Support
Peer Persistence: new functionality for Manchester (additional OS support)
Red Hat OS support: RHEL 6, stand-alone server and cluster; use the Generic_ALUA host persona (2) to set up hosts; Oracle RAC 11g R2.
HP-UX OS support (post GA, 30 days after controlled release): HP-UX 11iv3; use the HPUX host persona (13) to set up hosts.
48
Round Trip Time changes
Replication mode/transport, latency*, and notes:
Sync/RCIP: 5ms (this has increased from 2.6ms)
Sync/RCFC
Sync/FCIP: new FCIP support for sync
Periodic/RCIP: 150ms (no change from current)
Periodic/RCFC
Periodic/FCIP: 120ms (no change from current)
Asynchronous Streaming/RCFC: (this will be increased in future releases)
Asynchronous Streaming/FCIP: 10ms
Note: all values are round-trip latencies.
Note: latency supported for Peer Persistence.
These are the limitations on RC configuration.
49
Priority Optimization
50
Priority Optimization
Sub-millisecond latency goal: Storage QoS latency goal as low as 500μs.
#setqos -lt 0.50ms vvset:qos.SSD
#setqos -lt 500us vvset:qos.SSD
#showqos vvset:qos.SSD
(showqos reports Id, Type, Name, QoS, Priority, Min/Max I/O per second, Min/Max KBytes per second, and LatencyGoal; the vvset qos.SSD rule shows QoS on, priority normal, and the configured latency goal.)
51
VVOLs
52
VVol Updates – changes in 3.2.2
New in 3.2.2: multiple VVol storage containers, multi-vCenter support, VASA 2.0 support, scalability (3,000 VMs), Flash Cache capability, CLI enhancements, multi-tenancy, and higher scalability.
Multiple VVol storage containers: allows provisioning of VMs into multiple/separate (storage-administrator-managed) VVol storage containers, with new CLIs to manage storage containers. In 3.2.1, either the whole array or a virtual domain was used to contain all VVols. Migration from the all-array/all-domain storage container to VV set storage containers: new storage-container VV sets are created during upgrade if the VASA Provider was *ever* registered in vCenter.
Multi-vCenter support: allows multiple vCenters and vCenter clusters to use VVols on the same array. In 3.2.1, only one vCenter could be registered to the array through VASA for VVol use.
VASA 2.0 support: allows array discovery and monitoring using VASA 2.0, and discovery of traditional storage LUNs with VASA 2.0 capability management (similar to VVols). In 3.2.1, VASA 2.0 was only supported for VVol management.
Scalability (3,000 VMs): supports creating and running up to 3,000 VMs simultaneously. In 3.2.1, 1,500 VMs were supported.
Flash Cache capability: allows provisioning of VVols through SPBM based on the array's Flash Cache capability. In 3.2.1, the array could support Flash Cache, but this was not advertised as an array capability.
53
iSCSI
54
HP 3PAR StoreServ iSCSI Enhancements
Support for Persistent Ports "loss_sync" recovery: automatic failover in case of a failure due to loss of signal or link down between the array and the switch.
Support for SendTargets discovery.
Support for up to 256 iSCSI initiators per port on HP 3PAR 20000/8000.
Support for iSCSI VLAN tagging on HP 3PAR StoreServ 20000/8000.
Support for Enterprise iSCSI (iSCSI over DCB/lossless Ethernet) on HP 3PAR StoreServ 20000/8000.
Support for iSCSI IPv6 on HP 3PAR StoreServ 20000/8000.
Note: 20000/8000 come with a new CNA card that will not be qualified on 7000 and 10000.
55
SendTargets Group (STGT)
Reduces the number of steps needed to configure iSCSI initiators: returns information (IP address, TCP port) about "All" relevant ports. CLI only (controlport stgt); GUI support comes in SSMC 2.3.
Current implementation: returns only the local port on which the request arrives, which might be considered a violation of the iSCSI specification; returning all local ports is not an option.
New implementation using STGT: a new STGT field allows the SAN admin to define relevance; ports sharing the same STGT are returned; STGT is initialized to the TPGT during online upgrade. Supported on all platforms.
To reduce the amount of configuration required on an initiator, the iSCSI protocol provides the SendTargets text request. The initiator uses the SendTargets request to get a list of targets to which it may have access, as well as the list of addresses (IP address and TCP port) on which these targets may be accessed. If the text value included in the request is set to 'All', the target is expected to return information on all relevant target ports. Our current implementation only returns the local port information, which might be interpreted as a violation of the iSCSI specification. Note that the iSCSI specification does not define which target ports are relevant. The SAN administrator will be able to create SendTargets groups to identify target ports that should be reported together. A SendTargets group is identified by a tag (STGT) assigned to each iSCSI port. Initially each port is in its own group (STGT=TPGT). The STGT value can be changed via the CLI, thus combining multiple ports into the same group. When SendTargets=All is received on a port, the InServ will return all ports which have the same STGT value as the port that received the request.
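A hedged sketch of grouping two iSCSI ports into one SendTargets group with the controlport stgt subcommand named above (the tag value and the argument order are assumptions):
cli% controlport stgt 100 0:2:1
cli% controlport stgt 100 1:2:1
cli% showport -iscsi     (verify that both ports now report the same STGT)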
56
Enterprise iSCSI
GTM between StoreServ and StoreFabric, going to market with the HP 5900CP switch: better performance and small buffers on the switch.
Lossless mode using a suite of IEEE protocols: 802.1Qbb Priority-based Flow Control; 802.1Qaz Enhanced Transmission Selection and other configuration TLV(s); DCBX, the DCB exchange protocol based on the LLDP peer exchange protocol.
Hagrid CNAs with the newer platforms only (20000/8000).
DCB is available and supported on 99% of network switches sold today.
57
DCB
The beauty of DCB is the flat read, write, and duplex curves, as shown in the charts (read, write, and duplex throughput).
58
iSCSI VLAN tagging: network simplification/manageability, security
iSCSI VLAN tagging allows iSCSI ports to be configured with multiple IP addresses and VLAN tags. Traffic to those ports is then filtered on VLAN membership. The customer benefits are: network simplification/manageability, security, performance, and cost.
Note: supported on the 20000 and 8000 platforms only.
59
iSCSI VLAN tagging details
iSCSI (firmware), inbound: When VLAN tagging is enabled for an IP interface, all inbound Ethernet frames must be tagged, and the VID must match the VID configured for that interface; otherwise, the frame is discarded. The user priority and CFI are not used for filtering. If only the user priority bits are set (that is, the VID is zero), frames with or without VLAN tagging are accepted. When VLAN tagging is not enabled, the inbound Ethernet frame must be untagged; otherwise, it is discarded.
iSCSI (firmware), outbound: When VLAN tagging is enabled for an IP interface, all outbound Ethernet frames are tagged with the TCI configured for that interface. When VLAN tagging is not enabled, the outbound Ethernet frame is untagged.
60
Performance improvements with new CNA cards
(Charts: iSCSI performance of the 20000 versus the 7000 with the new CNA cards.)
61
iSCSI – Competitive overview vs AFA
Comparison of iSCSI capabilities across SolidFire, EMC XtremIO, Pure Storage, Nimble, and HP 3PAR. Rows compared: initiators per port (64 vs. 256); initiators per system (N/A, 128, 512, and up to 8,192 for HP 3PAR); VLAN tagging support; jumbo frames support; authentication (CHAP); IPv6; Enterprise iSCSI; Persistent Ports; iSCSI MPIO support in OpenStack (yes, Kilo); Express Writes; T10-PI; Smart SAN.
62
Tunesys
63
Tunesys – changes in 3.2.2 Change Detail
Automatic rebalance after upgrades: Tunesys will start automatically as soon as new PDs are admitted (triggered by admithw). Applies to (2-node) HP 3PAR StoreServ 7200/c and 8200 systems ONLY. (This will be configurable via the SSMC in later releases for other systems.) Any existing tunesys task will be stopped and replaced by a new instance.
Tune generation performance: faster tune generation for VVs of the same size and "space type" (USR, SNP) in the same CPG.
Improved progress reporting: a background tunesys task now reports its task "step" progress as GiB to move / GiB moved, which gives a better estimate of how far tuning has progressed. During the inter-node phase, the task log also displays the current MiB/second tuning speed. For example, a showtask Phase of 1/3 and Step of 4096/65536 indicates that tunesys is in phase 1 of 3 (inter-node) and that the current phase has completed 4096 MiB (4GiB) of tunes out of an estimated total of 65536 MiB (64GiB). (See the sketch below.)
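A minimal sketch of checking rebalance progress from the CLI (the task ID is hypothetical):
cli% tunesys               (start a manual rebalance; on 7200/c and 8200 it now starts automatically after admithw)
cli% showtask -d 1234      (the Phase and Step columns report the 1/3 and 4096/65536 style progress described above)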
64
Tunesys – changes in 3.2.2 Change Detail
Improved logging (tunelog only, available via InSplore): Tunesys now logs PD usage and cleaning state per node before and after tuning. It also logs CPG grow parameters and an estimate of how well each CPG is balanced. This will help troubleshooting, as the information is point-in-time and not from a week-old or two-days-later InSplore.
Warning messages (e.g. degraded or failed PDs, unbalanced spindle count between node pairs): these are now logged to the screen even if -f is used. Tunesys will still run with -f, but the warnings are now visible. The warnings are toned down slightly ("caution").
.srdata volume tuning: the .srdata volume can now be a significant size (up to 1TiB) and will now be tuned by the inter-node or tuneld phase, depending on the upgrade.
Limitation (Flash Cache LDs): Flash Cache LDs will not be tuned by tunesys. They are "raw" storage and contain no region mappings, so they cannot be tuned by LD or VV tuning. If new SSD capacity is added, Flash Cache LDs will need to be deleted and recreated to use the new PDs.
Tunesys Best Practice: will be updated with items post release.
65
Autonomic Layout
66
Autonomic Layout Improved layout management for AFA systems
Today the out-of-the-box creation creates three CPGs (R1/R5/R6) with default set-size values (3+1). Call-home data shows that customers use the R5 CPG only. We have seen that a non-optimal max set size is a competitive disadvantage in small configs and POCs, due to too much system overhead (sparing + RAID overhead).
For new all-flash systems (any model) that are installed on 3.2.2, only one R5 CPG will be created: CPG name SSD_r5, minimal sparing configuration, auto-detection of capacity change and re-layout.
Example: suppose we started with 4 cages with 4 drives in each cage. This gives us R5 (3+1). After adding 2 cages (4 drives each), we can do R5 (5+1).
67
Autonomic Layout Implementation Rebalance checks
Rebalance checks occur every hour. They will never reduce HA or set size; the only possible transitions are ones where either HA or set size gets better while the other does not get worse. We will relax to mag availability with the widest possible set size, choosing the highest availability at the best efficiency.
Autonomic Layout vs. automatic tunesys: Autonomic Layout will trigger an automatic tunesys and change the layout for the system CPG only, and is restricted to AFA configurations only. Automatic tunesys on 7200 and 8200 works on all tiers of storage and rebalances all CPGs without changing things like the set size or HA; volumes are re-striped using the existing set size and HA.
68
Adaptive Optimization
69
AO - New Min and Max limit values per CPG
New option that allows more control over how much Max/Min space should reside in a tier before data is moved out. The primary use case is to enable the customer to make the most of their SSD drives by defining a minimum space usage for the SSD tier CPG.
Adding an explicit maximum limit also separates AO space usage from the CPG growth warning and limit, which produce frequent alerts for a deliberate situation.
AO will enforce the max size of a tier as the smallest of the possible limiting factors: the CPG space warning or limit, the new AO tier max, or the physical space limitation of the CPG characteristics.
AO still prevents movement if the destination CPG has high average service times.
These values are optional. CLI support only; SSMC support planned in 2.3.
70
How does it work? The new tier settings are visible in showaocfg.
cli% showaocfg aocfg_a
(Output columns: Id, Name, per-tier CPG (T0/T1/T2), Min(MB), Max(MB), Warn(MB), Limit(MB), and Mode; here Id 2, Name aocfg_a, CPGs a0/a1/a2, Mode Balanced, with a minimum set only on tier 0.)
cli% showcpg -s a*
(Shows Usr, Snp, and Adm space plus capacity efficiency for CPGs a0, a1, and a2.)
cli% setaocfg -t0min 400g aocfg_a
With the first command we can see that only tier 0 has a minimum set. The showcpg output confirms that the tier-0 CPG has 100GB of used space (because earlier AO runs filled it to the level set in the aocfg). The setaocfg command then increases the tier-0 minimum to 400GB.
71
How does it work? The AO output includes new tables to display the space usage.
cli% startao -btsecs -6h aocfg_a
(The per-tier table lists Tier, CPG, Disk, RAID, UsedMB, MinMB, LimitMB, AvailMB, IO/GB*m max/avg/min, Count, total IO/s, and total svctms for a0 on SSD, a1 on FC, and a2 on NL. The run log then shows messages such as:
Starting genMoves iteration: 0
Starting selMoves
a0 svctms: 1.0 maxSvctms: 15 raidSvctFactor: 1.0
a1 svctms: 12.2 maxSvctms: 40 raidSvctFactor: 1.0
CPG a0 usage MiB below minimum MiB
Criteria: busiest
Move from tier1 to tier0 MiB
Task: 2367, 128 moves
Waiting for move_regions taskids to complete
followed by a closing summary: AO duration 50 minutes; space moved between tiers: Tier1 -> Tier0 moved 300GB, Tier2 -> Tier0 moved 32MB; current CPG space per tier; AO complete.)
Now running AO with the new 400GB minimum, the tier-0 fields show that the CPG does not meet the minimum at the outset. AO will run for several iterations and move regions into tier 0 in order to satisfy the minimum. The AO activity summary shows that 300GB was moved up to tier 0, and the current CPG space table shows that tier 0 now has 400GB of used space.
72
System Reporter
73
System Reporter New controlsr CLI command with grow and export subcommands Threshold alert criteria improvements Additional filtering SR data volume backup and restore options Hidden changes supporting expanded SSMC performance reporting interface to be equivalent to the external SR (compareby – available in SSMC – not documented in CLI)
74
Controlsr - grow
Subcommand grow increases the size of the system VV ".srdata". Increasing the size enables proportionally longer data retention. CLI only; GUI integration comes in SSMC 2.3. The max .srdata size is 1TB.
controlsr grow <growth_size>
or
controlsr grow -pct <percentage>
cli% showsr
(Per node, shows Total(MB), Used(MB), and Used%; per file type, shows Count, TotalUsage(MB), TypeUsed%, EarliestDate, and EndEstimate. Here the hires file type shows an end estimate of 2 years, 131 days from now.)
cli% controlsr grow -pct 10
(After growing .srdata by 10%, the hires end estimate increases to 2 years, 213 days from now.)
75
Controlsr - export
Subcommand export exports a subset of the SR data, typically for engineering analysis. Requires one of these categories: -hires, -hourly, -daily, -ldrg (the LD region stats used by AO), -ldrgsum (a concise summary of LD region stats in a format for R3Tools), or -all, which includes hires, hourly, daily, ldrg, and some minor extras such as the SR threshold alert criteria definitions.
Other noteworthy options:
-csv exports to .csv format, which can be opened by MS Excel; otherwise the output is SQLite .db files.
-save -file <location> allows the output to be copied to the remote CLI client machine; otherwise an SP is required to "sendhome" the file to 3PAR. You need to use the CLI client; ssh will not work.
-btsecs and -etsecs specify a time range.
76
Send SR data to STaTS
mktg-eos65 cli% controlsr export -daily -btsecs -2d -csv
Showtask output: Updated: Compressing files into exportsr_daily_..._174433_PDT.tbz; Updated: Tar complete, Total MB; Completed scheduled task.
On STaTS:
Event id: (Node 2) Cust Alert - No, Svc Alert - No
Severity: Informational
Event type: Request for SP to transfer a file to 3PAR
Msg ID: N/A. Component: Undefined
Short Dsc: Request to transfer file /common/srdata_export/exportsr_daily_...
Event String: Request to transfer file /common/srdata_export/exportsr_daily_..._174433_PDT.tbz from node 2
Once transferred, this will be available as a SYSINFO file in the STaTS NFS directory.
77
Save SR data to laptop cli% controlsr export -all -btsecs -2d -csv -save -file . Task 67 started Task 67 done Starting copy from SR node 1: exportsr_all_ _ _111847_MDT.tbz to . Copied 1.3MB in 1 seconds. Clean up exportsr files ... > tar tf exportsr_all_ _ _111847_MDT.tbz exportsr_sysinfo_ _ _111847_MDT.txt srmain/ srmain/srmain.csv ldrg/ csv ldrg/ csv (45 ldrg files) ldrg/ csv aomoves/ aomoves/ csv aomoves/ csv hires/ hires/statvlun.csv hires/statrcopy.csv hires/statrcvv.csv hires/statfsfpg.csv hires/statpd.csv hires/statqos.csv hires/statcache.csv hires/ldspace.csv hires/statport.csv (csv file for each table in hires, hourly, and daily directories)
78
Threshold alert criteria improvements
-recur option: an alert will be generated if the other conditions are met repeatedly (also available via SSMC 2.2). Syntax: -recur <recurrences>/<samples>. Example: a criterion that generates an alert if 8 or more PDs experience at least 50ms average service time repeatedly, four times in any 6 high-resolution periods (5 minutes each):
cli% createsralertcrit pd -hires -count 8 -recur 4/6 total_svctms>50 recur8slow_pd
Delta conditions: an alert will be generated if a value has changed by the given amount since the previous sample. Prefix delta_ to any condition field (also available via SSMC 2.2). Negative numbers are also supported. Example: a criterion that generates an alert if a VLUN experiences a sharp increase in total IOPS from the previous hour:
cli% createsralertcrit vlun -hourly delta_total_iops>200 vlun_spike
Added a -critical option to createsralertcrit and showsralertcrit, which refers to alert severity (also available via SSMC 2.2).
Added a -comment option to createsralertcrit and setsralertcrit, visible in showsralertcrit.
Added a statcache category, which includes FMP statistics for flash cache:
cli% createsralertcrit cache -hourly -critical fmp_read_hit_pct<10 fmp_miss
79
Threshold alert criteria improvements
Added -all and -pat options to setsralertcrit, allowing multiple criteria to be enabled or disabled together:
cli% setsralertcrit -disable -pat pdalert*
Scheduling of the setsralertcrit command is allowed, so that criteria can be active only at certain times:
cli% createsched "setsralertcrit -enable alert1" "0 18 * * *" sralert_enable
cli% createsched "setsralertcrit -disable alert1" "0 22 * * *" sralert_disable
cli% showsched -all
(Lists each schedule's name, command, minute/hour/DOM/month/DOW fields, creator, status, alert flag, and next run time; here sralert_enable runs at 18:00 and sralert_disable at 22:00 daily.)
Boolean 'or' is allowed in createsralertcrit conditions using the ~ character; boolean 'and' is represented by a comma. Parentheses are not permitted, so not all logical expressions can be created. Boolean 'and' has higher precedence than boolean 'or'.
cli% createsralertcrit port read_iops>200,write_iops>300~total_kbps>3000 alert3
80
New Filters Add -rpm option to srstatpd, srpdspace, srhistpd, and createsralertcrit pd -rpm <speed>[,<speed>...] Limit the data to disks of the specified rpm. Allowed speeds are 7, 10, 15, 100 and 150 Add -port option to srstatrcopy command. -port <npat>:<spat>:<ppat>[,<npat>:<spat>:<ppat>...] Ports with <port_n>:<port_s>:<port_p> that match any of the specified <npat>:<spat>:<ppat> patterns are included, where each of the patterns is a glob-style pattern. If not specified, all ports are included. Assist federation data tracking with new port type: srstatport –port_type peer
81
Snapshot protection for .srdata
HP 3PAR OS 3.2.2 allows the .srdata VV to have a snp_cpg associated with it (CLI only; GUI coming in SSMC 3.x). This enables a snapshot or physical copy in order to back up SR data, and then promotesv or a VV copy to restore it (while SR is stopped). Example:
cli% setvv -snp_cpg cpg1 .srdata
cli% createsv -ro srdata_backup .srdata
Now if the primary .srdata VV becomes corrupt or has a database problem, the snapshot can be restored:
cli% stopsr -f
cli% promotesv srdata_backup
Task 1089 has been started to promote virtual copy srdata_backup
cli% waittask 1089
Task 1089 done
cli% startsr -f
82
WSAPI – SR integration System Reporter What’s supported?
physicaldiskstatistics physicaldiskspace volumespace cpgspace vlunstatistics portstatistics cmpstatistics LD stats pertaining to CPG only URL (vstime = versus time, attime = at time) GET /systemreporter/vstime/<component>/<report identifier>[?<query expression>] GET /systemreporter/attime/<component>/<report identifier>[?<query expression>]
83
WSAPI – SR integration (2)
System Reporter (Cont’d) Examples of PD stats Query physical disk hourly performance sample data for all disks in the system GET /systemreporter/vstime/physicaldiskstatistics/hourly Query physical disk hi-resolution performance sample data for PD ID 1 GET /systemreporter/vstime/physicaldiskstatistics/hires;id:1 Query physical disk daily performance sample data for all FC and NL disks in the system GET /systemreporter/vstime/physicaldiskstatistics/daily;type:1,2 Query physical disk hourly performance sample data at current time for only FC disks in the system and group them into id GET /systemreporter/attime/physicaldiskstatistics/ hourly;groupby:id?query=“disktype EQ FC”
84
SR-On-Node failures
If sampling fails it will generate an alert, and in case of a critical error the data collection process could core dump (PROC=srsampler|NODE3 PROCESS_FAIL Process srsampler could not be started).
In the past it has been recommended to just delete and re-create the .srdata volume. This is absolutely NOT best practice: never wipe data except as a LAST resort and with customer consent. These issues must make it to the lab so that any underlying issues are fixed in the code.
Steps to recreate .srdata:
stopsr (which unmounts the .srdata volume)
Remove the .srdata volume: removevv .srdata
Recreate the .srdata volume using the "admithw" command
85
Updatevv
86
Updatevv changes
The updatevv command updates a snapshot virtual volume with a new snapshot. Previously, the fact that we removed and re-exported the VLUN broke several relationships on the server side and in VV sets. HP 3PAR OS 3.2.2 changed the behavior of the command so that VLUNs are not removed and exported again.
In general, hosts and applications don't like data suddenly changing while performing I/O, so always try to perform updatevv when there is little or no I/O. Existing updatevv limitations don't change. (A minimal usage sketch follows below.)
Glossary for the next slides:
3-phase block: pauses I/O on the VVs and flushes all data to disk.
VV maps / exception tables: just think of this as how the data for snapshots is saved on disk. There is a lot more to it, but this presentation simplifies and combines them into this main idea.
Standby mode: a mode where a VV basically returns a disk reset for incoming I/O.
Closing a VV: removing the VV structures from the kernel.
Defining a VV: creating the VV structures within the kernel.
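A minimal usage sketch (volume names are hypothetical):
cli% updatevv vv1.ro        (refresh the RO snapshot in place; in 3.2.2 the existing VLUN export is preserved)
cli% updatevv -ro vv1.rw    (when run against a RW snapshot, -ro also updates its parent RO snapshot)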
87
How Does it Work (Standby and Block)
(Diagram: base VV with snapshots RO1 and RO2 and their VV maps / exception tables.)
Receive an updatevv for RO2.
Put the snapshot in standby mode: it basically returns a disk reset for incoming I/O; this is needed to prevent internal issues with I/O when the kernel ID is changed later.
Start a 3-phase block: uses the new granular 3-phase block and only blocks the base VV and its snapshots.
88
How Does it Work (Create New Snapshot)
(Diagram: newsv is added alongside RO1 and RO2 under the base VV.)
Create a new snapshot (newsv) with the same parent and type as RO2.
89
How Does it Work (Switch Snapshots)
(Diagram: the exception tables of newsv and RO2 are swapped.)
Switch the exception tables between newsv and RO2. Internally this means: temporarily closing the VV and snapshots; switching the exception tables, creation time, etc. between the two snapshots (note the two snapshots switch places within the internal RO chain); updating the TOC and kernel; and redefining the VV.
90
How Does it Work (Unblock)
(Diagram: base VV with RO1, RO2, and newsv after the switch.)
Unblock the VV. I/O through RO2 now goes through the updated exception tables and sees different data. I/O through unrelated snapshots (RO1) sees the same data as before.
91
How Does it Work (Remove Temporary Snapshot)
(Diagram: newsv is removed, leaving the base VV with RO1 and RO2.)
Remove the temporary snapshot (newsv). This removes the old exception tables.
92
Gotchas Updating RW snapshots is the exact same process, except with a RW snapshot instead of a RO snapshot. The “-ro” option will update the parent RO snapshot when updatevv is called on a RW snapshot. Update process is the same as before, except the steps create and modify a RW snapshot and its parent RO snapshot during the same block. Any VLUN, VV Set, etc. configurations should be unaffected and untouched. The only thing that should change is the data, creation time, and order of the RO/RW chains If the snapshot has children, then the original updatevv implementation returns an error.
93
LDAP
94
Previous LDAP limitations
Limitations prior to HP 3PAR OS 3.2.2:
Only one LDAP server can be defined.
Configuration is difficult, and there is little validation that settings are correct.
Troubleshooting is difficult; error messages are cryptic and nonspecific.
Secure authentication through load balancers is not supported.
Several bugs in the LDAP implementation.
95
LDAP in HP 3PAR OS 3.2.2
New features: multiple servers; multiple domains; ability to look up servers in DNS; load balancer support; better usability; easier configuration.
96
Better Usability Better input validation
Expanded input validation for all unsupported configurations, with meaningful error messages (e.g. "The parameters groups-dn and accounts-dn are not valid together").
More meaningful error messaging: the authentication process was broken into multiple parts, each with meaningful error messages: LDAP configuration validation; LDAP server name resolution and network connectivity checking; LDAP connection and binding; LDAP search.
97
Easier configuration
The previous LDAP configuration was difficult to get correct: for an average AD implementation, the user had to look up and fill in 8+ parameters. From 3.2.2, the user needs only the name of the domain with which they'd like to authenticate, plus the group-to-privilege maps for users.
98
New configuration instructions for Active Directory:
Set the Kerberos realm by issuing the setauthparam kerberos-realm <LDAP_ServiceName> command, where <LDAP_ServiceName> is the name of your domain. If unknown, find the domain name by following these instructions on any member of the domain:
1. Click Start -> Run.
2. In the Open box, enter ldp and click OK.
3. The Ldp window opens.
4. In the Ldp window, click Connection -> Connect.
5. In the Server box, enter the Active Directory server's IP address and click OK. The root DSE attributes and values are displayed in the right-side pane.
In ldp.exe, the Kerberos realm is the portion of the ldapServiceName value that follows the "at" sign and terminates before the semi-colon (;).
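A hedged sketch of a minimal Active Directory configuration on the array (the realm, server address, and group DN are hypothetical, and the map parameter shown is only one of the group-to-privilege maps mentioned earlier):
cli% setauthparam -f kerberos-realm EXAMPLE.COM
cli% setauthparam -f ldap-server 192.0.2.10
cli% setauthparam -f super-map "CN=3PAR Admins,OU=Groups,DC=example,DC=com"
cli% checkpassword jsmith     (exercises name resolution, binding, and group lookup end to end)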
99
New features Multiple servers Multiple domains
Multiple servers: the previous versions only allowed a single LDAP server to be defined; the new version allows an unlimited number of servers to be defined.
Multiple domains: the previous versions did not allow users outside of a single subdomain; the new version allows any users defined in domains that fall under the same root domain.
100
Multiple Servers
Multiple server support allows you to configure all of the servers within a single domain (shown by color in the diagram). Multiple domain support allows you to authenticate against a user in any of the shown domains by configuring the servers at the root of the forest (shown in purple).
101
Usage Configuration instructions for use with multiple domains:
The configuration is the same as any Active Directory configuration, with a few changes: The configuration should be for the highest domain in the forest (hpqcorp in this example) If most users will be in a single subdomain, that domain can be configured with the native-domain parameter Users not in the default configured domain must be specified in either: User Principle Name Down-Level Logon Name (SubDomain\UserName) The OUs where users in each domain are defined must be configured on the array The OUs where groups are defined in each domain must be configured on the array If certificate checking is enabled, all Domain Controllers must have certificates signed by the same root Certificate Authority. DNS must be configured on the array, and the configured DNS server(s) must be able to look up SRV records for all domains that will be used.
102
Changes to LDAP - new features
LDAP servers behind load balancers Previously the array would only support simple authentication through load balancers (or any device that did NAT or translated hostnames). Now there is an ldap-type “LOAD” which will query through an intermediate device to determine the hostname of the device doing the actual authentication and use that in order to authenticate.
103
Changes to LDAP - new features
Load Balancing
104
Usage Configuration instructions for use with a load balancer:
When configuring the array to communicate with LDAP servers behind a load balancer (indicated by the ldap-type parameter being set to LOAD), the administrator must set the other parameters to the correct values. Because querying DNS can easily return the names of the actual back-end servers, the hostnames of the load balancers themselves should be entered (IP addresses can also be used); the array will then determine which server it is actually using for authentication. Persistence on the load balancer must be set so that multiple successive connections are routed to the same server (within a few seconds). A minimal sketch follows.
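A minimal sketch of that configuration, assuming a hypothetical load-balancer hostname: the ldap-type value LOAD is from this release, while the server-name parameter shown (ldap-server-hn) is an assumption to confirm against the CLI reference.

cli% setauthparam ldap-type LOAD
cli% setauthparam ldap-server-hn ldap-vip.example.com

With the LOAD type set, the array queries through the balancer to learn the hostname of the server that is actually answering and authenticates against that name.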
105
Miscellaneous changes in 3.2.2
Oracle VM support: 3.2.x, 3.3.x (post GA)
Maximum snapshots per VV tree increased to 2048 (previously 512)
VV sets now allow a volume to be added even when contiguous LUN IDs are not available: when adding a VV to an already-exported VV set, gaps (“--”) are left in the VV set until the next free LUN ID is found, and the new VV is exported using that LUN ID. Before this enhancement, a VV could be added only when a contiguous LUN ID was available; otherwise the add resulted in a conflict.
Checking network health using checkhealth no longer causes network ports to flap.
Conversions, online copies, online promotes, and imports were changed to block only the relevant VV trees (the base VV and its snapshots). UpdateVV also blocks only the relevant VV trees.
statvlun/statvv support the following VVol filters: -p -vmname, -vmid, -vmhost, -vvolstate, -vvolsc (see the example below)
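To illustrate those filters, here is a minimal sketch; the VM name web01 is a hypothetical example, and the flags are the ones listed above.

cli% statvlun -p -vmname web01
cli% statvv -p -vvolstate bound

These limit the rolling statistics to the VLUNs and VVs backing the matching VVol VMs instead of reporting on every object in the system.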
106
Call to Action
107
Call to Action
3.2.2 is another amazing release that extends our data service leadership in the industry.
Show our customers how we bring great SW features to new and existing platforms in every release.
Focus on what we have, not what is missing; we will cover minor and major gaps in upcoming releases.
The iSCSI changes offer a great opportunity for us to extend in that market, especially in the all-flash segment.
Lead with our Federation story; it’s an amazing differentiator.
More to come in the 8/27 TekTalk.
108
Thanks!
109
Host Explorer/3PARInfo/PowerShell
110
3PAR Host Explorer 3.2.2 Support
Manchester Support
HE integration with SPP (Service Pack for ProLiant) for Windows Server host operating system
Additional host OS supported in Manchester:
VMware vSphere 5.5
VMware vSphere 6.0
RHEL 6.0 (new updates)
RHEL 7.0
Helion OS
SLES 12 support
Additional tokens: Host Consumed Capacity
Enhancements/defect fixes:
HE should support current-2 versions of 3PAR OS, i.e. be backward compatible with previous 3PAR OS versions
UI enhancements
Documentation updates (restructured support matrix)
111
3PARInfo 3.2.2 Support
Manchester Support
New host OS support:
RHEL 6.4, 6.5, 6.6
RHEL 7.0
SUSE 12
VMware ESXi 6.0
Support of new 3PAR hardware platforms (3PAR 8000 and 20000)
Support for TDVV
Enhancements/defect fixes:
10-digit serial number
Backward compatibility with older 3PAR operating systems
Modify install behavior to update messages when 3PARInfo is already installed
Improved installation files and scripts
16 Gb FC support
112
3PAR PowerShell Support
Manchester Support
3PAR OS support
3PAR 8000/20000 support
Additional 3PAR CLI support via PowerShell
Customer feedback incorporation/defect fixes
113
Interoperability
114
3PAR OS 3.2.2 - Interoperability at GA
Microsoft: Windows Server 2012, R2 (includes Hyper-V) + MSFC; SF 6.x + Clustering*
VMware: vSphere 5.0, 5.1, 5.5; vSphere 6.0 (includes VVol & VASA 2.0); VMware Cluster; VMware Metro Cluster; ESX 6
Unix:
AIX 6.1, 7.1; Power HA 6.1, 7.1; VIO Server x, x, x
HP-UX 11.23 (Itanium, PA-RISC), 11.31 (Itanium, PA-RISC); Service Guard (MSTS process); HPVM; SF 5.1, 6.0.x (HP-UX only) + SF Clustering*
Linux:
RHEL 5.x, 6.x, 7.x + Clustering; RHEV; RHEL KVM 5, 6, 7; HP SG/LX Clustering; SF 6.x
SLES 10 SP4; SLES 11 SP3 + HA Ext and KVM Ext; SLES 12 + HA + KVM
Citrix XenServer 6.2, 6.5; Citrix Clustering
Oracle Linux (OL) 5, 6, 7; SF 6.x + SF Clustering
115
3PAR OS 3.2.2 - Interoperability post GA
Microsoft: WS 2008/R2; WS 2003; VMware MS Failover Clustering
Mac OS 10.7, 10.8, 10.9, 10.10
OpenVMS on Integrity 8.3/8.3-1H1, 8.4
Appliances: Cisco UCS; NetApp; HP Helion support
Oracle VM 3.2.x, 3.3.x
Unix: Solaris 10 + SC 3.3 (x86/SPARC); Solaris 11, SC 4.0, 4.1 (x86/SPARC)
Linux:
Oracle Linux UEK 5, 6, 7 (Unbreakable Enterprise Kernel); HP SG/LX Clustering
Ubuntu 12.04 LTS, 14.04 LTS
SLES 11 SP4; SLES 11 SP3 + HA Ext + HP SG/LX Clustering
Storage Foundation: AIX, SLES, Solaris
Grey Zone:
AIX 5.3 TL12
SLES 10 SP4; SLES 11 SP2; SLES 11 SP1, SP2
HP-UX 11.11
VMware ESX 4.0/4.1
DER only: AIX 5.3 TL 10, TL 11
116
Windows Server 2008, R2 (Includes Hyper-V) + MSFC
FCoE initiator to FCoE target:
Microsoft Windows 2003, R2 + MSCS; Windows Server 2008, R2 (includes Hyper-V) + MSFC; Windows Server 2012, R2 (includes Hyper-V) + MSFC
VMware vSphere (ESX) Clusters
Linux: RHEL 5.x, 6.x, 7.x + Clustering
Software iSCSI:
Windows 2008, R2 (includes Hyper-V) + MSFC
VMware vSphere 5.1+ Clusters; VMware vSphere Clusters
Unix: Solaris 10 + SC 3.3; Solaris 11, SC 4.0, 4.1
RHEL KVM 5, 6, 7; Oracle VM; OL 5, 6, 7; UEK 5, 6, 7
SLES 10 SP4; SLES 11 SP1, SP2, SP3, SP4 + HA Ext; SLES 12
Ubuntu 12.04 LTS, 14.04 LTS; Helion
Citrix XenServer 6.0, 6.1, 6.2, 6.5; Citrix Clustering
Hardware iSCSI:
OL 5, 6, 7
FC initiator to FC target:
UEK 5, 6, 7; SLES 11 SP1, SP2, SP3, SP4 + HA Ext
AIX 6.1, 7.1
HP-UX 11.23 (Itanium, PA-RISC), 11.31 (Itanium, PA-RISC)
Mac OS 10.7, 10.8, 10.9, 10.10
OpenVMS on Integrity 8.3/8.3-1H1, 8.4
Citrix XenServer 6.2, 6.5
Ubuntu (DER only)
Appliances: NetApp
FCoE initiator to FC target:
Cisco UCS
117
VVOLs
118
Multiple VVol Storage Containers
All storage containers in the same domain (or “root” domain) have the same CPGs available to them, meaning all storage containers in the same domain/root domain will have the same capabilities available to VVols created in those storage containers.
Migration: for each domain and “root domain” that was previously visible to vCenter as a storage container, VVol migration will create a new VV set that represents that storage container. Any VVols found will be placed in those storage-container VV sets.
Limits:
Any host can mount any storage container in the same domain as the host (domain sets are not supported).
There is no hard-coded limit to the number of storage containers; vSphere may have its own limits.
119
Multiple VVol Storage Containers
New setvvolsc CLI to turn a VV set into a storage container:
-create: converts a VV set into a VVol storage container
-remove: removes VVols from a VVol storage container VV set and removes the VV set
New showvvolsc CLI to show storage containers, the number of VMs/VVols and allocated space; example output columns include Name, Num_VMs, Num_VVols, In_Use (MB) and Provisioned (MB) (the example listed containers named vasavvset and vvolsc). See the usage sketch below.
showvvolsc -listcols lists the available columns: InUse_MB, Num_VMs, Num_VVols, Provisioned_MB, SC_Comment, SC_CreationTime, SC_Domain, SC_Name, SC_UUID, Total_MB, VV_SetId
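A minimal usage sketch, assuming a hypothetical VV set named MyVVolSet: the -create and -remove options and the showvvolsc command are from this release, but the exact argument form (set:MyVVolSet) is an assumption to verify against the CLI help.

cli% setvvolsc -create set:MyVVolSet
cli% showvvolsc
cli% setvvolsc -remove set:MyVVolSet

showvvolsc reports each container’s VM count, VVol count and space usage using the columns listed above.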
120
Multi-vCenter Support
Allows multiple vCenters and vCenter clusters to use VVols on the same array.
Updated CLI to allow VASA Provider (VP) certificate management, including PKI integration.
setvasa -certmgmt server
Selecting “server” (the new default) allows the VP to own its certificate, including using a PKI certificate.
Use createcert, importcert, showcert and removecert to manage the VASA Provider’s certificate:
Self-signed: createcert vasa -selfsigned …
PKI: createcert vasa -csr, then importcert vasa …
setvasa -certmgmt client
Allows management of the VP’s certificate by vSphere, signed by the VMCA. In 3.2.1, only vCenter (VMCA) could manage the VP’s certificate, and thus only one vCenter could be registered to the array through VASA.
Migration: earlier versions implied “client” certificate management; the default will change to “server”. If the VASA Provider was ever registered, the default for that array will remain “client”.
A certificate-management sketch follows.
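The sequence below sketches the server-managed certificate flow using only the commands named above; it is illustrative rather than a verified procedure, and any additional options (for example certificate subject fields) are omitted.

cli% setvasa -certmgmt server
cli% createcert vasa -selfsigned
cli% showcert

For a PKI-signed certificate, createcert vasa -csr generates a signing request and importcert vasa brings the signed certificate back, as listed above.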
121
VASA 2.0 support
Similar to VASA 1.0 (see the previous Recovery Manager for VASA presentation).
Exposes the array’s hardware, storage LUNs and capabilities to the vSphere environment.
Monitors the above for changes (events) or issues (alarms), displayed in the vCenter web client user interface.
Applies VASA 2.0 capabilities to traditional storage LUNs, similar to how VVol storage profiles are used.
122
Flash Cache – exposed as SPBM capability in vCenter
123
CLI enhancements for VVol in 3.2.2 (summary)
Updated showvvolvm CLI
Support for additional command-line options, including:
Display of vSphere storage profile constraints
Display of bound, running (new) and unbound state
Support for VM display through pattern matching
Requires the storage container name
VMware Volumes management
Allows operations on VMware virtual volumes (a.k.a. VVols; generically, bulk VVs)
Some CLIs are modified to view VVols and their statistics
Some CLIs are modified to view statistics of the vLUNs of VVols (a.k.a. subluns)
124
Updated showvvolvm CLI
Support for additional command-line options, including:
-sc: displays VMs for a particular VVol storage container (REQUIRED option); showvvolvm -sc sys:all can be used to show VMs in all accessible storage containers
-sp: displays vSphere storage profile constraints
-binding: displays bound, running (new) and unbound state
-p -vmid, -vmhost, -vmstate: VM display filtering through pattern matching
-summary: displays summary VM and VVol counts and space usage information
-showcols/-listcols: display specific fields for VVol-based VMs: Container, ContainerID, CreationTime, DomainID, GuestOS, Last_Host, Last_Pwr_Time, Last_State_Time, Logical_MB, Num_snap, Num_vv, Physical_MB, SnpCPG, SnpCPG_ID, SP_Constraint_List, SP_ID, SP_Name, UsrCPG, UsrCPG_ID, UUID, VM_Name, VM_State
-d: shows additional details: UUID, Num_snap, UsrCPG, SnpCPG, Container, CreationTime
-domain: lists VMs in a particular domain
125
How does it work? showvvolvm examples
cli% showvvolvm -d -sc sys:all
(columns: VM_Name, UUID, Num_vv, Num_snap, Physical (MB), Logical (MB), GuestOS, VM_State, UsrCPG, SnpCPG, Container, CreationTime; the example showed two debian7_64Guest VMs in the _Root container, one Unbound and one PoweredOn)
cli% showvvolvm -sc sys:all -d -p -vmstate Bound
cli% showvvolvm -sc sys:all -binding
(columns: VM_Name, VM_State, Last_Host, Last_State_Time, Last_Pwr_Time; the example showed vm2 Unbound and vm1 PoweredOn)
cli% showvvolvm -sc sys:all -sp
(columns: VM_Name, SP_Name, SP_Constraint_List; e.g. the “VVol No Requirements Policy” profile with the SpaceEfficiency=Thin constraint)
126
Updated showvv, histvv, statvv, histvlun, statvlun CLI
Support for additional command-line options, including:
-p -vmname, -vmid, -vmhost, -vvolstate, -vvolsc: support for VV/VLUN display of particular VMs through pattern matching
Advantages of the additional options:
The storage administrator will be able to manage bulk VVs (VVols).
The pattern-matching options also help the end user (storage administrator) handle VVols more effectively.
127
How does it work? showvv examples
cli% showvv -p -vmname scalability
(columns: Id, Name, Prov, Type, CopyOf, BsId, Rd, Detailed_State, Rsvd Adm/Snp/Usr (MB), VSize (MB); the example returned two tpvv base volumes, cfg-scalabil-dc72d4dd and dat-scalabil-2d1ad662, both RW and normal)
cli% showvv -p -vmid 503b57c bb1-92ca-542dd0e604c5
cli% showvv -p -vmhost dl360g7-133
cli% showvv -p -vvolstate bound