1
PowerHA for IBM i and more
Steven Finnes 170579 37AC
2
Session Objectives PowerHA high level architectural concepts and objectives PowerHA hardware construct PowerHA logical construct Configurations in production today around the world HyperSwap Live Demo ! Additional resources
3
PowerHA – architectural construct
PowerHA key design points:
Multi-system shared storage clustering: uniform multi-system resource management for consistent HA/DR outcomes
Integration: an integrated extension of the OS and storage
Isolation of application data: enables shared storage and storage-based replication clustering
Simplicity & automation: requires fractional staffing and consumes virtually no production compute capacity
Multi-site: one 'pane of glass' addressing all outage types, planned and unplanned, data center and disaster
Storage management: an implicit extension of the PowerHA cluster management (integrated with BRMS for automatic offline backups)
Jay Kruemcke IBM 2003
4
PowerHA SystemMirror shared storage cluster
Simple two-node cluster configuration (Server A, Server B, redundant LAN, redundant SAN, shared storage enclosure)
Benefits: automated failover, non-disruptive upgrades and PTFs, application monitoring, implemented in SLIC, event/error notification, GUI and CL management
Failure types covered: IP loss detection, loss of storage, application interruption, server crash, loss of power
Configuration options: active/passive, RPO of 0 (use local journaling)
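A minimal CL sketch of standing up a two-node shared storage cluster like this one; the cluster, node, device domain, CRG, and IASP names are hypothetical and exact parameters vary by release:

  CRTCLU     CLUSTER(PWRCLU) NODE((NODEA ('10.1.1.11')) (NODEB ('10.1.1.12')))  /* create the cluster with both nodes */
  ADDDEVDMNE CLUSTER(PWRCLU) DEVDMN(PWRDMN) NODE(NODEA)   /* both nodes join one device domain */
  ADDDEVDMNE CLUSTER(PWRCLU) DEVDMN(PWRDMN) NODE(NODEB)
  CRTCRG     CLUSTER(PWRCLU) CRG(PWRCRG) CRGTYPE(*DEV) EXITPGM(*NONE) USRPRF(*NONE) +
               RCYDMN((NODEA *PRIMARY) (NODEB *BACKUP)) CFGOBJ((IASP1 *DEVD *ONLINE))  /* device CRG owning the switchable IASP */
  STRCRG     CLUSTER(PWRCLU) CRG(PWRCRG)   /* activate the CRG */

The device CRG's recovery domain defines the primary/backup roles between Server A and Server B; PowerHA moves ownership of the IASP between them on a switchover or failover.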
5
PowerHA on IBM i basic concepts
(Diagram: production and target nodes, each with SYSBAS and monitored objects; a shared IASP, aka volume group, holds application data and local journals; the admin domain synchronizes monitored resources)
Think of the administrative domain as a type of registry: it is not a mechanism for physically replicating objects, but rather a way to keep the monitored objects required by the applications within the cluster functioning on each node in the cluster.
SYSBAS contains system-related objects: Load Source Unit (LSU), System Licensed Internal Code (SLIC), IBM i operating system, licensed program products, system configuration objects, user profiles, system values, IBM "Q" libraries (QSYS, QGPL, etc.), QTEMP library
IASP contains data and application-related objects: critical business data, application run-time objects (supported in an IASP); the IASP is a separate database from the system database
PowerHA SystemMirror creates and manages a shared storage cluster topology
The IASP volume group hosts the DB, IFS data, and local journals
The admin domain monitors and maintains CRG consistency
Note that the foundational topology does not involve replication; logical replication is not required to keep the monitored objects in sync
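A hedged CL sketch of the administrative domain piece described above; the domain and resource names are hypothetical and parameter details vary by release:

  CRTCAD    CLUSTER(PWRCLU) ADMDMN(PWRADM) NODE(NODEA NODEB)   /* create the cluster administrative domain across both nodes */
  ADDCADMRE ADMDMN(PWRADM) RESOURCE(APPUSER) RSCTYPE(*USRPRF)  /* keep a hypothetical application user profile consistent everywhere */
  ADDCADMRE ADMDMN(PWRADM) RESOURCE(QPWDLVL) RSCTYPE(*SYSVAL)  /* keep a system value consistent everywhere */

Monitored resource entries like these are how SYSBAS objects stay consistent across nodes without any logical replication of the IASP data itself.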
6
LUN Level Switching – data is switched between servers
Provides protection against software and server hardware outages
A switch requires a vary off and a vary on of the independent ASP
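To illustrate the vary off/vary on sequence, a planned switch is normally driven through the device CRG rather than by hand; a minimal sketch using the hypothetical names from the earlier cluster sketch:

  CHGCRGPRI CLUSTER(PWRCLU) CRG(PWRCRG)   /* planned switchover: vary the IASP off on the old primary, switch the LUNs, vary it on on the new primary */

The underlying vary operations that PowerHA performs can also be issued manually on a single node:

  VRYCFG CFGOBJ(IASP1) CFGTYPE(*DEV) STATUS(*OFF)
  VRYCFG CFGOBJ(IASP1) CFGTYPE(*DEV) STATUS(*ON)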
13
PowerHA for i storage configurations
Example: high availability configuration, PowerHA Standard Edition cluster
Use PowerHA Enterprise Edition when you're using Metro Mirror, Global Mirror, or geomirroring async mode
(Diagram: application on a PowerHA Enterprise Edition cluster with Metro Mirror and Global Mirror links)
14
unified two-site HA/DR solution
Requirements:
Multi-site configuration
Data center/campus component for HA
Second site for DR
Solution strategy: unified clustering solution for data center & multi-site resiliency, PowerHA SystemMirror Enterprise Edition

Question #1
Regarding your first question, as you had mentioned, I urge all customers to use journaling for their system, even if the customer has no plans to implement an HA or DR solution. Without journaling, if you have a crash with main store loss, you could be exposed to data integrity issues. This is why journaling was created in the first place, and it is a separate discussion from an HA solution. Once we have that out of the way, let's take a look at how your recovery point works with PowerHA.

Planned outages
In all cases with PowerHA, if there is a planned outage we ensure that you lose no data (an RPO of 0). The PowerHA product ensures that memory is flushed and all data is transferred if it is replicated, so coming up on the other side should be no different than a planned IPL (only much shorter, since it is just the IASP instead of the entire system).

Unplanned outages - with an HA solution
With unplanned outages, we can split everything up into two instances: HA solutions and DR solutions. Our HA solutions are LUN Level Switching, Metro Mirror, and synchronous geographic mirroring. In any of these instances, when there is an unplanned outage of the IBM i the data is the same regardless of which system it is on, and the IASP comes online on the secondary system in the same way it would if we had a single system and tripped over the power cord and had to plug it back in. (This is where the journaling and commitment control discussions are important, but they are important for a single system and just as necessary/required for logical replication as well.) The point is, you lose no more data than if you just accidentally flipped the circuit breaker.

Unplanned outages - with a DR solution
Our DR solutions are Global Mirror and asynchronous geographic mirroring. When there is an unplanned outage, say the entire data center, you will lose whatever hasn't made it across to the other system. The amount of exposure is relative to the bandwidth, distance, and workload on the systems. I have seen customers with seconds, and others with minutes. We provide the ability to see how much data you are behind by on our display screens. If someone is using asynchronous remote journaling (a logical replication solution), you also lose whatever hasn't made it across to the other system. Where PowerHA can outshine logical replication in this regard is with external storage. If it is an unplanned outage of the production IBM i (hardware or software) but the storage is still up and working, the storage replication continues. PowerHA will wait for this replication to catch up before failing over to the other side, making this the same as the synchronous solutions.

Combining HA and DR
There are many customers who wish to combine the advantages of an HA solution with a DR solution for further distances. A very common solution for doing this is LUN Level Switching combined with Global Mirror, which is pictured below (I included FlashCopy in the picture as well, which is typical in this environment). This gives you HA for many types of outages, while also providing you with DR for a more severe data center outage. One step further beyond this is using Lab Services with Metro Global Mirror (not pictured).
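To ground the journaling recommendation above, a minimal CL sketch of journaling one physical file; library, file, and receiver names are hypothetical:

  CRTJRNRCV JRNRCV(APPLIB/APPRCV0001) THRESHOLD(1500000)          /* journal receiver; threshold in KB */
  CRTJRN    JRN(APPLIB/APPJRN) JRNRCV(APPLIB/APPRCV0001) MNGRCV(*SYSTEM) DLTRCV(*YES)
  STRJRNPF  FILE(APPLIB/ORDERS) JRN(APPLIB/APPJRN) IMAGES(*BOTH)  /* journal before and after images */

Adding commitment control (STRCMTCTL) on top of journaling is what turns those journal entries into transaction-consistent recovery points after an unplanned outage.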
Question #2
With regards to your second question, please refer to my statements above. If there are customers that are still concerned, they can always reach out to me, and we can work through their specific concerns, including the necessary IBM i PowerHA developers, journal developers, and storage management developers, to ensure their peace of mind. The unique quality of PowerHA is that it is developed and supported by IBM, not only by the PowerHA team, but also indirectly by other IBM i development teams. This close collaboration ensures that any solutions we come out with are always in the best interest of the IBM i platform, customers, and their data.

Summary
Any time a customer is looking at an HA and DR solution, the most important thing to consider is their business goals around data availability, SLAs, and disaster expectations. Only once you have the goals can you look at which solution will be right for them. A session I had at COMMON, titled Minimizing Downtime on IBM i with External Storage, covered the different hardware replication and switching solutions and how they fit these goals. As for whether a hardware or logical replication solution is right for a customer, they need to take the same look at their goals and weigh the benefits and drawbacks of each type of solution in order to find the one that is right for them. Matt Staddler has done a few great presentations at COMMON in an unbiased way, weighing the benefits and drawbacks of each type of solution. If you would like more information on some of the benefits and drawbacks of each type of solution, I can elaborate on some of them. I can assure you that we have many PowerHA customers that have had unplanned outages and are very happy and confident in their HA and DR solution. A PowerHA solution is also trusted and used by the IBM i development lab for some IBM i systems supporting development processes used by the entire IBM i development organization; this is something we only do because we are confident in PowerHA's ability to protect our data.
(Diagram: Metro Mirror or Global Mirror)
15
PowerHA cluster: LUN Level Switching plus Global Mirror for DR
16
PowerHA + FlashCopy to minimize the backup window
(Diagram: Prod LPAR with IASP, FlashCopy to a FlashCopy LPAR with a FlashCopy IASP)
FlashCopy is an integral part of a PowerHA on IBM i cluster
It effectively eliminates your backup window
Full-system FlashCopy is also an option, though not as seamless as with IASPs
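A hedged sketch of the quiesce-flash-resume flow on the production node; session, copy description, and device names are hypothetical, and with BRMS integration PowerHA automates these steps:

  CHGASPACT ASPDEV(IASP1) OPTION(*SUSPEND) SSPTIMO(300)               /* quiesce: flush changes and suspend DB activity for the IASP */
  STRASPSSN SSN(FLCSSN) TYPE(*FLASHCOPY) ASPCPY((IASP1CPY IASP1FLC))  /* take the point-in-time copy */
  CHGASPACT ASPDEV(IASP1) OPTION(*RESUME)                             /* resume production work */

The FlashCopy partition then varies on its copy of the IASP and runs the save (for example with BRMS), while production keeps running.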
17
Automated PowerHA/FlashCopy BRMS backups
(Diagram labels: Application, Global Mirror, Metro Mirror)
The FlashCopy function is an integrated extension of the PowerHA cluster
This is one of several PowerHA/FlashCopy/BRMS configuration options
Suspend replication, flash, and resume
Capable of being fully automated
18
PowerHA V7.2 universal FlashCopy target partition
(Diagram: FlashCopy targets from production clusters 1, 2, and 3 attach one at a time to a single target partition with tape backup)
Enables use of one partition to save multiple production environments
Allows attachment of an IASP to a partition not in the cluster device domain
Only one IASP can be attached to the partition at a time
Eliminates dedicated FlashCopy partitions per cluster
19
DS8000 three site PowerHA on i multi-target cluster
(Diagram: application at site one, Metro Mirror to site two, Global Mirror to site three)
PowerHA Enterprise Edition multi-target cluster
Three systems, three sites, three copies of data in a single cluster
The Metro Mirror portion of the cluster provides two synchronous copies
The Global Mirror link provides the disaster recovery system
20
PowerHA DS8K HyperSwap cluster "under the hood"
(Diagram: *SYSBAS and IASP on the source DS8K, Metro Mirror to *SYSBAS and IASP on the target DS8K; the replicated pair appears as a single virtual IASP)
HyperSwap has the effect of making the replicated pair of IASPs appear as a single virtual IASP
HyperSwap switches the source and target IASP and SYSBAS in the event of a storage outage
The source IASP is mirrored to the target IASP via Metro Mirror
The source and target SYSBAS data are mirrored via Metro Mirror
In the event of a storage outage, the source system switches to the mirrored IASP and the mirrored SYSBAS
In the event of a production server outage, PowerHA conducts a failover to the target production server (the virtual IASP is switched to the target)
Metro Mirror then reverses the direction of replication and production resumes on the secondary Power server
If VIOS is deployed, LPM can be used for firmware updates, load balancing, etc.
DS8800 and above (TPC-R not utilized)
21
Three site PowerHA multi-target cluster with HyperSwap
(Diagram: HyperSwap pair across sites one and two via Metro Mirror, Global Mirror to site three)
Three-site multi-target PowerHA Enterprise Edition cluster with HyperSwap
No disruption due to a DS8K outage in the HyperSwap pair
IBM PowerHA SystemMirror for i planning insights: IBM plans to introduce the capability to add a third system connected to the PowerHA for i HyperSwap pair via either a Metro Mirror or Global Mirror link. IBM's statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM's sole discretion. Information regarding potential future products is intended to outline our general product direction and it should not be relied on in making a purchasing decision. The information mentioned regarding potential future products is not a commitment, promise, or legal obligation to deliver any material, code or functionality. Information about potential future products may not be incorporated into any contract. The development, release, and timing of any future features or functionality described for our products remains at our sole discretion.
22
PowerHA for i SVC HyperSwap clustering
Announce Oct 11, GA Nov 11: PowerHA SystemMirror for i V7.2 TR 5 & V7.3 TR 1
Continuous two-site SVC storage availability within a PowerHA cluster
PowerHA switchable LUN HA cluster configuration for production outages
Note: no Global Mirror link support!
(Diagram: virtual IASP spanning the HyperSwap pair)

Virtual IASP HyperSwap: support for IBM SVC and IBM Storwize HyperSwap. HyperSwap provides near-zero downtime for storage system outages. Previously, HyperSwap was supported with the IBM SAN Volume Controller and IBM Storwize family of products on IBM i at a full-system level. With IBM i 7.3 TR1, support has been extended to include the ability to use SVC HyperSwap with Independent ASP (IASP) technologies, including FlashCopy and LUN Level Switching. For more details on usage and restrictions, go to the PowerHA SystemMirror for i Technology Updates website.

New HMC interfaces for Advanced Cluster Node Failure Detection: the IBM i cluster technology has the ability to make use of user-registered cluster node monitors to determine whether a system has truly failed when communication to the system is lost. A user registers a cluster node monitor on a node, and the node registers a handler with the HMC. If the node that is monitored by the HMC suffers a failure or outage, the HMC notifies the monitoring nodes of the nature of the failure so that cluster resource services can take the appropriate action based on the type of failure, rather than resulting in a cluster partition failure condition that needs manual intervention. The HMC is being updated to replace the existing interface with a new representational state transfer (REST)-based interface. HMC version 850 (V8R8.5.0) is the last version of HMC to support the older interface and the first version to support the REST interface. When upgrading to HMC version 860 (V8R8.6.0) or later, all IBM i partitions that were using the Advanced Cluster Node Failure Detection function must be updated to use the new REST-based interface. The new REST-based interface is provided for IBM i 7.3 TR 1, and IBM i 7.1 with the latest PowerHA PTF Groups. For more information on the PTFs required, as well as usage and steps for updating to use the new REST interfaces, go to the PowerHA SystemMirror for i Technology Updates website.
23
PowerHA – geomirroring – HA/DR clustering
(Diagram: production partition and target partition, each with monitored SYSBAS objects kept in sync by the admin domain; the production IASP (DB2, IFS, journals) is geomirrored to the target IASP)
PowerHA geomirror cluster (typically with internal disk and < 2 TB, although there is no hard limitation)
Memory pages are replicated via IBM i mirroring to the local and remote IASPs in real time
Note that the local journals are among the types of data being paged to the IASP and replicated
Complete HA/DR coverage for all outage types (hardware, middleware, operator error)
Offline backup followed by source-side/target-side tracked-change resynchronization (think about a V5000 with FlashCopy at the target site: zero resync time after a save operation)
Both bandwidth and network quality are important
Synchronous mode up to 40 km: production and target always identical
Asynchronous mode unlimited distance: production and target ordered and consistent
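A minimal sketch of the geomirroring session commands, assuming ASP copy descriptions (created with ADDASPCPYD) named PRODCPY and MIRCPY already exist; names are hypothetical and parameters vary by release:

  STRASPSSN SSN(GEOSSN) TYPE(*GEOMIR) ASPCPY((PRODCPY MIRCPY))  /* start geographic mirroring between the two IASP copies */
  DSPASPSSN SSN(GEOSSN)                                         /* check the session and synchronization state */
  CHGASPSSN SSN(GEOSSN) OPTION(*SUSPEND)                        /* pause replication, e.g. for network maintenance */
  CHGASPSSN SSN(GEOSSN) OPTION(*RESUME)                         /* resume; tracked changes are resynchronized */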
24
IBM i Geographic Mirroring
(Diagram: PROD (source) and HA (target) LPARs, each with SYSBAS and IASP, connected over your network; during a detach there is no data replication, a partial resync follows, and there is no HA or DR until the resync completes)
Limited use for online backups: detach with tracking
Replication from the source is suspended; changes are tracked
Requires partial resynchronization once backups are completed
No HA or DR failovers are possible until that resync has completed
Will this meet your business requirements? By itself, it can be a viable online backup solution if full-time HA/DR is not required. Otherwise, consider the latest version of Save-While-Active, or better yet implement SAN storage and use FlashCopy
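A sketch of the detach-with-tracking flow described above, using the hypothetical session name from the geomirroring sketch:

  CHGASPSSN SSN(GEOSSN) OPTION(*DETACH)    /* detach the mirror copy; changes on the source are tracked */
  /* on the target node: vary on the detached IASP and run the save, e.g. with BRMS */
  CHGASPSSN SSN(GEOSSN) OPTION(*REATTACH)  /* reattach; only the tracked changes are resynchronized */

Until the reattach and partial resynchronization complete there is no failover target, which is exactly the HA/DR exposure the slide warns about.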
25
Geomirroring – i hosting i remote VM restart for DR
(Diagram: IBM i host partitions at each site, each hosting an IBM i client partition; the client's NWSSTG resides in an IASP that is geomirrored between hosts; the admin domain on the host is optional)
Not a PowerHA clustering configuration; rather, "full system replication" of the VM-hosted client
This is a disaster recovery setup, not a high availability solution
Benefit: easy setup; migration of the DB to an IASP is not required
The IBM i client is placed into a network storage space, which is placed into an IASP
Guest and host partitions must be shut down before the remote host and client can be restarted
Limitation: no heartbeating, can't do concurrent OS upgrades, and it is more resource intensive than a PowerHA cluster
Note that everything is being replicated, so network bandwidth and quality are critical
Any role swap or failover requires an abnormal IPL of the target
26
How about acquisition price and then SWMA renewal price ?
$10,200 for a complete HA/DR solution?
Production server: S814 with two cores; CBU server: S814 with one core (geomirroring uses less than 10% of CPW)
Note: logical replication requires substantial CPW overhead, requiring permanently licensed IBM i cores on the CBU
Production site and DR site are 400 miles apart, therefore we need geomirror async mode, which requires the Enterprise Edition at $3,400/core
In this case the customer needed 100 Mbit of bandwidth (Lab Services will do the sizing for bandwidth)
So let's do the math to see what this HA solution costs us: $3,400 x 3 = $10,200
$10,200 for the complete PowerHA solution; geomirroring is part of the OS, therefore no extra licensing cost
SWMA renewal cost = 20% of list, or $2,040 (first year included in initial price)
Comparison shop for price, SWMA renewals, support, and your ongoing dedicated staffing to manage
Check into the staffing overhead required to manage a logical replication solution compared to a PowerHA cluster
27
PowerHA Price Example…Economic Value - TCA
PowerHA is priced per processor core used in the HA/DR cluster
Taking advantage of the CBU topology in the example: assume an 880 cluster with 7 IBM i and 7 PowerHA core licenses
PowerHA price: $5,250/core x 7 = $36,750; savings: $42,000
IBM i price: $59,000/core x 7 = $413,000; savings: $472,000
Total savings: $514,000
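Working the slide's figures backwards (an inference from the numbers shown, not an IBM statement), the savings correspond to eight cores that do not need separate licenses on the CBU side:

  PowerHA: 7 cores x $5,250  = $36,750 paid;  8 cores x $5,250  = $42,000 avoided
  IBM i:   7 cores x $59,000 = $413,000 paid; 8 cores x $59,000 = $472,000 avoided
  Total avoided: $42,000 + $472,000 = $514,000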
28
Two customers fresh from logical replication to PowerHA
PowerHA with geomirroring: "Thus far the amount of maintenance time we spend on this is so small compared to ABCDFG; we check this once a week and it takes perhaps 5 minutes. With ABCDFG it would typically take one hour every day. What a difference."
PowerHA with V7000: "In the past, our operators disliked performing switch tests during DR testing: it was a lot of difficult work and it was unreliable," recalls Alec Marsh. "By contrast, PowerHA is easy to manage and enables us to role-swap between our servers quickly and efficiently, so we're more confident and can run a true 'stretched' data centre rather than active/passive sites."
29
Demo: PowerHA failover
30
GDR overview A simplified way to manage DR Automated DR management
Improved economics by eliminating the need for hardware and software resources at the backup site
Easier deployment for disaster recovery operations; unlike clustering or middleware replication technologies, VM restart technology has no operating system or middleware dependencies
(Diagram: the VM restart control system (KSYS) oversees restart and replication of virtual machines from Site 1/System 1 to Site 2/System 2)
Support for IBM POWER7® and POWER8® systems
Support for heterogeneous guest OSs: AIX, IBM i (GA July), Red Hat, SUSE, Ubuntu
31
Introducing: GDR for Power Systems
Original announce: Oct 11, 2016; generally available: Nov 18, 2016; enhancements planned for 1H 2017 & 4Q 2017
Delivered as part of the GTS Resiliency Service offering
New automation software: one-time charge, priced per hardware core (only those in VM restart partitions)
Installation services and software maintenance from both Power & GTS
BPs & distributors enabled to sell: April 18, 2017
Three deployment models:
On customer premises (initial release)
DR as a Service: IBM Resiliency Services provides DR infrastructure (~2017)
MSP DRaaS providers
Contacts: Dave Clitherow /UK/IBM - GTS Global Offering Mgr.; Vinay Kumar VS - GTS Global Offering Mgr., Bangalore, India
Functional ID for additional assistance -
33
K-Sys: C(K)ontrol System LPAR
K-sys (Controller System): AIX LPAR that orchestrates the DR operations
Alerts the administrator about key events
Administrator-initiated DR automation
Scripting support: daily validations & event notifications
(Diagram: sites 1 and 2 with the K-sys controller system LPAR, networks, storage mirroring, HMCs, VIOSs, LPARs (VMs), and storage)
34
GDR for Power Systems – how it works
1. The storage subsystem at the backup host is prepared and mapped to the VIOS
2. VM1 and VM2 are booted up
3. VMs from site 1 are now restarted on the backup host in site 2
The underlying mechanism that enables this to happen is the KSYS orchestrator at site 2
From a customer perspective, this operation is accomplished with a single command
35
GDR product licensing example
GDR for Power Systems software tiers; list price per managed core: small processor group $1,020, medium processor group $1,575
Licensing structure: no-charge base PID registered to the KSYS server
Two tier features: small or medium
One quantity feature = number of processor cores to be restarted (# of restart features = number of cores to be restarted)
The DR system must be equal to or less than the production-site server processor group (software tier)
Example: 4 systems; 2 production, 2 DR
Production site: 2 systems, 5 VM partitions, 14 production cores, 14 AIX LPPs, 14 PowerVM LPPs
DR site: 2 recovery systems, 5 VM partitions, 14 DR cores, one AIX LPAR for KSYS
DR site licensing: 1 KSYS system, 1 AIX, 1 base PID, tier feature = small, quantity feature = 14 restart features => 14 x $1,020 = $14,280; plus an implementation services package ($23,886, the 80-hour option) = $38,166 (US prices, subject to change at the discretion of IBM)
Implementation service package options: 80 hours, 120 hours, 240 hours, or >240 hours; the 80-, 120- & 240-hour packages are $23,886, $35,828 and $71,657 (U.S. prices, which can vary by geo and are subject to change at any time)
36
GDR prerequisites
Guest OS in VMs: AIX V6 or later; IBM i (June 2017); Linux: RedHat (LE/BE) 7.2 or later, SUSE (LE/BE) 12.1 or later, Ubuntu 16.04
VIOS (2016)
HMC V8 R8.6.0 (2016)
EMC storage: VMAX family, Solutions Enabler SYMAPI V , PowerPath
IBM DS8K, SVC, Storwize: June 2017
KSYS LPAR: AIX 7.2 TL1
37
GDR for Power roadmap (beta release/early prototype Aug 2016; GA release 1.1 Nov 2016; followed by a GA service pack and further GA releases)
Capabilities delivered and planned across the releases: support for P7 and P8 systems; support for vSCSI, NPIV; EMC SRDF Async; capacity management; admin-controlled recovery; IBM i; support for other storage replication: SVC/Storwize, DS8K, EMC Sync; advanced DR policies (host groups, etc.); Failover Rehearsal (DR test); Hitachi mirror support; VLAN-per-site support; support for sub-capacity LPAR DR start
Statement of direction: As part of the Resiliency portfolio IBM will look to continue the integration of GDR into our Disaster Recovery as a Service offering providing increased value to our client base. IBM intends to add support for additional IBM and OEM storage platforms. IBM's statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM's sole discretion. Information regarding potential future products or services is intended to outline our general direction and it should not be relied on in making a purchasing decision. The information mentioned regarding potential future products or services is not a commitment, promise, or legal obligation to deliver any material, code, functionality, or service. Information about potential future products or services may not be incorporated into any contract. The development, release, and timing of any future services or features or functionality described for our products remain at our sole discretion.
38
IBM i Cloud Storage Solutions for i
(Diagram: IBM i connecting via TCP/IP to SoftLayer, AWS, or a private cloud)
Strategic features (phased in over three GA cycles):
Utilize standard cloud object storage
Support for IFS file mirror/sharing
Virtual tape with BRMS for backup recovery
GUI management
Secure connection with compression
API for hybrid cloud capability
Enablement for dedicated cloud MSPs
Exploitation of IBM storage and solutions such as Spectrum, expanding the market to medium and large enterprises
Value proposition:
Do-it-yourself backup/archive operations to a public cloud
Gets data off site automatically
Eliminates the need for a full-service 3rd party
Enables easy-to-use recovery operations
Enables file sharing
Advanced backup recovery services to dedicated MSPs
Can eliminate the need for a local tape device
Initial target market: 1 & 2 core IBM i systems with 1 TB or less of storage to back up
Product is GA Oct 28, supports V7.1 and later
39
Cloud Storage Solutions for i
(Diagram: virtual tape to SoftLayer & Amazon over TCP/IP)
Cloud Storage Solutions for i is an API that enables deployment of IBM i data to a public cloud
Initially targeted at customers with under 1 TB of data; with the support of the S3 API, Cleversafe is supported, however at this time BRMS doesn't support cloud object archiving
Currently supported interfaces: Swift and S3
The initial product offering will feature:
Turn-key BRMS setup and run with virtual tape management
Security initially via VPN
Auto save and synchronize files in the IBM i IFS directory (future plan)
Roll-your-own backup/recovery (bandwidth considerations)
40
Cloud storage – cached backup IBM i environment
(Diagram: virtual tape on IBM i, TCP/IP to a public or private cloud supporting either the S3 or SWIFT APIs)
The foundational topology is enabled via virtual tape, with a physical storage cache via a disk pool
Data is saved from IBM i as tape objects into the storage cache
Tape objects in the storage cache are converted to cloud objects (objects are containers recognized by the cloud provider)
The cloud provider has an object format (SWIFT or S3) enabling saves to generic disk of any kind
To deploy to the cloud, Cloud Storage Solutions groups the tape objects into cloud objects
Cloud objects are transmitted asynchronously to the cloud provider
IBM i leverages BRMS to manage the save process from virtual tape to the public cloud
41
Cloud Storage Solutions offering price/licensing
PID: 5733-ICC; priced per VM (partition); one-time charge
Feature 1: data transfer, $2,400/single VM
Feature 2: unlimited transfers, $5,000/unlimited VMs
Feature 3: advanced function, $TBD
U.S. prices, subject to change at any time
Cloud storage: Amazon, SoftLayer, or BP cloud providers supporting SWIFT or S3
SoftLayer storage: $40/month/TB for storage and $90/month/TB for downloading
BRMS turn-key for backup/recovery automation: BRMS will auto-configure and set up the storage backup profile
BRMS has made this as simple as possible to use ("turn-key"); BRMS will automatically create the media class, storage location, move policy, control groups, etc.
Initial announce 10/11/16, GA 11/16/2016, V7.1 and above; see the developerWorks wiki
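Once BRMS has auto-created those objects, a cloud backup is kicked off with the normal BRMS command; a minimal sketch with a hypothetical control group name (the auto-created names differ):

  STRBKUBRM CTLGRP(CLDGRP01) SBMJOB(*NO)  /* run the BRMS control group that saves to virtual tape; Cloud Storage Solutions then moves the tape objects to the cloud asynchronously */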
42
Considerations for Cloud Storage Solutions for i
The initial Cloud Storage Solutions offering is English language only; consider this the beta version
A key consideration will be your data volume/transfer time requirements, which will dictate the bandwidth you will need
Consider creating an offline backup copy, for example a FlashCopy image, that can be uploaded to the cloud over a longer period of time and kept around as a local backup copy
Initial use cases will range from archiving to actual backup/recovery operations; future options to enable drag-and-drop file sharing are planned, as well as further cloud provider options
Individual customers or business partners looking to provide cloud storage solutions should be interested in this technology
Contact for the complete chart deck
44
Additional resources for PowerHA IBM i
PowerHA Wiki
Live Demo: PowerHA with geomirroring live demo
Pargon customer video
PowerHA demo
Lab Services PowerCare
Redbooks: NEW REDBOOKS - we're cleaning house and bringing out a new series
45
GDR reference material
GDR product page: ibm.biz/PowerGDR
GDR Intro charts:
Quick Intro to GDR:
GDR T3 material (charts, recordings):
GDR social forum:
5 Things to know about GDR:
GDR Redbook:
46
GDR additional information & support
GDR Sales Essentials - Sales Education
GDR Pricing & ordering
Contacts: Dave Clitherow /UK/IBM - GTS Global Offering Mgr.; Vinay Kumar - GTS Global Offering Mgr., Bangalore, India
Functional ID if you need additional assistance -
GDR licenses are fulfilled via PRPQ through the AAS ordering system. The PRPQ (Program Request for Price Quote) is in place to ensure implementation services are ordered with each customer's first license. Implementation services are ordered via the GTS standard BMS/CFTS contract management system. In order to help guide you through this process we have created a one-stop web site (IBM Connections Community wiki page below). It is important to note that, in order to streamline the order process, you should involve the GDR team presales for help with pricing the solution.
47
Session summary
PowerHA for IBM i: the IBM Cognitive Systems strategic solution for high availability and disaster recovery, based on shared storage clustering
GDR: the IBM Cognitive Systems solution for disaster recovery; an easy-to-use, low-cost solution based on VM restart
Cloud Storage Solutions for IBM i: the strategic solution for backup, recovery, and archiving to the SoftLayer or Amazon cloud