1 sBC09 - Business Resiliency Solutions with the DS8870
Charlie Burger – Certified I/T Storage Specialist, 19 May 2014

2 Technical Product Deep Dives - Accelerate Storage Webinars
The free IBM Storage Technical Webinar Series continues in 2014, with IBM technical experts covering a variety of storage topics. Audience: clients who either currently have IBM Storage products or are considering acquiring them; Business Partners and IBMers are also welcome. How to sign up: clients, Business Partners or IBMers can sign up by email to automatically receive announcements of the Accelerate with ATS Storage webinar series. Information, schedules and archives are located in the Accelerate Blog.
Upcoming webinars:
May 28th 12:00 ET/9:00 PT – Accelerate with ATS: OpenStack - Storage in the Open Cloud Ecosystem
June 26th 12:00 ET/9:00 PT – Accelerate with ATS: DS8000 Announcement Update
Aug 21st 12:00 ET/9:00 PT – Accelerate with ATS: XIV Announcement Update

3 ATS DS8870 Advanced Functions
Customer Workshop: In this customer seminar, we'll discuss real-world data replication and business continuity scenarios, including live demonstrations of market-leading IBM storage management tools. This 2-day class introduces IBM System Storage FlashCopy®, Metro Mirror, Global Copy, Global Mirror, Metro/Global Mirror and z/OS Global Mirror concepts, usage and benefits. Included is a discussion of the benefits and usage of IBM System Storage Easy Tier, an exciting solution for cost-effective implementation of Solid State Drives (SSDs). Advanced features of the DS8870 such as High Performance FICON, parallel access volumes and HyperSwap will also be discussed. The purpose of this client seminar is to assist IBM and Business Partner teams in progressing DS8870 opportunities closer to wins by having experts highlight the technical advantages of the business continuity and other advanced functions of the DS8870. If your client has older DS8870 equipment, competitively installed disk, or perhaps a footprint or two of DS8870s with the opportunity to add more DS8870s, advanced features or SSDs, consider nominating them for this seminar. Seating is limited to 37 attendees. Nominate your clients here: Workshop client nomination questions contact: Elizabeth Palmer - Workshop content questions contact: Craig Gordon

4 Agenda
Remote Mirroring
HyperSwap
FlashCopy
Copy Services Management
Additional reference material

5 IBM DS8870 copy services terminology
FlashCopy – point-in-time copy within the same storage system
Metro Mirror – synchronous mirroring, primary Site A to Site B at metro distance
Global Mirror – asynchronous mirroring, primary Site A to an out-of-region Site B
Metro/Global Mirror – three-site synchronous and asynchronous mirroring: primary Site A to Site B at metro distance, then on to an out-of-region Site C

6 DS8870 Remote Mirroring Options
Business Resiliency Solutions with the DS8870 – DS8870 Remote Mirroring Options

7 DS8870 Remote Mirroring options
Metro Mirror (MM) – synchronous mirroring
- Synchronous mirroring with consistency at the remote site
- RPO of 0
Global Copy (part of MM and GM) – asynchronous mirroring
- Asynchronous mirroring without consistency at the remote site
- Consistency manually created by the user
- RPO determined by how often the user is willing to create consistent data at the remote
Global Mirror (GM) – asynchronous mirroring
- Asynchronous mirroring with consistency at the remote site
- RPO between 3-5 seconds
Metro/Global Mirror – synchronous/asynchronous mirroring
- Three-site mirroring solution using Metro Mirror between site 1 and site 2 and Global Mirror between site 2 and site 3
- Consistency maintained at sites 2 and 3
- RPO at site 2 near 0
- RPO at site 3 near 0 if site 1 is lost
- RPO at site 3 between 3-5 seconds if site 2 is lost
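To make the trade-offs above concrete, here is a minimal Python sketch (illustrative only, not an IBM tool; the function name and thresholds are assumptions drawn from the figures on this slide) that picks a candidate replication option from an RPO target, distance and site count.

```python
def suggest_replication(rpo_seconds, distance_km, sites=2):
    """Very rough decision helper based on the figures above (illustrative only).

    rpo_seconds: required recovery point objective in seconds
    distance_km: distance between production and recovery site
    sites:       number of data centers available
    """
    if sites >= 3:
        # Metro Mirror (site 1 -> 2) plus Global Mirror (site 2 -> 3)
        return "Metro/Global Mirror"
    if rpo_seconds == 0:
        # Synchronous mirroring; 303 km is the standard (non-RPQ) limit
        return "Metro Mirror" if distance_km <= 303 else "Metro Mirror (requires RPQ)"
    if rpo_seconds <= 5:
        # Asynchronous with consistency groups, typical RPO of 3-5 seconds
        return "Global Mirror"
    # Asynchronous without built-in consistency; the user creates point-in-time consistency
    return "Global Copy"


if __name__ == "__main__":
    print(suggest_replication(rpo_seconds=0, distance_km=40))       # Metro Mirror
    print(suggest_replication(rpo_seconds=5, distance_km=2000))     # Global Mirror
    print(suggest_replication(rpo_seconds=3600, distance_km=2000))  # Global Copy
```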

8 DS8870 Synchronous Mirroring
Business Resiliency Solutions with the DS8870 – DS8870 Synchronous Mirroring

9 What is DS8870 Metro Mirror?
Storage hardware based synchronous mirroring solution (local site to remote site at metro distance)
- Application independent
- Some performance impact to write I/O (roughly 1 ms per 100 km)
- Utilizes the pre-deposit write capability to minimize protocol overhead
- Established at a volume level
A 2-site, 2-volume solution
- Metro distance: up to 303 km without an RPQ
Provides an RPO of 0
Requires automation such as TPC-R or GDPS to ensure consistency
Enables HyperSwap for z/OS and AIX
Disaster protection for all IBM supported platforms
Other potential uses:
- Data migration/movement between devices
- Data workload migration to an alternate site

10 DS8870 Metro Mirror normal operation
Synchronous mirroring with data consistency
- Can provide an RPO of 0
- Application response time affected by remote mirroring distance
- Leverages pre-deposit write to provide single round trip communication
- Metro distance (up to 303 km standard)
Write sequence:
1. Application server writes to the local (primary) DS8870
2. Primary sends the write I/O to the secondary (cache-to-cache transfer)
3. Secondary responds to the primary
4. Primary acknowledges write complete to the application
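A minimal Python sketch of the four-step write path above, assuming roughly 5 µs/km of one-way fibre latency and made-up cache service times; it models timing only, not DS8870 microcode.

```python
ONE_WAY_US_PER_KM = 5  # ~5 microseconds per km in optical fibre, one way (assumption)

def metro_mirror_write(distance_km, local_cache_us=100, remote_cache_us=100):
    """Return the approximate elapsed time (microseconds) for one mirrored write.

    The steps mirror the slide:
      1. write lands in local DS8870 cache
      2. local sends the write to the remote (cache-to-cache transfer)
      3. remote acknowledges the write back to the local
      4. local acknowledges write-complete to the application
    Pre-deposit write means steps 2-3 need only a single round trip.
    """
    step1 = local_cache_us                    # 1. write into local cache/NVS
    step2 = distance_km * ONE_WAY_US_PER_KM   # 2. data sent to the secondary
    step3 = distance_km * ONE_WAY_US_PER_KM   # 3. acknowledgement back
    step4 = remote_cache_us                   # 4. complete returned to the host
    return step1 + step2 + step3 + step4

if __name__ == "__main__":
    for km in (0, 50, 100, 303):
        print(f"{km:>4} km: ~{metro_mirror_write(km) / 1000:.2f} ms per write")
```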

11 Typical DS8870 Metro Mirror configurations
Metro Mirror within a single DS8870 (Fibre Channel 'loopback')
- Typically used only for testing
Metro Mirror between 2 DS8870s in the same physical location
- Provides high or continuous availability: clustering, protection from hardware failure, leverage HyperSwap
- May also be used for planned outages such as maintenance
Metro Mirror between 2 DS8870s in a metro region
- Protects against a local datacenter disaster
- 303 km standard support; additional distance via RPQ

12 DS8870 Metro Mirror supported configurations
One local system to multiple remote systems ('fan out')
- Usage examples: smaller systems at the remote site; multiple active sites with different amounts of capacity available
Multiple local systems to one remote system ('fan in' – performance analysis recommended)
- Usage example: a subset of capacity on multiple primary systems is replicated to a single remote system
Simultaneous bi-directional between systems (different volumes)
- Usage example: 2 active sites
A DS8870 may contain source and target volumes for multiple Metro Mirror consistency groups

13 What is Pre-deposit Write on the DS8870?
IBM solution – high performance and distance
- Very high Fibre Channel remote link throughput due to minimized protocol exchanges and large HBA buffers
- The remote accepts up to 64 KB of data without pre-authorization, so the DS8870 fills the transfer pipe with many concurrent transfers. Essentially, the primary DS8870 says "I trust you have enough buffer space for the data I am sending, so I won't wait for an acknowledgement before sending more."
- Other vendors have a much smaller limit on the number of concurrent I/Os allowed on the pipe and must wait for an acknowledgement before sending more ("May I send 64KB?" ... "Yes, send" ... "Send: 64KB" ... "OK", instead of simply "Send: 64KB" ... "OK").
IBM solution – results in superior performance and distance
- Supported standard distance is a very long 303 km
- DS8870 Metro Mirror, Global Mirror and Global Copy all use the same optimized protocol
- DS8870 value: better performance; lower Metro Mirror overhead means reduced batch processing times
Latency calculation example
- Sites A and B are 100 km apart. An IBM Fibre Channel remote mirror write takes 1 round-trip protocol exchange per data write (IBM pre-deposit write).
- Hence: 1 round trip × 2 one-way exchanges per round trip × 100 km per exchange × 5 µs/km = 1000 µs = 1 ms. Add about 0.5 ms for other network delays.
- Rule of thumb for IBM links: 1.0 – 1.5 ms latency per 100 km round trip.
Bandwidth and latency definitions
- Bandwidth: how much data can be transmitted per unit of time, measured in MB/s, GB/s or Gb/s.
- Latency: the additional time required to transmit a signal from the primary to the secondary due, in part, to the finite speed of light. The speed of light in optical fibre is approximately 2/3 the speed of light in a vacuum, i.e. 2/3 × 3 × 10^8 m/s = 2 × 10^8 m/s, or 5 µs/km. Network routers and other network components can add additional latency and limit bandwidth.
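The latency arithmetic above can be reproduced directly. The sketch below is illustrative only (the function and constants are assumptions, not vendor code); it compares a single-round-trip pre-deposit write against a protocol that first asks permission to send.

```python
US_PER_KM_ONE_WAY = 5                      # speed of light in fibre ~2e8 m/s -> ~5 us per km
ROUND_TRIP_US_PER_KM = 2 * US_PER_KM_ONE_WAY

def replication_latency_ms(distance_km, round_trips, network_overhead_ms=0.5):
    """Propagation delay for one mirrored write, in milliseconds."""
    propagation_us = round_trips * ROUND_TRIP_US_PER_KM * distance_km
    return propagation_us / 1000 + network_overhead_ms

if __name__ == "__main__":
    km = 100
    # Pre-deposit write: data is sent without waiting for a "may I send?" grant
    print(f"pre-deposit (1 round trip):   ~{replication_latency_ms(km, 1):.1f} ms")
    # Request/grant style protocol: an extra exchange before the data moves
    print(f"request+send (2 round trips): ~{replication_latency_ms(km, 2):.1f} ms")
```

At 100 km this reproduces the rule of thumb on the slide: roughly 1.5 ms per write with a single round trip, and noticeably more for protocols needing extra exchanges.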

14 Business Resiliency Solutions
with the DS8870 – DS8870 Global Copy

15 What is DS8870 Global Copy?
Storage hardware based asynchronous mirroring solution (local site to remote site at global distances)
- Part of the Metro Mirror and Global Mirror licenses
Does not inherently provide data consistency
- Writes can be out of sequence on the secondary disk
- Procedures can be developed to create a point-in-time consistency
Application independent
Minimal performance impact to write I/O
Established at a volume level
A 2-site, 2-volume solution
Global (unlimited) distance
Potential uses:
- Data migration/movement between devices
- Data workload migration to an alternate site

16 DS8870 Global Copy normal operation
Asynchronous remote copy
- Minimal application impact
- Efficient use of bandwidth
- 'Fuzzy' data at the remote – requires additional procedures to create consistency
- Global distance
Write sequence:
1. Application server writes to the local DS8870
2. Track ID added to the checklist of tracks to be copied to the secondary
3. Write complete to the application
4. At a later time, the write is copied to the remote
5. Write complete sent from the remote to the local
6. Track ID removed from the checklist
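A toy Python model of the checklist-based flow above (illustrative only; the class and method names are invented): writes complete locally at once, and outstanding tracks are drained to the secondary later, in no particular order, which is why the remote copy is "fuzzy".

```python
import random

class GlobalCopyPrimaryToy:
    """Toy model of Global Copy change recording (illustrative, not DS8870 microcode)."""

    def __init__(self):
        self.out_of_sync = set()   # checklist of track IDs still to be copied

    def host_write(self, track_id):
        # Steps 1-3: write to local cache, record the track ID, complete to the host
        self.out_of_sync.add(track_id)

    def drain_one(self, remote):
        # Steps 4-6: later, copy one outstanding track, get the remote ack, clear the bit.
        # Tracks are drained in arbitrary order, so write ordering is not preserved.
        if self.out_of_sync:
            track = random.choice(tuple(self.out_of_sync))
            remote[track] = f"data for track {track}"
            self.out_of_sync.discard(track)

if __name__ == "__main__":
    primary, secondary = GlobalCopyPrimaryToy(), {}
    for t in (10, 11, 12, 10):          # repeated writes to a track add no extra work
        primary.host_write(t)
    while primary.out_of_sync:
        primary.drain_one(secondary)
    print(sorted(secondary))            # [10, 11, 12] -- but with no write-order guarantee
```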

17 Typical DS8870 Global Copy configurations
Global Copy between 2 subsystems at global distances
- Migration to a new datacenter
- Protection against regional disaster; the user creates consistency at the remote site
Global Copy between 2 subsystems in a metro region
- Migration to a new or additional datacenter
- Protection against local datacenter disaster
Global Copy between 2 subsystems in the same physical location
- Hardware migration
- Protection against hardware subsystem failure; the user creates consistency

18 Business Resiliency Solutions
with the DS8870 – DS8870 Global Mirror

19 What is DS8870 Global Mirror?
Storage hardware asynchronous mirroring solution with data consistency (local site to remote site at global distances)
- Minimal impact to production write I/O
- Unlimited global distances
- Reduced (less than peak) network bandwidth requirements
- Peer-to-peer data copy mechanism is Global Copy; consistency group formation mechanism is FlashCopy
- Application independent
- Multiple sessions supported to meet different RPO requirements
A 2-site disaster recovery replication solution
- Integrated 3-volume solution: A (local) → B (remote) → C (journal – can be thinly provisioned)
- Or 4 copies (a D copy for testing without impacting active mirroring)
Low Recovery Point Objective (RPO)
- Single digit seconds (typically 3-5 seconds)
Scalable
- A single consistency group can include System z, open systems and IBM i
- Consistency maintained across multiple subsystems – up to 16 physical subsystems in any combination

20 DS8870 Global Mirror normal operation
Asynchronous mirroring with data consistency
- RPO of 3-5 seconds is realistic; minimizes application impact; uses bandwidth efficiently
- RPO/currency depends on workload, bandwidth and requirements
Write and consistency group sequence:
1. Application server writes to the local DS8870
2. Write complete to the application
3. Autonomically, or on a user-specified interval, a consistency group (CG) is formed on the local
4. The CG is sent to the remote via Global Copy (drain); if writes arrive at the local, the IDs of changed tracks are recorded
5. After all consistent data for the CG is received at the remote, FlashCopy with 2-phase commit
6. Consistency complete reported to the local
7. Tracks changed after the CG are copied to the remote via Global Copy, while FlashCopy copy-on-write preserves the consistent image
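A toy Python model of one consistency-group cycle (illustrative only; it simplifies away details such as the 2-phase commit and copy-on-write at the remote): freeze the set of changed tracks, drain them over the Global Copy leg, then harden them in the journal copy with a FlashCopy-style step.

```python
class GlobalMirrorToy:
    """Toy Global Mirror cycle (illustrative): A -> B (Global Copy) -> C (FlashCopy journal)."""

    def __init__(self):
        self.a = {}            # local volume contents
        self.b = {}            # remote Global Copy secondary ("fuzzy" between cycles)
        self.c = {}            # journal: last committed consistency group
        self.changed = set()   # tracks written since the last consistency group

    def host_write(self, track, data):
        self.a[track] = data                    # steps 1-2: write locally, complete to host
        self.changed.add(track)

    def form_consistency_group(self):
        cg_tracks = set(self.changed)           # step 3: freeze the set of tracks in this CG
        self.changed.clear()                    #         new writes fall into the next CG
        for track in cg_tracks:                 # step 4: drain the CG to the remote (Global Copy)
            self.b[track] = self.a[track]
        self.c = dict(self.b)                   # step 5: FlashCopy B -> C commits the CG
        return self.c                           # steps 6-7: C now holds a consistent image

if __name__ == "__main__":
    gm = GlobalMirrorToy()
    gm.host_write(1, "v1"); gm.host_write(2, "v1")
    print(gm.form_consistency_group())          # consistent image {1: 'v1', 2: 'v1'}
    gm.host_write(1, "v2")                      # changes after the CG wait for the next cycle
    print(gm.c)                                 # journal still shows the committed image
```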

21 DS8870 Supports multiple GM sessions and RPOs (R5.1)
Independent Recovery Point Objectives (RPO) per session (master Global Mirror with backup servers, local site to remote site):
- Session A has a consistency formation interval of 0 (autonomic)
- Session B has a consistency formation interval of 30 minutes
- Session C has a consistency formation interval of 60 minutes

22 Global Mirror configurations
One local system to multiple remote systems ('fan out')
- Usage examples: smaller systems at the remote site; multiple active sites with different amounts of capacity available
Multiple local systems to one remote system ('fan in' – performance analysis recommended)
- Usage example: a subset of capacity on multiple primary systems is replicated to a single remote system
Simultaneous bi-directional between systems (different volumes)
- Usage example: 2 active sites

23 Creating a test copy with Global Mirror pause
Procedure (RPQ available with R6.3 and part of R7.1):
1. Pause Global Mirror on a consistency group boundary
2. Suspend the Global Copy pairs and wait until Global Copy reaches 100% duplex
3. Establish FlashCopy from the B volumes to create the copy for testing
4. Resynchronize the Global Copy pairs and resume Global Mirror
Not only is this scenario simpler, there is also a performance improvement, since there is no waiting for a Fast Reverse Restore to complete. Both GDPS 3.10 and TPC-R support this enhanced process.
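The practice-copy sequence can be outlined as plain Python stubs, shown below. This is a sketch of the workflow only; the function names are placeholders, not DSCLI, GDPS or TPC-R commands.

```python
# Illustrative outline of the practice-copy sequence described above.

def make_test_copy(session):
    pause_on_cg_boundary(session)        # stop GM cleanly on a consistency group
    wait_until_duplex(session)           # B volumes now hold the last committed CG
    flashcopy(source="B", target="F")    # F becomes the copy used for D/R testing
    resync_global_copy(session)          # catch B up with changes made while paused
    resume_global_mirror(session)        # consistency group formation restarts

def pause_on_cg_boundary(session): print(f"pause {session} on a CG boundary")
def wait_until_duplex(session):    print(f"wait for {session} Global Copy pairs to reach full duplex")
def flashcopy(source, target):     print(f"FlashCopy {source} -> {target} (test copy)")
def resync_global_copy(session):   print(f"resynchronize Global Copy pairs for {session}")
def resume_global_mirror(session): print(f"resume Global Mirror for {session}")

if __name__ == "__main__":
    make_test_copy("GM-session-A")
```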

24 When to use DS8870 Global Mirror?
- Distance exceeds the maximum limits for synchronous data transfer
- Limited impact to application write I/O operations at the primary location (asynchronous data transfer)
- Consistency maintained across multiple subsystems
- Solution for System z, System x, System p and IBM i
- RPO can be greater than 0 but still needs to be very current – in the single digit second range (3-5 seconds)

25 DS8870 Metro/Global Mirror
Business Resiliency Solutions with the DS8870 – DS8870 Metro/Global Mirror

26 What is DS8870 Metro/Global Mirror?
3-site, storage hardware synchronous and asynchronous solution with data consistency (local site to intermediate site at metro distance via Metro Mirror; intermediate site to remote site at global distance via Global Mirror)
- Synchronous (Metro Mirror) + asynchronous (Global Mirror): continuous + near-continuous replication
- Cascaded implementation
- Incremental resynchronization between local and remote in the event of an intermediate failure
- 4-volume design (the Global Mirror FlashCopy target may be Space Efficient)
- Metro distance (local → intermediate) + global distance (intermediate → remote)
- RPO of 0 at the intermediate or remote for a local failure
- RPO as low as 3-5 seconds at the remote for failure of both the local and the intermediate
- Application response time impacted only by the distance between local and intermediate
- Fast resynchronization of sites after failures and recoveries
- A single consistency group may include open systems, System z and IBM i volumes

27 DS8870 Metro/Global Mirror normal operation
Write and consistency group sequence:
1. Application server writes to the local DS8870
2. Write copied to the intermediate DS8870 (Metro Mirror)
3. Copy complete returned from the intermediate to the local
4. Write complete returned from the local to the application
5. On a user-specified interval or autonomically (asynchronously), a Global Mirror consistency group is formed on the intermediate, sent to the remote, and committed via FlashCopy
6. GM consistency complete from the remote to the intermediate
7. GM consistency complete from the intermediate to the local (allows for incremental resync from local to remote)
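A toy Python model of the cascade above (illustrative only; the class is invented for this sketch): the host write returns only after the synchronous Metro Mirror leg completes, while the Global Mirror leg drains consistency groups to the remote asynchronously.

```python
class CascadedMGMToy:
    """Toy three-site cascade (illustrative): local --MM--> intermediate --GM--> remote."""

    def __init__(self):
        self.local, self.intermediate, self.remote = {}, {}, {}
        self.changed_since_cg = set()      # tracked for the asynchronous GM leg

    def host_write(self, track, data):
        # Steps 1-4: synchronous leg; the host waits for the intermediate's acknowledgement
        self.local[track] = data
        self.intermediate[track] = data
        self.changed_since_cg.add(track)
        return "write complete"            # returned only after both sites hold the data

    def global_mirror_cycle(self):
        # Steps 5-7: asynchronous leg; a consistency group drains to the remote
        for track in self.changed_since_cg:
            self.remote[track] = self.intermediate[track]
        self.changed_since_cg.clear()

if __name__ == "__main__":
    mgm = CascadedMGMToy()
    mgm.host_write(7, "payroll")
    print(mgm.intermediate)   # RPO 0 at the intermediate
    mgm.global_mirror_cycle()
    print(mgm.remote)         # the remote catches up a few seconds later
```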

28 Typical DS8870 Metro/Global Mirror configurations
Metro Mirror within a single location plus Global Mirror over long distance
- Local high availability plus regional disaster protection
- Cascaded implementation
- Incremental Resync of Global Mirror from the Metro Mirror source to the Global Mirror target
2-site Metro Mirror within a metro region plus Global Mirror over long distance (3-site)
- Local high availability or local disaster protection plus regional disaster protection

29 4-site topology with Metro/Global Mirror (M/GM)
Region A and Region B each contain Site 1 and Site 2: Metro Mirror runs between the two sites in Region A, Global Mirror runs from Region A to Region B, and Global Copy runs between the two sites in Region B. The Global Copy in the secondary region is converted to Metro Mirror in case of a disaster or a planned site switch.

30 DS8870 z/OS Global Mirror (XRC)
Business Resiliency Solutions with the DS8870 – DS8870 z/OS Global Mirror (XRC)

31 What is z/OS Global Mirror (aka XRC)?
A 2-site asynchronous mirroring solution with data consistency utilizing System z resources (local site to remote site at global distances)
- Non-proprietary solution
- Can also be used purely for data migration with SESSION(MIGRATE)
Asynchronous data transfer
- Timestamp based
- Minimal impact to production write I/Os
- Unlimited distance
- Reduced (less than peak) network bandwidth requirements
Managed by the System Data Mover (SDM)
- Data moved by System Data Mover (SDM) address space(s) running on z/OS
- Supports heterogeneous disk subsystems
Low Recovery Point Objective (RPO)
- Single digit seconds (3-5 seconds)
Supports z/OS, z/VM and Linux for System z data

32 z/OS Global Mirror (XRC) normal operation
Write sequence:
1. Write is time stamped by the z/OS server
2. Write to the local (timestamp retained) and stored in cache on the local (with timestamp)
3. Write complete to the application
4. SDM reads the updates to the server (with timestamps)
5. SDM forms consistency groups (CG) using the timestamps
6. SDM writes to the journal data set on the remote
7. SDM writes to the secondary volumes
8. SDM updates the control dataset to record that the copy is complete
Optionally, an additional FlashCopy may be created from the secondary.
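A toy Python sketch of the timestamp-based consistency idea (illustrative only; the real SDM logic is far more involved and the names here are invented): updates read from several disk subsystems can only be committed together up to the oldest "latest timestamp" seen across all of them, so no subsystem's later writes are applied ahead of another's earlier ones.

```python
from dataclasses import dataclass

@dataclass
class Update:
    timestamp: float   # z/OS-supplied write timestamp
    volume: str
    data: str

def form_consistency_group(sidefiles):
    """sidefiles: per-subsystem update lists, each already ordered by timestamp.
    Returns (consistency_group, leftovers). Only updates no later than the slowest
    subsystem's newest timestamp can be committed consistently."""
    cutoff = min(side[-1].timestamp for side in sidefiles if side)
    cg, leftovers = [], []
    for side in sidefiles:
        for upd in side:
            (cg if upd.timestamp <= cutoff else leftovers).append(upd)
    cg.sort(key=lambda u: u.timestamp)          # apply in time order at the remote
    return cg, leftovers

if __name__ == "__main__":
    box1 = [Update(1.0, "VOL001", "a"), Update(3.0, "VOL001", "b")]
    box2 = [Update(2.0, "VOL002", "c")]
    cg, later = form_consistency_group([box1, box2])
    print([u.timestamp for u in cg])      # [1.0, 2.0] -- the 3.0 update waits for the next group
```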

33 Typical DS8870 Metro/zGlobal Mirror configurations
Metro Mirror within a single location plus zGlobal Mirror over long distance
- Local high availability plus regional disaster protection
- Multi-target implementation
- Incremental Resync of z/OS Global Mirror from the Metro Mirror target to the Global Mirror target
2-site Metro Mirror within a metro region plus zGlobal Mirror over long distance (3-site)
- Local high availability or local disaster protection plus regional disaster protection

34 When to use z/OS Global Mirror?
- Very large replication environment
- Consistency maintained across multiple subsystems
- Limited impact to application write I/O operations at the primary location (asynchronous data transfer)
- RPO can be greater than 0 but still needs to be very current – in the single digit second range (3-5 seconds)
- Distance exceeds the maximum limits for synchronous data transfer (300 km for fibre links)
- Solution for z/OS, Linux for System z and z/VM

35 4-site topology with Metro Mirror / z/OS Global Mirror (MzGM)
Region A (production) and Region B each contain Site 1 and Site 2, linked within each region by Metro Mirror; z/OS Global Mirror runs from Region A to Region B, where the SDM resides.

36 Business Resiliency Solutions
with the DS8870 – HyperSwap

37 System z HyperSwap technology
HyperSwap transparently substitutes the Metro Mirror secondary for the Metro Mirror primary device
- Can swap a large number of devices quickly
- Changes status in the secondary disk subsystem
- Transparent to applications – applications continue to use the same UCBs, which switch to the secondary device number
- Planned swaps of the complete disk configuration: the mirror is suspended, a change recording bitmap is maintained, and Metro Mirror is re-established on resynchronization
- Unplanned swaps of the complete disk configuration for various primary disk subsystem failures: the mirror is suspended and a change recording bitmap is created
GDPS/PPRC HyperSwap notes: The GDPS/PPRC HyperSwap function is designed to broaden the continuous availability attributes of GDPS/PPRC by extending Parallel Sysplex redundancy to disk subsystems.
Planned HyperSwap provides the ability to:
- Transparently switch all primary PPRC disk subsystems with the secondary PPRC disk subsystems for a planned reconfiguration
- Perform disk configuration maintenance and planned site maintenance without requiring any applications to be quiesced
Planned HyperSwap became generally available in December 2002.
Unplanned HyperSwap adds the ability to transparently switch to the secondary PPRC disk subsystems in the event of unplanned outages of the primary PPRC disk subsystems or a failure of the site containing them. Unplanned HyperSwap allows:
- Production systems to remain active during a disk subsystem failure; disk subsystem failures no longer constitute a single point of failure for an entire Parallel Sysplex
- Production servers to remain active during a failure of the site containing the primary PPRC disk subsystems, if applications are cloned and exploiting data sharing across the two sites. Even though the workload in the second site will need to be restarted, an improvement in the Recovery Time Objective (RTO) is achieved.
Unplanned HyperSwap became generally available in February 2004.
Site failover: production systems can stay active. Today, CF data is not consistent, so DB2 must be restarted; the direction is to remove that requirement. RTO is minutes.
Planned reconfigurations (site maintenance) can be done without application impact; with a single-site workload that must be brought up on site 2, applications pause for about 70 seconds while UCB addresses are switched, but they do not go down.
Brings different technologies together to provide a comprehensive application and data availability solution.
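Conceptually, HyperSwap is an indirection swap: the application keeps using the same device (UCB) while the I/O underneath is redirected to the Metro Mirror secondary. A toy Python sketch follows (illustrative only, invented names, nothing like real z/OS internals):

```python
class UnitControlBlockToy:
    """Toy stand-in for a z/OS UCB: the application keeps using the same device
    while HyperSwap re-points it at the Metro Mirror secondary underneath."""

    def __init__(self, primary, secondary):
        self.primary, self.secondary = primary, secondary
        self.active = primary          # I/O currently targets the MM primary
        self.mirroring = True          # Metro Mirror full duplex
        self.change_recording = set()  # tracks changed after the swap (for later resync)

    def write(self, track, data):
        self.active[track] = data
        if self.mirroring:
            self.secondary[track] = data        # synchronous mirror keeps pace
        else:
            self.change_recording.add(track)    # remember what to resynchronize later

    def hyperswap(self):
        self.mirroring = False                  # suspend the mirror
        self.active = self.secondary            # same UCB, different physical box

if __name__ == "__main__":
    primary_box, secondary_box = {}, {}
    ucb = UnitControlBlockToy(primary_box, secondary_box)
    ucb.write(1, "before swap")
    ucb.hyperswap()                             # planned or unplanned event
    ucb.write(2, "after swap")
    print(primary_box, secondary_box, ucb.change_recording)
```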

38 PowerHA - AIX HyperSwap Technology
HyperSwap non-disruptively substitutes secondary for primary devices for planned and unplanned events through in-band commands
- Transparent to applications – they continue to use the same device
Customer benefits
- Unplanned HyperSwap: continuous availability against storage failures
- Planned HyperSwap: storage migrations and storage maintenance without downtime
Requirements
- AIX 6.1 TL8 with SP1 or AIX 7.1 TL2 with SP1
- PowerHA SP1 with APAR IV2758 (available November 9, 2012)
- DS8700/DS8800 R6.3 SP2, DS8870 R7.1+
From the October 3, 2012 PowerHA HyperSwap announcement letter: A PowerHA SystemMirror Enterprise Edition and DS8800 storage multisite configuration is enabled that cross-connects two sites in a linked or stretched cluster topology, enabling applications to continue through either a planned or unplanned storage server outage. If one of the storage servers goes offline, the other storage server continues storage operations with minimal disruption to the application environment. The DS8800 storage devices are coupled through Metro Mirror replication, and the production nodes are configured either through a stretched cluster configuration with a single repository and a multicast network, or through two independent repositories in a linked cluster configuration with a unicast communication network.
Brings together AIX, PowerHA and DS8000 to provide a comprehensive application and data availability solution.

39 Full System Copy: Hyper-Swap
IBM i 7.2 – PowerHA Express Edition
- Provides the ability to switch access from the production to the remote DS8000
- Automatically or manually triggered
- HA solution for planned storage outages
- DS8000 only
- IASP-based replication not yet supported

40 Business Resiliency Solutions
with the DS8870 – FlashCopy

41 How does FlashCopy work?
A relationship between source and target volumes is set up, including a checklist of track IDs used to show which tracks of the point-in-time image have been physically copied from source to target. Once the FlashCopy relationship has been established, the target is immediately accessible for read and write. Physical copying of tracks from source to target occurs as needed (copy-on-write) or sequentially as a background task. The target volume is offline during the establish (System z DFSMSdss is the exception).
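A toy Python model of the relationship described above (illustrative only; "tracks" are just dictionary keys and the class is invented): the target is readable immediately, and physical copies happen either on demand when the source is overwritten, or in the background.

```python
class FlashCopyToy:
    """Toy FlashCopy relationship: the target is usable immediately, and tracks
    are physically copied only on demand (copy-on-write) or by a background task."""

    def __init__(self, source):
        self.source = source
        self.target = {}                 # physical copies made so far
        self.to_copy = set(source)       # checklist of tracks not yet copied

    def read_target(self, track):
        # Reads of uncopied tracks are satisfied from the source's point-in-time data
        return self.target[track] if track in self.target else self.source[track]

    def write_source(self, track, data):
        # Copy-on-write: preserve the point-in-time image before overwriting the source
        if track in self.to_copy:
            self.target[track] = self.source[track]
            self.to_copy.discard(track)
        self.source[track] = data

    def background_copy_one(self):
        if self.to_copy:
            track = self.to_copy.pop()
            self.target[track] = self.source[track]

if __name__ == "__main__":
    src = {1: "A", 2: "B"}
    fc = FlashCopyToy(src)                 # establish: target readable immediately
    fc.write_source(1, "A-modified")       # triggers copy-on-write of track 1
    print(fc.read_target(1), fc.read_target(2))   # A B -- the point-in-time image
```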

42 Available options for FlashCopy
- Full volume FlashCopy
- Background Copy or Nocopy
- Persistent FlashCopy
- Incremental FlashCopy
- Inband
- Remote Pair FlashCopy
- Fast Reverse Restore
- Transition FlashCopy (Nocopy to Copy)
- Consistency Group FlashCopy
- Multiple relationships
- Data Set FlashCopy (z/OS and z/VSE only)
FlashCopy functions can be combined – for example: Inband, Incremental, Consistency Group.

43 Unique Feature - Preserve Mirror or Remote Pair FlashCopy
Metro Mirror + FlashCopy + inband FlashCopy (a logical Metro Mirror of the FlashCopy)
- The FlashCopy command is sent inband to avoid any data movement across the mirroring link between the two FlashCopy targets
- The mirror remains 'full duplex' (synchronized) during the FlashCopy background copies at both sites
- Keeps HyperSwap active
- Reduced bandwidth, since no data movement is required
Usage examples
- Continuous mirroring + periodic FlashCopy of the production volume and the Metro Mirror target
- Local backup + remote backup
- Continuous dataset FlashCopy

44 Copy Services Management
Business Resiliency Solutions with the DS8870 – Copy Services Management

45 There are multiple GDPS solutions to meet various customer requirements for availability and disaster recovery
Continuous availability of data within a data center (single data center)
- Applications remain active; continuous access to data in the event of a storage outage
- GDPS/PPRC HM: RPO=0, RTO seconds (disk only)
Continuous availability with DR within a metropolitan region (two data centers)
- Systems remain active; multi-site workloads can withstand site and/or storage failures
- GDPS/PPRC: RPO=0, RTO minutes (<20 km) / RTO<1h (>20 km)
Disaster recovery at extended distance (two data centers)
- Rapid systems D/R with seconds of data loss; disaster recovery for out-of-region interruptions
- GDPS/GM and GDPS/XRC: RPO seconds, RTO<1h
CA regionally and disaster recovery at extended distance (three data centers)
- High availability for site disasters; disaster recovery for regional disasters
- GDPS/MGM and GDPS/MzGM: RPO=0, RTO minutes/<1h and RPO seconds, RTO<1h
CA, DR and cross-site workload balancing at extended distance (two or more active data centers)
- Automatic workload switch in seconds; seconds of data loss
- GDPS/Active-Active: RPO seconds, RTO seconds
As part of the February announcement, a new version of the GDPS family of offerings is being previewed. The new capabilities include an end-to-end automated disaster recovery solution (automated recovery removes people as a single point of failure) and a single point of control for heterogeneous data across the enterprise. These capabilities deliver new levels of business resiliency while providing improved economics, new workload capabilities and improved qualities of service – a foundation for responsible business computing.
RPO – recovery point objective; RTO – recovery time objective

46 What is TPC for Replication?
Volume level copy services management
- Manages data consistency across a set of volumes with logical dependencies
- Supports multiple devices (ESS, DS6000, DS8870, XIV, SVC, Storwize family)
Coordinates copy services functionality
- FlashCopy, Metro Mirror, Global Mirror, Metro Global Mirror
Ease of use
- Single common point of control
- Web browser based GUI and CLI
- Persistent store database
- Source/target volume matching
- SNMP alerts
- Wizard based configuration
Business continuity
- Site awareness
- High availability configuration – active and standby management servers, no single point of failure
- Disaster recovery testing
- Disaster recovery management

47 zCDP for DB2 and IMS – Eliminate backup windows
DB2 and IMS System Level Backup and System Level Restore
- The backup calls DFSMShsm with the DB tables; HSM FlashCopies them to an SMS copy pool, followed by the DB logs
- DB2 and IMS maintain cross-volume data consistency; no quiesce of the database is required
DFSMShsm function that manages point-in-time copies
- Combined with DB2 BACKUP SYSTEM, provides non-disruptive backup and recovery to any point in time for DB2 and IMS databases and subsystems (SAP)
- FlashCopy from the application storage group to the copy pool backup storage group, with onsite/offsite dump to tape and multiple disk copies
- Up to 5 copies and 50 versions of DB2 and IMS image copies, managed by Management Class, with automatic expiration
With DB2 9 and IMS 10, DB2 provides System Level Backup and System Level Restore utilities that can eliminate the need for backup windows. SLB calls DFSMShsm Fast Replication with the DB2 tables to be backed up; HSM, under the covers, does a FlashCopy of the tables on the DS8000 to an SMS copy pool. SLB then calls HSM again with the DB2 logs, which HSM FlashCopies to an SMS copy pool. HSM manages, via Management Class, up to 50 versions of backups, and on the restore side can restore an entire DB2 table or an entire DB2 subsystem to any point in time covered by those versions. All of this is done with no interruption to DB2 – no stopping of write I/Os or quiescing of DB2; DB2 SLB manages the data consistency of the backups. IMS 10 supports a similar function. So again, the middleware (DB2/IMS) works with z/OS (DFSMShsm) and DS8000 FlashCopy to provide a solution that eliminates backup windows.
Logical backups of data are still very important to ensure recoverability from logical data errors: storage based replication just moves changed records/blocks, and if logical data corruption occurs it will quickly propagate the corrupted data to all copies.
Recovery at all levels, from either disk or tape: the entire copy pool, individual volumes and individual data sets.

48 Summary
Remote Mirroring
HyperSwap
FlashCopy
Copy Services Management
Additional reference material

49 Additional Information
Business Resiliency Solutions with the DS8870 – Additional Information

50 References
TechDocs White Paper: IBM Handbook on Using DS8870 Data Replication for Data Migration
TechDocs White Paper: IBM z/OS Multi-Site Business Continuity
TechDocs White Paper: IBM DS8800 Data Consolidation
TechDocs White Paper: IBM HyperSwap Options
TechDocs White Paper: IBM System z and DS8870 z/OS Synergy
TechDocs White Paper: IBM z/OS Data Corruption Trends & Directions
Redpaper: IBM Storage Infrastructure for Business Continuity
Redpaper: IBM System Storage DS8870 Easy Tier

51 DS8870 Copy Services Licensing Summary
Licensing by configuration (Site A / Site B / Site C):
2 Site - Metro Mirror (GDPS/PPRC, GDPS/HM, RCMF/PPRC, TPC for Replication): Site A – MM f/c 750x/751x required, PTC f/c 72xx optional; Site B – MM f/c 750x/751x required, PTC recommended; Site C – N/A
2 Site - Global Mirror (GDPS/GM): Site A – GM f/c 752x/753x required, PTC f/c 72xx (required if returning home using GM); Site B – GM f/c 752x/753x required, SEFLC f/c 73xx or PTC f/c 72xx (PTC recommended for the D copy if SEFLC is used for C)
2 Site - Global Copy: Site A – MM or GM f/c 75xx required (note: MM required for go-to-sync capability); Site B – MM or GM f/c 75xx recommended
3 Site - Metro / Global Mirror (GDPS/MGM): MGM f/c 74xx and GM f/c 752x/753x; GM f/c 752x/753x required; SEFLC f/c 73xx or PTC f/c 72xx (PTC recommended for the E copy if SEFLC is used for D)
2 Site - zGlobal Mirror / XRC (GDPS/XRC, RCMF/XRC): Site A – zGM f/c (#76xx); Site B – none (zGM f/c #76xx required if returning home with zGM)
3 Site - Metro / zGlobal Mirror (GDPS/MzGM): MM f/c 750x/751x and zGM Resync f/c 76xx

52 Thank You

