Replicating Microsoft Applications using EMC RecoverPoint


1 Replicating Microsoft Applications using EMC RecoverPoint
by Sharon Almog (RecoverPoint Engineering) and James Baldwin (Strategic Solutions). Microsoft mSpecialist Summit 2011, Day #3

2 Agenda
Introduction to RecoverPoint (5 min)
Data Protection Roadmap (5 min)
RecoverPoint Cluster Enabler (RP/CE) (20 min, including a 10 min movie)
Replicating Hyper-V applications
Replicating Microsoft SQL Server (10 min)
EMC RecoverPoint and SQL 2012 "AlwaysOn" (including a 5 min Denali segment)

3 An introduction to RecoverPoint

4 Bi-directional Local and Remote Replication
[Diagram: RecoverPoint bi-directional replication/recovery between a production site and a disaster recovery site. Application servers and standby servers, a local copy and a remote copy, production and local journals, a remote journal, and a RecoverPoint appliance at each site, connected over Fibre Channel/WAN. Write splitters run on the host, in the fabric, or in the Symmetrix VMAXe, VNX series, or CLARiiON array.]

RecoverPoint is a comprehensive data protection solution that provides integrated CRR and CDP, with bi-directional synchronous and asynchronous replication and with concurrent local and remote (CLR) data protection. RecoverPoint allows users to recover applications remotely to any point in time without impact to the production application or to ongoing replication.

RecoverPoint CRR complements EMC's existing portfolio of remote replication products by adding heterogeneous replication (with bandwidth efficiencies) in synchronous or asynchronous replication environments, which lowers your multi-year total cost of ownership. RecoverPoint CDP, offered as a stand-alone solution or combined with CRR, enables you to roll back to any point in time for effective local recovery from events such as database corruption.

RecoverPoint ships with utility agents that support the Microsoft VSS and VDI APIs, enabling intelligent local and remote replication for Exchange Server and SQL Server. Other applications, such as Oracle, can be replicated using scripting against the RecoverPoint command-line interface.

Unlike other replication products, RecoverPoint is appliance-based, which enables it to better support large amounts of information stored across a heterogeneous server and storage environment. RecoverPoint uses lightweight splitting technology, on the application server, in the fabric, or in the Symmetrix VMAXe, VNX series, CLARiiON CX4, or CX3 array, to mirror each write to a RecoverPoint appliance that resides outside of the primary data path. This approach enables RecoverPoint to deliver continuous replication without impacting an application's I/O operations. If an array-based write splitter is used on the VNX series or CLARiiON, then iSCSI LUNs hosted by the VNX series or CLARiiON can also be used by RecoverPoint.

EMC's network-based approach to data protection enables key functionality such as instant recovery of data to any point in time, by leveraging a journal that stores all data changes along with bookmarks that identify application-specific events. RecoverPoint makes use of consistency groups, which let users define sets of volumes for which the inter-write order must be retained during replication and recovery. This ensures that data at any point in time is fully self-consistent.

Finally, the RecoverPoint appliance provides advanced bandwidth management, implementing a policy engine that decides how much data should be compressed before sending it across an IP or Fibre Channel link. RecoverPoint supports major operating systems (including IBM AIX, HP-UX, Linux, OpenVMS, Oracle Solaris, Microsoft Windows, and VMware ESX), supports storage arrays from EMC and other vendors, and can be managed from a graphical user interface (GUI) or a command-line interface (CLI). Other EMC products, such as EMC Data Protection Advisor, EMC NetWorker, and EMC Replication Manager, provide advanced functionality through integration with RecoverPoint.
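The journal-and-bookmark mechanism described above is what enables DVR-like any-point-in-time recovery: every captured write is retained with a timestamp, and named bookmarks mark application events. A minimal Python sketch of the idea; the class and field names are illustrative, not RecoverPoint's actual data structures:

```python
import bisect
from dataclasses import dataclass

@dataclass
class JournalEntry:
    timestamp: float              # when the write was captured
    volume: str                   # which protected LUN the write targeted
    offset: int                   # byte offset of the write
    data: bytes                   # the written payload
    bookmark: str | None = None   # optional application/event/user bookmark

class Journal:
    """Toy write journal: supports rollback to any point in time or bookmark."""
    def __init__(self):
        self.entries: list[JournalEntry] = []

    def record(self, entry: JournalEntry) -> None:
        self.entries.append(entry)   # entries arrive in write order

    def image_at(self, point_in_time: float) -> dict:
        """Replay all writes up to point_in_time, preserving write order."""
        times = [e.timestamp for e in self.entries]
        cutoff = bisect.bisect_right(times, point_in_time)
        image = {}
        for e in self.entries[:cutoff]:
            image[(e.volume, e.offset)] = e.data
        return image

    def image_at_bookmark(self, name: str) -> dict:
        """Recover to the volume state at the named bookmarked write."""
        for e in reversed(self.entries):
            if e.bookmark == name:
                return self.image_at(e.timestamp)
        raise KeyError(f"no bookmark named {name!r}")

j = Journal()
j.record(JournalEntry(1.0, "lunA", 0, b"before"))
j.record(JournalEntry(2.0, "lunA", 0, b"after", bookmark="pre-upgrade"))
print(j.image_at_bookmark("pre-upgrade")[("lunA", 0)])   # b'after'
```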

5 Local Protection Process: Continuous Data Protection (CDP)
Data flow: (1) data is split and sent to the RecoverPoint appliance in one of three ways: (2a) a host write splitter, (2b) an intelligent-fabric write splitter, or (2c) a CLARiiON array write splitter; (3) writes are acknowledged back from the RecoverPoint appliance; (4) the appliance writes the data to the journal volume, along with a time stamp and application-specific bookmarks; (5) write-order-consistent data is distributed to the replica volumes.

This slide describes the data flow from the application host to the production volumes, and how the RecoverPoint appliance accesses the data as part of the CDP process. An application server issues a write to a LUN that is being protected by RecoverPoint. The write is split, then sent to the RecoverPoint appliance in one of three ways.

The first way is through a host write splitter. The host write splitter resides in the I/O stack, below any file system and volume manager and just above any multipath driver (such as EMC PowerPath). The write splitter inspects the destination of each write packet. If the write targets a LUN that RecoverPoint protects, the splitter sends a copy of the write packet to the RecoverPoint appliance by rewriting the target address inside the packet to redirect it to the appliance's pseudo-LUN, and reissuing the write down the stack. RecoverPoint/SE only supports a Windows-based host write splitter.

The second way is through an intelligent fabric switch-based write splitter, such as the Connectrix AP-7600B with the SAS APIs, or one of the Connectrix MDS-9000 series switches with the SANTap API. The switch intercepts all writes to LUNs being protected by RecoverPoint and sends a copy of each write to the RecoverPoint appliance. This is not supported with RecoverPoint/SE.

The third, and most common, way is through an array-based write splitter, which is supported on VNX series arrays, CLARiiON CX3 arrays with FLARE 26 or higher patch code, and CLARiiON CX4 arrays with FLARE 28 or higher. The array intercepts all writes to LUNs being protected by RecoverPoint and sends copies to the RecoverPoint appliance. In all cases, the original write travels through its normal path to the production LUN.

When the copy of the write is received by the RecoverPoint appliance, it is acknowledged back (ACK). This ACK is received by the write splitter (host or fabric) and held until the ACK from the production LUN is also received. With both ACKs in hand, the ACK is sent back to the host and I/O continues normally. Once the appliance has acknowledged the write, it moves the data into the local journal volume, along with a time stamp and any application-, event-, or user-generated bookmarks for the write. Once the data is safely in the journal, it is distributed to the target replica volumes, with care taken to ensure that write order is preserved during distribution.

[Diagram: production volumes, replica volumes, and the journal volume in the local SAN.]
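The acknowledgment handshake described above (hold the host's completion until both the appliance and the production LUN have acknowledged) can be modeled in a few lines. A toy sketch, assuming appliance and production objects whose write() blocks until acknowledged; this models the behavior, not any real splitter API:

```python
import threading

class ToyWriteSplitter:
    """Models a write splitter: the host's write completes only after both
    the production LUN and the RecoverPoint appliance have acknowledged."""

    def __init__(self, protected_luns, appliance, production):
        self.protected_luns = set(protected_luns)
        self.appliance = appliance      # object with a .write(lun, data) method
        self.production = production    # object with a .write(lun, data) method

    def write(self, lun, data):
        if lun not in self.protected_luns:
            return self.production.write(lun, data)  # unprotected: pass through

        acks, lock, done = [], threading.Lock(), threading.Event()

        def issue(target, name):
            target.write(lun, data)     # blocks until that target ACKs
            with lock:
                acks.append(name)
                if len(acks) == 2:      # both ACKs received: complete host I/O
                    done.set()

        # The copy goes to the appliance; the original travels its normal path.
        for target, name in ((self.appliance, "rpa"), (self.production, "lun")):
            threading.Thread(target=issue, args=(target, name)).start()

        done.wait()                     # host sees completion only now
        return "ack"
```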

6 Remote Protection Process: Continuous Remote Replication (CRR)
Data flow: (1) data is split and sent to the RecoverPoint appliance in one of three ways: (2a) a host write splitter, (2b) an intelligent-fabric write splitter, or (2c) a CLARiiON array write splitter; (3) writes are acknowledged back from the RecoverPoint appliance; (4) appliance functions: Fibre Channel-to-IP conversion, replication, data reduction and compression, monitoring and management; (5) data is sequenced, checked, deduplicated, compressed, and replicated to the remote appliances over IP or SAN; (6) at the remote site, data is received, uncompressed, sequenced, and verified; (7) data is written to the journal volume; (8) consistent data is distributed to the remote volumes.

This slide describes the data flow from the application host to the production volumes, and how the RecoverPoint appliance accesses the data as part of the CRR process. An application server issues a write to a LUN that is being protected by RecoverPoint. The write is split and sent to the appliance through a host write splitter, an intelligent-fabric write splitter, or an array-based write splitter, exactly as described for the CDP process on the previous slide; in all cases, the original write travels through its normal path to the production LUN.

When the copy of the write is received by the RecoverPoint appliance, it is immediately acknowledged (ACK) by the local appliance, unless synchronous remote replication is in effect; in that case, the ACK is delayed until the write has been received at the remote site. Once issued, the ACK is processed by the write splitter (host or fabric), held until the ACK from the production LUN is also received, and then returned to the host, and I/O continues normally.

Once the appliance receives the write, it bundles the write with others into a package. Redundant blocks are eliminated from the package, and the remaining writes are sequenced and stored with their corresponding time stamp and bookmark information. The package is then deduplicated and compressed, and an MD5 checksum is generated for it. The package is scheduled for delivery across the IP network to the remote appliance. Once received there, the remote appliance verifies the checksum to ensure the package was not corrupted in transmission. The data is then uncompressed and written to the journal volume. Finally, the data is distributed to the remote volumes, ensuring that write-order sequence is preserved.

[Diagram: local and remote sites, with production volumes at the local site and the journal volume and remote volumes at the remote site.]
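The package pipeline (bundle in sequence, compress, checksum, verify on receipt, restore order) maps onto standard primitives. A minimal sketch using Python's zlib and hashlib; the package format here is invented for illustration, and RecoverPoint's actual wire format is certainly different:

```python
import hashlib
import json
import zlib

def build_package(writes):
    """writes: list of dicts like {"seq": 1, "volume": "lunA",
    "offset": 0, "data": "..."} in write order. Compress the bundle
    and checksum it so the remote side can detect corruption."""
    payload = zlib.compress(json.dumps(writes).encode())
    return {"payload": payload, "md5": hashlib.md5(payload).hexdigest()}

def receive_package(package):
    """Remote appliance side: verify the checksum before trusting the
    data, then uncompress and restore write order from sequence numbers."""
    if hashlib.md5(package["payload"]).hexdigest() != package["md5"]:
        raise ValueError("package corrupted in transmission")
    records = json.loads(zlib.decompress(package["payload"]))
    return sorted(records, key=lambda r: r["seq"])

writes = [{"seq": 1, "volume": "lunA", "offset": 0, "data": "aa" * 512},
          {"seq": 2, "volume": "lunB", "offset": 4096, "data": "bb" * 512}]
pkg = build_package(writes)
print(len(pkg["payload"]), "bytes on the wire")   # far fewer than raw
print(receive_package(pkg)[0]["seq"])             # write order preserved
```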

7 Components and Topology
[Diagram: RecoverPoint appliances connected to a Layer 2 Fibre Channel SAN with heterogeneous storage (EMC, LSI, IBM, HDS, HP) and application servers at each site, linked over Fibre Channel/WAN.]

Application servers: access the data that needs to be replicated.
Heterogeneous storage: source or target for replicated data.
RecoverPoint appliance (RPA): connects to the LAN and to Fibre Channel (FC)/WAN for replication management and for replicating data over FC/IP; connects to the Layer 2 FC SAN for storage access.
RecoverPoint appliance cluster: a group of inter-linked RPAs working closely together to provide replication services; RPAs in a RecoverPoint cluster are called nodes.
WAN bandwidth management: reduces WAN bandwidth by up to 90% through deduplication, compression, and write-folding.

This diagram shows the key physical elements of RecoverPoint. The application servers access data that needs to be replicated, and the heterogeneous storage is the source and target for replicated data. The RecoverPoint application software runs on an EMC-provided and -supported appliance that provides the core functionality and management for the system. The RecoverPoint appliance is built from a standard Dell 1U high-availability server running a customized 64-bit Linux 2.6 kernel. Appliances are sold and deployed in two- to eight-node cluster configurations per site. A RecoverPoint cluster enables active-active failover between the nodes.

Each RecoverPoint appliance has four 8 Gb/s Fibre Channel ports used to attach to a high-availability dual-fabric (A/B) SAN. Each appliance also has two Ethernet ports: one connects to the LAN as the management-control network, and one can be used to communicate over the WAN with a remote RecoverPoint appliance cluster. The appliance ships with a universal rail kit that allows it to be installed in customer-supplied racks as well as in the EMC racks shipped with VNX and CLARiiON storage arrays.

The RecoverPoint software is designed to avoid the split-brain issues that can arise with traditional clustering technologies. All RecoverPoint appliances are in constant communication and use a shared private SAN volume to maintain metadata state. If a RecoverPoint appliance fails, another appliance takes over without interrupting any in-progress replication or recovery operations. To increase performance and capacity, additional appliances can be added nondisruptively to an operating cluster.

Finally, the RecoverPoint software contains logic that measures and optimizes the bandwidth used for replication across a WAN, including deduplication, compression, and write-folding. By incorporating these technologies, WAN consumption can be reduced by up to 90 percent compared to non-optimized replication.
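Write-folding, listed above among the WAN-reduction techniques, coalesces successive writes to the same block inside a folding interval so only the newest version crosses the link. A toy illustration; the block size and windowing policy are assumptions, not RecoverPoint's actual parameters:

```python
BLOCK = 4096  # assumed block size, for illustration only

def fold_writes(window):
    """window: list of (offset, data_bytes) writes captured during one
    folding interval. Later writes to the same block replace earlier
    ones, so only the final version of each block is sent over the WAN."""
    folded = {}
    for offset, data in window:
        folded[offset // BLOCK] = (offset, data)   # newest write wins
    return list(folded.values())

window = [(0, b"v1" * 2048), (8192, b"x" * 4096), (0, b"v2" * 2048)]
kept = fold_writes(window)
raw = sum(len(d) for _, d in window)
sent = sum(len(d) for _, d in kept)
print(f"raw {raw} bytes -> sent {sent} bytes "
      f"({100 * (raw - sent) / raw:.0f}% saved before compression)")
```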

8 RecoverPoint Family Details
RecoverPoint/CL: supports EMC and non-EMC storage environments; heterogeneous storage array support; multiple arrays per site, EMC and non-EMC; supports up to 300 TB of production data.
RecoverPoint/EX: EMC storage environments; multiple EMC arrays per site; array-based splitting only; supports up to 300 TB of production data.
RecoverPoint/SE: supports VNX and CLARiiON environments; single VNX, CLARiiON, or Celerra unified array per site; supports Microsoft Windows host-based and VNX/CLARiiON array-based splitting only; supports up to 300 TB of production data.

Note to Presenter: In this deck, slides that are NOT applicable to all products in the RecoverPoint family are marked "Not supported by RecoverPoint/…". If you do NOT see this statement, the feature being described IS supported by RecoverPoint/CL, RecoverPoint/EX, and RecoverPoint/SE.

The RecoverPoint family consists of RecoverPoint/CL, RecoverPoint/EX, and RecoverPoint/SE:
RecoverPoint/CL is EMC's leading out-of-band, block-level replication product for heterogeneous server and storage environments. It supports EMC and non-EMC storage arrays, and supports host-based, intelligent-fabric-based, and array-based write splitters.
RecoverPoint/EX is a version of RecoverPoint targeted at Symmetrix VMAXe arrays as well as VNX series, Celerra unified, and CLARiiON CX4 and CX3 series environments. RecoverPoint/EX supports only array-based write splitting; it does not support host-based or intelligent-fabric-based write splitters.
RecoverPoint/SE is a version of RecoverPoint targeted at VNX series, Celerra unified, CLARiiON CX4, CX3 UltraScale, and CX series environments. RecoverPoint/SE supports local synchronous replication of up to 300 TB of data between LUNs that reside in the same array, remote synchronous or asynchronous replication of up to 300 TB of data between LUNs in one array per site, and concurrent local and remote (CLR) replication of up to 300 TB of data for LUNs in one array or between two arrays at two sites.

All products support continuous data protection (CDP) and local, remote, or combined bi-directional synchronous and asynchronous replication between LUNs residing in one or more arrays. RecoverPoint continuous remote replication (CRR) provides remote asynchronous replication between two sites for LUNs in one or more arrays. RecoverPoint features bi-directional replication and a DVR-like any-point-in-time recovery capability, which allows the target LUNs to be rolled back to a previous point in time and used for read/write operations without affecting ongoing replication or data protection. Bi-directional replication and DVR-like any-point-in-time recovery can be enabled simultaneously with concurrent local and remote data protection.

Common functionality:
One solution for data protection, replication, and disaster recovery
Local replication for LUNs in one SAN
Remote replication between LUNs in two SANs
Concurrent local and remote data protection that combines local and remote replication
Heterogeneous operating system support
Disaster recovery for VNX series file systems
Wizards that automate production rebuild/restart
DVR-like recovery and consolidation that reduce RPO and RTO
Application integration for Microsoft Exchange and SQL, plus other applications and databases
Integrated consistency groups that support federated servers and storage
Synchronous or asynchronous local and/or remote replication with optional dynamic selection policies
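The edition differences above reduce to a small capability matrix. An illustrative sketch for sanity-checking a planned deployment against the stated limits; the structure and function are mine, not part of any EMC tool:

```python
# Which write-splitter types each edition supports (per the slide above).
SPLITTERS = {
    "RecoverPoint/CL": {"host", "fabric", "array"},
    "RecoverPoint/EX": {"array"},
    "RecoverPoint/SE": {"host", "array"},   # host splitter: Windows only
}
MAX_PRODUCTION_TB = 300  # same ceiling across the family

def validate_plan(edition: str, splitter: str, production_tb: float) -> None:
    """Raise if the planned splitter or capacity exceeds the edition's limits."""
    if splitter not in SPLITTERS[edition]:
        raise ValueError(f"{edition} does not support the {splitter} splitter")
    if production_tb > MAX_PRODUCTION_TB:
        raise ValueError(f"{production_tb} TB exceeds the "
                         f"{MAX_PRODUCTION_TB} TB production-data limit")

validate_plan("RecoverPoint/EX", "array", 120)    # fine
# validate_plan("RecoverPoint/EX", "host", 120)   # would raise: array-only
```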

9 Data Protection Roadmap

10 Data Protection Roadmap
Timeline: Q1 2012 through Q4 2012 (quarter assignments were shown graphically on the slide).

RecoverPoint 3.5: VPLEX splitter integration; VMAX splitter integration; RP/SE customer-installable; UIM integration.
RecoverPoint "E": Greenplum integration; multi-site replication; RecoverPoint scaling.
RecoverPoint Archway v1.0: VNX/Unisphere; MS Exchange (full differential); SQL system/log; VMware VM-level restores.
Archway v2.0: VMAXe and VPLEX with RecoverPoint; Exchange ItemPoint recovery; Hyper-V; SharePoint.
RM 5.4.1: ItemPoint Exchange single mailbox restore; Oracle 11gR2.
RM 5.4.2: Hyper-V VSS support; deeper TF (TimeFinder) and SRDF integration; read-only mounting.
DPA v6.1: Avamar/Data Domain integration; UI refresh; performance/scaling; CommVault 9.
DPA (subsequent release): performance, scalability, and architecture changes; database migration; RA functionality.
DPA "Future": customizable dashboards/drill-downs; reporting on Data Domain; display of backup topology maps.

APM = Application Protection Manager, also known by its code name Archway. APM incorporates the functions of Replication Manager and Data Protection Advisor. The first major releases focus on Microsoft applications, but other applications will be added in future releases. RM and DPA are not at the end of their life; there are several use cases where APM and RM would co-exist. APM relies on RecoverPoint for data protection; it does not include its own replication technology. (Essentially, at GA the positioning is: for Microsoft apps on VNX, lead with Archway; for most everything else, lead with Replication Manager. As the Archway roadmap fills out, more of those use cases will migrate to Archway.)

11 RecoverPoint integration with Cluster Enabler (RP/CE)

12 Microsoft Failover Cluster
A high-availability restart solution: a node or resource failure automatically restarts the failed resources on another node where resources are available.

Microsoft failover cluster is a networked grouping of Windows servers with sufficient redundancy of software and hardware that a single point of failure will not significantly disrupt service. In the event of a system, application, or network failure, a resource failure, or load-balancing changes, a failover cluster can automatically transfer control of all failed system resources to other systems within the cluster, minimizing or eliminating application downtime by restarting the application on another node.

Microsoft failover cluster is based on the shared-nothing clustering model: although several nodes in the cluster may have access to a device or resource, the resource is owned and managed by only one system at a time.

[Diagram: a node fails; its resource groups (Microsoft SQL, Microsoft Exchange, Oracle) restart on surviving nodes. Microsoft failover cluster provides high availability with a shared-nothing cluster model.]

13 RecoverPoint/Cluster Enabler for Microsoft Failover Cluster
[Diagram: Site A and Site B connected by LAN/WAN and a private interconnect; cluster nodes with RecoverPoint/CE installed at both sites, replicating through RecoverPoint; a File Share Witness at a third site.]

This slide shows the topology of RecoverPoint/CE for Microsoft failover cluster with Majority Node Set. When a File Share Witness is used, it provides a tie-breaker in an even-node cluster configuration for automated disaster restart operations. The File Share Witness should be deployed at a third site that is immune from a failure, fault, or disaster event impacting either Site A or Site B. Failover cluster supports 16 nodes with Windows Server 2003/2008 using Majority Node Set, with or without a File Share Witness.

14 RP/CE & Hyper-V Overview
Cluster Enabler 4 supports Hyper-V with failover clusters: failover is of the virtual machine (VM) resource. RecoverPoint/CE is deployed in the Hyper-V parent partition, and cluster relocation is at the VM level.

Hyper-V Live Migration and Quick Migration are supported between nodes at the same or different sites: Live Migration is supported with RecoverPoint CRR/CLR synchronous replication, while Quick Migration is supported with both synchronous and asynchronous replication (a small sketch of this support matrix follows). Use these for planned maintenance, such as VM relocation for hardware and software upgrades, and for VM workload redistribution, moving VMs between physical hosts.

This slide provides an overview of the Hyper-V features supported with Cluster Enabler 4 and RecoverPoint, including Live Migration and Quick Migration.
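A tiny encoding of the migration-support rule just stated (synchronous-only for Live Migration, synchronous or asynchronous for Quick Migration); purely illustrative:

```python
def migration_supported(migration: str, replication: str) -> bool:
    """Live Migration requires synchronous CRR/CLR replication;
    Quick Migration works with synchronous or asynchronous."""
    rules = {
        "live": {"synchronous"},
        "quick": {"synchronous", "asynchronous"},
    }
    return replication in rules[migration]

assert migration_supported("live", "synchronous")
assert not migration_supported("live", "asynchronous")
assert migration_supported("quick", "asynchronous")
```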

15 Disaster Recovery for Hyper-V
Automated failover operations for Hyper-V environments.

[Diagram: Site A and Site B connected by LAN/WAN and a private interconnect; cluster nodes with RecoverPoint/CE installed; Prod 1 replicating to Target 1 and Prod 2 to Target 2 through RecoverPoint; Majority Node Set with File Share Witness at a third site.]

This slide shows the topology for Microsoft failover cluster using Hyper-V with Majority Node Set. When a File Share Witness is used, it provides a tie-breaker in a four-node physical cluster running virtual machines on each physical node. The Majority Node Set with File Share Witness should be deployed at a third site that is immune from a failure, fault, or disaster event impacting either Site A or Site B. Hyper-V with failover clusters supports 16 nodes with Windows 2008 R2.

16 Hyper-V Live Migration
Planned hardware maintenance on a physical server requires moving virtual machine(s) to another physical server.

[Diagram: a VM live-migrates from Site A to Site B while RecoverPoint CRR synchronous replication keeps the R1/R2 volume pairs in sync; Majority Node Set with File Share Witness.]

This is an example of using Hyper-V Live Migration in a RecoverPoint environment with synchronous replication. Live Migration with asynchronous replication is not supported at this time.

17 RP/CE 4.1.2 (new release), supported with RP 3.4.2 code
Supports the RecoverPoint CLR topology; failover is still limited to the CRR copy (you cannot fail over to the CDP copy).
New CE group policy "Automatic Failover": provides flexibility that was absent from the previous "Restricted Group" policy (see the next table); no need to perform a RecoverPoint manual failover in certain use cases.
Supports Windows Server and Server Core for Windows Server 2008.
Up to 8 nodes per site with Windows Server 2003.
Up to 16 nodes per site with Windows Server 2008 and Windows Server 2008 R2.
Up to 16 child partitions with Hyper-V.

RecoverPoint provides full support for data replication and disaster recovery for clustered applications.

18 Using RecoverPoint/Cluster Enabler
Failover behavior by disaster scenario, under the two consistency-group policies ("Restricted Group" vs. "Automatic Failover"):

Replication link is down (heartbeat link up): no failover. Restricted Group: no (can bypass using the Manual Failover procedure). Automatic Failover: yes, with a risk of data inconsistency/loss due to the unknown replication state (see the sketch below).
Heartbeat link is down (replication link up): handled according to MS Cluster Majority Node Set rules.
Replication and heartbeat links are both down: handled according to MS Cluster Majority Node Set rules.
RPA cluster failure at one site: any attempt to fail over or change a cluster group's online state will fail (can bypass using the Manual Failover procedure).
Cluster node failure: automatic failover, according to MS Cluster failover rules.
Clustered application failure: automatic failover.
HBA failure: automatic failover.
Storage failure affecting cluster nodes only (replication link up): automatic failover.
Storage failure affecting cluster nodes and the RPA cluster (replication will fail): automatic failover.

Using RecoverPoint/Cluster Enabler:
Supports Windows 2003 Cluster Services and Windows 2008 Failover Clusters; supported with Hyper-V.
Repurposing does not work: MSCS failover enforces that remote cluster nodes cannot be manipulated; all resources of those nodes are unavailable outside the cluster; no access, mounting, or multi-masking of LUNs to non-clustered hosts.
Cluster Enabler 4.1 supports CSV (requires Windows Server 2008 R2 SP1).
Supports Majority Node Set; other quorum modes are not supported.
Only supports CRR; toleration of CLR will be added in 2H 2011 (addresses the repurposing issue).

RecoverPoint provides full support for data replication and disaster recovery for clustered applications.
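For the first scenario, the policy difference boils down to a simple decision rule; here is an illustrative encoding (the function and parameter names are mine, not the product's):

```python
def can_fail_over(replication_link_up: bool, policy: str,
                  manual_override: bool = False) -> bool:
    """When the replication link is down, 'Restricted Group' blocks
    failover unless the Manual Failover procedure is used, while
    'Automatic Failover' proceeds at the risk of data loss because
    the replication state is unknown."""
    if replication_link_up:
        return True
    if policy == "Automatic Failover":
        return True          # allowed, but replication state is unknown
    return manual_override   # Restricted Group: only via Manual Failover

assert not can_fail_over(False, "Restricted Group")
assert can_fail_over(False, "Restricted Group", manual_override=True)
assert can_fail_over(False, "Automatic Failover")
```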

19 DEMO: Protecting SQL 2008 using RP/CE (movie)

20 RecoverPoint integration with MS Hyper-V

21 Supported Hyper-V configurations
Configuration 1: Clustered Hyper-V using a cluster disk (each VM resides on its own replicated cluster disk).
VM-per-LUN topology: single VM per LUN. CE supported: yes.
Recovery without CE (manual): image access on the DR host; online the cluster disk; online the clustered VMs.
Recovery with CE: automated failover for cluster node failure, HBA failure, application failure, cluster disk failure, or an entire-site SAN outage (though the failed cluster disk will require a manual online); manual failover when only the replication link fails, or when only the public or private cluster links fail.

Configuration 2: Clustered Hyper-V with a CSV volume (multiple VMs reside on a single replicated CSV volume).
VM-per-LUN topology: multiple VMs per LUN. CE supported: no.
Recovery without CE: online the CSV disk. Recovery with CE: N/A (CSV not supported).

Configuration 3: Standalone Hyper-V host using attached disks (CSV is only supported in a cluster, so each VM resides on its own replicated volume).
VM-per-LUN topology: not relevant. CE supported: no (not a clustered solution).
Recovery without CE: create a new VM with no virtual disk, then attach the replicated VHD file as the VM's disk.

22 RecoverPoint integration with SQL Server

23 RecoverPoint bookmark types
RecoverPoint bookmark types, with their pros and cons (a small chooser sketch follows the table):

Periodic/Manual bookmarks (crash consistent):
Pros: do not require any application integration.
Cons: DB recovery is exposed to data corruption/loss; might require SQL debugging or mounting a different bookmark.

VDI bookmarks using KUTILS (application consistent):
Pros: DB data are 100% consistent; during backup the SQL DB details are exported into a metadata file to guarantee identical restored-DB characteristics; not exposed to human error in recovery.
Cons: cannot be combined with third-party log backups (it uses the FULL VDI backup command); recovery requires the metadata file to import the SQL DB details; the recovered SQL Server cannot already contain the existing DB name (a new name is required).

VSS bookmarks using KVSS (application consistent):
Pros: DB data are 100% consistent; no metadata file is needed; can recover the DB under any name.
Cons: exposed to human error in recovery; requires a functional SQL VSS Writer ("Healthy" state) and a VSS attribute-removal procedure.

RecoverPoint provides full support for data replication and disaster recovery for Microsoft SQL Server platforms.
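One way to read the table: the bookmark type follows from a few requirements. An illustrative chooser; the criteria names are mine, not RecoverPoint's:

```python
def choose_bookmark_type(need_app_consistency: bool,
                         target_has_same_db_name: bool,
                         vss_writer_healthy: bool = True) -> str:
    """Encodes the trade-offs in the table above: crash-consistent
    bookmarks need no application integration; VSS/KVSS can recover
    under any name but requires a healthy SQL VSS Writer; VDI/KUTILS
    cannot restore to a server already holding the same DB name."""
    if not need_app_consistency:
        return "Periodic/Manual (crash consistent)"
    if vss_writer_healthy:
        return "VSS using KVSS"       # any DB name, no metadata file
    if target_has_same_db_name:
        raise ValueError("no consistent option: fix the VSS Writer, "
                         "or restore under a new DB name via VDI")
    return "VDI using KUTILS"         # needs the exported metadata file

print(choose_bookmark_type(True, target_has_same_db_name=False))
```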

24 DEMO #1: SQL VDI Backup/Restore using Kutils
1) Back up the entire production database using the RP Hot Backup command.
2) Select the RecoverPoint Hot Backup bookmark.
3) Mount the bookmark.
4) Restore the entire database on the DR SQL Server.

25 DEMO #2: SQL VSS Backup/Restore using KVSS
1) Create a SQL VSS bookmark using the KVSS utility on the production host.
2) Verify the VSS bookmark was created successfully.
3) Mount the bookmark.

26 Continuing the KVSS restore:
4) Online the replica volumes.
5) Remove the Hidden and Read-Only disk attributes.
6) Assign drive letters.
7) Attach the SQL database using SQL Management Studio.
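Steps 4 through 7 can also be scripted. A hedged sketch that drives diskpart and sqlcmd from Python; the disk/volume numbers, drive letter, server name, database name, and file paths below are placeholders for your environment, not values from the demo:

```python
import subprocess
import tempfile

# Steps 4-6: online the disk, clear attributes, assign a drive letter.
# Disk/volume numbers and the drive letter are assumptions; diskpart
# must run elevated on the DR host.
diskpart_script = """
select disk 2
attributes disk clear readonly
online disk
select volume 5
attributes volume clear hidden
assign letter=R
"""
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(diskpart_script)
    script_path = f.name
subprocess.run(["diskpart", "/s", script_path], check=True)

# Step 7: attach the database (hypothetical DB name and file paths),
# equivalent to the attach performed in SQL Management Studio.
attach_sql = (
    "CREATE DATABASE SalesDB ON "
    "(FILENAME = 'R:\\SQLData\\SalesDB.mdf'), "
    "(FILENAME = 'R:\\SQLLogs\\SalesDB_log.ldf') "
    "FOR ATTACH;"
)
subprocess.run(["sqlcmd", "-S", "DR-SQL-01", "-Q", attach_sql], check=True)
```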

27 EMC RecoverPoint and SQL 2012 “AlwaysOn”

28 Dynamic Business Intelligence and DR with SQL Server 2012 AlwaysOn and EMC RecoverPoint
Local high availability is provided by SQL Server 2012 ("Denali") AlwaysOn Availability Groups. Local data accessibility for destructive data access, enabling dynamic BI, is provided by RecoverPoint. Remote-site crash-consistent database copies are available over extended distances, with WAN optimization and no SQL Server overhead, with RecoverPoint.

Dynamic business intelligence is enabled: rapid copies of large OLTP production databases are made available to business intelligence teams within minutes. BI SQL Server instances must be fully isolated from production environments and networks. Database copies need to be writable to support dynamic BI lookup scenarios, while the production database copies are guaranteed to remain intact.

RecoverPoint provides full support for data replication and disaster recovery for clustered applications.

29

30 SQL Server AlwaysOn and EMC RecoverPoint: The Best of Both Worlds
SQL Server AlwaysOn provides: rapid local HA; near-site synchronous replication; distant host replication; read-only data accessibility for medium BI workloads.
EMC RecoverPoint adds: any-point-in-time recovery and access to writable copies of data; rewind-in-time recovery over extended periods of time; long-haul block-level replication; writable data accessibility for any BI workload.


