VNX SnapView Clones


SnapView Clones

Upon completion of this module, you should be able to:
- Describe SnapView Clone operations
- Manage SnapView Clones

This module focuses on the purpose of SnapView Clone, its operations, and management.

Lesson 1: SnapView Clone – Theory and Operation

During this lesson the following topics are covered:
- Purpose of SnapView Clones
- SnapView Clone requirements
- SnapView Clone managed objects
- SnapView Clone theory of operations

This lesson covers the purpose of SnapView Clone and its operations.

SnapView Clones

- SnapView Clone – a full copy of a LUN internal to a storage system
- Population latency: clones take time to populate (synchronize) because the data in the LUN must be duplicated
- Protected: no changes are made to the clone unless the user writes to it
- Two-way synchronization: clones may be incrementally updated from the source LUN, and source LUNs may be incrementally updated from a clone
- Copies real data: a clone must be exactly the same size as its source LUN

Unlike SnapView Snapshots, clones are full copies of the source LUN. Because clones allow synchronization in both directions, a clone must be the same size as its source LUN; replication software that allows only one-way copies, such as SAN Copy, does not have this restriction. Clones let users create fully populated point-in-time copies of LUNs within a single storage system. They are packaged with SnapView and extend its functionality by offering fully populated copies alongside the pointer-based copies that Snapshots provide. For users familiar with MirrorView, clones can be thought of as mirrors within an array rather than across arrays, with the added ability to choose the direction of synchronization between source LUN and clone. Clones are also available for read and write access when fractured, unlike secondary mirror images, which must be promoted, or made accessible via a Snapshot or a clone, before their data can be accessed. Because clones are fully populated copies of data, they are highly available and can withstand SP or VNX reboots or failures, as well as path failures (provided PowerPath is installed and properly configured). Note that clones are designed for users who want to periodically fracture the LUN copy and then synchronize or reverse synchronize it; users who simply want a mirrored copy to protect production data should implement RAID 1 or RAID 1/0 LUNs.

SnapView Clone Operations

- Create Clone Group: changes the LUN to a source LUN
- Add clone to Clone Group: changes the LUN to a clone
- Synchronize: data is copied from the source LUN to the clone
- Fracture clone (may be a consistent operation): stops updating the clone with new source LUN data
- Remove clone: the clone becomes an independent LUN; it cannot be synchronizing or reverse synchronizing at the time
- Destroy Clone Group: the source LUN becomes an independent LUN

Because clones use MirrorView-type technology, the rules for image sizing are the same – source LUNs and their clones must be exactly the same size. This slide shows the operations that may be performed on clones. The first step is the creation of a Clone Group, which consists of a source LUN and 0 to 8 clones. This operation is not allowed if the Clone Private LUNs (CPLs), discussed later, have not been allocated. Once a Clone Group exists, clones may be added to it, then synchronized and reverse synchronized as desired. A synchronized clone may be fractured: this stops writes to the source LUN from being copied to the clone but maintains the relationship between source LUN and clone, and a fractured clone may be made available to a secondary host. A set of clones may be fractured at the same time to ensure data consistency; in that case, updates from the source LUNs to the clones are stopped simultaneously and the clones are then fractured. Note that there is no concept of a 'consistency group' – clones are managed individually after being consistently fractured. Removing a clone from a Clone Group turns it back into an ordinary LUN and permanently removes the relationship between the source LUN and that clone; data on the clone LUN is not affected, but the ability to use the LUN for synchronization or reverse synchronization is lost. Destroying a Clone Group removes the ability to perform any clone operations on the source LUN.
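The lifecycle above can be sketched as a small state machine. This is a minimal, hypothetical Python model paraphrasing the rules on this slide (at most 8 clones per group, removal forbidden while synchronizing) — an illustration, not EMC's actual software.

```python
class CloneGroup:
    """Toy model of a SnapView Clone Group: one source LUN, 0-8 clones."""

    MAX_CLONES = 8  # a Clone Group holds a source LUN and 0 to 8 clones

    def __init__(self, source_lun):
        self.source_lun = source_lun
        # clone LUN name -> state: 'synchronizing' | 'synced' | 'fractured'
        self.clones = {}

    def add_clone(self, lun):
        if len(self.clones) >= self.MAX_CLONES:
            raise ValueError("a Clone Group holds at most 8 clones")
        # adding a clone kicks off its initial full synchronization
        self.clones[lun] = "synchronizing"

    def finish_sync(self, lun):
        self.clones[lun] = "synced"  # clone now mirrors the source LUN

    def fracture(self, lun):
        # stops updates from the source but keeps the source/clone relationship
        self.clones[lun] = "fractured"

    def remove_clone(self, lun):
        # not allowed while (reverse) synchronizing; clone becomes an ordinary LUN
        if self.clones[lun] == "synchronizing":
            raise RuntimeError("cannot remove a clone while it is synchronizing")
        del self.clones[lun]
```

A typical cycle would then be: `add_clone` → `finish_sync` → `fracture` (present the copy to a backup host) → resynchronize → fracture again, with `remove_clone` only once the clone is no longer synchronizing.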

Clone Initial Synchronization

- Copies the contents of the source LUN to the clone, overwriting the clone with source LUN data
- Host access is allowed to the source LUN at all times
- No host access to the clone while it is not fractured

This slide shows the initial synchronization of the clone. Synchronization is the process of copying data from the source LUN to the clone. Upon creating the association of a clone with a particular source, this is a full synchronization – all extents (regions) on the source LUN are copied to the clone to provide a completely redundant replica. Subsequent synchronizations copy only the data that has changed on the source since the previous synchronization, overwriting any writes made directly to the clone by a secondary server that accessed it while it was fractured; it is essentially an update of the clone. Once synchronized with the incremental updates from the source LUN, the clone is ready to be fractured again to establish a new point-in-time reference. Source LUN access is allowed during synchronization. The clone, however, is inaccessible during synchronization, and attempted host I/Os are rejected.

Clone Private LUN (CPL) and Fracture Log

Fracture Log:
- Bitmap stored in SP memory
- Tracks modified extents between the source LUN and each clone
- Allows incremental resynchronization – in either direction
- Source LUN size determines extent size: 1 block for each GB of source LUN size, with a minimum of 128 KB
- Example: 512 GB source LUN – extent size = 512 blocks = 256 KB

Clone Private LUN:
- One private LUN for each SP; each must be 1 GB or greater
- Used for all clones owned by the SP
- No clone operations are allowed until the CPLs are created
- Contains persistent Fracture Logs
- Classic LUNs only

The Clone Private LUN contains the Fracture Log, which allows incremental resynchronization of data. This reduces the time taken to resynchronize and lets customers make better use of clone functionality. The term extent, used above, refers to the granularity at which changes are tracked; this granularity depends on the size of the source LUN. The extent size is 1 block for each GB of source LUN size, with a minimum of 128 KB, so up to a source LUN size of 256 GB the extent size is 128 KB. A source LUN of 512 GB therefore has an extent size of 512 blocks = 256 KB. Because the Fracture Log is stored on disk in the Clone Private LUN, it is persistent and can withstand SP reboots or failures, storage system failures, and power failures. This lets customers benefit from incremental resynchronization even after a complete system failure. A Clone Private LUN is a Classic LUN of at least 1 GB that is allocated to an SP and must be created before any other clone operations can commence. Note that any space above the required minimum is not used by SnapView.
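The extent-sizing rule above is easy to check numerically. The following sketch assumes the standard 512-byte block; it simply restates the rule quoted on the slide (1 block per GB, 128 KB minimum) and is not EMC's code.

```python
BLOCK_BYTES = 512  # standard 512-byte block assumed

def extent_size_bytes(source_lun_gb):
    """Fracture-log extent size: 1 block per GB of source LUN, minimum 128 KB."""
    size = source_lun_gb * BLOCK_BYTES
    return max(size, 128 * 1024)

# Up to a 256 GB source LUN the 128 KB minimum applies;
# a 512 GB source LUN gets 512 blocks = 256 KB extents.
```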

Clone Reverse Synchronization

- Restores the source LUN with the contents of the clone, overwriting source LUN data with clone data
- Host access is allowed to the source LUN
- No host access to the clone
- The source LUN instantly appears to contain the clone data

Reverse synchronization allows clone content to be copied from the clone back to the source LUN after the clone has been initially synchronized. SnapView implements Instant Restore, a feature that allows copy-on-demand, or out-of-sequence, copies: as soon as the reverse synchronization begins, the source LUN appears identical to the clone. The source LUN must briefly be taken offline before the reverse synchronization starts so that the host sees the new data structure. During both synchronization and reverse synchronization, server I/Os (reads and writes) can continue to the source. The clone, however, is not accessible for secondary server I/Os during either operation; the user must ensure that all server access to the clone is stopped (including flushing any data cached on the server to the clone) before initiating a synchronization or reverse synchronization.
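The Fracture Log makes both directions incremental: only extents flagged as modified are copied, source-to-clone or clone-to-source. Here is a toy model of that idea — extent granularity and the CPL's on-disk persistence are abstracted away, and the function names are my own, not SnapView's.

```python
def incremental_sync(source, clone, dirty_extents, reverse=False):
    """Copy only the extents flagged in the fracture-log bitmap.

    source/clone are lists of extent contents; dirty_extents is a set of
    indices of extents that differ. reverse=True models a reverse
    synchronization (clone restored onto the source LUN).
    Returns the number of extents copied.
    """
    src, dst = (clone, source) if reverse else (source, clone)
    for i in sorted(dirty_extents):
        dst[i] = src[i]
    copied = len(dirty_extents)
    dirty_extents.clear()  # the log is reset once the images match again
    return copied
```

For example, if only extents 1 and 3 changed on the source since the last fracture, a resynchronization copies just those two extents rather than the whole LUN.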

Reverse Synchronization and Protected Restore

Non-Protected Restore:
- Host writes to the source LUN are mirrored to the clone
- When the reverse synchronization completes, the reverse-synchronized clone remains unfractured; other clones remain fractured

Protected Restore:
- Host writes to the source LUN are not mirrored to the clone
- All clones are fractured
- Protects against source LUN corruption
- Configured at the individual clone level, but must first be globally enabled with the 'Allow Protected Restore' checkbox

A protection option during a reverse synchronization is to enable the protected restore option for the clone. Protected restore ensures that when the reverse synchronization begins, the state of the clone is maintained. When protected restore is not explicitly selected for a clone, a normal restore occurs. The goal of a normal restore is to send the contents of the clone to the source LUN, while allowing updates to both, and to bring the clone and the source LUN to an identical data state. To do that, writes coming into the source LUN are mirrored over to the clone performing the reverse synchronization, and once the reverse synchronization completes, the clone is not fractured from the source LUN. On the other hand, when restoring a source LUN from a golden-copy clone, that golden copy needs to remain as-is; the user wants to be sure that source LUN updates do not affect the contents of the clone. So, for a protected restore, writes coming into the source LUN are NOT mirrored to the protected clone, and once the reverse synchronization completes, the clone is fractured from the source LUN to prevent updates to it.
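The two restore modes differ in exactly two observable ways, which the following tiny sketch captures (an illustrative summary of the slide, not EMC's implementation):

```python
def reverse_sync_outcome(protected):
    """Return (source_writes_mirrored_to_clone, clone_state_on_completion)
    for the two reverse-synchronization modes described above."""
    if protected:
        # golden-copy restore: the clone is preserved as-is and is
        # fractured when the reverse synchronization completes
        return (False, "fractured")
    # normal restore: source writes mirror to the clone, which stays
    # unfractured so source and clone converge to an identical state
    return (True, "unfractured")
```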

Trespassing Source LUNs and Clones

Source LUN trespass:
- The peer SP acquires the clones
- The peer SP acquires the Fracture Logs (through the CPL)
- The peer SP resumes any synchronization operations

Clone trespass:
- A user or host can initiate a trespass of a fractured clone
- Desirable for load balancing across SPs
- A fractured clone need not be on the same SP as its source LUN
- Synchronization trespasses the clone as necessary

A clone LUN can be assigned to the SP opposite its source LUN; however, the clone LUN is trespassed during the clone synchronization process and returned to its SP when it is fractured. Trespassing a clone is only allowed after it is fractured. While in a non-fractured relationship, a clone is trespassed if its source LUN is trespassed: any clone that is not fractured trespasses along with its source LUN. Once fractured, a clone is treated like a regular LUN and trespasses as required. If a clone was synchronizing when it was trespassed, the peer SP continues the synchronization. Information about differences in data state between source LUN and clone is kept in the Clone Private LUNs (CPLs); the CPLs are always identical and ensure that each SP has the same view of the clones.

Lesson 1: Summary

During this lesson the following topics were covered:
- Purpose of SnapView Clones
- SnapView Clone requirements
- SnapView Clone managed objects
- SnapView Clone theory of operations

This lesson covered the purpose of SnapView Clone and its requirements.

Lesson 2: Clone Configuration

During this lesson the following topics are covered:
- Clone configuration
- Creating and populating Clone Groups

This lesson covers SnapView Clone configuration and the creation of Clone Groups.

Configuring the Clone Private LUNs

Clone Private LUNs (CPLs) are configured with the Configure Clone Settings option on the Data Protection > Clones menu. Note in the example that only Classic LUNs (LUNs configured in a RAID Group) are eligible to be CPLs. The dialog allows the addition of exactly two CPLs (no more and no fewer) and the global enabling of the Protected Restore feature; if the feature is globally enabled, individual clones may then be assigned the Protected Restore property. Select the LUNs and click Add to move them to the lower window, or simply double-click a LUN. Once created, the Clone Private LUNs are displayed under the Storage > LUNs menu by selecting Private LUNs. Note that the CPLs must be 1 GB or larger.

Creating a Clone Group

Create the Clone Group from Data Protection > Clones by selecting Create Clone Group. The Clone Group must be given a name, and a LUN must be selected from the LUNs to be Cloned list. Click Yes to confirm the operation. Once the Clone Group is created, it is displayed under the Source LUNs tab of the Data Protection > Clones menu.

Adding a Clone to a Clone Group

From the Data Protection > Clones menu, add a single clone by selecting the Source LUNs tab, right-clicking the source LUN, and selecting Add Clone. The Add Clone window displays the LUNs that meet the requirements for clone creation; for example, the slide shows all LUNs of the same capacity as the source LUN (6 GB). All LUN types are eligible: Thin, Thick, and Classic LUNs. The operation asks for confirmation and then returns a Success message. This operation must be performed for each clone in turn. Choose the clone LUN that will hold the clone data and set the parameters for the clone.

Clone Group Properties

Once created, the clone is visible under the Clone LUNs tab. Highlight the clone and select the Properties tab, or right-click the clone and select Properties from the menu. Only the second digit of the clone number (here 0100000000000000) is significant; it shows which clone this is within the group. The following operations may be performed on a Clone Group:
- Add Clone – clones are added one at a time
- Destroy Clone Group – only if no clones are in it
- Properties – view the properties of the Clone Group

Clone Operations

From the Source LUNs tab, expand the source LUN container and right-click a clone LUN to view the available options; this can also be done from the Clone LUNs tab. These operations may be performed on a clone:
- Synchronize – only if the clone is fractured
- Reverse Synchronize – only if the clone is fractured
- Fracture – only if the clone is in a non-fractured state
- Delete – permanently destroys the source LUN–clone relationship
- Properties – view the properties of the Clone Group

Clone Operations – Consistent Fracture

A consistent fracture operation lets users establish a point-in-time set of replicas that maintains write-ordered consistency, which in turn gives users restartable point-in-time replicas of multi-LUN datasets. This can be useful in database environments where users have multiple LUNs with related data. A clone consistent fracture operates on a set of clones belonging to write-order-dependent source LUNs. The associated source LUN for each clone must be unique, meaning users cannot perform a consistent fracture on multiple clones belonging to the same source LUN. The example shows two clones selected to be fractured; each clone belongs to a unique source LUN. Note: highlight the first clone and use the Ctrl key to select the second (and any additional) clones. As with other user-initiated fractures, the clones appear as Administratively Fractured. If a failure occurs during a consistent fracture operation and one of the clones cannot be fractured, all other clones are queued for resynchronization.

Clone Time of Fracture

- Allows the user to know the date/time when the clone images were administratively fractured
- Clones stamp the time ONLY when they were administratively fractured by the user and the images were a point-in-time copy (consistent state)
- The time is stored persistently inside the clone's private area in the PSM
- All clones involved in a Consistent Fracture operation report the same time of fracture

Clone Time of Fracture allows the user to know the date and time when the clone's images were administratively fractured. Clones stamp the time ONLY when they were administratively fractured by the user and the images were a point-in-time copy (consistent state). The time is stored persistently inside the clone's private area in the PSM, and all clones involved in a Consistent Fracture operation report the same time of fracture. You can view the time of fracture by issuing the CLI -listclone command with either the -all or the -timeoffracture option. The time of fracture is displayed in the following cases: the clone was administratively fractured and its state is consistent; the clone is fractured because a reverse synchronization (protected restore enabled) completed; or the clone was administratively fractured (including media failures) during a reverse synchronization (protected restore enabled). The time of fracture is not displayed when the state is not administratively fractured and/or the time of fracture is not stored. Specific examples: the clone is performing a synchronization or reverse synchronization; the condition of the clone is Normal; the clones were fractured because a reverse synchronization started within the Clone Group; the clone's state is out-of-sync or reverse-out-of-sync (protected restore disabled); or the clones were fractured due to a media failure (except in the protected reverse synchronization case).
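The key property of a consistent fracture is that every clone in the set is stamped with one and the same time of fracture. A toy model of that behavior (illustrative only — the real timestamp is stored persistently in the PSM, which this sketch does not model):

```python
import time

def consistent_fracture(clones):
    """Fracture a write-ordered set of clones with a single shared timestamp.

    clones is a list of dicts, one per clone (each belonging to a unique
    source LUN, as the slide requires).
    """
    stamp = time.time()  # one timestamp for the whole write-ordered set
    for clone in clones:
        clone["state"] = "administratively fractured"
        clone["time_of_fracture"] = stamp
```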

Clone Configuration Wizard

Clones can also be configured with the Clone Wizard. Selecting Clone Wizard from the Data Protection > Clones menu launches a wizard that steps through the tasks shown above, each designed to be simple. If no Clone Private LUNs have been configured, for example, the wizard creates new LUNs. The Clone Private LUNs are not explicitly mentioned in the wizard; instead, they are configured in the final build phase.

Lesson 2: Summary

During this lesson the following topics were covered:
- Clone Private LUN configuration
- Clone operations

This lesson covered SnapView Clone and its operations.

Summary

Key points covered in this module:
- SnapView Clone is a full copy of a LUN internal to a VNX storage system
- Clones take time to duplicate data from the source
- A clone must be exactly the same size as its source LUN

This module covered the purpose of SnapView Clone, its operations, and related considerations.
