
An Open Source approach to replication and recovery.


1 An Open Source approach to replication and recovery.

2 Current Challenges
- Minuscule IT budgets.
- Backup tapes don't meet capacity needs, or are too costly.
- Most RAID levels don't protect against certain kinds of data loss.
- Need quick recovery of virtual disks and guest operating systems.
- Locked in to a single vendor for offsite storage.

3 Vendor-Specific Solutions
- Costly: upwards of $60,000 for infrastructure.
- Features are only available between like systems.
- Choose two out of three: fast, reliable, cheap.
- Yearly support costs can dampen most IT budgets.
- Typical usable storage capacity: 2.8 TB from a single shelf; the rest goes to snapshots and filesystem overhead.

4 An Open Source Solution
- OpenSolaris with ZFS and the iscsitarget software.
- Relatively inexpensive compared to third-party vendors.
- Can mix and match hardware.
- ZFS: virtually unlimited capacity!
- Room for technical creativity when building scripts.

5 Cons of OpenSolaris iSCSI
- Not supported by VMware (yet).
- OS and filesystem learning curve.
- Limited to 1 Gb/s of bandwidth per SAN (until 10 Gb/s is released).
- No support for MTU 9000 (jumbo frames).

6 Necessary Tools for Success
- ZFS filesystem: /sbin/zpool and /sbin/zfs; snapshots.
- iscsitadm (install via pkg).
- SSH keygen / PGP.
- A replication script (e.g. zfs-replicate.sh).
- cron.
- Mail.
- Enable VMware LVM/snapshot support.
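For the SSH key setup listed above, a minimal sketch (the offsite hostname and key parameters are illustrative; an empty passphrase is what lets cron run the replication script unattended):

```shell
# Generate a key pair with no passphrase so cron jobs need no prompt.
ssh-keygen -t rsa -b 2048 -f ~/.ssh/id_rsa -N ""
# Install the public key on the offsite host (hostname is illustrative).
cat ~/.ssh/id_rsa.pub | ssh username@offsite 'cat >> ~/.ssh/authorized_keys'
```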

7 What is zpool/ZFS?
- ZFS is a filesystem designed by Sun Microsystems for the Solaris operating system. Features include support for high storage capacities (16 exabytes per pool, i.e. 16 million terabytes), snapshots and copy-on-write clones, continuous integrity checking (256-bit checksums of every block) with automatic repair (scrubbing), RAID-Z, and ACLs. (Adapted from Wikipedia.) http://opensolaris.org/os/community/zfs/whatis/
- RAID 0 through 5: no protection against silent disk corruption and bit rot. http://blogs.sun.com/bonwick/entry/raid_z
- zpool: disk pool creation, I/O statistics, and health display (build RAID-Z, RAID-Z2, JBODs, stripes, mirrors, striped mirrors, etc.).
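A sketch of zpool in action (the device names c1t0d0 through c1t3d0 and the pool name "tank" are hypothetical; substitute your own hardware):

```shell
# Build a RAID-Z pool from four disks, then check its health.
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
zpool status tank
# Show pool-level I/O statistics.
zpool iostat tank
# Create a filesystem inside the pool for VM storage.
zfs create tank/vmstore
```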

8 OpenSolaris iscsitarget
- Uses industry-standard iSCSI target and initiator calls.
- Extremely easy to use; integrates seamlessly with zpool/ZFS.
- Allows the creation of soft-provisioned (thin) disks.
- ACL support: deny unwanted initiators.
- iscsitadm list target -v: details of each currently active target.
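The seamless zpool/ZFS integration can be sketched with the shareiscsi property, which exports a ZFS volume as an iSCSI target in one step (the pool and volume names are illustrative):

```shell
# Create a 100 GB sparse (soft-provisioned) zvol to back a LUN.
zfs create -s -V 100g tank/lun0
# Export the zvol as an iSCSI target directly from ZFS.
zfs set shareiscsi=on tank/lun0
# Verify: details of each currently active target.
iscsitadm list target -v
```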

9 Replication Script; Mail; cron
- Many scripts are available on the internet.
- Modified a script from the web (author unknown) to fit my requirements; still a work in progress, but it works (zfs-replicate.sh).
- Use cron to execute zfs-replicate.sh on a schedule.
- Mail the results to your disk admins.
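The author's zfs-replicate.sh is not reproduced here, but a minimal sketch of what such a script might do ties the pieces together (dataset, remote host, and admin address are all illustrative; passwordless SSH keys are assumed):

```shell
#!/bin/sh
# Minimal replication sketch -- not the author's zfs-replicate.sh.
DATASET=tank/vmstore
REMOTE=username@offsite
ADMIN=diskadmins@example.com
SNAP="$DATASET@$(date +%Y%m%d-%H%M)"

# Snapshot, ship it offsite, and mail the result either way.
/sbin/zfs snapshot "$SNAP"
if /sbin/zfs send "$SNAP" | ssh "$REMOTE" /sbin/zfs recv -F "$DATASET"; then
    echo "replicated $SNAP" | mailx -s "zfs replication OK" "$ADMIN"
else
    echo "FAILED $SNAP" | mailx -s "zfs replication FAILED" "$ADMIN"
fi
```

A matching crontab entry to run it nightly might be: `30 1 * * * /usr/local/bin/zfs-replicate.sh`.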

10 VMware Enterprise
- Configure VMware for iSCSI support.
- Add targets to the VMware host adapters.
- Enable LVM snapshot support: the GUI is far simpler than the console.

11 Snapshots: How To
- Reserve up to 40% of maximum pool capacity for snapshots, e.g. in a 10 TB pool: 6 TB data, 4 TB snapshots. This is at the admin's discretion and may be less depending on LUN configuration.
- A snapshot can consume up to the used capacity of the LUN/partition.
- Making a snapshot is a breeze:
  /sbin/zfs snapshot pool/partition@snapshot
- Send the snapshot over to the remote site:
  /sbin/zfs send pool/partition@snapshot | ssh username@host \
      /sbin/zfs recv pool/partition
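After the initial full send above, later runs can ship only the delta between two snapshots. A sketch, assuming pool/partition@1 has already been received in full on the remote host (names follow the slide's placeholders):

```shell
# Take a newer snapshot.
/sbin/zfs snapshot pool/partition@2
# -i sends only the blocks that changed between @1 and @2,
# which is far smaller than a full stream.
/sbin/zfs send -i pool/partition@1 pool/partition@2 | \
    ssh username@host /sbin/zfs recv pool/partition
```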

12 Replication Types
- ZFS send/recv over SSH (avg. speed: 25 MB/s with 256-bit encryption). Slowest replication; secure, and does not require a dedicated network. Different cipher algorithms yield different throughput; stunnel may be quicker.
- ZFS send/recv with mbuffer (80 to 120 MB/s; avg. 360 GB/hr):
  lev@siscsi-sas:~$ zfs send sd/Linux@1 | mbuffer -s 1024k -m 1500M -O 172.32.66.10:9999   # local site
  lev@piscsi-sas:~$ mbuffer -s 1024k -m 1500M -I 172.32.66.11:9999 | zfs recv pdrive/Linux   # remote site
  http://www.maier-komor.de/mbuffer.html
  Must have enough system memory (4+ GB) to support large buffers. Unencrypted; requires the network paths to be trusted.
- ZFS send/recv with netcat/rsh (avg. speed: 35 MB/s). Insecure data copy; included only for comparison.
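The netcat variant mentioned above might look like the following sketch (netcat flag syntax varies between implementations; the traditional GNU-style `-l -p` listener is shown, and the addresses mirror the mbuffer example):

```shell
# Remote site: listen on port 9999 and receive the stream in cleartext.
nc -l -p 9999 | /sbin/zfs recv pdrive/Linux
# Local site: send the snapshot to the remote listener.
/sbin/zfs send sd/Linux@1 | nc 172.32.66.10 9999
```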

13 Data Recovery
- Activate the offsite VM ESX host.
- Access the offsite disk storage LUNs: add the iSCSI remote host to VirtualCenter.
- Re-add the VM disks (LUNs) to the offsite VM infrastructure.
  GUI: Advanced Features: LVM: EnableResignature
  CLI: /proc/vmware/config/LVM/EnableResignature
- Resignatured LUNs will appear automatically in the storage section of VirtualCenter.
- Re-add VM guests from the resignatured LUNs.
- Offsite LUNs can also be added to the primary VM site for data recovery or testing.
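On the ESX 3.x service console, the CLI path above can be toggled as sketched here (the adapter name is illustrative; check your own host's iSCSI vmhba number):

```shell
# Enable VMFS resignaturing via the /proc node named on the slide.
echo 1 > /proc/vmware/config/LVM/EnableResignature
# Rescan the iSCSI adapter so the resignatured volumes are discovered.
esxcfg-rescan vmhba40
```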

14 Questions?
Tano Simonian
Email: tanniel@ucla.edu
Office: 310 794 9669

