2009 Dusan Baljevic HP-UX Dynamic Root Disk, Solaris Live Upgrade and AIX Multibos Dusan Baljevic Sydney, Australia
August 7, 2009 2 Cloning in Major Unix and Linux Releases
* AIX: Alternate Root and Multibos (AIX 5.3 and above)
* HP-UX: Dynamic Root Disk (DRD)
* Linux: Mondo Rescue, Clonezilla
* Solaris: Live Upgrade
August 7, 2009 3 HP-UX Dynamic Root Disk Features
Dynamic Root Disk (DRD) provides the ability to clone an HP-UX system image to an inactive disk. Supported on HP PA-RISC and Itanium-based systems. Supported on hard partitions (nPars), virtual partitions (vPars), and Integrity Virtual Machines (Integrity VMs) running the following operating systems, with roots managed by the following volume managers (except as specifically noted for rehosting):
o HP-UX 11i Version 2 (11.23) September 2004 or later
o HP-UX 11i Version 3 (11.31)
o LVM (all O/S releases supported by DRD)
o VxVM 4.1
o VxVM 5.0
August 7, 2009 4 HP-UX DRD Benefit: Minimizing Planned Downtime
Without DRD: software management may require extended downtime.
With DRD: install or remove software on the clone while applications continue running.
[Diagram: the original vg00 (boot disk and boot mirror) stays active while patches are installed on the inactive cloned vg00 (clone disk and clone mirror); activating the clone makes the changes take effect.]
August 7, 2009 5 HP-UX Dynamic Root Disk Features - continued
Product: DynRootDisk Version: A.184.108.40.206 (B.11.xx.A.3.4.x will be the current version number as of September 2009)
* The target disk must be a single physical disk or SAN LUN, large enough to hold all of the root volume file systems.
* DRD allows cloning of the root volume group even if the master O/S is spread across multiple disks (it is a one-way, many-to-one operation).
* On Itanium servers, all partitions are created; the EFI and HP-UX partitions are copied. This release of DRD does not copy the HPSP partition.
* The copy of lvmtab on the cloned image is modified by the clone operation so that it reflects the desired volume groups when the clone is booted.
August 7, 2009 6 HP-UX Dynamic Root Disk Features - continued
* Only the contents of vg00 are copied.
* Due to the system calls DRD depends on, DRD expects legacy Device Special Files (DSFs) to be present and the legacy naming model to be enabled on HP-UX 11i v3 servers. HP recommends that only a partial migration to persistent DSFs be performed.
* If the disk is currently in use by another volume group that is visible on the system, the disk will not be used. If the disk contains LVM, VxVM, or boot records but is not in use, one must use the “-x overwrite” option to tell DRD to overwrite the disk. Already-created clones will contain boot records; the drd status command shows the disk currently in use as an inactive system image.
August 7, 2009 7 HP-UX Dynamic Root Disk Features - continued
All DRD processes, including “drd clone” and “drd runcmd”, can be safely interrupted by issuing Control-C (SIGINT) from the controlling terminal or by issuing kill -HUP (SIGHUP). This action causes DRD to abort processing. Do not interrupt DRD with kill -9 (SIGKILL), which does not abort safely and performs no cleanup. Refer to the “Known Issues” list on the DRD web page (http://www.hp.com/go/DRD) for cleanup instructions if drd runcmd is interrupted.
The Ignite server is only aware of the clone if it is mounted during a make_*_recovery operation.
August 7, 2009 8 HP-UX Dynamic Root Disk Features - continued
DRD does not provide a mechanism for resizing file systems during a clone operation. After the clone is created, file system sizes can be changed manually on the inactive system without an immediate reboot:
1. The whitepaper “Dynamic Root Disk: Quick Start & Best Practices” describes resizing file systems other than /stand.
2. The same whitepaper describes resizing the boot (/stand) file system on an inactive system image.
Multiple mounts and unmounts can be avoided by using “drd mount” to mount the inactive system image before the first runcmd operation and “drd umount” to unmount it after the last runcmd operation.
Root volume groups with any name are supported (prior to version A.3.0, only vg00 was possible).
August 7, 2009 9 HP-UX Dynamic Root Disk Commands
The basic DRD commands are:
drd clone
drd runcmd
drd activate
drd deactivate
drd mount
drd umount
drd status
drd rehost
drd unrehost
August 7, 2009 10 HP-UX Dynamic Root Disk Commands - continued
“drd runcmd” can run specific Software Distributor (SD) commands on the inactive system image only: swinstall, swremove, swlist, swmodify, swverify, swjob.
Three other commands can be executed by drd runcmd:
view - used to view logs produced by commands that were executed by drd runcmd
kctune - used to modify kernel parameters
update-ux - performs v3-to-v3 OE updates
August 7, 2009 11 HP-UX Dynamic Root Disk Features – Dry Run
A simple mechanism for determining whether a chosen target disk is sufficiently large is to run a preview:
# drd clone -p -v -t <block DSF>
The block DSF is of the form:
* HP-UX 11i v2: /dev/dsk/cXtXdX
* HP-UX 11i v3: /dev/disk/diskX
The preview operation includes the disk space analysis needed to see whether the target disk is sufficiently large.
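The arithmetic behind that preview can be illustrated with a small portable sketch; the sizes below are invented, and on a real system they would come from bdf (vg00 usage) and diskinfo (target capacity):

```shell
# Toy sufficiency check: is the target disk at least as large as the
# space used by the root volume file systems? All numbers are examples.
fits() {  # args: target_disk_kb root_vg_used_kb
  if [ "$1" -ge "$2" ]; then
    echo "target disk is large enough"
  else
    echo "target disk is too small"
  fi
}

fits 17773584 14680064   # example: ~17 GB target, ~14 GB used
```

Note that the real drd preview does more than a raw size comparison; it also analyzes per-file-system layout and boot areas.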
August 7, 2009 12 HP-UX Dynamic Root Disk versus Ignite-UX
DRD has several advantages over Ignite-UX net and tape images:
* No tape drive is needed,
* No impact on network performance,
* No security issues from transferring data across the network.
MirrorDisk/UX keeps an "always up-to-date" image of the booted system; DRD provides a "point-in-time" image. The booted system and the clone may then diverge due to changes to either one. Keeping the clone unchanged is the recovery scenario.
DRD is not available for HP-UX 11.11, which limits options on those systems.
August 7, 2009 13 HP-UX Dynamic Root Disk Features - continued
Dynamic Root Disk (DRD) provides the ability to clone an HP-UX system image to an inactive disk, and then:
* Perform system maintenance on the clone while the HP-UX 11i system is online.
* Reboot during off-hours, significantly reducing system downtime.
* Utilize the clone for system recovery, if needed.
* Rehost the clone on another system for testing or provisioning purposes (on VMs or blades utilizing Virtual Connect with HP-UX 11i v3, LVM only; on VMs only with HP-UX 11i v2, LVM only).
* Perform an OE Update on the clone from an older version of HP-UX 11i v3 to HP-UX 11i v3 update 4 or later.
August 7, 2009 14 HP-UX – Dynamic Root Disk and /stand/bootconf
Errors in /stand/bootconf can cause the drd deactivate command to fail. (This is no longer true in the current release.)
The /stand/bootconf file on the booted system should contain device files for just the booted disk and any of its mirrors, not the clone target.
The /stand/bootconf file created on the clone target WILL contain the device file of the target itself (or, on an IPF system, the device file of the HP-UX partition of the target).
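A quick sanity check of this rule can be scripted; a sketch, with example device paths (disk7 as the booted disk, disk8 as the clone target):

```shell
# Warn if a bootconf-style file mentions the clone target. The helper is
# generic; the sample file and device names below are invented.
bootconf_clean() {  # args: bootconf_file clone_target_dsf
  if grep -q "$2" "$1"; then
    echo "WARNING: clone target listed in bootconf"
  else
    echo "bootconf looks clean"
  fi
}

tmp=$(mktemp)
printf 'l /dev/disk/disk7\n' > "$tmp"   # booted disk (and mirrors) only
bootconf_clean "$tmp" /dev/disk/disk8
rm -f "$tmp"
```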
August 7, 2009 15 HP-UX – Dynamic Root Disk – Rehosting The initial implementation of drd rehost only supports rehosting of an LVM-managed root volume group on an Integrity virtual machine to another Integrity virtual machine, or an LVM-managed root volume group on a Blade with Virtual Connect I/O to another such Blade. The rehost command does not enforce the restriction to blades and VMs, but other use of this command is not officially supported. As of version A.3.3, rehosting support for HP-UX 11i v2 has been added.
August 7, 2009 16 HP-UX – Dynamic Root Disk – Rehosting on HP-UX 11.31
After the clone and system information file have been created, the “drd rehost” command can be used to check the syntax of the system information file and copy it to /EFI/HPUX/SYSINFO.TXT in preparation for processing by auto_parms(1M) during the boot of the image. The following example uses the /var/opt/drd/tmp/newhost.txt system information file:
SYSINFO_HOSTNAME=myhost
SYSINFO_MAC_ADDRESS=0x0017A451E718
SYSINFO_DHCP_ENABLE=0
SYSINFO_IP_ADDRESS=220.127.116.11
SYSINFO_SUBNET_MASK=255.255.255.0
SYSINFO_ROUTE_GATEWAY=18.104.22.168
SYSINFO_ROUTE_DESTINATION=default
SYSINFO_ROUTE_COUNT=1
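Before running drd rehost, it can be worth checking the file for the expected keys; a minimal sketch (the key list and sample values are illustrative, not a complete description of what auto_parms accepts):

```shell
# Check a system information file for a few SYSINFO_* keys before
# handing it to "drd rehost -f". Sample values are placeholders.
sysinfo_ok() {  # arg: sysinfo_file
  for key in SYSINFO_HOSTNAME SYSINFO_MAC_ADDRESS \
             SYSINFO_IP_ADDRESS SYSINFO_SUBNET_MASK; do
    grep -q "^$key=" "$1" || { echo "missing $key"; return 1; }
  done
  echo "sysinfo file looks complete"
}

f=$(mktemp)
cat > "$f" <<'EOF'
SYSINFO_HOSTNAME=myhost
SYSINFO_MAC_ADDRESS=0x0017A451E718
SYSINFO_IP_ADDRESS=192.0.2.10
SYSINFO_SUBNET_MASK=255.255.255.0
EOF
sysinfo_ok "$f"
rm -f "$f"
```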
August 7, 2009 17 HP-UX – Dynamic Root Disk – Rehosting on HP-UX 11.31 - continued
To check the syntax of the system information file, without copying it to the /EFI/HPUX/SYSINFO.TXT file, use the preview option of the drd rehost command:
# drd rehost -p -f /var/opt/drd/tmp/newhost.txt
To copy it to the /EFI/HPUX/SYSINFO.TXT file, use the following command:
# drd rehost -f /var/opt/drd/tmp/newhost.txt
August 7, 2009 18 HP-UX – Dynamic Root Disk Examples
# drd clone -t /dev/disk/disk8 -x overwrite=true
======= 07/02/08 13:09:41 EST BEGIN Clone System Image (user=root) (jobid=syd59)
* Reading Current System Information
* Selecting System Image To Clone
* Selecting Target Disk
* Selecting Volume Manager For New System Image
* Analyzing For System Image Cloning
* Creating New File Systems
* Copying File Systems To New System Image
* Making New System Image Bootable
* Unmounting New System Image Clone
======= 07/02/08 13:42:57 EST END Clone System Image succeeded. (user=root) (jobid=syd59)
August 7, 2009 19 HP-UX – Dynamic Root Disk Examples - continued
# drd status
======= 07/02/08 13:45:42 EST BEGIN Displaying DRD Clone Image Information (user=root) (jobid=syd59)
* Clone Disk: /dev/disk/disk8
* Clone EFI Partition: Boot loader and AUTO file present
* Clone Creation Date: 07/02/08 13:09:46 EST
* Clone Mirror Disk: None
* Mirror EFI Partition: None
* Original Disk: /dev/disk/disk7
* Original EFI Partition: Boot loader and AUTO file present
* Booted Disk: Original Disk (/dev/disk/disk7)
* Activated Disk: Original Disk (/dev/disk/disk7)
======= 07/02/08 13:45:51 EST END Displaying DRD Clone Image Information succeeded. (user=root) (jobid=syd59)
August 7, 2009 20 HP-UX – Dynamic Root Disk Examples - continued
# drd activate
======= 07/02/08 13:48:03 EST BEGIN Activate Inactive System Image (user=root) (jobid=syd59)
* Checking for Valid Inactive System Image
* Reading Current System Information
* Locating Inactive System Image
* Determining Bootpath Status
* Primary bootpath : 0/1/1/0.0x1.0x0 before activate.
* Primary bootpath : 0/1/1/1.0x2.0x0 after activate.
* Alternate bootpath : 0/1/1/1.0x2.0x0 before activate.
* Alternate bootpath : 0/1/1/1.0x2.0x0 after activate.
* HA Alternate bootpath : 0/1/1/0.0x1.0x0 before activate.
* HA Alternate bootpath : 0/1/1/0.0x1.0x0 after activate.
* Activating Inactive System Image
======= 07/02/08 13:48:15 EST END Activate Inactive System Image succeeded. (user=root) (jobid=syd59)
August 7, 2009 23 HP-UX – Dynamic Root Disk – Serial Patch Installation Example
# swcopy -s /tmp/PHCO_38159.depot \* @ /var/opt/mx/depot11/PHCO_38159.dir
# drd runcmd swinstall -s /var/opt/mx/depot11/PHCO_38159.dir PHCO_38159
August 7, 2009 24 HP-UX – Dynamic Root Disk update-ux Issue
When executing “drd runcmd update-ux” on the inactive DRD system image, the command fails with:
ERROR: The expected depot does not exist at " "
In order to use a directory depot on the active system image, you need to create a loopback mount to access the depot.
August 7, 2009 25 HP-UX – Dynamic Root Disk update-ux Issue - continued
Issue Resolution
The following steps update the clone from a directory depot that resides on the active system image. The steps must be executed as root, in this order:
1) Mount the clone using “drd mount”.
2) Make the directory on the clone and loopback mount the depot. The directory on the clone and the source depot must have the same name, in this case “/var/depots/0909_DCOE”; the name itself can be whatever you choose:
# mkdir -p /var/opt/drd/mnts/sysimage_001/var/depots/0909_DCOE
# mount -F lofs /var/depots/0909_DCOE /var/opt/drd/mnts/sysimage_001/var/depots/0909_DCOE
# drd runcmd update-ux -s /var/depots/0909_DCOE
August 7, 2009 26 HP-UX – Dynamic Root Disk update-ux Issue - continued
3) Once the update has completed, unmount the loopback mount and then unmount the clone:
# umount /var/opt/drd/mnts/sysimage_001/var/depots/0909_DCOE
# drd umount
Updates from multiple-DVD media
Updates directly from media are not supported for DRD updates. In order to update from media, you must copy the contents to a directory depot, either on a remote server (easiest method) or in a directory on the active system. If it must be on the active system image, first copy the media’s contents to a directory depot and then create the clone. If you already have a clone, copy the depot and then loopback mount it to the clone (see instructions above).
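The three steps above can be collected into one guarded helper; a sketch using the depot path from the slides, with a command -v guard so it degrades to a message on systems where DRD is not installed:

```shell
# Loopback-mount a local directory depot into the mounted clone and run
# update-ux against it. Paths follow the slide example; the guard makes
# the function safe to invoke where DRD is not installed.
drd_update_from_depot() {  # arg: depot_dir
  depot=$1
  clone=/var/opt/drd/mnts/sysimage_001
  command -v drd >/dev/null 2>&1 || {
    echo "drd not installed; steps shown for illustration only"
    return 1
  }
  drd mount                               # 1) mount the clone
  mkdir -p "$clone$depot"                 # 2) same path on the clone...
  mount -F lofs "$depot" "$clone$depot"   #    ...loopback-mounted
  drd runcmd update-ux -s "$depot"        #    update from the depot
  umount "$clone$depot"                   # 3) undo the loopback mount
  drd umount                              #    and unmount the clone
}

drd_update_from_depot /var/depots/0909_DCOE
```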
August 7, 2009 27 HP-UX – Dynamic Root Disk update-ux Issue - continued
To copy the software from the DVDs, make a directory on a remote system or on the active system image; mount the DVD media and swcopy its contents into the newly created directory. Unmount the first disk, insert the second DVD, and copy its contents into the same directory.
# mkdir -p /var/software_depot/DCOE-DVD
# mount /dev/disk/diskX /cdrom
# swcopy -s /cdrom -x enforce_dependencies=false \* @ /var/software_depot/DCOE-DVD
# umount /cdrom
# mount /dev/disk/diskX /cdrom     (second DVD)
# swcopy -s /cdrom -x enforce_dependencies=false \* @ /var/software_depot/DCOE-DVD
August 7, 2009 28 HP-UX – Dynamic Root Disk update-ux Issue - continued
If the depot resides on a remote server (a system other than the one to be updated), proceed with the “drd runcmd update-ux” command and specify the location as the argument of the “-s” parameter:
# drd runcmd update-ux -s :/var/software_depot/DCOE-DVD
If the depot resides in the root group of the system to be cloned, and the clone has not yet been created, create the clone and issue the “drd runcmd update-ux” command, specifying the location of the depot as it appears on the booted system:
# drd runcmd update-ux -s /var/software_depot/DCOE-DVD
If the depot resides on the system to be updated, in a location other than the root group, or if the clone has already been created, use the loopback mount instructions above.
August 7, 2009 29 Solaris Live Upgrade Features
Live Upgrade is a feature of Solaris (since version 2.6) that allows the operating system to be cloned to an offline partition (or partitions), which can then be upgraded with new O/S patches, software, or even a new version of the operating system. The system administrator can then reboot the system on the newly upgraded partition. In case of problems, it is easy to revert to the original partition/version via a single Live Upgrade command followed by a reboot.
Live Upgrade is especially useful because Sun does not officially support installing O/S patches to active partitions; the supported methods are patching in single-user mode or patching an inactive Live Upgrade partition.
August 7, 2009 30 Solaris Live Upgrade Features - continued
Live Upgrade requires multiple partitions, either on the boot drive (one set of partitions is "active" and the other is "inactive") or on separate drives. These sets of partitions are "boot environments" (BEs).
A slice where the root (/) file system is to be copied must be selected. Use the following guidelines when selecting a slice for the root (/) file system. The slice:
* Must be a slice from which the system can boot.
* Must meet the recommended minimum size.
* Cannot be a Veritas VxVM volume or a Solstice DiskSuite metadevice.
* Can be on a different physical disk or the same disk as the active root file system.
* For sun4c and sun4m, the root file system must be less than 2 GB.
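Two of those guidelines are mechanical enough to sketch: the minimum size (the 350 MB floor is taken from the next slide) and the sun4c/sun4m 2 GB ceiling. The sizes passed in are invented examples:

```shell
# Toy check of a candidate root slice: minimum size (350 MB, per the
# next slide's lower bound) and the sun4c/sun4m "< 2 GB" limit.
check_slice() {  # args: size_mb arch
  if [ "$1" -lt 350 ]; then
    echo "too small"
    return 1
  fi
  case $2 in
    sun4c|sun4m)
      if [ "$1" -ge 2048 ]; then
        echo "over the 2 GB limit for $2"
        return 1
      fi ;;
  esac
  echo "ok"
}

check_slice 4096 sun4u
check_slice 4096 sun4m
check_slice 200  sun4u
```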
August 7, 2009 31 Solaris Live Upgrade Features - continued
The swap slice cannot be in use by any boot environment except the current boot environment or, if the “-s” option is used, the source boot environment. The boot environment creation fails if the swap slice is being used by any other boot environment, whether the slice contains swap, UFS, or any other file system.
Typically, each boot environment requires a minimum of 350 to 800 MB of disk space, depending on the system software configuration.
When viewing the character interface remotely, such as over a tip line, set the TERM environment variable to VT220. Also, when using the Common Desktop Environment, set the value of the TERM variable to dtterm, rather than xterm.
August 7, 2009 32 Solaris Live Upgrade Features - continued
The lucreate command allows you to include or exclude specific files and directories when creating a new BE.
Include files and directories with:
* the -y include option
* the -Y include_list_file option
* items with a leading + in the file used with the -z filter_list option
Exclude files and directories with:
* the -x exclude option
* the -f exclude_list_file option
* items with a leading - in the file used with the -z filter_list option
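A filter list for the -z option mixes both directions in one file; a minimal sketch, with invented paths:

```shell
# Build a lucreate -z style filter list: a leading + includes an item,
# a leading - excludes it. The paths here are invented examples.
flist=$(mktemp)
cat > "$flist" <<'EOF'
+ /var/opt/app/config
- /var/tmp
- /var/crash
EOF

includes=$(grep -c '^[+]' "$flist")
excludes=$(grep -c '^[-]' "$flist")
echo "includes: $includes, excludes: $excludes"
rm -f "$flist"
```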
August 7, 2009 33 Solaris Live Upgrade and Special Files
Files can change in the original boot environment (BE) after the new BE is created but NOT YET activated. On the first boot of a BE, data is copied from the source BE. The list to copy is in /etc/lu/synclist. Example:
/etc/default/passwd OVERWRITE
/etc/dfs OVERWRITE
/var/log/syslog APPEND
/var/adm/messages APPEND
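The two actions behave differently on first boot; a minimal sketch of the semantics (not the real lusync implementation), using throwaway directories in place of the two BE roots:

```shell
# Illustrate OVERWRITE vs APPEND from a synclist-style table. SRC stands
# in for the source BE root, DST for the newly booted BE root.
SRC=$(mktemp -d); DST=$(mktemp -d)
mkdir -p "$SRC/etc" "$DST/etc"
echo "new"   > "$SRC/etc/motd"
echo "old"   > "$DST/etc/motd"
echo "line2" > "$SRC/etc/applog"
echo "line1" > "$DST/etc/applog"

printf '%s\n' "/etc/motd OVERWRITE" "/etc/applog APPEND" |
while read -r path action; do
  case $action in
    OVERWRITE) cp  "$SRC$path" "$DST$path"    ;;  # replace destination
    APPEND)    cat "$SRC$path" >> "$DST$path" ;;  # keep it and extend it
  esac
done

cat "$DST/etc/motd"     # destination now holds the source copy
cat "$DST/etc/applog"   # destination keeps its line, plus the source's
```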
August 7, 2009 34 Solaris Live Upgrade Examples
The upgrade of the new BE can be done in several ways (local, net, CD-ROM, flash). All four are done the same way; only the path to the image, given with the -s flag, differs. Examples:
Local file: # luupgrade -u -n solenv2 -s /Solaris_10/path/to/os_image
Net: # luupgrade -u -n solenv2 -s /net/Solaris_10/path/to/os_image
CD-ROM: # luupgrade -u -n solenv2 -s /cdrom/Solaris_10/path/to/os_image
Flash: # luupgrade -u -n solenv2 -s /path/to/flash.flar
August 7, 2009 35 Solaris Live Upgrade Examples
# lucompare BE2
Determining the configuration of BE2...
< BE1
> BE2
Processing Global Zone
Comparing /...
Links differ
01 < /:root:root:33:16877:DIR:
02 > /:root:root:30:16877:DIR:
Sizes differ
01 < /platform/sun4u/boot_archive:root:root:1:33188:REGFIL:76550144:
02 > /platform/sun4u/boot_archive:root:root:1:33188:REGFIL:76922880:
...
August 7, 2009 40 Clone Commands Compared
Task                          | HP-UX DRD  | Solaris Live Upgrade
------------------------------+------------+---------------------
Display BE/system image       | drd status | lucurr
Delete BE                     | N/A *      | ludelete
Add or resync data in BE      | N/A **     | lumake
Set or display BE description | N/A        | ludesc
Mount BE file systems         | drd mount  | lumount
Unmount BE file systems       | drd umount | luumount
August 7, 2009 41 Clone Commands Compared
Task                                 | HP-UX DRD                                  | Solaris Live Upgrade
-------------------------------------+--------------------------------------------+---------------------
Rename BE                            | N/A                                        | lurename
Install software and patches into BE | drd runcmd swinstall, drd runcmd update-ux | luupgrade
List BE configuration                | N/A                                        | lufslist
TUI                                  | N/A                                        | lu
August 7, 2009 42 Clone Commands Compared
Task                   | HP-UX DRD         | Solaris Live Upgrade
-----------------------+-------------------+---------------------
Rehosting              | drd rehost        | N/A
Modify kernel tunables | drd runcmd kctune | N/A
AIX Alt_disk_install
The AIX alt_disk_install command allows a root sysadmin to create an alternate rootvg on another set of disk drives. The alternate rootvg can be configured by restoring a mksysb image to it while AIX continues to run from the primary rootvg, or the primary rootvg can be "cloned" to the alternate rootvg, and updates and fixes can then be installed on the alternate rootvg while AIX continues to run. When the system administrator is ready, AIX can be rebooted from the alternate rootvg disks. Changes can be backed out by rebooting AIX from the original primary rootvg.
In AIX 5.3, alt_disk_install has been replaced by:
* alt_disk_copy
* alt_disk_mksysb
* alt_rootvg_op
The alt_disk_install command will continue to ship as a wrapper to the new commands, but it will not support any new functions, flags, or features.
AIX Alt_disk_install Examples
Copy the current rootvg to an alternate disk. The following example shows how to clone the rootvg to hdisk1:
# alt_disk_copy -d hdisk1
Copy rootvg (hdisk1) to hdisk0, and then apply the updates to hdisk0:
# alt_disk_copy -d hdisk0 -b update_all -l
AIX Alt_disk_install Examples
Copy the current rootvg to two alternate disks:
# alt_disk_copy -d hdisk2 hdisk3 -O
...assuming that hdisk2 and hdisk3 are the targets on which the copy should be placed. Note that the -O flag is required when "cloning" (when planning to boot the rootvg copy on another LPAR or server), but can be detrimental when making a copy which will be booted on the same LPAR or server.
Before taking the target disks away from the existing AIX image, run the command:
# alt_rootvg_op -X
If a rootvg copy has been made for use on the same LPAR/server as the original rootvg (without the -O flag on alt_disk_copy), System Management Services can be used to switch between the primary and backup AIX rootvgs by shutting AIX down, booting to SMS mode, and selecting the disks from which to boot.
August 7, 2009 46 AIX Multibos Features
The multibos command (AIX 5.3 ML3) provides dual AIX boot from the same rootvg. One can run production on one boot image while installing, customizing, or updating the other. This is similar to AIX alt_disk_install, with one major difference: with alt_disk_install the boot images must reside on separate disks and separate rootvgs, while multibos allows both O/S images to reside on the same disk/rootvg.
August 7, 2009 48 AIX Multibos Features - continued The multibos command allows the root level administrator to create multiple instances of AIX on the same rootvg. The multibos setup operation creates a standby Base Operating System (BOS) that boots from a distinct boot logical volume (BLV). This creates two bootable sets of BOS on a given rootvg. The administrator can boot from either instance of BOS by specifying the respective BLV as an argument to the bootlist command or using system firmware boot operations. Two bootable instances of BOS can be simultaneously maintained. The instance of BOS associated with the booted BLV is referred to as the active BOS. The instance of BOS associated with the BLV that has not been booted is referred to as the standby BOS. Currently, only two instances of BOS are supported per rootvg.
August 7, 2009 49 AIX Multibos Features - continued The multibos command allows the administrator to access, install maintenance and technology levels for, update, and customize the standby BOS either during setup or in subsequent customization operations. Installing maintenance and technology updates to the standby BOS does not change system files on the active BOS. This allows for concurrent update of the standby BOS, while the active BOS remains in production.
August 7, 2009 50 AIX Multibos Features - continued
The multibos command has the ability to copy or share logical volumes and file systems. By default, the BOS file systems (currently /, /usr, /var, and /opt) and the boot logical volume are copied. The administrator can make copies of additional BOS objects (using the -L flag). All other file systems and logical volumes are shared between instances of BOS. Separate log device logical volumes (for example, those not contained within the file system) are not supported for copy and will be shared.
The current rootvg must have enough space for each BOS object copy. BOS object copies are placed on the same disk or disks as the original.
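The default copy-versus-share split can be expressed as a small predicate; a sketch based only on the default set named above (a real system would also honor additions made with -L):

```shell
# Decide whether multibos would copy or share a file system by default:
# the BOS file systems (/, /usr, /var, /opt) are copied, all else shared.
bos_disposition() {  # arg: mount_point
  case $1 in
    /|/usr|/var|/opt) echo copied ;;
    *)                echo shared ;;
  esac
}

bos_disposition /usr
bos_disposition /home
```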
August 7, 2009 51 AIX Multibos Features - continued The total number of copied logical volumes cannot exceed 128. The total number of copied logical volumes and shared logical volumes are subject to volume group limits. /etc/multibos contains multibos data and logs. The only supported method of backup and recovery with multibos is mksysb via CD, NIM or tape. If the standby BOS was mounted during the creation of the mksysb, it is restored and synchronized on the first boot from the restored mksysb. However, if the standby BOS wasn’t mounted during the creation of the mksysb backup, the synchronization on reboot will remove the unusable standby BOS.
August 7, 2009 52 AIX Multibos Examples Standby BOS setup operation preview: # multibos -Xsp Set up standby BOS: # multibos -Xs Set up standby BOS with optional image.data file /tmp/image.dat and exclude list /tmp/exclude.lst: # multibos -Xs -i /tmp/image.dat -e \ /tmp/exclude.lst
August 7, 2009 53 AIX Multibos Examples - continued To set up standby BOS and install additional software listed as bundle file /tmp/bundle and located in the images source /images: # multibos -Xs -b /tmp/bundle -l /images To execute a customization operation on standby BOS with the update_all install option: # multibos -Xac -l /images
August 7, 2009 54 AIX Multibos Examples - continued
To mount all standby BOS file systems, type:
# multibos -Xm
To perform a standby BOS remove operation preview:
# multibos -RXp
To remove standby BOS:
# multibos -RX
August 7, 2009 55 AIX Multibos Examples - continued
Apply TL6 to the standby BOS. The TL6 lppsource is mounted from our Network Installation Manager (NIM) master. Perform a preview operation and then execute the actual update to the standby instance. Check the log file for any issues:
# mount nimsrv:/export/lpp_source/lpp_sourceaix530603 /mnt
# multibos -Xacp -l /mnt
# multibos -Xac -l /mnt
August 7, 2009 56 AIX Multibos Examples - continued
Back out of the update and return to the previous TL. Set the bootlist and verify that the BLV is set to the previous BOS instance (hd5):
# bootlist -m normal hdisk0 blv=hd5 hdisk0 blv=bos_hd5
# bootlist -m normal -o
hdisk0 blv=hd5
hdisk0 blv=bos_hd5
Now reboot the system and confirm that it is running at the previous TL.
August 7, 2009 59 AIX Check Boot Environment
After the reboot, confirm the TL level:
# oslevel -r
Verify which BLV the system booted from with:
# bootinfo -v
August 7, 2009 60 Features Compared
Licensing:
  HP-UX DRD: N/A; Solaris Live Upgrade: N/A; AIX Multibos: N/A
Supported platforms:
  HP-UX DRD: PA-RISC, IA-64
  Solaris Live Upgrade: SPARC, x86-32, x86-64
  AIX Multibos: 32-bit POWER, 64-bit POWER *, PowerPC
Supported O/S:
  HP-UX DRD: HP-UX 11.23, HP-UX 11.31
  Solaris Live Upgrade: Solaris 2.6, 7, 8, 9, 10
  AIX Multibos: AIX 5L Version 5.3 with the 5300-03 Recommended Maintenance package and later
Current product:
  HP-UX DRD: DynRootDisk B.11.xx.A.3.4.y, where xx is 23 or 31
  Solaris Live Upgrade: Live Upgrade 2.0
  AIX Multibos: Part of AIX 6.1
TUI:
  HP-UX DRD: Not supported; Solaris Live Upgrade: Supported; AIX Multibos: Not supported
GUI:
  Not supported by any of the three
CLI:
  Supported by all three
August 7, 2009 61 Features Compared - continued
Add mirror disk to a clone:
  HP-UX DRD: Supported directly via: drd clone -x mirror_disk=<mirror disk>
  Solaris: Not supported directly! Supported via SVM, ZFS, and VxVM RAID-1 setup only
  AIX Multibos: N/A
Reboot commands:
  HP-UX DRD: drd activate -x reboot=true, or standard Unix commands
  Solaris: Never use the reboot(1) or halt(1) commands; instead use "init 6" or shutdown(1)
  AIX Multibos: bootlist -m normal hdisk0 blv=bos_hd5, then shutdown -Fr or reboot -q
Automated comparison of primary and alternate boot environments:
  HP-UX DRD: Mostly manual process, based on: drd mount, cmp..., diff...
  Solaris: lucompare(1)
  AIX Multibos: Mostly manual process, based on: multibos -S, cmp..., diff...
August 7, 2009 62 Features Compared - continued
Mounting inactive images:
  HP-UX DRD: a) “drd mount” does not support mounting on different directories; b) “drd mount” mounts file systems as /var/opt/drd/mnts/sysimage_00X
  Solaris: a) lumount(1) supports mounting on different directories; b) “lumount” mounts file systems as /.alt.configX
  AIX Multibos: multibos -S; it mounts file systems as /bos_inst/...
Change size of any file system during cloning:
  HP-UX DRD: Not supported; Solaris: Supported; AIX Multibos: Supported **
File system split:
  HP-UX DRD: Not supported; Solaris: Supported *; AIX Multibos: Not supported
August 7, 2009 63 Features Compared - continued
Simple listing of clone file systems:
  HP-UX DRD: drd mount, then bdf
  Solaris: Supported via the lufslist(1) command
  AIX Multibos: Not directly supported **
Clone updates (re-sync):
  HP-UX DRD: Supported via full clone recreation: drd clone -t <target disk> -x overwrite=true
  Solaris: Supported via the lumake(1) command
  AIX Multibos: Supported via flag “-c” *
Merge file systems during cloning:
  HP-UX DRD: Not supported yet; Solaris: Supported; AIX Multibos: Not supported
August 7, 2009 64 Features Compared - continued
Change file system type during cloning:
  HP-UX DRD: Not supported; Solaris: Supported (for example, SVM to ZFS migration); AIX Multibos: Not supported
Supported volume managers:
  HP-UX DRD: LVM, VxVM
  Solaris: Solstice DiskSuite *, VxVM, ZFS **
  AIX Multibos: AIX LVM
Virtualization support:
  HP-UX DRD: nPar, vPar, Integrity VM
  Solaris: Solaris Zones ***, Logical Domain
  AIX Multibos: LPAR, Dynamic LPAR, Live Partition Mobility on POWER6, WPAR
Full-disk copy during cloning:
  HP-UX DRD: On Itanium servers, all partitions are created and EFI and HP-UX are copied; this release of DRD does not copy the HPSP
  Solaris: Supported
  AIX Multibos: Not supported
August 7, 2009 65 Features Compared - continued
Multiple target disks for cloning:
  HP-UX DRD: Not supported; Solaris: Supported; AIX Multibos: Not supported
Dry-run (preview) cloning:
  Supported by all three
Swap shared:
  HP-UX DRD: Primary swap is not shared; secondary swap can be shared
  Solaris and AIX Multibos: Yes, by default
On-line cloning:
  HP-UX DRD: Yes
  Solaris: Sun recommends halting all zones during lucreate or lumount operations, so Solaris zones cloning is not truly an on-line process
  AIX Multibos: Yes
August 7, 2009 66 Features Compared - continued
Exclude files from cloning:
  HP-UX DRD: Not supported yet *; Solaris: Supported **; AIX Multibos: Supported *****
Include files during cloning:
  HP-UX DRD: Not supported yet; Solaris: Supported **; AIX Multibos: Supported *****
Simple method to remove clone:
  HP-UX DRD: Not supported yet ***; Solaris: Supported ****; AIX Multibos: Supported ******
Clone on the same physical disk (multiple BEs on the same disk):
  HP-UX DRD: Not supported; Solaris: Supported; AIX Multibos: Supported