HP-UX LVM, OnlineJFS and Oracle ASM Basics*

1 HP-UX LVM, OnlineJFS and Oracle ASM Basics*
My humble attempt to summarise some of the best-known features. I used many resources and valuable comments by others, so I cannot claim much credit for this presentation. Dusan Baljevic, Sydney, Australia

2 Disk Partitioning Concepts – HP-UX
Partitions can be configured using: The Whole Disk Approach (no volume manager). Logical Volume Manager (LVM). Veritas Volume Manager (VxVM).

3 Whole Disk Partitioning Concepts
The whole disk approach supports using a disk in five different ways: as a file system, a file system with swap, swap space, raw space, or a boot area.

4 Whole Disk Partitioning - Pros
Simple to use, almost no Unix knowledge required. No licensing. Supports any type of physical volume.

5 Whole Disk Partitioning - Cons
Partitions cannot span multiple disks. Each disk can contain at most one file system partition. Partitions cannot be easily extended.

6 Logical Volume Manager Concepts
HP-UX LVM is much more flexible than the whole disk approach. Introduced in HP-UX 9.0:
- Partitions/volumes can span multiple disks.
- Multiple partitions/volumes may be configured on a single disk.
- Partitions/volumes can be easily extended and reduced as needs change.
LVM is included in all current versions of HP-UX: Base LVM is included with the operating system, while LVM MirrorDisk/UX is available for an extra charge or in higher OE releases. (Diagram: physical volumes are grouped into a volume group, which is carved into logical volumes.)
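As a rough sketch of the LVM workflow just described (the disk device, volume group name, and sizes below are illustrative assumptions, not taken from the slides):

```shell
# Initialize an unused disk as an LVM physical volume (character device).
pvcreate /dev/rdisk/disk4

# Create the volume group control file, then the volume group itself.
mkdir /dev/vg01
mknod /dev/vg01/group c 64 0x010000
vgcreate vg01 /dev/disk/disk4

# Carve a 1 GB logical volume out of the volume group (-L is in MB).
lvcreate -L 1024 -n lvol_data vg01

# Extend it later as needs change: the flexibility the
# whole-disk approach lacks.
lvextend -L 2048 /dev/vg01/lvol_data
```

Additional physical volumes can be added with vgextend, letting the logical volume grow beyond a single disk.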

7 HP-UX File Systems and Volume Managers
File system solutions:
- Base File System (VxFS-lite): included in Base OE
- Cluster File System (CFS): included in VSE, HA, and DCOE
Volume management solutions:
- Logical Volume Manager (LVM) with MirrorDisk/UX (MDUX): included in Base OE
- Veritas Volume Manager (VxVM) and Cluster Volume Manager (CVM): included in VSE, HA, and DCOE

8 HP-UX with Symantec Releases
Symantec release versions by HP-UX OS version, with base and standalone products:
- Symantec 3.5 (HP-UX 11.23): Base-VxFS and Base-VxVM in Base OS; Online-JFS B3929DA; VxVM-FULL B9116AA
- Symantec 4.1 (HP-UX 11.23): Base-VxFS and Base-VxVM in Base OS; Online-JFS B3929EA; VxVM-FULL B9116BA
- Symantec 5.0 (HP-UX 11.23 and 11.31): Base-VxFS-50 and Base-VxVM-50; Online-JFS B3929FA; VxVM-FULL B9116CA
- Symantec 5.0.1 (HP-UX 11.31, Sep 2009): Base-VxFS-5.0 and Base-VxVM-5.0; Online-JFS B3929FB; VxVM-FULL B9116CB

9 HP-UX 11i v3 (L2 Layout Limits)
Update 3 enhancement. Scalability: LVM supported maximums, HP-UX 11i v1/v2 (current L1 limits) versus HP-UX 11i v3 L2 layout (on-disk limit, implemented code limit, tested limit):
- Max LV size: 16TB | 2^84 on-disk | 16EB (2^64) implemented | 256TB tested (64TB supported)
- Max PV size: 2^41 (2TB) | 16TB
- Max volume groups (VG): 256 | N/A | 2^24-1 architected | 2^11 implemented | 2304 tested*
- Max logical volumes (LV): 255 | 2^32-1 architected | 2^11-1 (2047) implemented/tested
- Max physical volumes (PV): 2^32-1
- Max number of mirrors: 3 | 254 | 7 | 6
- Max nodes for mirroring: 2 | 16
- Max number of extents/VG: 2^16 (64K) | 2^25
- Max extent size (range): 1MB-256MB | 1MB-4TB on-disk | 1MB-4GB implemented | 1MB-256MB tested
- Max stripe width: 2^8-way (255) | 511-way
* Max volume groups (VG): additive of L1 and L2 limits, i.e. 256 + 2048 = 2304.

10 LVM and VxVM Features SAN Boot Support Y Y3 RAID0 RAID1 MD/UX
HP-UX 11.31 LVM/MD-UX versus VxVM 4.1/5.0 (1):
- SAN boot support: Y | Y (3)
- RAID 0: Y | Y
- RAID 1: MD/UX | FULL license (4)
- Mirrored stripes (RAID 1+0): MD/UX | FULL license
- Striped mirrors (RAID 0+1): MD/UX | FULL license
- Max volume size: 16EB (256TB) (2) | 32TB
- Max volumes per volume group: 2048 (511) (2) | unlimited
- Max volume groups: 2304 (768) (2) | unlimited
- Rootability; online volume reconfiguration
- Cluster volume management: SLVM | FULL license (5) + CVM (5)
- Fast resync: FULL license + VVR
- Cluster aware: Y (SLVM) | Y (CVM)
- Static path failover (A/P); dynamic path load balancing (A/A)
- Number of mirrors: 6 | 32
- Dynamic Root Disk (DRD)
- HP storage device support: ALL | Limited (6)
- 3rd-party storage support
1 - VxVM 4.1 available with 11.31; VxVM 5.0 only
2 - Supported limit
3 - Available H2CY
4 - Root mirroring part of BASE
5 - For all features
6 - Validate device in question with HP or Symantec qualification list

11 LVM and VxVM Features - continued
HP-UX 11.31 LVM/MD-UX versus VxVM 4.1/5.0:
- Instant snapshots: N | FULL license
- Portable data containers: FULL license (1)
- Multi-volume file system support
- Dynamic Storage Tiering support
- Dynamic LUN expansion: Y
- Extended Campus, Metrocluster, and Continental cluster integration
- Disk group configuration from any node
- Number of nodes: 16 | 4 with VxVM and CVM Base, 16/8 with FULL license VxVM/CVM
- Storage distance support (standard/Metrocluster): 100 km / 300 km
- Rolling upgrade: Y (3)
- RAID 5: FULL license
- Cluster lock support: Y (disk, lun)
- Hot relocation
- Heterogeneous platforms: same commands on AIX and Linux | AIX, Solaris, Linux, Windows
1 - VxVM 5.0 only
2 - AIX and Solaris have a similar cluster solution
3 - As long as there are no functional or protocol changes (yes within 3.x, no between 3.x and 4.x)
4 - Not supported for all configurations

12 Scalability: Expanded file system
Supported maximums (N/A = not architecturally limited), file systems by release:
- HP-UX 11i v3 (VxFS 5.0): number of files N/A; file size 16TB; file system size 50TB; access control list entries 1024
- HP-UX 11i v2 (VxFS): number of files N/A; file size 2TB; file system size 32TB
- HP-UX 11i v1 (VxFS): number of files, file size, file system size
- VxVM: volume size 32TB; volumes 2^23; mirrors 32

13 Disk Space Management Tool Comparison
Comparison of Whole Disk, LVM, and VxVM across: HP-UX versions supported (10.x, 11.x, 11i; VxVM on 11i only); boot disk support; GUI configuration tool (sam, smh; vea for VxVM); availability on other platforms (No / Similar / Yes); whether partitions can span disks; online resizing; online backups; striping (RAID 0); mirroring (RAID 1); RAID 0+1 and 1+0; RAID 5; dynamic relayout; and dynamic multi-pathing (Active/Passive; Active/Active* for VxVM).
* Features indicated by an asterisk may require an extra license.

14 Benefits of HP-UX Native Multipathing
Zero-Config. No initial setup or configuration needed. No license or add-on software installation needed. Free. Comes bundled with Core HP-UX. Is not an add-on product. No additional license fee for Native MP. Transport Aware Failover/Failback. Fibre Channel RSCN based failover/failback. Detection and failover of all paths leading to affected component (hba, target port, switch, inter-switch link). Fully parallel path failover/failback and path monitoring. Built into Storage Stack. More responsive and better performing. Exploits both server platform and storage characteristics.

15 Benefits of HP-UX Native Multipathing - continued
Tested as a core part of HP-UX. Engineered for scalability, reliability, performance, serviceability. Time to Market support for new technologies. Being a core part of HP-UX, new technologies are supported when released (ex : HPVM, VPARs, VSE, new transports like SAS, storage management products, etc). Simplifies Storage Management. Eliminates an entire layer in the IO Stack (add-on MP product) and reduces amount of management overhead for sysadmin. No more need to install, tune, update, monitor add-on MP product. Ideal for Oracle ASM. Oracle ASM provides its own volume management capability and Native MP is a good fit in that environment since the customer does not need to install VxVM solely for DMP capabilities.

16 HP-UX Native Multipathing vs. Classic MP
Classic MP product (DMP):
- Add-on product layered above the IO stack.
- Agnostic of low-level SAN and SCSI events; unaware of transport-specific failures (HBA, switch, switch port offline).
- Discovers failures on IO errors; needs to issue a test IO on IO error.
- Path failover is initiated following an IO error and test IO failure; each such path failure is detected individually, on an IO error on each path.
- Needs to ping each failed path to detect recovery.
HP-UX 11i v3 Native MP:
- Not an add-on product; built into the IO stack, and the HBA drivers are multipath-aware.
- Aware of SAN and SCSI events and takes advantage of them.
- Transport-aware: knows the scope of an outage (HBA, switch, switch port, target port).
- Can fail over at the scope of the outage (all paths under an HBA, all paths through a switch, all paths through a switch port, all paths to a target port).
- SAN fabric events signal recovery, so pinging is avoided.
- Fully parallel failover/failback.
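On HP-UX 11i v3 the native multipathing behaviour described above can be inspected and tuned with standard tools; the device name below is an illustrative assumption:

```shell
# Map legacy device special files to the new agile (persistent) DSFs.
ioscan -m dsf

# Show every lunpath behind a single agile LUN.
ioscan -m lun /dev/disk/disk4

# Inspect, then change, the native load-balancing policy for the LUN.
scsimgr get_attr -D /dev/rdisk/disk4 -a load_bal_policy
scsimgr set_attr -D /dev/rdisk/disk4 -a load_bal_policy=least_cmd_load
```

Because the agile DSF represents the LUN rather than a single path, failover between lunpaths happens underneath it with no extra configuration.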

17 Product-use application
Volume management.
Logical Volume Manager (LVM), if you need:
- A no-cost solution
- Basic volume management (default Operating Environment install)
- Simple operation
- SAN boot
- Disk-mirroring capability with MirrorDisk/UX
VxVM, if you need:
- An advanced volume management solution
- Instant snapshots
- Cross-platform data sharing
- QoSS support
- RAID 5
- Hot relocate/unrelocate
- CFS support

18 Product-use application
File systems.
VxFS Lite, if you need:
- Basic file system functionality
- A no-cost solution
- Simple operation
- Fast file system recovery
- Direct I/O performance*
VxFS Full (OnlineJFS), if you need:
- Online defragmentation
- Online log and file system resizing
- Oracle Database Management
- Concurrent I/O performance*
- File change log
- Storage Checkpoints
Cluster File System (CFS), if you need:
- Functionality for a clustered environment
- Cluster Volume Manager
- Dynamic Storage Tiers
- Flashsnap technology
* Available 10/

19 Oracle ASM vs. Classical Storage Layers
Bill Bridge is the original architect of Automatic Storage Management. In the Oracle Press title "Oracle ASM", Bill provides a foreword where he discusses the problems with using vendor-specific OS file systems to manage Oracle datafile placement:
1. For archive logs and backups, OS vendors don't provide a shared-disk file system.
2. Logical volume managers hide the location of files, making it difficult to manage disk I/O and provide good statistics.
3. Existing LVMs don't work well with over 100 disks.
4. Operating systems and Oracle don't handle databases well that have thousands of datafiles.
5. Naming becomes difficult with a large number of datafiles.
6. Features and file system limitations vary across different operating systems.
7. Users at the OS level can touch Oracle files with standard utilities, without Oracle knowing.
So he set out to solve these problems by building Oracle's own file system. His intention was to provide these features:
1. Tightly integrated with Oracle, and working with clusters (then Oracle Parallel Server).
2. New storage automatically used, managed as a disk unit or disk group.
3. Thousands of disks supported.
4. Files won't have names, and will be hidden from the OS.

20 Oracle Traditional Options - Raw versus Cooked and With or Without an LVM

21 Disk Groups in the ASM Architecture

22 Oracle ASM Basics
An ASM file system layer is implicitly created within a disk group. ASM provides a vertical integration of the file system and volume manager for Oracle database files. This file system is transparent to users and accessible only through the ASM instance, interfacing databases, and ASM's utilities. For example, database backups of ASM files can be performed only with RMAN. ASM does not completely bypass the O/S I/O stack: it uses the asyncdsk driver to perform asynchronous I/O. Without it, the log writer and DB writers would be doing direct, synchronous I/O and would not be able to perform acceptably under high I/O workloads. The I/O requests also have to pass through the SCSI and FC layers.

23 Oracle ASM Basics - continued
The functionality of an ASM instance can be summarized as follows: Manages groups of disks, called disk groups. Protects the data within a disk group. * Provides near-optimal I/O balancing without any manual tuning. Enables the user to manage database objects such as tablespaces without needing to specify and track filenames. Supports large files.  * Choices for disk group redundancy include: External: defers to hardware protection Normal: 2-way mirroring High: 3-way mirroring In addition, ASM failure groups can be created, providing a set of disks sharing a common resource whose failure needs to be tolerated. Redundant copies of an extent are stored in separate failure groups. ASM failure groups can be assigned by DBAs or automatically by ASM.
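A hedged sketch of creating a disk group with the redundancy choices listed above. The disk group name, failure group names, and device paths are illustrative assumptions; connecting "as sysasm" applies to 11g (on 10g, "as sysdba" is used):

```shell
# Run as the ASM software owner against the ASM instance.
export ORACLE_SID=+ASM
sqlplus / as sysasm <<'EOF'
-- NORMAL redundancy: 2-way mirroring; each extent's redundant copy
-- lands in a different failure group, so losing all disks behind
-- one controller can be tolerated.
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP controller1 DISK '/dev/oracle/disk1', '/dev/oracle/disk2'
  FAILGROUP controller2 DISK '/dev/oracle/disk3', '/dev/oracle/disk4';
EOF
```

With EXTERNAL REDUNDANCY the FAILGROUP clauses are omitted and protection is deferred to the storage array.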

24 Oracle ASM Basics - continued
One can use Oracle Enterprise Manager (EM) or the Database Configuration Assistant (DBCA) for a GUI interface to Automatic Storage Management that replaces the use of SQL or SQL*Plus for configuring and altering disk groups and their metadata. DBCA eases the configuring and creation of the database. EM provides an integrated approach for managing both the ASM instance and database instance.

25 Oracle ASM Limits
63 disk groups in a storage system.
10,000 ASM disks in a storage system. 4 Petabyte maximum storage for each ASM disk. 40 Exabyte maximum storage for each storage system. 140 Petabyte in external redundancy (no ASM mirroring). 42 Petabyte in normal redundancy (2-way ASM mirroring). 15 Petabyte in high-redundancy (3-way ASM mirroring). NOTE: Oracle database limits file sizes to 128 TB with big file tablespaces and 32k block size, which is considerably lower than the ASM limits.

26 Oracle ASM Limits
An ASM disk group implemented with External Redundancy has a maximum file size of 35 TB. An ASM disk group implemented with Normal Redundancy has a maximum file size of 5.8 TB. An ASM disk group implemented with High Redundancy has a maximum file size of 3.9 TB. 1 million files for each disk group. 2.4 Terabyte maximum storage for each file. NOTE: Oracle database limits file sizes to 128 TB with big file tablespaces and 32k block size, which is considerably lower than the ASM limits.

27 File Types Supported by Automatic Storage Management
File types supported by Automatic Storage Management (file type: supported, default template):
- Control files: yes, CONTROLFILE
- Datafiles: yes, DATAFILE
- Redo log files: yes, ONLINELOG
- Archive log files: yes, ARCHIVELOG
- Trace files: no
- Temporary files: yes, TEMPFILE
- Datafile backup pieces: yes, BACKUPSET
- Datafile incremental backup pieces: yes, BACKUPSET
- Archive log backup pieces: yes, BACKUPSET
- Datafile copies: yes, DATAFILE
- Persistent initialization parameter file (SPFILE): yes, PARAMETERFILE
- Disaster recovery configurations: yes, DATAGUARDCONFIG
- Flashback logs: yes, FLASHBACK
- Change tracking files: yes, CHANGETRACKING
- Data Pump dumpsets: yes, DUMPSET
- Automatically generated control file backups: yes, AUTOBACKUP
- Cross-platform transportable datafiles: yes, XTRANSPORT
- O/S files: no

28 Oracle ASM - Pros
ASM provides some file system and volume management capabilities for Oracle database files only. These include DB control files, redo logs, archived redo logs, data files, spfiles and Oracle Recovery Manager (RMAN) backup files (see previous slide). File-level striping/mirroring. Ease of manageability: instead of running LVM software, you run an ASM instance, a new type of "instance" that largely consists of processes and memory and stores its information in the ASM disks it is managing. Attempts to identify configuration errors. *
* 1. A single disk with different mount points is presented to an ASM instance. This can be caused by multiple paths to a single disk. In this case, if the disk in question is part of a disk group, the disk group mount fails. If the disk is being added to a disk group with ADD DISK or CREATE DISKGROUP, the command fails. To correct the error, restrict the disk string so that it does not include multiple paths to the same disk. 2. Multiple ASM disks with the same ASM label are passed to separate ASM instances as the same disk. In this case, disk group mount fails. 3. Disks that were not intended to be ASM disks are passed to an ASM instance by the discovery function. ASM does not overwrite a disk if it recognizes the header as that of an Oracle object.

29 Oracle ASM - Pros
Gives Oracle Corporation control over the storage system, which makes them happy, so they promote it heavily. * No large Unix-level administration needed. Provides a single point of support (Oracle), so there is no "finger-pointing". Provides easy management of block devices (raw partitions). Automatically moves hot blocks to the outside of the disk. Vendor and operating system neutral.
* Oracle Automatic Storage Management (ASM) was introduced with Oracle Database 10g. Free, and widely adopted.

30 Oracle ASM - Pros
Included in the Oracle license, so there is no additional cost for the software or its support. Supports very large disk groups and datafiles. Database file system with the performance of raw I/O. Supports clustering (RAC) and single instance. Automatic data distribution. Memory requirements for an ASM instance are small: 100 MB of RAM is typically all that is required to run an ASM instance on a production server.

31 Oracle ASM - Pros
On-line add/drop/resize of disks with minimum data relocation. Automatic file management. * Flexible mirror protection. Inode locks are not applicable to ASM. Ability to grow disk group capacity on the fly. Supports multiple database instances running on a single host, and does not have its own data dictionary.
* ASM uses system-generated filenames, otherwise referred to as fully qualified filenames, for each file that is under ASM control. The filename is generated automatically at file creation time. These are also referred to as Oracle Managed Files and are automatically deleted when they are no longer needed.

32 Oracle ASM – Pros (Oracle 11gR1)
Fast mirror resynchronization. * Preferred mirror read in a cluster. ** Support for large AUs. *** Variable-size extents. **** Rolling upgrade and patching. Table-level migration wizard in EM. New ASMCMD commands. New SYSASM privilege, separate from the SYSDBA privilege. More flexible FORCE options to MOUNT or DROP disk groups.
* Benefits: fast recovery from transient failures; enables proactive maintenance.
** Allows local mirror read operations and eliminates network latencies in extended clusters.
*** AUs can be set to 1/2/4/8/16/32/64 MB at disk group creation time. Higher performance for large sequential I/O (DW); better leverage of hardware RAID read-ahead; set Oracle MAXIO = AU size.
**** Extent size = 1 AU up to 20,000 extents; 8*AU up to 40,000 extents; 64*AU beyond 40,000 extents. Striping: coarse stripe size always = one AU; fine stripe size always = 128 KB.
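The variable-extent schedule in the notes above can be sketched as a small calculation. The thresholds come from the slide; the function name and the AU sizes passed in are illustrative:

```shell
# Extent size grows with the extent number: 1*AU for the first
# 20,000 extents, 8*AU up to 40,000, and 64*AU beyond that.
extent_size_mb() {
  extent_no=$1
  au_mb=$2
  if [ "$extent_no" -lt 20000 ]; then
    echo "$au_mb"
  elif [ "$extent_no" -lt 40000 ]; then
    echo $((8 * au_mb))
  else
    echo $((64 * au_mb))
  fi
}

extent_size_mb 100 1     # first region: one 1 MB AU
extent_size_mb 25000 1   # second region: 8 AUs
extent_size_mb 50000 1   # third region: 64 AUs
```

The effect is that very large files need far fewer extent-map entries than a fixed extent size would require.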

33 Oracle ASM - Cons
ASM cannot be used for Oracle executables and non-database files. ASM files can only be managed through an Oracle application such as RMAN; this can be a weakness if you prefer third-party backup software or simple backup scripts. Cannot store CRS files or database software. Cannot manage ASM through standard Unix tools. Potentially disrupts the balance of power between the Unix system administration groups and the database/DBA groups. Traditionally the former manages disks, hardware, and the operating system, leaving the DBAs to coordinate with them for new resources. ASM changes that balance, which could cause some resistance. *
* ASM enables a DBA to manage storage directly. The DBA now has to consider: How do I deal with disk failures? (data resilience) How do I back up storage? (data protection) How do I make the best use of the raw storage? (storage utilization) How do I make it run fast enough? (performance)

34 Oracle ASM - Cons
ASM does not have multipathing capability; it assumes the underlying O/S will provide this functionality. In HP-UX, multipathing is provided by a volume manager feature such as PVLinks in the HP-UX Logical Volume Manager (LVM), by native multipathing in HP-UX 11.31, by DMP in Veritas Volume Manager from Symantec (VxVM), or by other third-party software such as Securepath or Powerpath. ASM is still relatively new in enterprise computing. There are a number of vendors whose core business has been in the logical volume manager/file system space for years, and maturity often matters a lot when it comes to software systems, reliability, and proven success rates. It is also a new technology to get familiar with.

35 Oracle ASM - Cons Automatic Storage Management load balances file activity by uniformly distributing file extents across all disks in a disk group. For this technique to be effective it is important that the disks used in a disk group be of similar performance characteristics. There may be situations where it is acceptable to temporarily have disks of different sizes and performance co-existing in a disk group (for example, when migrating from an old set of disks to a new set of disks). The new disks would be added and the old disks dropped. As the old disks are dropped, their storage is migrated to the new disks while the disk group in online.

36 Oracle ASM - Cons
There is no shared awareness of LUN use between ASM and LVM or VxVM. This means that the system administrator must be careful not to accidentally allocate a LUN already allocated for LVM or VxVM use to ASM use (or vice versa). ASM is not an enterprise-class file system. ASM is a proprietary solution; the customer is dependent on the reliability of the new ASM code stack. * Does not offer network monitoring. Be careful about ASM hidden parameters.
* An example of a long-running bug: Oracle ASM holds a disk after removing it (Metalink). The workaround is to restart instances or, on affected Oracle versions, to apply the patches; they are required for ServiceGuard support too.

37 Oracle ASM - Cons
Not for high-I/O environments (that is what some tests claim): everything is single-threaded through one process at a very low level. * If one uses Oracle ASM and CRS, a third-party clustering solution is still required to support the non-Oracle data, which means managing multiple clustering solutions.
* I found this claim somewhere but cannot find its reference now. This needs to be verified!

38 Oracle ASM - Cons
DBAs must still watch for, and then perform, the task of adding and removing disks in an ASM disk group when needed. This leads back to the problem of DBAs under-allocating or, worse yet, over-allocating disk storage just to be safe, which recreates the problem of wasted space and leads to higher-than-needed storage costs. This is where thin provisioning comes into play. Thin provisioning automatically allocates on a just-enough and just-in-time basis, which relieves the DBA from both having to watch and then add or remove disks in a disk group. Oracle's ASM and thin provisioning could be combined to offer a complete, end-to-end storage solution: ASM would create, allocate, place, and rebalance data files for performance, and thin provisioning would dedicate disk space on the fly, only when needed.

39 Oracle ASM - Cons
Configuration details and performance metrics are exposed via V$ views. Other possibilities are the command-line interface asmcmd and the graphical interface of OEM. Metadata are, however, partially hidden from the end user: the mapping between physical storage, ASM allocation units, and database files is not completely exposed via V$ views. It was found that it is possible to query such information via undocumented X$ tables. For example, it is possible to determine the exact physical location on disk of each extent (or of mirror copies of extents) for each file allocated on ASM (and, if needed, access the data directly via the O/S). This can be used by Oracle DBAs wanting to extend their knowledge of the inner workings of ASM, or wanting to diagnose hotspots and ASM rebalancing issues.
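A hedged example of the undocumented query mentioned above. X$KFFXP is the X$ table commonly cited for the extent-to-disk mapping; the disk group and file numbers below are illustrative assumptions, and since the table is undocumented, its columns may change between releases:

```shell
# Run against the ASM instance, not a database instance.
export ORACLE_SID=+ASM
sqlplus / as sysdba <<'EOF'
-- Map each extent of ASM file number 256 in disk group 1 to the
-- disk and allocation unit holding it (undocumented interface).
SELECT xnum_kffxp AS extent_no,
       disk_kffxp AS disk_no,
       au_kffxp   AS au_no
FROM   x$kffxp
WHERE  group_kffxp  = 1
AND    number_kffxp = 256
ORDER  BY xnum_kffxp;
EOF
```

Joining the result to V$ASM_DISK yields the physical device path behind each extent.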

40 What about Non-Database Files?
Since ASM supports only database data files and log files, the following storage management methods are required for non-database files if ASM is used: A) A local file system or Cluster File System for Oracle Clusterware binaries and configuration files and RAC binaries and configuration files. B) Shared storage for Oracle Clusterware data: voting disk and OCR files. This storage has to be configured either as shared raw devices, shared raw volumes, or files in the Cluster File System. Oracle Clusterware needs to be up and running before the ASM daemon can start. Therefore, ASM cannot be used for Oracle Clusterware data.

41 Some of the Oracle Plans and Efforts *
Elimination of Symantec by displacing Veritas Cluster as a viable product (replaced with CRS), and the same for Veritas File System (replaced by ASM). Elimination of Sun by adopting Linux as a low-cost alternative to Solaris (Sun has now been acquired by Oracle, which was primarily responsible for its downfall). Attempted elimination of Red Hat by "migrating" Red Hat Linux into Oracle Enterprise Linux (limited success so far). Challenge to EMC and NetApp in the area of storage using Oracle Exadata Storage (version 1 was with HP; the just-announced version 2 is on Intel x86 and Linux). Changes in licensing structure. Acquisition of Sun Microsystems. This slide simply points out that Oracle is trying to "run the show alone as much as possible".

42 Capability compare (HP versus Oracle products)
Business benefits compared: support for Oracle executables and non-database files; multipathing capability; strong I/O fencing; single-instance database object storage; RAC database object storage; near-raw database performance.
Vendor products:
- HP: OnlineJFS + LVM/MirrorDisk/UX (single instance only): Yes / no
- HP: OnlineJFS + VxVM/ODM Storage Management for Oracle (single instance only)
- HP: Cluster file system (both RAC and single instance): y / No
- No file system (raw volumes): No / YES, it is a raw volume (RAC only)
- Oracle: Automatic Storage Management (RAC and SI): No* (limited)

43 Capability compare - continued
Business benefits compared: ability to use standard UNIX commands to move or copy data; proven, time-tested quality and reliability; all disaster-recovery needs can be met; ease of management (e.g., backup and database maintenance).
Vendor products:
- HP: OnlineJFS + LVM/MirrorDisk/UX (single instance only): Yes / y
- HP: OnlineJFS + VxVM/ODM (Storage Management for Oracle) (single instance only)
- HP: Cluster file system (both RAC and single instance)
- No file system (raw volumes): n (RAC only)
- Oracle: Automatic Storage Management (RAC and SI): No

44 Oracle ASM versus HP-UX SLVM **
• ASM lacks multipathing: the ASM-over-SLVM configuration provides multipathing for ASM disk groups (using LVM PVLinks or storage-based multipathing). This is not an issue on HP-UX 11i v3 (native multipathing).
• ASM-over-SLVM enables the HP-UX devices used for disk group members to have the same names on all nodes, easing ASM configuration. Generally, when using raw disk devices with ASM, the device files are created under /dev/oracle, so they can be the same on all nodes.
• ASM-over-SLVM protects ASM data against inadvertent overwrites from nodes inside/outside the cluster. If the ASM disk group members are raw disks, there is currently no protection against these disks being set up in LVM or VxVM volume/disk groups. *
There are some disadvantages too. **
* LVM pvcreate(1M) recognises the ASM disks and does not destroy them unless the "-f" (force) flag is used.
** Disadvantages: additional configuration and management tasks introduced by the extra layer of volume management (volume groups, logical volumes, and physical volumes); performance impact of the extra layer of volume management.
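A minimal sketch of the ASM-over-SLVM preparation described above, assuming an unused disk and illustrative volume group, logical volume, and owner names:

```shell
# Initialize the disk and build a volume group with one raw LV
# (run as root on one cluster node; -L is in MB, so 10 GB here).
pvcreate /dev/rdisk/disk10
mkdir /dev/vg_asm
mknod /dev/vg_asm/group c 64 0x020000
vgcreate vg_asm /dev/disk/disk10
lvcreate -L 10240 -n lvol_asm1 vg_asm

# ASM opens the character device as the oracle user.
chown oracle:dba /dev/vg_asm/rlvol_asm1
chmod 660 /dev/vg_asm/rlvol_asm1

# Activate the volume group in shared mode on each node (SLVM).
vgchange -a s vg_asm
```

The ASM instance then discovers the volume through its disk string, e.g. an ASM_DISKSTRING of /dev/vg_asm/rlvol* (path shown here as an assumption).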

45 Oracle Discontinues Raw/Block Device Support
In release 11.2, the Oracle Installer and DBCA (Database Configuration Assistant) no longer support raw/block devices for database files. In addition, there is no longer raw/block device support for storing the OCR and voting disks in new installs. Customers who create a new 11.2 database will need to store their database files in ASM, on a file system, or on an NFS filer. RAC database files must be stored on ASM, a certified clustered file system, or a certified NFS filer. As stated in the Metalink note, Oracle plans to fully desupport raw/block device storage effective with the next major release. At that time, customers will need to migrate any data files stored on raw/block devices to ASM, a cluster file system, or NFS. Thus, we recommend that new databases not be deployed on raw/block devices. For more information on migrating from RAW to ASM, please see:

46 Oracle ASM on HP-UX – Some Experiences
It was found that Oracle ASM was a good choice to simplify Oracle-related storage and volume management; however, do not use the ASM redundancy features. The HP-UX 11i v3 MSS (Mass Storage Stack) and Oracle ASM were a "killer combo". ASM works very well for Oracle data management, but a severe performance penalty is incurred if the ASM redundancy features are used on top of the storage array's redundancy features.

47 One of HP-UX Solutions
HP-UX file systems can provide read/write performance 95-98% as fast as a raw-device setup. * With OnlineJFS 5.0.1 the performance impact is virtually eliminated: the manageability and reduced administration of a file system, with near-raw performance. After October 2009, Direct I/O capability will be enabled in the base file system, and Concurrent I/O will be enabled in OnlineJFS (which is included in all but the Base OE).
* Currently, HP offers advanced file system performance through its Storage Management Suite bundles, which include Oracle Disk Manager (ODM).
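A hedged example of the OnlineJFS capabilities behind those numbers. The device, mount point, and sizes are illustrative; the mincache=direct and convosync=direct mount options and online resizing require the OnlineJFS license:

```shell
# Mount a VxFS file system with direct I/O semantics for Oracle
# data files, bypassing the buffer cache for near-raw performance.
mount -F vxfs -o delaylog,nodatainlog,mincache=direct,convosync=direct \
    /dev/vg01/lvol_data /oradata

# Grow the underlying logical volume (to 4 GB here), then resize
# the mounted file system online; fsadm -b takes the new size in
# file system blocks.
lvextend -L 4096 /dev/vg01/lvol_data
fsadm -F vxfs -b 4194304 /oradata
```

Direct I/O avoids double-buffering between the file system cache and the Oracle SGA, which is where most of the raw-versus-cooked gap comes from.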

48 HP-UX 11iv3 Mass Storage Stack
Transparent native multipathing to LUNs is useful and necessary; it decreases work and rework. Load balancing increases performance: the "least command load" policy provided the best performance for this BI workload, while the "round robin" policy, the default, provided slightly less. Persistent LUN bindings and LUN DSFs significantly reduce design and maintenance effort for mid-range and high-end BI implementations containing large storage subsystems. Storage management is extremely important for BI customers due to the large volume of storage consumed by the BI workload; MSS greatly simplifies this task. Large enterprise example: due to the work involved with migrating an FC HBA to a new cabinet in the SD, the customer declined to do so and missed a good opportunity to balance the I/O across cabinets and achieve better I/O distribution and performance. MSS and 11i v3 would have made the task much easier to do, and more likely for the customer to do.

49 Some Best Practices for Oracle ASM with HP StorageWorks
As a result of testing, a set of best practices for using Oracle ASM with HP StorageWorks on HP-UX servers is presented: • Configure ASM disk groups to use external redundancy. • When building a disk group or adding to an existing one, use disks of similar capacity and performance characteristics in the same disk group. • To leverage I/O distribution across as many resources as possible, it is best to present more than one LUN to a disk group (allowing ASM to do the striping). • Use HP Secure Path with ASM on HP-UX or MSS (Mass Storage Stack) on HP-UX because it complements the high availability and performance of the entire stack.

50 Some Best Practices for Oracle ASM with HP StorageWorks - continued
• Each device (LUN) should be managed either by Oracle ASM or by HP-UX LVM, but not both.
• Care should be taken not to inadvertently attempt to manage an ASM disk with a traditional volume manager, or vice versa.
• Configure async I/O (please consult the Oracle Administration Guide documentation).
• Use "insf" instead of "insf -e" to create the devices associated with new hardware, because "insf -e" will reset the ownership to user "bin".
