ASM Concepts and Architecture
Oracle Database 10g: ASM Overview Seminar 1-2
Objectives

After completing this lesson, you should be able to:
- Identify options for shared storage
- List the benefits of ASM
- Describe the ASM background processes
- Describe the ASM architecture
Shared Storage Technologies
Storage is a critical component of grid computing.
- Fundamentals of storage sharing
- New technology trends
- Supported shared storage for Oracle grids: Network Attached Storage, Storage Area Network
- Supported file storage for Oracle grids: raw volumes, cluster file system, ASM

Shared Storage Technologies
Storage is a critical component of any grid solution. Traditionally, storage has been directly attached to each individual server (DAS). Over the past few years, more flexible storage, accessible over storage area networks or regular Ethernet networks, has become popular. These new storage options enable multiple servers to access the same set of disks, simplifying the provisioning of storage in any distributed environment.

The Storage Area Network (SAN) represents the evolution of data storage technology to this point. Traditionally, on client/server systems, data was stored on devices either inside or directly attached to the server. Next in the evolutionary scale came Network Attached Storage (NAS), which took storage devices away from the server and connected them directly to the network. SANs take the principle a step further by allowing storage devices to exist on their own separate networks and communicate directly with each other over very fast media. Users gain access to these storage devices through server systems that are connected to both the local area network (LAN) and the SAN.

The choice of file system is critical for RAC deployment. Traditional file systems do not support simultaneous mounting by more than one system. Therefore, you must store files either on raw volumes without any file system, or on a file system that supports concurrent access by multiple systems.
Options for Shared Storage
Cluster file systems:
- Simpler management
- Use of OMF with RAC
- Single Oracle software installation
- Autoextend
- Uniform accessibility to archive logs in case of physical node failure

Raw devices:
- Performance
- Use when CFS is not available
- Cannot be used for archive log files

Raw or CFS?
You can either use a cluster file system or place files on raw devices. Cluster file systems provide the following advantages:
- Greatly simplified installation and administration of RAC
- Use of Oracle Managed Files with RAC
- A single Oracle software installation
- Autoextend enabled on Oracle data files
- Uniform accessibility to archive logs in case of physical node failure

Raw device implications:
- Raw devices are always used when CFS is not available or not supported by Oracle.
- Raw devices offer the best performance, with no intermediate layer between Oracle and the disk.
- Autoextend fails on raw devices if the space is exhausted.
- ASM, Logical Storage Managers, or Logical Volume Managers can ease the work with raw devices. You can also add space to a raw device online, or create raw device names that make the usage of a device clear to the system administrators.
Oracle Cluster File System
OCFS:
- Is a shared disk cluster file system for Linux and Windows
- Improves management of data for RAC by eliminating the need to manage raw devices
- Is an open source, free cluster file system solution
- Can be downloaded from the Oracle Technology Network (Linux and Windows)

Oracle Cluster File System
Oracle Cluster File System (OCFS) is a shared file system designed specifically for Oracle Real Application Clusters. OCFS eliminates the requirement that Oracle database files be linked to logical drives, and enables all nodes to share a single Oracle Home (on Windows 2000 and 2003 only) instead of requiring each node to have its own local copy. OCFS volumes can span one shared disk or multiple shared disks for redundancy and performance enhancements.

The following files can be placed on OCFS:
- Oracle software installation: Currently, this configuration is supported only on Windows 2000 and Windows 2003. The next major version will provide support for Oracle Home on Linux as well.
- Oracle files (control files, data files, redo logs, BFILEs, and so on)
- Shared configuration files (spfile)
- Files created by Oracle during run time
- Voting and OCR files

OCFS is free for developers and customers. The source code is provided under the General Public License (GPL) on Linux. It can be downloaded from the Oracle Technology Network (OTN) Web site.

Note: From OTN, you can specifically download OCFS for Linux. However, when you download the database software for Windows, OCFS is already included.
Automatic Storage Management
ASM:
- Is a portable, high-performance cluster file system
- Manages Oracle database files
- Spreads the data across disks to balance the load
- Provides integrated mirroring across disks
- Solves many storage management challenges
- Works with single-instance or RAC databases

(Slide figure: a stack in which ASM replaces the file system and volume manager layers between the database and the operating system, beneath the application.)

Automatic Storage Management: Review
ASM provides a vertical integration of the file system and the volume manager that is specifically built for Oracle database files. ASM can provide management for single SMP machines, or across multiple nodes of a cluster for Oracle RAC support.

ASM distributes the I/O load across all available resources to optimize performance while removing the need for manual I/O tuning. ASM helps DBAs manage a dynamic database environment by allowing them to increase the database size without having to shut down the database to adjust the storage allocation.

ASM can maintain redundant copies of data to provide fault tolerance, or it can be built on top of vendor-supplied reliable storage mechanisms. Data management is done by selecting the desired reliability and performance characteristics for classes of data, rather than with human interaction on a per-file basis.

ASM capabilities save DBAs' time by automating manual storage tasks, thereby increasing their ability to manage larger databases, and more of them, with increased efficiency.

Note: ASM is the strategic and stated direction for storing Oracle database files. However, OCFS will continue to be developed and supported for those who are using it.
ASM: Key Features and Benefits
ASM:
- Stripes files rather than logical volumes
- Performs online disk reconfiguration and dynamic rebalancing
- Provides adjustable rebalancing speed
- Provides redundancy on a per-file basis
- Supports only Oracle database files
- Is a database cluster file system with the performance of raw I/O, usable on all storage platforms
- Provides automatic database file management
- Eliminates hot spots and manual I/O tuning

ASM: Key Features and Benefits
ASM divides files into allocation units (AUs) and spreads each file evenly across all the disks, using an index technique to track the placement of each AU. Traditional striping techniques use mathematical functions to stripe complete logical volumes. When your storage capacity changes, ASM does not restripe all of the data; it moves only an amount of data proportional to the amount of storage added or removed to evenly redistribute the files and maintain a balanced I/O load across the disks. This is done while the database is active. You can adjust the speed of a rebalance operation to increase its speed or to lower the impact on the I/O subsystem.

ASM includes mirroring protection without the need to purchase a third-party LVM. One unique advantage of ASM is that mirroring is applied on a file basis rather than on a volume basis. Therefore, the same disk group can contain a combination of files protected by mirroring and files not protected at all.

ASM supports data files, log files, control files, archive logs, RMAN backup sets, and other Oracle database file types. ASM supports RAC and eliminates the need for a cluster LVM or a CFS.
Nonclustered ASM Architecture
(Slide figure: a database instance (SID=sales) with DBW0, ASMB, RBAL, and foreground processes, connected to an ASM instance (SID=asm) with RBAL, ARB0, and GMON processes that manages two ASM disk groups of ASM disks.)

Nonclustered ASM Architecture
To use ASM, you must start a special instance, called an ASM instance, before you start your database instance. ASM instances do not mount databases; instead, they manage the metadata needed to make ASM files available to ordinary database instances. Both ASM instances and database instances have access to a common set of disks called disk groups. Database instances access the contents of ASM files directly, communicating with an ASM instance only to get information about the layout of these files.

An ASM instance contains three new types of background processes in addition to many of the common background processes found in an RDBMS instance. The first type, called RBAL, is responsible for coordinating rebalance activity for disk groups. The second type actually performs the data AU movements; there can be many of these at a time, and they are called ARB0, ARB1, and so on. The third type, the disk group monitor process GMON, is responsible for certain disk group monitoring operations that maintain ASM metadata inside disk groups.
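As a sketch of how such an ASM instance might be configured, the following initialization parameters are typical. The disk group names and discovery path are illustrative assumptions, not values from the seminar:

```
# init+ASM.ora -- illustrative ASM instance parameter file (sketch)
INSTANCE_TYPE   = ASM                  # marks this as an ASM instance
ASM_DISKSTRING  = '/dev/raw/raw*'      # where to discover candidate disks (assumed path)
ASM_DISKGROUPS  = DGROUP1, DGROUP2     # disk groups to mount at startup (assumed names)
ASM_POWER_LIMIT = 1                    # default rebalance speed
```

Note that an ASM instance set up this way has no database to mount; STARTUP simply mounts the listed disk groups.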
Clustered ASM Architecture
(Slide figure: two nodes, each running an ASM instance (+ASM1 on Node1, +ASM2 on Node2) and two database instances (sales and test). Each database instance has DBW0, ASMB, RBAL, and foreground processes; each ASM instance has RBAL, GMON, and ARB0 through ARBA processes. Group Services on each node maps the databases tom, bob, and harry to the local ASM instance, and all instances share the ASM disk groups Tom, Bob, and Harry.)

Clustered ASM Architecture
Clustered ASM has an ASM instance on each node of the cluster, which serves the RDBMS instances on that node. Each database instance using ASM has two new background processes called ASMB and RBAL. RBAL performs global opens of the disks in the disk groups. At database instance startup, ASMB connects as a foreground process to the ASM instance. Communication between the database and the ASM instance is performed through this bridge. This includes physical file changes such as data file creation and deletion. Over this connection, periodic messages are exchanged to update statistics and to verify that both instances are healthy.
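To see which database instances are currently connected to an ASM instance over this bridge, you can query the V$ASM_CLIENT view from the ASM instance. A minimal sketch, assuming a local instance named +ASM1:

```
export ORACLE_SID=+ASM1
sqlplus -S / as sysdba <<'EOF'
-- List the database instances served by this ASM instance
SELECT group_number, instance_name, db_name, status
FROM   v$asm_client;
EOF
```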
Database Storage Consolidation
Database storage consolidation:
- Allows shared storage across several databases
- RAC and single-instance databases can use the same ASM instance
- Benefits: simplified and centralized management, higher storage utilization, higher performance

(Slide figure: separate disk pools for Payroll (10 x 50 GB) and GL (10 x 50 GB) consolidated into a single shared pool for Payroll and GL (10 x 100 GB).)

Database Storage Consolidation
In Oracle Database 10g, Release 2, Oracle Clusterware does not require an Oracle RAC license. Oracle Clusterware is now available with ASM and single-instance Oracle Database 10g, providing support for a shared clustered pool of storage for RAC and single-instance Oracle databases. With this, you can optimize your storage utilization by eliminating wasted, overprovisioned storage. This is illustrated in the slide where, instead of having various pools of disks for different databases, you consolidate them all into a single pool shared by all your databases. By optimizing storage utilization, you can reduce the number of LUNs to manage by increasing their sizes, which, in turn, gives you higher storage utilization and higher performance.

Note: RAC and single-instance databases could not be managed by the same ASM instance in Oracle Database 10g, Release 1.
ASM Storage: Concepts

(Slide figure: a hierarchy diagram. On the database side: Database > Tablespace > Segment > Extent > Oracle data block, with each data file stored either as a file system file or raw device, or as an ASM file. On the ASM side: ASM disk group > ASM disk > allocation unit (AU) > physical block.)

ASM does not eliminate any preexisting database functionality. Existing databases are able to operate as they always have. You can create new files as ASM files and leave existing files to be administered in the old way, or you can eventually migrate them to ASM.

The diagram in the slide depicts the relationships that exist between the various storage components inside an ASM-enabled Oracle database. The left and middle sections of the diagram show the relationships that exist in previous releases. On the right are the new concepts introduced by ASM.

Database files can be stored as ASM files. At the top of the new hierarchy are ASM disk groups. Any single ASM file is contained in only one disk group. However, a disk group may contain files belonging to several databases, and a single database may use storage from multiple disk groups. One disk group is made up of multiple ASM disks, and each ASM disk belongs to only one disk group. ASM files are always spread across all the ASM disks in the disk group.

ASM disks are partitioned into allocation units (AUs) of one megabyte each. An allocation unit is the smallest contiguous disk space that ASM allocates. ASM does not allow an Oracle block to be split across allocation units.

Note: This graphic deals with only one type of ASM file: the data file. However, ASM can be used to store other database file types.
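As an illustration of a database file stored as an ASM file, a tablespace can be created directly in a disk group and Oracle Managed Files picks the file name. The disk group name DGROUP1 is an assumption for this sketch:

```
sqlplus -S / as sysdba <<'EOF'
-- The data file becomes an ASM file in disk group DGROUP1,
-- spread in 1 MB AUs across all disks of the group
CREATE TABLESPACE asm_demo DATAFILE '+DGROUP1' SIZE 100M;
EOF
```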
ASM Disk Groups

ASM disk groups:
- Are a pool of disks managed as a logical unit
- Are partitioned by ASM into uniformly sized units of the total disk space
- Spread each file evenly across all disks
- Use coarse- or fine-grained striping on the basis of the file type
- Are what you administer, rather than files

(Slide figure: an ASM instance managing a disk group.)

A disk group is a collection of disks managed as a logical unit. Storage is added to and removed from disk groups in units of ASM disks. Every ASM disk has an ASM disk name, which is a name common to all nodes in a cluster. The ASM disk name abstraction is required because different hosts can use different names to refer to the same disk.

ASM always spreads files evenly, in 1 MB allocation unit (AU) chunks, across all the disks in a disk group. This is called coarse striping. That way, ASM eliminates the need for manual disk tuning. However, disks in a disk group should have similar size and performance characteristics to obtain optimal I/O. For most installations, there is only a small number of disk groups: for instance, one disk group for a work area and one for a recovery area.

For files that require low latency, such as log files, ASM provides fine-grained (128 KB) striping. Fine striping stripes each AU, breaking up medium-sized I/O operations into multiple, smaller I/O operations that execute in parallel.

Although the number of files and disks increases, you have to manage only a constant number of disk groups. From a database perspective, disk groups can be specified as the default location for files created in the database.

Note: Each disk group is self-describing, containing its own file directory and disk directory.
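A minimal sketch of creating a disk group from discovered disks. The device paths are illustrative assumptions, and EXTERNAL REDUNDANCY defers mirroring to the storage hardware:

```
sqlplus -S / as sysdba <<'EOF'
-- Create a disk group from two candidate disks (paths are placeholders)
CREATE DISKGROUP dgroup1 EXTERNAL REDUNDANCY
  DISK '/dev/raw/raw1', '/dev/raw/raw2';
EOF
```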
Failure Group

(Slide figure: disk group A spread across three strings of disks, each behind its own controller: failure group 1 on Controller 1, failure group 2 on Controller 2, and failure group 3 on Controller 3. The numbered blocks show file AUs, such as 1, 7, and 13, repeated in each failure group.)

A failure group is a set of disks, inside a particular disk group, sharing a common resource whose failure needs to be tolerated. An example of a failure group is a string of SCSI disks connected to a common SCSI controller. A failure of the controller leads to all the disks on its SCSI bus becoming unavailable, although each of the individual disks is still functional.

What constitutes a failure group is site specific. It is largely based on the failure modes that a site is willing to tolerate. By default, ASM assigns each disk to its own failure group. When creating a disk group or adding a disk to a disk group, administrators may specify their own grouping of disks into failure groups. After failure groups are identified, ASM can optimize file layout to reduce the unavailability of data due to the failure of a shared resource.
Disk Group Mirroring

Disk group mirroring:
- Mirrors at the AU level
- Mixes primary and mirror AUs on each disk
- External redundancy: defers to hardware mirroring
- Normal redundancy: two-way mirroring, at least two failure groups
- High redundancy: three-way mirroring, at least three failure groups

ASM has three disk group types that support different levels of mirroring:
- External redundancy: Does not provide mirroring. Use an external-redundancy disk group if you use hardware mirroring, or if you can tolerate data loss as the result of a disk failure. Failure groups are not used with this type of disk group.
- Normal redundancy: Supports two-way mirroring.
- High redundancy: Provides triple mirroring.

ASM does not mirror disks; rather, it mirrors allocation units. As a result, you need only spare capacity in your disk group. When a disk fails, ASM automatically reconstructs the contents of the failed disk on the surviving disks in the disk group by reading the mirrored contents from the surviving disks. This spreads the I/O hit from a disk failure across several disks.

When ASM allocates the primary AU of a file to one disk in a disk group, it allocates a mirror copy of that AU to another disk in the disk group. Primary AUs on a given disk can have their mirror copies on one of several partner disks in the disk group. ASM ensures that a primary AU and its mirror copy never reside in the same failure group. If you define failure groups for your disk group, ASM can tolerate the simultaneous failure of multiple disks in a single failure group.
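A sketch of a normal-redundancy disk group with explicit failure groups; the disk group, failure group, and device names are illustrative assumptions:

```
sqlplus -S / as sysdba <<'EOF'
-- Two-way mirroring: each AU's primary and mirror copies are placed
-- in different failure groups (one group per assumed controller)
CREATE DISKGROUP dgroup2 NORMAL REDUNDANCY
  FAILGROUP controller1 DISK '/dev/raw/raw3', '/dev/raw/raw4'
  FAILGROUP controller2 DISK '/dev/raw/raw5', '/dev/raw/raw6';
EOF
```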
Disk Group Dynamic Rebalancing
Disk group dynamic rebalancing:
- Automatic online rebalance whenever the storage configuration changes
- Moves only data proportional to the storage added
- No need for manual I/O tuning
- Online migration to new storage
- Configurable load on the system using ASM_POWER_LIMIT

Disk Group Dynamic Rebalancing
With ASM, the rebalance process is very easy and happens without any intervention from the DBA or system administrator. ASM automatically rebalances a disk group whenever disks are added or dropped. Because it uses index techniques to spread AUs on the available disks, ASM does not need to restripe all of the data. Instead, it needs to move only an amount of data proportional to the amount of storage added or removed to evenly redistribute the files and maintain a balanced I/O load across the disks in a disk group.

Because the I/O is balanced whenever files are allocated and whenever the storage configuration changes, the DBA does not need to search for hot spots in a disk group and manually move data to restore a balanced I/O load.

It is more efficient to add or drop multiple disks at the same time so that they are rebalanced as a single operation. This avoids unnecessary movement of data. With this technique, it is easy to achieve online migration of your data: all you need to do is add the new disks in one operation and drop the old ones in another.

You can control how much load the rebalance operation puts on the system by setting the ASM_POWER_LIMIT initialization parameter. Its range of values is 0 through 11. The lower the number, the lighter the load; a higher setting puts more load on the system but finishes sooner. A setting of 0 places rebalance operations on hold. The default value is 1.
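The online-migration pattern described above can be sketched as a single statement, so that only one rebalance runs. The disk group and disk names are illustrative assumptions; the POWER clause overrides ASM_POWER_LIMIT for this operation only:

```
sqlplus -S / as sysdba <<'EOF'
-- Add the new disk and drop an old one in one operation,
-- rebalancing once at power 5 instead of the instance default
ALTER DISKGROUP dgroup1
  ADD  DISK '/dev/raw/raw7'
  DROP DISK dgroup1_0001
  REBALANCE POWER 5;
EOF
```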
Summary

In this lesson, you should have learned to:
- Identify options for shared storage
- List the benefits of ASM
- Describe the ASM background processes
- Describe the ASM architecture
Preparing for ASM
Objectives

After completing this lesson, you should be able to:
- Choose appropriate ASM disks
- Describe the purpose of ASMLIB
- Install ASMLIB
ASM Disks

ASM disks may use one of the following:
- Local raw devices
- Network Attached Storage
- Storage Area Networks
ASM on Local Raw Devices
To use ASM on local raw devices, you need to:
- Create disk partitions on local disks
- Map the raw devices to the file names
- Set permissions properly
Configuring Disks

Oracle recommends that you create a single whole-disk partition on each disk that you want to use. To ensure that the kernel is aware of the partition changes, run the partprobe command after partitioning all disks.

# /sbin/fdisk /dev/sdc
# /sbin/fdisk /dev/sdd
# /sbin/partprobe

Use either fdisk or parted to create a single whole-disk partition on each disk device that you want to use:

/sbin/fdisk /dev/sdc

The kernel may be unaware of the partition changes after partitioning the disks. The partprobe command searches for information about partitions and informs the kernel of partition table changes:

/sbin/partprobe
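An interactive fdisk session can also be scripted by feeding its single-letter commands on standard input. The sketch below is illustrative only: the device name is a placeholder, it must be run as root, and it destroys any existing partition table on that device:

```
# n = new partition, p = primary, 1 = partition number,
# two empty lines accept the default first/last cylinders,
# w = write the table and quit
printf 'n\np\n1\n\n\nw\n' | /sbin/fdisk /dev/sdc
/sbin/partprobe    # tell the kernel about the new partition table
```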
Disks and Partitions Disks are often partitioned before use. Disks and partitions are named. The name assigned: Depends on the kernel discovery order Can be set by Udev rules Partitions vary in access time depending on physical location. Inside partition Cylinder 0 Disk and Partitions A physical disk is often partitioned before it can be used as a storage device. On larger systems, the partition may take the entire disk. On smaller systems, there are often many partitions on a single disk. Some file systems and LVM tools manage the disk without prior partitioning. Linux assigns names of disks based on the hardware. SCSI drives are assigned /dev/sd* names as they are discovered when the system starts. SCSI::0 is assigned /dev/sda and SCSI::1 is assigned /dev/sdb. IDE disks are assigned /dev/hd* names starting with hda. The assignment of disk names can be seen in the dmesg listing. SATA disks are seen as SCSI devices. Note: Disk partitioning utilities start with cylinder 0, which is the outermost cylinder of the disk, and continue to cylinder n, which is the innermost. The first half of the disk has the highest performance, and the inner half the lowest. This is because of the manner in which the sectors are arranged on the disk platter. The outer cylinder can have a maximum transfer rate that is twice the transfer rate of the inner cylinder. Outside partition Oracle Database 10g: ASM Overview Seminar
Managing Partitions

Partition administration involves several operations: create, resize, copy, and remove. Various tools can be used: parted, fdisk, and lvm. To ensure that the kernel is aware of the partition changes, run the partprobe command after partitioning disks.

On Linux systems used for databases, there can be many physical disks, and each disk can hold one or more partitions. Each IDE disk can hold up to four primary partitions, or three primary partitions and up to 63 logical partitions. SCSI disks are limited to 15 partitions per disk. On large systems, the tendency is to place one or two partitions per disk, and then combine the partitions with RAID, ASM, or LVM into larger, more manageable logical volumes called logical units (LUNs).

You can use the partitioning tools fdisk or parted to create or remove a partition on a physical disk. You can use parted to resize and copy partitions as well. The partitioning tools usually cause an update to the partition information in the kernel. Sometimes, however, the kernel may be unaware of the partition changes after partitioning the disks. The partprobe command searches for information about partitions and informs the kernel of partition table changes:

/sbin/partprobe
Managing Partitions Using fdisk
You can use the basic fdisk commands to:
- Create new partitions with n
- Remove partitions with d
- Quit without writing with q
- Write the partition table and quit with w

Managing Partitions Using fdisk
You can use the partitioning tool fdisk to create or remove a partition on a physical disk. Other fdisk commands are described on the man page; within fdisk, the m (menu) command displays a list of commonly used fdisk commands, and x displays the expert commands.

Resizing a partition with fdisk is difficult. It requires several steps and another tool, such as dd, to move the data:
1. Create a new partition.
2. Transfer the data to the new partition.
3. Change any file, directory, or link that references the original partition, including /etc/fstab.

Any partitioning tool is potentially dangerous to your data. Make backups before changing your partition tables. With fdisk and other partitioning tools, it is important to plan the partition layout, including a strategy for growth.
Managing Partitions Using parted
You can use the basic parted commands to:
- Create a new partition table with mklabel
- Create new partitions with mkpart
- Create new partitions and a file system with mkpartfs
- Remove partitions with rm
- Resize partitions with resize
- Move partitions with move
- Quit with quit

Managing Partitions Using parted
You can use the partitioning tool parted to create a disk partition table, and to create or remove a partition on a physical disk. Other available parted commands are displayed with the help command, and detailed help on a command, including its syntax, is available with help <command>. parted also has a man page.

You can use parted to resize and copy partitions as well. The parted tool allows a partition to be resized larger or smaller for supported file systems (ext3 is not supported by parted at this time). To resize a partition, there must be free space on the disk immediately following the partition; otherwise, the partition has to be moved. When you resize a partition containing a supported file system, the file system is also resized to recognize the available space. parted does not resize a partition with an unsupported file system.

Partitions must be unmounted before parted will operate on them. Online partition tools exist that allow partition changes while the file system is mounted; they are not supported under Unbreakable Linux.
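The whole-disk partition from the previous slide can also be created non-interactively with parted's scripted mode. The device name is a placeholder, the percentage syntax assumes a reasonably recent parted, and the commands overwrite any existing partition table:

```
# -s runs without prompting: label the disk, then create one
# primary partition spanning the whole device
/sbin/parted -s /dev/sdc mklabel msdos
/sbin/parted -s /dev/sdc mkpart primary 0% 100%
/sbin/partprobe
```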
Raw Devices on Linux

Raw devices:
- Are deprecated in the 2.6 kernel
- Are replaced by O_DIRECT access to block devices
- Are character devices
- Require modification to Udev rules and permissions

Raw devices are character devices. In the past, only raw devices could bypass the OS buffer cache. However, raw I/O has been deprecated by the Linux community and replaced with block device I/O using the O_DIRECT flag. Oracle Database 10g, Release 2 bypasses the OS buffer cache by opening block devices with the O_DIRECT flag.

Because the 2.6 kernel uses the Udev system, devices are re-created on each reboot. The devices get default permissions that do not match the requirement that the Oracle software owner own the raw device. To use raw devices:

1. Bind the raw device to the block device with the raw command:

   # raw /dev/raw/raw1 /dev/sdj

   Alternatively, add the raw device to the /etc/sysconfig/rawdevices file so that the device is bound on every startup:

   /dev/raw/raw1 /dev/sdj

2. Set permissions in the /etc/udev/permissions.d/50-udev.permissions file to allow the oracle user to own the raw device:

   # raw devices
   raw/*:oracle:disk:0660
What Is Udev?

Linux kernel 2.6 includes Udev. The Udev subsystem:
- Creates devices at boot time for connected hardware
- Creates devices for hardware that produces a hot plug event
- Runs in user space
- Loads kernel modules as needed
- Is user configurable

The Udev subsystem is a dynamic method of creating /dev entries (devices) for connected hardware. Udev works with hotplug and the sysfs virtual file system. At startup, drivers compiled directly into the kernel create the objects they require in the sysfs file system; for drivers loaded as modules, the objects are created when the module is loaded. When the sysfs file system is mounted at /sys, these objects and the device attributes are available to Udev. When the modules are loaded, an event is issued to Udev; the Udev subsystem then creates the device name and assigns a major and minor number to the device. When devices are added after boot time, a hotplug event is generated that invokes Udev to configure the device. The Udev assignments can be viewed in the kernel ring buffer with dmesg.

The Udev subsystem is very flexible. It does not require that all of the millions of possible devices be created in the /dev directory; Udev creates the /dev entries and names as the events are processed. Because Udev can process these events in parallel, the order is not always the same. This behavior means that a device name can change at each startup, and removable devices can have a different device name each time they are inserted. Programs and users, however, expect a device name to point to the same piece of hardware all the time. This issue is handled by the Udev rule files, which provide a user-configurable method of persistent naming.
Configuring Udev

The Udev subsystem reads configuration files to:
- Load kernel modules (modprobe.conf)
- Set device names (rules)
- Set device access permissions (permissions)

Rules will be covered later, in the section on SAN.

The Udev subsystem itself is not responsible for module loading; module loading is determined by the modprobe.conf file. You can influence the order in which devices are named by changing the order in which the driver modules are loaded.

The Udev subsystem does not work with all modules and devices. For example, the loop module that creates the loopback devices is not automatically handled by Udev; you call the losetup utility to create loopback devices. Some modules are not yet fully compatible with Udev.

Udev-compatible modules create objects in sysfs. The Udev subsystem reads the sysfs directory for the device attributes, such as the serial number, label, and bus device number. These attributes may be used to determine the name for the device by setting rules.

The permissions for a device default to root:root 0600 if there is no valid rule or permissions entry in the configuration files. The raw devices used for database files should be owned by the Oracle software owner. You edit the permissions file so that Udev sets the permissions for the raw devices used for database files at each startup. You also use the Udev permission rules to set the permissions for devices used by ASM.

The Udev subsystem uses several configuration files. The files listed in parentheses in the slide are the defaults in the Enterprise Linux 4 Update 4 distribution.
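A sketch of the two kinds of entries involved. The file locations assume the Enterprise Linux 4 defaults named in the slide; the rule syntax varies across udev versions, and the serial number and device name are illustrative placeholders:

```
# /etc/udev/permissions.d/50-udev.permissions (excerpt)
# give the oracle user ownership of all raw devices at each startup
raw/*:oracle:disk:0660

# custom rule file in /etc/udev/rules.d/ (excerpt)
# name a disk persistently by a sysfs attribute (value is illustrative)
BUS=="scsi", SYSFS{serial}=="3600508b4000156d7", NAME="asmdisk1"
```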
ASM on NAS To use ASM on NAS, you must: Use a certified NAS storage device Use zero padded files in the NFS mounted directory Map raw devices to the file names Ensure permissions are set properly Oracle Database 10g: ASM Overview Seminar
ASM on NAS Create an exported directory for the disk group files on the NAS device: As root, create mount point on the local system. Add the entry in /etc/fstab. Mount the NFS file system on the local system. Create a directory for the files on the NFS file system using the disk group name as the directory name: # mkdir -p /mnt/oracleasm # mount /mnt/oracleasm # mkdir /mnt/oracleasm/nfsdg Oracle Database 10g: ASM Overview Seminar
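The second step above, "Add the entry in /etc/fstab," is not shown on the slide. A hypothetical fstab line for the NAS export might look like the following; the server name, export path, and mount options are illustrative, so use the NFS mount options your NAS vendor and Oracle certify for database files:

```
# hypothetical /etc/fstab entry for the ASM-on-NAS mount point
nas01:/vol/oracleasm  /mnt/oracleasm  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600  0 0
```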
ASM on NAS Create as many zero padded files as needed in the directory. # dd if=/dev/zero of=/mnt/oracleasm/nfsdg/disk1 bs=1024k count=1000 # dd if=/dev/zero of=/mnt/oracleasm/nfsdg/disk2 bs=1024k count=1000 # dd if=/dev/zero of=/mnt/oracleasm/nfsdg/disk3 bs=1024k count=1000 # dd if=/dev/zero of=/mnt/oracleasm/nfsdg/disk4 bs=1024k count=1000 Oracle Database 10g: ASM Overview Seminar
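The four dd commands differ only in the file name, so they can be generated with a loop. This sketch writes 1 MB files into a scratch directory so it is safe to run anywhere; for a real disk group you would set DIR=/mnt/oracleasm/nfsdg and COUNT=1000, matching the slide:

```shell
#!/bin/sh
# Create zero-padded candidate files for an ASM disk group in a loop.
# DIR and COUNT are scaled down here so the sketch runs anywhere.
DIR=${TMPDIR:-/tmp}/asm_nfs_demo    # real use: /mnt/oracleasm/nfsdg
COUNT=1                             # real use: 1000 (= 1000 MB per file)
mkdir -p "$DIR"
for n in 1 2 3 4; do
    dd if=/dev/zero of="$DIR/disk$n" bs=1024k count=$COUNT 2>/dev/null
done
ls -l "$DIR"
```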
ASM on NAS Change the owner of the directory and files just created: Change the permissions of the directory and files just created: Set the ASM Discovery string to an expression matching the file names of the zero padded files. # chown -R oracle:dba /mnt/oracleasm # chmod -R 660 /mnt/oracleasm # asm_diskstring = /mnt/oracleasm/nfsdg/disk* Oracle Database 10g: ASM Overview Seminar
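A quick way to sanity-check the mode settings is with stat. The sketch below applies the slide's 660 mode to sample files in a scratch directory (the chown step needs root, so it is omitted here); point DIR at /mnt/oracleasm/nfsdg to check real disk group files:

```shell
#!/bin/sh
# Apply and verify the 660 mode from the slide on sample files.
DIR=${TMPDIR:-/tmp}/asm_perm_demo   # real use: /mnt/oracleasm/nfsdg
mkdir -p "$DIR"
touch "$DIR/disk1" "$DIR/disk2"
chmod 660 "$DIR"/disk*              # rw for owner and group only
stat -c '%n %a' "$DIR"/disk*        # GNU stat: prints "<file> 660"
```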
ASM on SAN To use ASM on SAN devices, you: Must set up devices on SAN Need to use UDEV to maintain device name Oracle Database 10g: ASM Overview Seminar
Initializing SAN Storage for iSCSI
To initialize the storage, disable or enable iSCSI Target from the management interface Or, execute service iscsi-target restart as root: View the contents of the /etc/ietd.conf file. Edit the initiators.deny and initiators.allow files. ~]# service iscsi-target restart ~]# cat /etc/ietd.conf Target iqn com.oracle.us:cg1.ocr Lun 0 Path=/dev/cg3/ocr,Type=fileio ... ~]# cat /etc/initiators.deny iqn com.oracle.us:cg3.ocr ALL iqn com.oracle.us:cg3.vote ALL ... Initializing the Storage When the disks on the Openfiler server have been configured, stop and restart the iscsi-target service. This can be done with the Openfiler management interface or the command line as shown in the slide. When this has been done, the iSCSI process, ietd, will create a configuration file called /etc/ietd.conf containing identifiers for the volumes that were created previously. Access to the volumes is controlled by the /etc/initiators.deny and /etc/initiators.allow files. By default, access to newly created volumes is denied to everyone. Edit the initiators.deny and initiators.allow files to reflect your security policy. Oracle Database 10g: ASM Overview Seminar
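To confirm which volumes the iSCSI target is exporting, you can list the Target lines from the ietd.conf file. The sketch below generates a sample file with hypothetical IQNs so it is self-contained; on the storage server you would run the same grep against /etc/ietd.conf itself:

```shell
#!/bin/sh
# List target IQNs from an ietd.conf-style file.
CONF=${TMPDIR:-/tmp}/ietd_demo.conf
cat > "$CONF" <<'EOF'
Target iqn.2006-01.com.example:cg1.ocr
        Lun 0 Path=/dev/cg1/ocr,Type=fileio
Target iqn.2006-01.com.example:cg1.vote
        Lun 0 Path=/dev/cg1/vote,Type=fileio
EOF
grep '^Target' "$CONF" | awk '{print $2}'
```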
Accessing the Shared Storage
Make sure that the iscsi-initiator-utils RPM is loaded: Edit the /etc/iscsi.conf file to add the discovery entry. Make sure that the iscsi service is started at system boot: Start the iscsi service. ~]# rpm -qa|grep iscsi ~]# vi /etc/iscsi.conf DiscoveryAddress=ed-dnfiler06b.us.oracle.com ~]# chkconfig --add iscsi ~]# chkconfig iscsi on Accessing the Shared Storage Any host needing to access the iSCSI devices must have a software- or hardware-based iSCSI initiator. The workshop RAC nodes employ the open source iscsi-initiator-utils RPM: ~]# rpm -qa|grep iscsi iscsi-initiator-utils Make sure that the nodes can access the storage before proceeding. Execute the service iscsi start command to initialize the service. To add the iscsi service to the system startup scripts, execute the following commands: ~]# chkconfig --add iscsi ~]# chkconfig iscsi on Next, start the iscsi service with the following command: ~]# service iscsi start Checking iscsi config: [ OK ] Loading iscsi driver: [ OK ] Starting iscsid: [ OK ] Oracle Database 10g: ASM Overview Seminar
Accessing the Shared Storage
Check to see that the volumes are accessible with iscsi-ls and dmesg. ~]# iscsi-ls ************************************************************* SFNet iSCSI Driver Version ...4: (02-May-2006) TARGET NAME : iqn com.oracle.us:cg1.ocr TARGET ALIAS : HOST ID : 24 BUS ID : 0 TARGET ID : 0 TARGET ADDRESS : :3260,1 SESSION STATUS : ESTABLISHED AT Thu Nov 23 10:07:20 EST 2006 SESSION ID : ISID 00023d TSIH 600 Accessing the Shared Storage (continued) View the /var/log/messages file or use the dmesg command to view the assigned device nodes for the iSCSI volumes. Dec 10 19:42:58 ed-otraclin11b iscsi: iscsid startup succeeded Dec 10 19:42:58 ed-otraclin11b iscsid[7636]: Connected to Discovery Address Dec 10 19:42:59 ed-otraclin11b kernel: SCSI device sdd: byte hdwr sectors (1074 MB) Dec 10 19:42:58 ed-otraclin11b kernel: Attached scsi disk sdc at scsi10, channel 0, id 0, lun 0 ... Dec 10 19:42:59 ed-otraclin11b kernel: SCSI device sde: byte hdwr sectors (12583 MB) Dec 10 19:42:59 ed-otraclin11b kernel: SCSI device sdf: byte hdwr sectors (1577 MB) Alternatively, you may view the contents of the /sys/block directory. sys]# ls -la /sys/block/sdd drwxr-xr-x 4 root root 0 Dec 10 19:42 . drwxr-xr-x 33 root root 0 Dec 10 19:43 .. -r--r--r-- 1 root root 4096 Dec 10 19:42 dev Oracle Database 10g: ASM Overview Seminar
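The dmesg output above can be reduced to just the kernel-assigned device names with a small sed filter. Sample log lines (with an invented host name) are embedded here so the sketch runs anywhere; on a real node you would pipe dmesg into the same sed command:

```shell
#!/bin/sh
# Extract the sd names of newly attached iSCSI disks from dmesg-style output.
sed -n 's/.*Attached scsi disk \(sd[a-z]\).*/\1/p' <<'EOF'
Dec 10 19:42:58 node1 kernel: Attached scsi disk sdc at scsi10, channel 0, id 0, lun 0
Dec 10 19:42:59 node1 kernel: Attached scsi disk sdd at scsi11, channel 0, id 0, lun 0
EOF
```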
Partitioning the iSCSI Disk
Use the fdisk utility to create iSCSI slices within the iSCSI volumes. These device names are not persistent across reboots. Partitioning the iSCSI Disk The fdisk utility is used to partition the iSCSI volumes into usable slices. In the example in the slide, the OCR volume is mapped to /sys/block/sdd. The disk is partitioned into two slices, one for OCR and the other for the mirror file. After the volume has been sliced, a relisting of the /sys/block/sdd directory shows this: sys]# ls -la /sys/block/sdd ... -r--r--r-- 1 root root 4096 Dec 10 19:42 dev lrwxrwxrwx 1 root root 0 Dec 10 19:42 device -> ../../devices/platform/host10/target10:0:0/10:0:0:0 drwxr-xr-x 3 root root 0 Dec 10 19:42 queue -r--r--r-- 1 root root 4096 Dec 10 19:42 range -r--r--r-- 1 root root 4096 Dec 10 19:42 removable drwxr-xr-x 2 root root 0 Dec 10 19:43 sdd1 drwxr-xr-x 2 root root 0 Dec 10 19:43 sdd2 -r--r--r-- 1 root root 4096 Dec 10 19:42 size -r--r--r-- 1 root root 4096 Dec 10 19:42 stat Note that the two slices now show up as sdd1 and sdd2. There are corresponding device nodes created in /dev also. However, problems arise if the node is rebooted. You find that /dev/sdd no longer contains the OCR slices. The device names are not persistent. Oracle Database 10g: ASM Overview Seminar
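fdisk's interactive prompts can be scripted with a here-document. Because partitioning is destructive, the sketch below only prints the answer sequence (two primary partitions; the 512 MB size is illustrative); as root on a real node you would pipe it into fdisk /dev/sdd instead of cat:

```shell
#!/bin/sh
# Answer sequence for fdisk: two primary partitions on one disk.
# n=new, p=primary, partition number, first/last cylinder, w=write.
cat <<'EOF'
n
p
1

+512M
n
p
2


w
EOF
```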
Udev Basics Udev simplifies device management for cold and hot plug devices. Udev uses the hot plug events sent by the kernel, whenever a device is added or removed from the system. Details about newly added devices are exported to /sys. Udev manages device entries in /dev by monitoring /sys. Udev is a standard package in Red Hat 4.0. The primary benefit that Udev provides for Oracle RAC environments is persistent: Disk device naming Device ownership and permissions Udev Basics In Linux 2.6, a new feature was introduced to simplify device management and hot plug capabilities. This feature is called Udev and is a standard package in Red Hat 4.0. Udev is a user space utility for dynamic device node creation. A device node is an entry in the /dev directory. The primary benefit that Udev provides for Oracle environments is persistent disk device naming, and persistent device ownership and permissions. This persistence ensures that disk devices used in your RAC environment can be found even after SCSI reconfigurations. Red Hat 4.0 ships with Udev Version .50, which by default manages device nodes for both hot and cold plug devices. Udev uses the hot plug events sent by the kernel, whenever a device is added or removed from the system. The details about the newly added devices are exported by the kernel to the Sysfs file system, which is a new file system in the 2.6 kernel. It is managed by the kernel and exports basic information about the devices currently plugged into your system. Udev manages the /dev directory by monitoring the /sys directory. By leveraging the information in sysfs, Udev can determine the devices that need to have a /dev entry created and the corresponding device names to be used. After Udev is notified that a device is added, it uses the scsi_id call to retrieve and generate a unique SCSI identifier. The scsi_id call queries an SCSI device via the scsi inquiry command and leverages the vital product data (VPD) page 0x80 or 0x83. 
The output from this query is used to generate a value that is unique across all SCSI devices. Oracle Database 10g: ASM Overview Seminar
Udev Configuration Udev behavior is controlled by /etc/udev/udev.conf. Important parameters include: udev_root sets the location where Udev creates device nodes. (/dev is the default.) default_mode controls the permissions of the device nodes. default_owner sets the user ID of the files. default_group sets the group ID of the files. udev_rules sets the directory for the Udev rules files. (/etc/udev/udev.rules is the default.) udev_permissions sets the directory for permissions. (/etc/udev/udev.permissions is the default.) Udev Configuration The main Udev configuration file, /etc/udev/udev.conf, controls the directory locations for the Udev permission and rules files, the Udev database, and the default location where the Udev device nodes are created. After Udev is started and the device nodes are created, Udev applies the ownership and permissions of the device node to those defined in the udev.permissions directory. This is done every time that Udev is reloaded. The main Udev configuration file is essentially used to keep file attributes persistent across reboots. This file should contain an entry for the disks that will be used for ASM, OCR, and Voting disks. Either specify individual disks or use wildcards. Oracle Database 10g: ASM Overview Seminar
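Putting the parameters above together, a udev.conf might look like the following fragment (the values shown are illustrative defaults, not Oracle-mandated settings):

```
# hypothetical /etc/udev/udev.conf
udev_root="/dev/"
udev_rules="/etc/udev/udev.rules"
udev_permissions="/etc/udev/udev.permissions"
default_mode="0600"
default_owner="root"
default_group="root"
```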
Udev Rules Parameters Common parameters for NAME, SYMLINK, and PROGRAM: %n is the kernel number for the device; for example, the kernel number of sda2 is 2. %k is the kernel name for the device, for example, sda. %M is the kernel major number for the device. %m is the kernel minor number for the device. %b is the bus ID for the device. %p is the path for the device. %c is the string returned by the external program defined by PROGRAM. %s{attribute} is the content of a sysfs (/sys) attribute. Udev Rules Parameters The Udev rules file, found in the udev_rules directory, is used to drive the naming scheme for device nodes. Udev passes in a device node and applies the appropriate rule. The output is a meaningful and consistent device name. Every line in the rules files defines how a specific device attribute is mapped to a device file. If all the keys that are specified in a rule match the device that was found, the specified device file is created. The following is an example of the Udev rule used to map disk devices. KERNEL="dm-[0-9]*", PROGRAM="/sbin/devmap_name %M %m", NAME="%k", SYMLINK="%c" The “%M” and “%m” represent the major and minor numbers respectively; “%k” represents the kernel-assigned device name, such as sda. The “%c” stands for the output string to be generated from the specified program. The program will output the persistent name for that device and create a symbolic link in the /dev directory, as indicated in the SYMLINK keyword. Oracle Database 10g: ASM Overview Seminar
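As a second illustration of the substitution parameters, a hypothetical rule for naming disk partitions persistently by bus ID might read as follows (the device pattern and link names are invented for this example; production rules should key on stable attributes such as the scsi_id output):

```
# hypothetical entry in /etc/udev/udev.rules
BUS="scsi", KERNEL="sda[0-9]*", NAME="%k", SYMLINK="asmdisk/%b_part%n"
```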
Multipathing and Device Mapper
Multipathing tools aggregate independent device paths into a single logical path. Multipathing is an important aspect of high-availability configurations. RHEL4 incorporates a tool, called Device Mapper (DM), to manage multipathed devices. The DM is dependent on the following packages: device-mapper, udev, device-mapper-multipath The /etc/init.d/multipathd start command initializes the Device Mapper. Multipathing and Device Mapper An I/O path generally consists of an initiator port, fabric port, target port, and logical unit (LUN). Each permutation of this I/O path is considered an independent path. Dynamic multipathing and failover tools aggregate these independent paths into a single logical path. This path virtualization provides I/O load-balancing across the host bus adapters (HBAs), as well as nondisruptive failover on HBA failures. Multipathing is an important aspect of high-availability configurations. Linux 2.6 offers a Path Manager utility called Device Mapper (DM). The DM uses pseudodevices with the “dm-” prefix in the /dev directory. It automatically maps paths to drives, using the drive’s unique World Wide ID (WWID). DM is dependent on the device-mapper, udev, and the device-mapper-multipath packages. DM multipathing can be initialized, using the following command: # /etc/init.d/multipathd start It is recommended that the DM be integrated into the boot sequence using the following commands: # chkconfig --add multipathd # chkconfig multipathd on Oracle Database 10g: ASM Overview Seminar
Configuring Multipath
multipaths { multipath { wwid 14f70656e66696c d54 alias ocr path_grouping_policy multibus path_checker readsector0 path_selector "round-robin 0" failback manual no_path_retry } ... Configuring Multipath The /sbin/multipath command invokes the DM using the settings defined in the /etc/multipath.conf file, which can be edited using the format shown in the example in the slide. The /etc/multipath.conf file should contain the entries for all the required multipathed devices. After the /etc/multipath.conf file is edited, run /sbin/multipath –v2 to pick up the new configuration and create the new multipath device nodes with the following names: /dev/dm-N, and /dev/mapper/wwidN or /dev/mapper/your_alias. Note that the example in the slide defines a multipath for a block device with an alias of ocr. The wwid attribute can be determined using the /sbin/multipath –ll command: ~]# multipath -ll mpath1 (14f70656e66696c a f000000) [size=35 GB][features="0"][hwhandler="0"] \_ round-robin 0 [prio=1][active] \_ 24:0:0:0 sdb 8:16 [active][ready] Priority groups are the central components of the DM. Paths are grouped into an ordered list of Priority Groups. Each Priority Group has a Path Selector, which selects the Priority Group’s paths that are to be used for each I/O request. If an I/O request returns an error, then the path that is in use is disabled and an alternative path is tried. If all the paths in a Priority Group fail, then another Priority Group is selected. Oracle Database 10g: ASM Overview Seminar
Device Mapper Devices DM devices are created as /dev/dm-n. DM maps only whole drives. If a drive has multiple partitions, the device mapping of each partition is handled by kpartx. If the device is partitioned, the partitions will appear as: /dev/mapper/mpathNpN /dev/mapper/<alias>pN /dev/mapper/<WWID>pN OCR and Voting disks should use /dev/dm-N or /dev/mapper/<alias>pN path formats. # cat /etc/udev/rules.d/40-multipath.rules KERNEL="dm-[0-9]*", PROGRAM="/sbin/mpath_get_name %M %m", \ RESULT="?*", NAME="%k", SYMLINK="mpath/%c" KERNEL="dm-[0-9]*", PROGRAM="/sbin/kpartx_get_name %M %m", \ RESULT="?*", NAME="%k", SYMLINK="mpath/%c" Device Mapper Devices DM devices use a lexical naming scheme: they are named /dev/dm-0, /dev/dm-1, /dev/dm-2, and so on. The DM handles only whole drives. If a drive has multiple partitions, the device mapping of each partition is handled by kpartx. For example, suppose a system has one drive with two paths: sdc and sdg. The drive has two partitions: sdc1/sdg1 and sdc2/sdg2. The DM and kpartx create the following devices: /dev/dm-0: sdc, sdg /dev/dm-1: sdc1, sdg1 /dev/dm-2: sdc2, sdg2 DM should be integrated with Udev. A Udev rule can be written so that persistent names are always created. If the /etc/multipath.conf alias for a device is specified, then the /sbin/dmsetup command will return that value. If the alias attribute is not specified, then the script will return the WWID of the device. The return value will be used to create the persistent path in /dev/mapper, where N is the partition number. Because of this, Oracle Clusterware can leverage Udev and DM for its OCR and Voting disks by using the dmsetup generated paths. It is recommended that the OCR and Voting disks use /dev/dm-N or /dev/mapper/alias path formats. Oracle Database 10g: ASM Overview Seminar
ASMLibs An ASMLib is a storage management interface between the Oracle kernel and disk storage. You can load multiple ASMLibs. Purpose-built drivers can provide: Device discovery More efficient I/O interface Increased performance and reliability Oracle freely delivers an ASMLib on Linux. Several participating storage vendors such as HP and others are joining this initiative. ASMLibs ASMLib is a support library for the ASM feature. The objective of ASMLib is to provide a more streamlined and efficient mechanism for identifying and accessing block devices used by the ASM disk groups. This API serves as an alternative to the standard operating system interface. The ASMLib kernel driver is released under the GNU General Public License (GPL), and Oracle Corporation freely delivers an ASMLib for Linux platforms. This library is provided to enable ASM I/O to Linux disks without the limitations of the standard UNIX I/O API. The main ASMLib functions are grouped into three collections of functions: Device discovery functions must be implemented in any ASMLib. Discover strings usually contain a prefix identifying the ASMLib that this discover string is intended for. For the Linux ASMLib provided by Oracle, the prefix is ORCL:. I/O processing functions extend the operating system interface, and provide an optimized asynchronous interface for scheduling I/O operations and managing I/O operation completion events. These functions are implemented as a device driver within the operating system kernel. The performance and reliability functions use the I/O processing control structures for passing metadata between the Oracle database and the back-end storage devices. They enable additional intelligence on the part of back-end storage. Note: The database can load multiple ASMLibs, each handling different disks. Oracle Database 10g: ASM Overview Seminar
Oracle Linux ASMLib Installation: Overview
1. Install the ASMLib packages on each node: Install from OTN or set up Oracle channel for yum or up2date. Install oracleasm-support, oracleasmlib, and kernel-related packages. 2. Configure ASMLib on each node: Load ASM driver and mount ASM driver file system. Use the oracleasm script with configure option. 3. Make disks available to ASMLib by marking disks using oracleasm createdisk on one node. 4. Make sure that disks are visible on other nodes using oracleasm scandisks. 5. Use appropriate discovery strings for this ASMLib. Oracle Linux ASMLib Installation: Overview You can download the Oracle ASMLib software from the Oracle Technology Network (OTN) Web site, or set up your Oracle channel for using yum or up2date. There are three packages for each Linux platform. The two essential packages are the oracleasmlib package, which provides the actual ASM library, and the oracleasm-support package, which provides the utilities to configure and enable the ASM driver. The remaining package provides the kernel driver for the ASMLib. After the ASMLib software is installed, you need to make the ASM driver available by executing the /etc/init.d/oracleasm configure command. This operation creates the /dev/oracleasm mount point used by the ASMLib to communicate with the ASM driver. When using RAC, installation and configuration must be completed on all nodes of the cluster. To place a disk under ASM management, it must first be marked to prevent inadvertent use of incorrect disks by ASM. This is accomplished by using the /etc/init.d/oracleasm createdisk command. With RAC, this operation needs to be performed only on one node because it is a shared-disk architecture. However, the other nodes in the cluster must ensure that the disk is seen and valid. Therefore, the other nodes in the cluster must execute the /etc/init.d/oracleasm scandisks command. After the disks are marked, the ASM initialization parameter can be set to appropriate values. Oracle Database 10g: ASM Overview Seminar
Oracle Linux ASMLib Installation
Install the packages as the root user: Run oracleasm with the configure option: Provide oracle UID as the driver owner. Provide dba GID as the group of the driver. Load the driver at system startup. # rpm -Uvh oracleasm* Preparing… ##################### [100%] 1: oracleasm-support ##################### [ 33%] 2: oracleasm EL ##################### [ 67%] 3: oracleasmlib ##################### [100%] # /etc/init.d/oracleasm configure Oracle Linux ASMLib Installation If you do not use the ASM library driver, you must bind each disk device that you want to use to a raw device. To install and configure the ASM library driver and utilities, perform the following steps: 1. Enter the following command to determine the kernel version and architecture of the system: # uname -rm 2. If necessary, download the required ASM library driver packages from the OTN Web site. You must download the following three packages, where version is the version of the ASM library driver, arch is the system architecture, and kernel is the kernel version you are using: oracleasm-support-version.arch.rpm oracleasm-kernel-version.arch.rpm oracleasmlib-version.arch.rpm 3. Install the proper packages for your platform. For example, if you are using the Red Hat Enterprise Linux AS 3.0 enterprise kernel, enter a command similar to the following: # rpm -i oracleasm-support i386.rpm \ oracleasm enterprise i686.rpm \ oracleasmlib i386.rpm # /etc/init.d/oracleasm enable Oracle Database 10g: ASM Overview Seminar
ASM Library Disk Creation
Identify the device name for the disks that you want to use with the fdisk -l command. Create a single whole-disk partition on the disk device with fdisk. Enter a command similar to the following to mark the shared disk as an ASM disk: To make the disk available on the other nodes, enter the following as root on each node: Set the ASM_DISKSTRING parameter. /etc/init.d/oracleasm createdisk disk1 /dev/sdb1 # /etc/init.d/oracleasm scandisks ASM Library Disk Creation To configure the disk devices that you want to use in an ASM disk group, complete the following steps: 1. If necessary, install the shared disks that you intend to use for the disk group and restart the system. 2. To identify the device name for the disks that you want to use, enter the following command: # /sbin/fdisk -l 3. Using fdisk, create a single whole-disk partition on the device that you want to use. 4. Enter a command similar to the following to mark a disk as an ASM disk: # /etc/init.d/oracleasm createdisk disk1 /dev/sdb1 In this example, disk1 is the tag or name that you want to assign to the disk. 5. To make the disk available on other cluster nodes, enter the following command as root on each node: # /etc/init.d/oracleasm scandisks This command identifies all the shared disks attached to the node that are marked as ASM disks. Oracle Database 10g: ASM Overview Seminar
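The final step, setting the ASM_DISKSTRING parameter, ties the marked disks to the ASM instance. With the Oracle Linux ASMLib, discovery strings use the ORCL: prefix described earlier, so a hypothetical entry in the ASM instance parameter file might be:

```
# hypothetical ASM instance parameter; disk* matches the tags
# assigned with oracleasm createdisk (disk1, disk2, ...)
ASM_DISKSTRING = 'ORCL:disk*'
```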
Summary In this lesson, you should have learned to: Choose appropriate ASM disks Describe the purpose of ASMLib Install ASMLib Oracle Database 10g: ASM Overview Seminar
ASM Installation and Configuration
Objectives After completing this lesson, you should be able to: Install ASM Create additional ASM instances List the configuration parameters for ASM Oracle Database 10g: ASM Overview Seminar
ASM Installation: Best Practices
Install ASM in a separate ORACLE_HOME from the database ORACLE_HOME: Provides higher availability and manageability. Allows independent upgrades of the database and ASM. Deinstallation of the database software can be performed without impacting the ASM instance. Create one ASM instance per node. ASM Installation: Best Practices Install ASM in a separate ORACLE_HOME from the database ORACLE_HOME(s). Creating the ASM instance from the same ORACLE_HOME as the database creates a single point of failure. A separate ORACLE_HOME enables independent upgrades of the ASM and the database. The database ORACLE_HOMEs can be deinstalled without affecting the ASM instance. Create only one ASM instance per node. The ASM instance will manage storage for all databases on the node. Oracle Database 10g: ASM Overview Seminar
Installing Automatic Storage Management
$ id oracle $ /cdrom/database/runInstaller Installing ASM For this installation, use ASM to manage the shared storage for the cluster database. After Oracle Clusterware is installed, use the database installation CD to run Oracle Universal Installer (OUI) and install ASM as the oracle user. $ id oracle $ cd /cdrom/database $ ./runInstaller When the Welcome screen appears, click the Next button to continue. Oracle Database 10g: ASM Overview Seminar
Installation Type Installation Type When the Installation Type screen appears, select your installation type by clicking the Enterprise Edition option button. Click the Next button to proceed. Oracle Database 10g: ASM Overview Seminar
Specify Home Details Specify Home Details The next screen that appears is the Specify Home Details screen. Here you specify the location of your ASM home directory and the installation name. Although it is possible for ASM and the database installation to reside in the same directory and use the same files, you are installing ASM separately, into its own ORACLE_HOME, to prevent the database ORACLE_HOME from being a point of failure for the ASM disk groups, and to prevent versioning difficulties between the ASM and the database file installations. Be sure to specify a name for your installation that reflects this. When you have finished, click the Next button to continue. Oracle Database 10g: ASM Overview Seminar
Hardware Cluster Installation Mode
When the Specify Hardware Cluster Installation Mode screen appears, click the Cluster Installation option button. Next, ensure that all nodes in your cluster are selected by clicking the Select All button. If OUI does not display the nodes properly, perform clusterware diagnostics by executing the olsnodes -v command from the ORA_CRS_HOME/bin directory, and analyze its output. Alternatively, you may use the cluvfy utility to troubleshoot your environment. Refer to your documentation if the detailed output indicates that your clusterware is not running properly. When this is done, click the Next button to continue. Oracle Database 10g: ASM Overview Seminar
Product-Specific Prerequisite Checks
The Product-Specific Prerequisite Checks screen verifies the operating system requirements that must be met for the installation to be successful. After each successful check, the Succeeded check box is selected for that test. The test suite results are displayed at the bottom of the screen. Any tests that fail are also reported here. The example in the slide shows the results of a completely successful test suite. If you encounter any failures, try opening another terminal window and correct the deficiency from another terminal window. Then return to OUI, and click the Retry button to rerun the tests. It is possible to bypass the errors that are flagged by selecting the check box next to the error, but this is not recommended unless you are absolutely sure that the reported error will not affect the installation. When all tests have succeeded, click the Next button to continue. Oracle Database 10g: ASM Overview Seminar
Select Configuration Option
You can choose from the following options on the Select Configuration Option screen: Install database software and create a database Configure ASM Install database software only (no database creation) This installation is concerned only with installing and configuring ASM, so click the Configure Automatic Storage Management button and provide the password for the ASM SYS user. When you have done this, click the Next button to proceed. Oracle Database 10g: ASM Overview Seminar
Configure ASM Configure ASM The Configure Automatic Storage Management screen appears next. You need to provide at least one disk group name. Next, select the appropriate level of redundancy for your ASM disk group. The actual storage area available for the disk group depends on the redundancy level chosen. The greater the redundancy level chosen, the larger is the amount of disk space required for the management overhead. For disk groups that use external redundancy, ASM employs no mirroring. For normal-redundancy disk groups, ASM uses two-way file mirroring. For high-redundancy disk groups, ASM employs three-way file mirroring. After you have chosen the disk group redundancy, select the files that will make up your new disk group. If the files you intend to use are not listed, click Change Disk Discovery Path and select the proper directory. When you have finished, click the Next button to continue. Oracle Database 10g: ASM Overview Seminar
Summary Summary The Summary screen appears next. You may scan the installation tree to verify your choices if you like. When you have finished, click the Install button to proceed. Oracle Database 10g: ASM Overview Seminar
Installation Progress
After clicking the Install button, you can monitor the progress of the installation on the Install screen. After installing the files and linking the executables on the first node, the installer copies the installation to the remaining nodes. When the installation progress reaches 100 percent, OUI prompts you to execute configuration scripts on all nodes. Oracle Database 10g: ASM Overview Seminar
Execute Configuration Scripts
The next screen that appears prompts you to run the root.sh script on the specified nodes. Open a terminal window for each node listed and run the root.sh script as the root user from the specified directory. When you have finished, return to the “Execute Configuration scripts” window and click the OK button to continue. Oracle Database 10g: ASM Overview Seminar
End of ASM Installation
When the installation is finished, the End of Installation screen appears. The window displays the iSQL*Plus URLs. Make note of these URLs, if you intend to use iSQL*Plus, and click the Exit button to quit. Oracle Database 10g: ASM Overview Seminar
DBCA and Storage Options
To support ASM as a storage option, this screen appears in the Database Configuration Assistant (DBCA) when creating a database. This allows you to choose the storage options: file system, ASM, or raw devices. Oracle Database 10g: ASM Overview Seminar
Creating an ASM Instance
Use the DBCA to create the ASM instance. Select Configure ASM. Run the $ORACLE_HOME/bin/localconfig add command when prompted: Default parameters for the initialization parameter file are sufficient for a single instance database. Create two disk groups: Database Flash recovery area $ su – # /u01/app/oracle/product/10.2.0/db_1/bin/localconfig add Creating an ASM Instance An ASM instance manages ASM disk groups. Before starting or creating a database instance, which uses ASM to manage its disks, the ASM instance must be running. When you choose ASM as a database storage mechanism in the DBCA, an ASM instance is created and started. You then assign disks to disk groups within the DBCA. If you create multiple single-instance databases on a node, a single ASM instance can handle all the disk groups for the databases on the node. It is recommended that you use ASM for database files and the flash recovery area. Separate disk groups may be created for each. Oracle Database 10g: ASM Overview Seminar
65
Creating an ASM Instance
Creating an ASM Instance (continued) You create an ASM instance by running the DBCA. On the first screen, choose the Configure ASM option and follow the steps. The ASM instance is created and started. Then, you are guided through the process of defining disk groups for the instance. As part of the ASM instance creation process, the DBCA automatically creates an entry in the oratab file. This entry is used for discovery purposes. On the Windows platform, where a services mechanism is used, the DBCA automatically creates an Oracle Service and the appropriate registry entry to facilitate the discovery of ASM instances. When an ASM instance is configured, the DBCA creates an ASM instance parameter file and an ASM instance password file. If you first create an ASM-enabled database, the DBCA determines whether an ASM instance already exists on your host; if ASM instance discovery returns an empty list, the DBCA creates a new ASM instance. Oracle Database 10g: ASM Overview Seminar
66
RAC and ASM Instances Creation
RAC and ASM Instances Creation When using the DBCA to create ASM instances on your cluster, you follow the same steps as for a single-instance environment. The only exceptions are the first and third steps: you must select the Oracle RAC database option in the first step, and then select all nodes of your cluster in the third. The DBCA automatically creates one ASM instance on each selected node. The first instance is called +ASM1, the second +ASM2, and so on. To create an ASM instance or manage ASM disk groups in DBCA silent mode, you can use the following syntax: dbca -silent -nodeList nodelist -configureASM -asmSysPassword asm_pwd [-diskString disk_discovery_string] [-diskList disk_list] [-diskGroupName dgname] [-redundancy redundancy_option] [-recoveryDiskList recov_disk_list] [-recoveryGroupName recovery_dgname] [-recoveryGroupRedundancy redundancy_option] [-emConfiguration CENTRAL|NONE] [-centralAgent agent_home] where the parameters are self-explanatory. Oracle Database 10g: ASM Overview Seminar
67
ASM and ASMLib Installation and Configuration Summary
Download and install the ASMLib RPMs. Run oracleasm configure. Run oracleasm enable. Create a single whole-disk partition on each disk. Run partprobe to ensure that the kernel is aware of the new partitions. Mark disks as ASM disks. Run oracleasm scandisks to make the disks available. Create the ASM instance using the DBCA. Oracle Database 10g: ASM Overview Seminar
68
Demonstrations Install ASM in its own home directory in a RAC environment. Note: You can always replay these demonstrations from: Oracle Database 10g: ASM Overview Seminar
69
Summary In this lesson, you should have learned how to: Install ASM Create additional ASM instances List the configuration parameters for ASM Oracle Database 10g: ASM Overview Seminar
70
Administering ASM
71
Objectives After completing this lesson, you should be able to: Administer ASM instances Use ASM initialization parameters Administer disk groups Administer failgroups Migrate a database to ASM Oracle Database 10g: ASM Overview Seminar
72
ASM Instance Initialization Parameters
INSTANCE_TYPE = ASM DB_UNIQUE_NAME = +ASM ASM_POWER_LIMIT = 1 ASM_DISKSTRING = '/dev/rdsk/*s2', '/dev/rdsk/c1*' ASM_DISKGROUPS = dgroupA, dgroupB LARGE_POOL_SIZE = 8MB ASM Instance Initialization Parameters INSTANCE_TYPE should be set to ASM for ASM instances. DB_UNIQUE_NAME specifies the name of the service provider for which this ASM instance manages disk groups. The default value of +ASM must be modified only if you run multiple ASM instances on the same node. ASM_POWER_LIMIT controls the speed of a rebalance operation. Values range from 1 through 11, with 11 being the fastest. If omitted, this value defaults to 1. The number of slaves is derived from the parallelization level specified in a manual rebalance command (POWER), or by the ASM_POWER_LIMIT parameter. ASM_DISKSTRING is an operating system–dependent value used by ASM to limit the set of disks considered for discovery. ASM_DISKGROUPS is the list of names of disk groups to be mounted by an ASM instance at startup, or when the ALTER DISKGROUP ALL MOUNT command is used. The INSTANCE_TYPE parameter is the only parameter that you must define. All other ASM parameters have default values that are suitable for most environments. Note: If the ASM environment has been created using the command line instead of Enterprise Manager, then the disk groups must be created before they can be mounted. Oracle Database 10g: ASM Overview Seminar
73
Database Instance Parameters
… INSTANCE_TYPE = RDBMS LOG_ARCHIVE_FORMAT DB_BLOCK_SIZE DB_CREATE_ONLINE_LOG_DEST_n DB_CREATE_FILE_DEST DB_RECOVERY_FILE_DEST CONTROL_FILES LOG_ARCHIVE_DEST_n LOG_ARCHIVE_DEST STANDBY_ARCHIVE_DEST LARGE_POOL_SIZE = 8MB Database Instance Parameters INSTANCE_TYPE defaults to RDBMS and specifies that this instance is an RDBMS instance. LOG_ARCHIVE_FORMAT is ignored if LOG_ARCHIVE_DEST is set to an incomplete ASM file name, such as +dGroupA. If LOG_ARCHIVE_DEST is set to an ASM directory (for example, +dGroupA/myarchlogdir/), then LOG_ARCHIVE_FORMAT is used and the files are non-OMF. Unique file names for archived logs are automatically created by the Oracle database. The following parameters accept the multifile creation context form of ASM file names as a destination: DB_CREATE_ONLINE_LOG_DEST_n DB_CREATE_FILE_DEST DB_RECOVERY_FILE_DEST CONTROL_FILES LOG_ARCHIVE_DEST_n LOG_ARCHIVE_DEST STANDBY_ARCHIVE_DEST Note: Because allocation unit maps for ASM files are allocated from the large pool, you must set the LARGE_POOL_SIZE initialization parameter to at least 8 MB, preferably higher. Oracle Database 10g: ASM Overview Seminar
74
Starting Up an ASM Instance
$ export ORACLE_SID='+ASM' $ sqlplus /nolog SQL> CONNECT / AS sysdba Connected to an idle instance. SQL> STARTUP; ASM instance started Total System Global Area bytes Fixed Size bytes Variable Size bytes Database Buffers bytes Redo Buffers bytes ASM diskgroups mounted Starting Up an ASM Instance ASM instances are started in the same way as database instances except that the initialization parameter file contains the entry INSTANCE_TYPE=ASM. When this parameter is set to the ASM value, it informs the Oracle executable that an ASM instance is starting, not a database instance. Also, the ORACLE_SID variable must be set to the ASM instance name. When the ASM instance starts up, the mount stage attempts to mount the disk groups specified by the ASM_DISKGROUPS initialization parameter rather than mounting a database, as is done with non-ASM instances. Other STARTUP clauses have comparable interpretations for ASM instances as for database instances: OPEN is invalid for an ASM instance, and NOMOUNT starts up the ASM instance without mounting any disk group. Oracle Database 10g: ASM Overview Seminar
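After startup, a quick sanity check from the same SQL*Plus session can confirm the instance type and which disk groups were mounted. This is a sketch against the standard 10g fixed views; the output depends on your configuration:

```sql
-- Confirm that you are connected to the ASM instance
SELECT instance_name, status FROM v$instance;

-- List the disk groups mounted at startup (per ASM_DISKGROUPS)
SELECT name, state, type, total_mb, free_mb
FROM   v$asm_diskgroup;
```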
75
Accessing an ASM Instance
AS SYSDBA: all operations. AS SYSOPER: nondestructive operations only. Accessing an ASM Instance ASM instances do not have a data dictionary; therefore, the only way to connect to one is by using OS authentication, that is, as SYSDBA or SYSOPER. To connect remotely, a password file must be used. Normally, the SYSDBA privilege is granted through the use of an operating system group. On UNIX, this is typically the dba group. By default, members of the dba group have the SYSDBA privilege on all instances on the node, including the ASM instance. Users who connect to the ASM instance with the SYSDBA privilege have administrative access to all disk groups in the system. The SYSOPER privilege is supported in ASM instances and limits the set of allowable SQL commands to the minimum required for basic operation of an already configured system. Oracle Database 10g: ASM Overview Seminar
76
Shutting Down an ASM Instance
Shutting Down an ASM Instance When you attempt to shut down an ASM instance in NORMAL, IMMEDIATE, or TRANSACTIONAL mode, the shutdown succeeds only if there are no database instances connected to the ASM instance. If there is at least one connected instance, you receive the following error: ORA-15097: cannot SHUTDOWN ASM instance with connected RDBMS instance. If you perform a SHUTDOWN ABORT on the ASM instance, it shuts down, but it requires recovery at the time of the next startup. Any connected database instances also eventually shut down, reporting the following error: ORA-15064: communication failure with ASM instance. In a single ASM instance configuration, if the ASM instance fails while disk groups are open for update, then after the ASM instance reinitializes, it reads the disk group's log and recovers all transient changes. With multiple ASM instances sharing disk groups, if one ASM instance fails, another ASM instance automatically recovers transient ASM metadata changes caused by the failed instance. The failure of a database instance does not affect ASM instances. The ASM instance should be started automatically whenever the host is rebooted; it is expected to use the automatic startup mechanism supported by the underlying operating system. Note that a file system failure usually crashes a node. Oracle Database 10g: ASM Overview Seminar
77
Managing Disk Groups CREATE DISKGROUP, ALTER DISKGROUP, and DROP DISKGROUP Managing Disk Groups The main goal of an ASM instance is to manage disk groups and protect their data. ASM instances also communicate the file layout to database instances, so that database instances can directly access files stored in disk groups. There are several new disk group administrative commands. They all require the SYSDBA privilege and must be issued from an ASM instance. You can add new disk groups. You can also modify existing disk groups to add new disks, remove existing ones, and perform many other operations. Oracle Database 10g: ASM Overview Seminar
78
Creating and Dropping Disk Groups
CREATE DISKGROUP dgroupA NORMAL REDUNDANCY FAILGROUP controller1 DISK '/devices/A1' NAME diskA1 SIZE 120G FORCE, '/devices/A2', '/devices/A3' FAILGROUP controller2 DISK '/devices/B1', '/devices/B2', '/devices/B3'; Creating and Dropping Disk Groups Assume that ASM disk discovery identified the following disks in the /devices directory: A1, A2, A3, B1, B2, and B3. Also, assume that disks A1, A2, and A3 are on a separate SCSI controller from disks B1, B2, and B3. The first example in the slide illustrates how to configure a disk group called DGROUPA with two failure groups: CONTROLLER1 and CONTROLLER2. The example also uses the default redundancy characteristic, NORMAL REDUNDANCY, for the disk group. You can optionally provide a disk name and size for each disk. If you do not supply this information, ASM creates a default name and attempts to determine the size of the disk. If the size cannot be determined, an error is returned. FORCE indicates that a specified disk should be added to the specified disk group even though the disk is already formatted as a member of an ASM disk group. Using the FORCE option for a disk that is not formatted as a member of an ASM disk group returns an error. As shown by the second statement in the slide, you can delete a disk group along with all of its files. If the disk group contains any files besides internal ASM metadata, the INCLUDING CONTENTS option must be specified to avoid accidental deletions. Furthermore, the disk group must be mounted before it can be dropped. After ensuring that none of the disk group files is open, ASM removes the group and its drives, and then overwrites the header of each disk to eliminate the ASM formatting information. DROP DISKGROUP dgroupA INCLUDING CONTENTS; Oracle Database 10g: ASM Overview Seminar
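Before running CREATE DISKGROUP, you can verify which disks ASM discovery has found and which are still available for use. A sketch using the V$ASM_DISK fixed view; the HEADER_STATUS values shown are the standard ones for assignable disks:

```sql
-- Disks that can be assigned to a new disk group
SELECT path, header_status, os_mb
FROM   v$asm_disk
WHERE  header_status IN ('CANDIDATE', 'FORMER', 'PROVISIONED');
```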
79
Adding Disks to Disk Groups
ALTER DISKGROUP dgroupA ADD DISK '/dev/rdsk/c0t4d0s2' NAME A5, '/dev/rdsk/c0t5d0s2' NAME A6, '/dev/rdsk/c0t6d0s2' NAME A7, '/dev/rdsk/c0t7d0s2' NAME A8; ALTER DISKGROUP dgroupA ADD DISK '/devices/A*'; Adding Disks to Disk Groups These examples show how to add disks to a disk group with the ALTER DISKGROUP ADD DISK command. The first statement adds four new disks to the DGROUPA disk group. The second statement demonstrates the interaction with discovery strings. Consider the following configuration: /devices/A1 is a member of disk group DGROUPA. /devices/A2 is a member of disk group DGROUPA. /devices/A3 is a member of disk group DGROUPA. /devices/A4 is a candidate disk. The second command adds A4 to the DGROUPA disk group. It ignores the other disks, even though they match the discovery string, because they are already part of the DGROUPA disk group. When you add a disk to a disk group, the ASM instance ensures that the disk is addressable and usable. The disk is then formatted and the disk group is rebalanced. The rebalance process is time consuming because it moves allocation units (AUs) from every file onto the new disk. Oracle Database 10g: ASM Overview Seminar
80
Miscellaneous ALTER Commands
Remove a disk from dgroupA: Add and drop a disk in a single command: Cancel a disk drop operation: ALTER DISKGROUP dgroupA DROP DISK A5; ALTER DISKGROUP dgroupA DROP DISK A6 ADD FAILGROUP fred DISK '/dev/rdsk/c0t8d0s2' NAME A9; ALTER DISKGROUP dgroupA UNDROP DISKS; Miscellaneous ALTER Commands The first statement in the slide shows how to remove one of the disks from the DGROUPA disk group. The second statement shows how you can add and drop a disk in a single command. The advantage in this case is that rebalancing is not started until the command completes. The third statement shows how to cancel a disk drop operation. The UNDROP command operates only on pending drops of disks; it has no effect on drops that have completed. The following statement rebalances the DGROUPB disk group, if necessary: ALTER DISKGROUP dgroupB REBALANCE POWER 5; This command is generally not necessary because rebalancing is automatically done as disks are added, dropped, or resized. However, it is useful if you want to use the POWER clause to override the default speed defined by the initialization parameter ASM_POWER_LIMIT. You can change the power level of an ongoing rebalance operation by reentering the command with a new level. A power level of zero causes rebalancing to halt until the command is either implicitly or explicitly reinvoked. The following statement dismounts DGROUPA: ALTER DISKGROUP dgroupA DISMOUNT; The MOUNT and DISMOUNT options allow you to make one or more disk groups available or unavailable to the database instances. Oracle Database 10g: ASM Overview Seminar
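Because rebalancing runs asynchronously after these commands, you can track its progress from the ASM instance. A sketch using the V$ASM_OPERATION fixed view; the row disappears when the rebalance completes:

```sql
-- One row per ongoing disk group operation (for example, a rebalance)
SELECT group_number, operation, state, power, sofar, est_work, est_minutes
FROM   v$asm_operation;
```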
81
ASM Files CREATE TABLESPACE sample DATAFILE '+dgroupA'; Database file
ASM Files When you specify an ASM disk group as the data file name for a tablespace, ASM files are created in the disk group to provide storage for the tablespace. When an ASM file is created, certain file attributes are permanently set, among them its protection policy and its striping policy. ASM files are Oracle Managed Files: any file that is created by ASM is automatically deleted when it is no longer needed. With ASM, file operations are specified in terms of database objects. Administration of databases does not require knowledge of file names, though the names are exposed through some data dictionary views and by the ALTER DATABASE BACKUP CONTROLFILE TO TRACE command. Because each file in a disk group is physically spread across all disks in the disk group, backup of a single disk is not useful; database backups of ASM files must be made with RMAN. Note: ASM does not manage binaries, alert logs, trace files, or password files. (Diagram: the ASM file is automatically spread across the disks of disk group dgroupA.) Oracle Database 10g: ASM Overview Seminar
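Although you never choose the file names, the generated names are visible in the dictionary. A sketch of how you might inspect them, from the database instance and from the ASM instance respectively:

```sql
-- Database instance: fully qualified ASM file names created for tablespaces
SELECT file_name FROM dba_data_files;

-- ASM instance: every file stored in the mounted disk groups
SELECT group_number, file_number, type, bytes FROM v$asm_file;
```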
82
ASMCMD Utility SQL> CREATE TABLESPACE tbsasm DATAFILE '+DGROUP1' SIZE 100M; Tablespace created. SQL> CREATE TABLESPACE hrapps DATAFILE '+DGROUP1' SIZE 10M; $ asmcmd ASMCMD> ls -l DGROUP1/ORCL/DATAFILE Type Redund Striped Time Sys Name DATAFILE MIRROR COARSE OCT 05 21:00:00 Y HRAPPS DATAFILE MIRROR COARSE OCT 05 21:00:00 Y TBSASM ASMCMD> ASMCMD Utility ASMCMD is a command-line utility that you can use to easily view and manipulate files and directories within ASM disk groups. It can list the contents of disk groups, perform searches, create and remove directories, and display space utilization, among other things. Note: For more information about ASMCMD, see the Oracle Database Utilities documentation. Oracle Database 10g: ASM Overview Seminar
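The listing and space-utilization tasks can also be done interactively. A sketch of an ASMCMD session (lsdg, du, and find are documented ASMCMD subcommands; the parenthesized annotations are for the reader and are not ASMCMD syntax, and the output depends on your disk groups):

```
ASMCMD> lsdg                        (disk groups with total and free space)
ASMCMD> du DGROUP1/ORCL/DATAFILE    (space used under a directory)
ASMCMD> find + *HRAPPS*             (search for a file by name pattern)
```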
83
Migrating Your Database to ASM Storage
1. Shut down your database cleanly.
2. Modify your server parameter file to use OMF.
3. Edit and execute the following RMAN script:

STARTUP NOMOUNT;
RESTORE CONTROLFILE FROM '/u1/c1.ctl';
ALTER DATABASE MOUNT;
BACKUP AS COPY DATABASE FORMAT '+dgroup1';
SWITCH DATABASE TO COPY;
SQL "ALTER DATABASE RENAME FILE '/u1/log1' TO '+dgroup1' ";
# Repeat the RENAME command for all online redo log members
...
ALTER DATABASE OPEN RESETLOGS;
SQL "ALTER DATABASE TEMPFILE '/u1/temp1' DROP";

Migrating Your Database to ASM Storage Because ASM files cannot be accessed through normal operating system interfaces, RMAN is the only means for copying ASM files. Although a tablespace may contain both ASM files and non-ASM files because of its history, RMAN commands enable non-ASM files to be relocated to an ASM disk group. You can use the following procedure to relocate your entire database to an ASM disk group (it is assumed that you are using a server parameter file): 1. Obtain the file names of the current control files and online redo logs by using V$CONTROLFILE and V$LOGFILE. 2. Shut down the database cleanly. 3. Modify the server parameter file of your database as follows: Set the necessary OMF destination parameters to the desired ASM disk group. Remove the CONTROL_FILES parameter. Oracle Database 10g: ASM Overview Seminar
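The first step of the procedure, recording the current control file and redo log names, can be done with the views the notes mention. A simple sketch, run while the database is still open:

```sql
-- Current control files (V$CONTROLFILE) and online redo log members (V$LOGFILE)
SELECT name   FROM v$controlfile;
SELECT member FROM v$logfile;
```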
84
How Many ASM Disk Groups Per Database
Two disk groups are recommended: Leverage the maximum number of LUNs. Backups can be stored on one FRA disk group. Lower-performance storage (or inner tracks) may be used for the FRA. Exceptions: Additional disk groups for different capacity or performance characteristics. Different ILM storage tiers. (Diagram: ERP, CRM, and HR databases sharing a Data disk group and an FRA disk group.) How Many ASM Disk Groups Per Database Often, only two disk groups are enough to share the storage between multiple databases. This way you can maximize the number of LUNs used as ASM disks, which gives you the best performance, especially if these LUNs are carved on the outer edge of your disks. When you have a second disk group, you can use it as your common flash recovery area (FRA) for backups of your data. You can put the corresponding LUNs on the inner edge of your disks because less performance is necessary. The two noticeable exceptions to this rule are when you are using disks with different capacity or performance characteristics, or when you want to archive your data on lower-end disks for Information Lifecycle Management (ILM) purposes. Oracle Database 10g: ASM Overview Seminar
85
Disk Group: Best Practices
Create two disk groups: Database area Flash recovery area Create disk groups using large numbers of similar type disks: Same size characteristics Same performance characteristics Disk Group: Best Practices Create two disk groups: one for the database area and the other for the flash recovery area. Though more disk groups can be created, you will find it easier to manage two disk groups. ASM performs load balancing. Using disks with the same size and performance characteristics will assist ASM in properly balancing the files across the disks. Oracle Database 10g: ASM Overview Seminar
86
ASM Scalability ASM imposes the following limits: 63 disk groups 10,000 ASM disks 4 petabytes per ASM disk 40 exabytes of storage 1 million files per disk group Maximum file size: External redundancy: 35 TB Normal redundancy: 5.8 TB High redundancy: 3.9 TB ASM Scalability ASM imposes the following limits: 63 disk groups in a storage system 10,000 ASM disks in a storage system 4 petabytes maximum storage for each ASM disk 40 exabytes maximum storage for each storage system 1 million files for each disk group Maximum file sizes depending on the redundancy type of the disk group used: 35 TB for external redundancy, 5.8 TB for normal redundancy, and 3.9 TB for high redundancy Oracle Database 10g: ASM Overview Seminar
87
ASM Home Page ASM Home Page Enterprise Manager provides a user-friendly graphical interface to Oracle database management, administration, and monitoring tasks. Oracle Database 10g extends the existing functionality to transparently support the management, administration, and monitoring of Oracle databases that use ASM storage. It also adds support for the new management tasks required for administration of ASM instances and ASM disk groups. The ASM Home Page shows the status of the ASM instance along with the metrics and alerts generated by the collection mechanisms. It also provides the startup and shutdown functionality. Clicking the Alert link takes you to an alert details page. The Disk Group Usage chart shows the space used by each client database along with the free space. Oracle Database 10g: ASM Overview Seminar
88
ASM Performance Page ASM Performance Page The Performance tab of the ASM page shows the I/O response time and the throughput for each disk group. You can further drill down to view disk-level performance metrics. Oracle Database 10g: ASM Overview Seminar
89
ASM Configuration Page
Using the Configuration tab of the ASM page, you can view or modify the initialization parameters of the ASM instance. Oracle Database 10g: ASM Overview Seminar
90
ASM Administration Page
When you click the Administration tabbed page of the ASM page, you can see the disk groups listed in the V$ASM_DISKGROUP view. From here, you can create, edit, or drop a disk group. You can also perform disk group operations such as mount, dismount, rebalance, check, and repair on a selected disk group. Oracle Database 10g: ASM Overview Seminar
91
Create Disk Group Page Create Disk Group Page Clicking Create on the Disk Group Overview page displays the Create Disk Group page. You can enter the disk group name, redundancy mechanism, and the list of disks that you would like to include in the new disk group. The list of disks is obtained from the V$ASM_DISK fixed view. By default, only those disks that can be assigned to a disk group show up. Those are the ones with a status of one of the following: CANDIDATE: The disk has never been assigned to an ASM disk group. FORMER: The disk was once assigned to an ASM disk group, but is not now. PROVISIONED: ASMLib is being used, and this disk is not yet assigned to a disk group. Note: ASMLib is an API that interfaces with other vendors’ storage arrays. See the Database Administrator’s Guide documentation for more information about ASMLib. Oracle Database 10g: ASM Overview Seminar
92
Summary In this lesson, you should have learned to: Administer ASM instances Use ASM initialization parameters Administer disk groups Administer failgroups Migrate a database to ASM Oracle Database 10g: ASM Overview Seminar
93
Administering Clustered ASM
94
Objectives After completing this lesson, you should be able to: Administer ASM with RAC Use SRVCTL for clustered ASM Describe ASM recovery Oracle Database 10g: ASM Overview Seminar
95
Clustered ASM Consolidation
Allows shared storage across several databases. RAC and single-instance databases can use the same ASM instance. Benefits: Simplified and centralized management. Higher storage utilization. Higher performance. Database Storage Consolidation In Oracle Database 10g, Release 2, Oracle Clusterware does not require an Oracle RAC license. Oracle Clusterware is now available with ASM and single-instance Oracle Database 10g. It supports a shared clustered pool of storage for RAC and single-instance Oracle databases. This enables you to optimize your storage utilization by eliminating wasted overprovisioned storage. This is illustrated in the slide where, instead of having various pools of disks for different databases (for example, separate Payroll and GL pools), you consolidate all of that in a single pool shared by all your databases. By doing this, you can reduce the number of LUNs to manage by increasing their sizes, which gives you higher storage utilization as well as higher performance. Note: RAC and single-instance databases could not be managed by the same ASM instance in Oracle Database 10g, Release 1. (Diagram: two pools of 10 x 50 GB LUNs consolidated into one shared pool of 10 x 100 GB LUNs.) Oracle Database 10g: ASM Overview Seminar
96
ASM Instance Initialization Parameters and RAC
CLUSTER_DATABASE: This parameter must be set to TRUE. ASM_DISKGROUPS: Multiple instances can have different values. Shared disk groups must be mounted by each ASM instance. ASM_DISKSTRING: With shared disk groups, every instance should be able to see the common pool of physical disks. ASM_POWER_LIMIT: Multiple instances can have different values. ASM Instance Initialization Parameters and RAC To enable ASM instances to be clustered together in a RAC environment, each ASM instance initialization parameter file must set its CLUSTER_DATABASE parameter to TRUE. This enables the global cache services to be started on each ASM instance. Although it is possible for multiple ASM instances to have different values for their ASM_DISKGROUPS parameter, it is recommended that each ASM instance mount the same set of disk groups. This enables disk groups to be shared among ASM instances for recovery purposes. In addition, all disk groups used to store one RAC database must be shared by all ASM instances in the cluster. Consequently, if you are sharing disk groups among ASM instances, their ASM_DISKSTRING initialization parameters must point to the same set of physical media. However, this parameter does not need to have the same setting on each node. For example, assume that the physical disks of a disk group are mapped by the OS on node A as /dev/rdsk/c1t1d0s2, and on node B as /dev/rdsk/c2t1d0s2. Although the two nodes have different disk string settings, they locate the same devices via the OS mappings. This situation can occur when the hardware configurations of node A and node B are different (for example, when the nodes use different controllers, as in the example shown in the slide). ASM handles this situation because it inspects the contents of the disk header block to determine the disk group to which a disk belongs, rather than attempting to maintain a fixed list of path names. Oracle Database 10g: ASM Overview Seminar
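In a RAC configuration, you can compare these settings across all clustered ASM instances at once through the GV$ versions of the fixed views. A sketch; INST_ID identifies each instance:

```sql
-- Per-instance values of the RAC-relevant ASM parameters
SELECT inst_id, name, value
FROM   gv$parameter
WHERE  name IN ('cluster_database', 'asm_diskgroups', 'asm_diskstring');
```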
97
ASM Instance and Crash Recovery in RAC
ASM instance recovery: both instances mount the disk group, and the disk group is repaired by the surviving instance after an ASM instance failure. ASM crash recovery: only one instance mounts the disk group, and the disk group is repaired when it is next mounted after an ASM instance failure. ASM Instance and Crash Recovery in RAC Each disk group is self-describing, containing its own file directory, disk directory, and other data such as metadata logging information. ASM automatically protects its metadata by using mirroring techniques, even with external redundancy disk groups. With multiple ASM instances mounting the same disk groups, if one ASM instance fails, another ASM instance automatically recovers the transient ASM metadata changes caused by the failed instance. This situation is called ASM instance recovery, and it is automatically and immediately detected by the global cache services. With multiple ASM instances mounting different disk groups, or in the case of a single ASM instance configuration, if an ASM instance fails while ASM metadata is open for update, then the disk groups that are not currently mounted by any other ASM instance are not recovered until they are mounted again. When an ASM instance mounts a failed disk group, it reads the disk group log and recovers all the transient changes. This situation is called ASM crash recovery. Therefore, when using ASM clustered instances, it is recommended that all ASM instances always mount the same set of disk groups. However, it is possible to have a disk group on locally attached disks that are visible to only one node in a cluster, and to have that disk group mounted only on the node where the disks are attached. Note: The failure of an Oracle database instance is not significant here because only ASM instances update ASM metadata. Oracle Database 10g: ASM Overview Seminar
98
ASM and SRVCTL with RAC SRVCTL enables you to manage ASM from an Oracle Clusterware (OC) perspective: Add an ASM instance to OC. Enable an ASM instance for OC automatic restart. Start up an ASM instance. Shut down an ASM instance. Disable an ASM instance from OC automatic restart. Remove an ASM instance configuration from the OCR. Obtain the status of Clusterware-managed applications. Establish a dependency between an ASM instance and a database instance. Using the DBCA, you can create ASM instances, and also add and enable them with OC. ASM and SRVCTL with RAC You can use SRVCTL to perform the following ASM administration tasks: The ADD option adds OCR information about an ASM instance to run under Oracle Clusterware (OC). This option also enables the resource. The ENABLE option enables an ASM instance to run under OC for automatic startup or restart. The DISABLE option disables an ASM instance to prevent OC from automatically restarting the ASM instance in certain situations. DISABLE also prevents any startup of that ASM instance using SRVCTL. The START option starts an OC-enabled ASM instance. SRVCTL uses the SYSDBA connection to perform the operation. The STOP option stops an ASM instance by using the shutdown normal, transactional, immediate, or abort option. The CONFIG option displays the configuration information stored in the OCR for a particular ASM instance. The STATUS option obtains the current status of an ASM instance. The REMOVE option removes the configuration of an ASM instance. The MODIFY INSTANCE command can be used to establish a dependency between an ASM instance and a database instance. Note: Adding and enabling an ASM instance is automatically performed by the DBCA when creating the ASM instance. Oracle Database 10g: ASM Overview Seminar
99
ASM and SRVCTL with RAC: Examples
Start an ASM instance on the specified node: Stop an ASM instance on the specified node: Add OCR data about an existing ASM instance: Disable OC management of an ASM instance: $ srvctl start asm –n clusnode1 $ srvctl stop asm –n clusnode1 –o immediate $ srvctl add asm -n clusnode1 -i +ASM1 -o /ora/ora10 $ srvctl modify instance -d crm -i crm1 -s +asm1 $ srvctl disable asm –n clusnode1 –i +ASM1 ASM and SRVCTL with RAC (continued) Here are some examples: The first example starts up the only existing ASM instance on the CLUSNODE1 node. The –o option allows you to specify the mode in which you want to open the instance: open is the default, but you can also specify mount or nomount. The second example is an immediate shutdown of the only existing ASM instance on CLUSNODE1. The third example adds to the OCR the OC information for +ASM1 on CLUSNODE1. You need to specify the ORACLE_HOME of the instance. Although the following should not be necessary if you use the DBCA, if you manually create ASM instances, you should also create an OC dependency between database instances and ASM instances to ensure that the ASM instance starts up before the database instance, and to allow the database instances to be cleanly shut down before the ASM instances. To establish the dependency, you use a command similar to srvctl modify instance -d crm -i crm1 -s +asm1 for each corresponding instance. The fourth example prevents OC from automatically restarting +ASM1. Note: For more information, refer to the Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide. Oracle Database 10g: ASM Overview Seminar
100
ASM Disk Groups With EM in RAC
When you add a new disk group from an ASM instance, this disk group is not automatically mounted by the other ASM instances. If you want to mount the newly added disk group on all ASM instances, for example, by using SQL*Plus, then you need to manually mount the disk group on each ASM instance. However, if you are using Enterprise Manager (EM) to add a disk group, then the disk group definition includes a check box to indicate whether the disk group is automatically mounted by all the ASM clustered database instances. This is also true when you mount and dismount ASM disk groups by using Database Control, where you can use a check box to indicate the instances that mount or dismount the ASM disk group. Oracle Database 10g: ASM Overview Seminar
101
Disk Group Performance Page and RAC
On the ASM Performance page, click the Disk Group I/O Cumulative Statistics link in the Additional Monitoring Links section. On the Disk Group I/O Cumulative Statistics page, click the corresponding disk group name. A performance page appears displaying the clusterwide performance information for the corresponding disk group. By clicking one of the proposed links—for example, I/O Throughput in the slide—you can see an instance-level performance details graph as shown at the bottom of the slide. Oracle Database 10g: ASM Overview Seminar
102
Summary In this lesson, you should have learned to: Administer ASM with RAC Use SRVCTL for clustered ASM Understand ASM recovery Oracle Database 10g: ASM Overview Seminar