
Oracle10g RAC Service Architecture


1 Oracle10g RAC Service Architecture
Overview of Real Application Cluster Ready Services, Nodeapps, and User Defined Services

2 Overview
Service Architecture
Cluster Ready Services (CRS)
Nodeapps
User Defined Services
ASM Services
Internally Managed Services
Monitoring Services

3 RAC Service Architecture
Oracle 10g RAC features a service-based architecture. This is an improvement over 9i RAC in several ways: increased flexibility, increased manageability, improved high availability, and support for 10g Grid deployment. In general, Oracle 10g makes greater use of services to organize and manage Oracle functionality and the associated application workloads. This is especially true for Real Application Clusters. With a service-based architecture, Oracle RAC becomes more automated and self-managing than ever. In addition, a service-based architecture makes it possible to implement Oracle 10g Grids.

4 RAC Services and High Availability
Oracle services facilitate high availability of databases and related applications. If key database resources (network, storage, etc.) become unavailable, instances and services will be relocated to another node, and the "failed" node will be rebooted. By default, after any server boot-up, Oracle attempts to restart all services on the node. 10g RAC Cluster Ready Services make it much easier than in previous versions to implement a high-availability architecture. By default, Cluster Ready Services will start or restart automatically on any node when possible, without administrative intervention. If it is not possible to start or continue services on a node, whether because of resource limitations or outright loss of the node, all services will be cleanly relocated to another available node in the cluster.

5 Cluster Ready Services
Manage the RAC cluster. Four different services: OracleCRSService, OracleCSService, OracleEVMService, and OraFenceService. Required for RAC installation; installed in its own CRS_HOME. Cluster Ready Services require a separate install before the main Oracle software installation. CRS is installed into a separate home directory from the standard Oracle Home directory. Once CRS is installed and services are active, it is possible to perform a full Real Application Clusters software installation.

6 CRS Basics
Used to manage RAC. Only one set of CRS daemons per system. Multiple instances share the same CRS. CRS runs as both root and Oracle users. CRS must be running before RAC can start. It is important to note that multiple databases can be run from a single RAC installation. In the case of multiple RAC databases, multiple unrelated instances will run on a single node. Each database can be managed separately. However, all instances on a single node will share a common set of CRS daemons.

7 CRS Management Started automatically Can stop and start manually
Start the OracleCRSService. Stop the OracleCRSService. Uses the voting disk and the OCR (Oracle Cluster Registry). Requires three network addresses: public, private, and virtual (on the public subnet). CRS can be started and stopped manually by accessing the OracleCRSService in Services. However, this is usually not necessary, due to the automatic behavior of CRS. CRS utilizes two types of "quorum" files on shared disk: the voting disk and the Oracle Cluster Registry. These are usually small files (< 1 GB), and are usually stored on raw disk. CRS requires three separate networks: a public interface on each node, a private interface on each node, and a virtual interface on each node. The public interface and virtual interface may optionally be placed on separate networks, or on the same subnet. However, the private interface must be completely separate from the other networks. It is a best practice to place the private interconnect on separate switches from the other networks. The private interconnect implements Cache Fusion over a Gigabit Ethernet network. Large volumes of messages may be sent across the private network. Since the UDP protocol is used, the private interconnect is extremely sensitive to collisions. This is why the private interconnect network must be isolated.

8 CRS Services
OracleCRSService: Cluster Ready Services Daemon
OracleCSService: Oracle Cluster Synchronization Service Daemon
OracleEVMService: Event Manager Daemon
OraFenceService: Process Monitor

9 Cluster Ready Services Daemon OracleCRSService
Runs as Administrator user. Automatically restarted. Manages application resources: starts, stops, and fails over application resources. Maintains the OCR (Oracle Cluster Registry). Keeps state information in the OCR. The Cluster Ready Services Daemon is the only RAC daemon that runs with root (administrator) privileges. It is responsible for maintaining basic state information in the Oracle Cluster Registry. The CRSD is responsible for rebooting failed cluster nodes, among other responsibilities.

10 Oracle Cluster Synchronization Service Daemon OracleCSService
Runs as Administrator user. Maintains the heartbeat (failure causes system reboot). Provides node membership, group access, and basic cluster locking. Can integrate with 3rd-party clustering products or run standalone. OracleCSService also works with non-RAC systems. The Oracle Cluster Synchronization Service Daemon is responsible for maintaining the basic "heartbeat" of the cluster. Although the private interconnect is polled to determine system health, it is not the ultimate heartbeat mechanism. The "voting disk" on shared storage is used to hold current status information for each node. In this case, the heartbeat is transmitted over the Storage Area Network, usually implemented as the Fibre Channel protocol on fiber-optic cables. In addition to providing the heartbeat mechanism, OracleCSService is responsible for node membership, group access, and basic cluster locking.
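The voting-disk mechanism described above can be illustrated with a small sketch (purely hypothetical, not Oracle code): each node periodically writes a timestamp to shared storage, and a node whose entry goes stale beyond a timeout is presumed failed.

```python
# Illustrative stand-in for the voting disk: a shared structure where
# each node records its latest heartbeat timestamp.
voting_disk = {}

def write_heartbeat(node, now):
    voting_disk[node] = now

def failed_nodes(now, timeout=3.0):
    # A node whose last heartbeat is older than the timeout is presumed
    # dead and becomes a candidate for eviction (fencing/reboot).
    return [n for n, t in voting_disk.items() if now - t > timeout]

write_heartbeat("rac1", 100.0)
write_heartbeat("rac2", 100.0)
write_heartbeat("rac1", 104.0)      # rac2 has stopped updating
print(failed_nodes(now=104.5))      # ['rac2']
```

The timeout value and data layout here are invented for illustration; the real CSS heartbeat protocol and eviction thresholds are internal to Oracle.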

11 Event Manager Daemon OracleEVMService
Runs as Administrator user. Restarts on failure. Generates events. Starts the racgevt thread to invoke server callouts. The Event Manager Daemon is responsible for generating messages when significant events occur. In addition to internal use of event messages, event messages may be accessed for programmatic use by making use of server callouts in scripts and programs.
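A server callout receives the event as space-separated name=value arguments. The sketch below shows one way such a callout might parse its arguments; the payload fields shown (service/database/instance/host/status) are illustrative assumptions, not an authoritative event format.

```python
# Hedged sketch of a server-callout argument parser. The first token
# is the event type; the rest are name=value pairs.
def parse_fan_event(argv):
    event_type = argv[0]
    params = dict(p.split("=", 1) for p in argv[1:] if "=" in p)
    return event_type, params

etype, p = parse_fan_event(
    ["SERVICE", "VERSION=1.0", "service=erp", "database=prod",
     "instance=prod1", "host=rac1", "status=up"])
print(etype, p["service"], p["status"])   # SERVICE erp up
```

In a real callout script the arguments would come from `sys.argv[1:]` when Oracle invokes the script on an event.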

12 Process Monitor OraFenceService
Runs as Administrator user. Locked in memory to monitor the cluster. Provides I/O fencing. OraFenceService periodically monitors cluster status, and can reboot the node if a problem is detected. An OraFenceService failure results in Oracle Clusterware restarting the node. OraFenceService performs its check and then sleeps; if it wakes up later than the expected time, it resets the processor and reboots the node.

13 RACG RACG is a behind-the-scenes process (or thread) that extends clusterware to support Oracle-specific requirements and complex resources. Runs server callout scripts when FAN events occur. Runs as processes (or threads), not as a service (racgmain.exe, racgimon.exe)

14 Cluster Ready Services Management
Log Files:
OracleCRSService: %ORA_CRS_HOME%\log\hostname\crsd
Oracle Cluster Registry (OCR): %ORA_CRS_HOME%\log\hostname\ocr
OracleEVMService: %ORA_CRS_HOME%\log\hostname\evmd
OracleCSService: %ORA_CRS_HOME%\log\hostname\cssd
RACG: %ORA_CRS_HOME%\log\hostname\racg

15 Nodeapp Services
Nodeapps are a standard set of Oracle application services that are automatically launched for RAC: Virtual IP (VIP), Oracle Net Listener, Global Services Daemon (GSD), and Oracle Notification Service (ONS). Nodeapp services run on each node and can be relocated to other nodes through the virtual IP. Nodeapps are a set of Oracle applications that run automatically on each node. These include applications that were also significant with 9i RAC, including the Oracle Net Listener and the Global Services Daemon. In addition, the Oracle Notification Service and the Virtual IP service are also run as nodeapps. The VIP service is especially important: it allows services to be rapidly relocated to other nodes.

16 VIP (Virtual IP) Creates a virtual IP address used by the Listener
The virtual IP address fails over between nodes. Multiple virtual IP addresses can exist on the same system (during failover). Independent of the Oracle instance. Potential problem if more than one database per node. The VIP service is used to offer a consistent IP address for Oracle Net communications and applications connecting to the database, even if all services are relocated to another node. In case of a service relocation, multiple virtual IPs may be hosted on a single node. When multiple databases are hosted on a single RAC system, relocating the VIP service to move the instance and service resources for one database will also cause all other RAC instances and services hosted on that node to be relocated.
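The failover behavior above can be modeled with a toy sketch (all names and addresses are invented): each node normally hosts its own VIP, and when a node fails its VIP is brought up on a surviving node so that clients receive a fast connection error instead of waiting on a TCP timeout.

```python
# Toy model of VIP failover; not how Oracle actually assigns addresses.
vip_home = {"rac1": "192.168.1.101", "rac2": "192.168.1.102"}

def vips_hosted(alive_nodes):
    # Each surviving node hosts its own VIP...
    hosting = {n: [vip_home[n]] for n in alive_nodes}
    # ...and VIPs of failed nodes are relocated to a surviving node.
    failover_target = sorted(alive_nodes)[0] if alive_nodes else None
    for node, vip in vip_home.items():
        if node not in alive_nodes and failover_target:
            hosting[failover_target].append(vip)
    return hosting

print(vips_hosted({"rac1", "rac2"}))
print(vips_hosted({"rac1"}))   # rac1 now hosts both VIPs
```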

17 Global Services Daemon (GSD)
The daemon which executes SRVCTL commands GSD receives requests from SRVCTL to execute administrative tasks, such as startup or shutdown The command is executed locally on each node, and the results are sent back to SRVCTL. The daemon is installed on the nodes by default. It is important that you do not kill this process and it should not be deleted. The Global Services Daemon is used to execute administrative tasks for RAC. An Oracle Database Administrator may use srvctl commands to interface with the GSD. The GSD is a required component of RAC Nodeapps, and is installed by default on each node. As of Oracle 10g, it is no longer necessary to use the gsdctl utility to start and stop the Global Services Daemon. It is preferred to use srvctl to start and stop all nodeapps.

18 Listener Server-side component of Oracle Net
Listens for incoming client connection requests. Manages the traffic to the server: when a client requests a network session with a server, the listener receives the request and brokers it. If the client's information matches the listener's information, the listener grants a connection to the server. The listener is a process that resides on the server whose responsibility is to listen for incoming client connection requests and manage the traffic to the server. The listener brokers the client request, handing off the request to the server. Every time a client (or a server acting as a client) requests a network session with a server, a listener receives the actual request. If the client's information matches the listener's information, the listener grants a connection to the server. As of Oracle 10g, it is no longer necessary to use the lsnrctl utility to start and stop the listener. It is preferred to use srvctl to start and stop all nodeapps.

19 Oracle Notification Service (ONS)
The Oracle Notification Service is installed automatically on each RAC node as a Node Application ONS starts automatically with each boot ONS uses a simple push/subscribe method to publish event messages to all RAC nodes with active ONS daemons The Oracle Notification service extends the functionality of the Event Manager Daemon to push event messages to multiple targets. This includes all other RAC nodes running ONS. In addition, mid-tier nodes may also run ONS, in order to receive and respond to event messages. This enables the Fast Application Notification system, which can be utilized for high availability and load balancing of mid-tier applications.
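The push/subscribe model described above can be sketched in a few lines (an illustrative in-process analogy only; the real ONS daemon speaks its own wire protocol between nodes and mid-tier clients):

```python
# Minimal publish/subscribe sketch of how ONS pushes event messages
# out to all registered subscribers.
class NotificationService:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, event):
        # Push the event to every subscriber as soon as it occurs.
        for cb in self.subscribers:
            cb(event)

received = []
ons = NotificationService()
ons.subscribe(received.append)   # e.g. a mid-tier connection pool
ons.publish({"service": "erp", "status": "down", "host": "rac2"})
print(received)
```

The push model matters for high availability: subscribers learn of a down event immediately rather than discovering it by polling or by a failed connection attempt.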

20 ONS and Fast Application Notification
ONS can be configured to run on nodes hosting client or mid-tier applications. ONS is the key component of Fast Application Notification (FAN). Can be utilized to extend RAC high availability and load balancing to mid-tier applications. Independent of Transparent Application Failover (TAF). Less reliance on network configuration.

21 User Defined Services User defined, named services may be created to manage database resources that are associated with application workloads One or more database instances may be mapped to a single service A database instance may be assigned to one or more services The Automated Workload Repository may be used to monitor Service metrics

22 User Defined Services and Failover
Services can be defined with preferred and alternate instances A service may be assigned to start on preferred instances The same service may have alternate instances assigned for failover If multiple services are assigned for the same database, the preferred and alternate instance assignments may be different for each service User defined services can be defined to manage application workloads. For example, a service may be assigned for an ERP application and a separate service may be assigned for a Reporting application, both working on the same four node cluster. Two nodes could be assigned as the primary nodes for the ERP application, with the other two nodes as the primary nodes for the Reporting application. The other two nodes for each application may be defined as Alternate nodes for that application workload, to be used in case of failover.
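The preferred/alternate placement rule described above can be sketched as a simple selection function (a hypothetical model of the policy, not Oracle's implementation):

```python
# Sketch of preferred/alternate instance selection for a service:
# run on whichever preferred instances are up; fall back to the
# alternates only when no preferred instance is available.
def instances_for_service(preferred, alternates, up_instances):
    running = [i for i in preferred if i in up_instances]
    if running:
        return running
    return [i for i in alternates if i in up_instances]

# Four-node cluster: ERP prefers inst1/inst2, with inst3/inst4 as
# alternates (the Reporting service would mirror this arrangement).
up = {"inst1", "inst2", "inst3", "inst4"}
print(instances_for_service(["inst1", "inst2"], ["inst3", "inst4"], up))
# ['inst1', 'inst2']

# Both preferred ERP instances are lost: the service fails over.
up = {"inst3", "inst4"}
print(instances_for_service(["inst1", "inst2"], ["inst3", "inst4"], up))
# ['inst3', 'inst4']
```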

23 Automatic Storage Management Services
Automatic Storage Management (ASM) is a storage option for creating and managing databases. ASM operates like a Logical Volume Manager between the physical storage and the database. A small, automatically managed Oracle database instance is created on each node (if ASM is chosen as a storage option). ASM instances start automatically as Oracle services. Automatic Storage Management is one option for configuring Oracle storage. ASM acts like a Logical Volume Manager (LVM) that sits between the physical storage and the database. ASM automates all file creation, file naming, and file placement operations. ASM may be used to perform data striping and disk mirroring, if desired. ASM is implemented as a small Oracle instance on each RAC node. The ASM instances start automatically as services, and can be managed with the same tools as all other services (e.g., srvctl).

24 Internally Managed Services
When the Global Services Daemon is started as a part of the Node Applications, it in turn launches key internally managed services. The Global Cache Service manages Cache Fusion and in-memory data buffers. The Global Enqueue Service manages inter-instance locking and RAC recovery. GCS and GES show up as OS processes or threads, but GSD is the only service that can be externally controlled. GCS and GES together manage a set of "virtual" tables in memory, called the Global Resource Directory.

25 Global Cache Service (GCS)
The controlling process that implements Cache Fusion. Manages the status and transfer of data blocks across the buffer caches of all instances. Tightly integrated with the buffer cache manager to enable fast lookup of resource information in the Global Resource Directory. Maintains the block mode for blocks in the global role. Employs various background processes (or threads) such as the Global Cache Service Processes (LMSn) and Global Enqueue Service Daemon (LMD). This directory is distributed across all instances and maintains the status information about resources including any data blocks that require global coordination.

26 Global Enqueue Service Monitor (LMON)
Background process that monitors the entire cluster to manage global resources. Manages instance and process failures and recovery for GCS. Handles the part of recovery associated with global resources. Note: LMON-provided services are also known as cluster group services (CGS). Note: In a cluster database, local locks are called local enqueues.

27 Global Resource Directory
The data structures associated with global resources. It is distributed across all instances in a cluster. The Global Cache Service and Global Enqueue Service maintain the Global Resource Directory to record information about resources and enqueues held globally. The Global Resource Directory resides in memory and is distributed throughout the cluster to all nodes. In this distributed architecture, each node participates in managing global resources and manages a portion of the Global Resource Directory. Note: Enqueues are shared memory structures that serialize access to database resources. Enqueues are local to one instance if Real Application Clusters is not enabled. When you enable RAC, enqueues can be global to a database. The Global Resource Directory is like a "virtual" set of tables that reside only in memory.
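One way to picture "each node manages a portion of the directory" is to assign each resource a mastering instance by a stable hash of its identifier. The sketch below is only an analogy; Oracle's actual resource-mastering algorithm is internal and considerably more sophisticated (it can, for example, remaster resources dynamically).

```python
import zlib

# Illustrative partitioning of the Global Resource Directory: a stable
# hash of the resource identifier picks the instance that masters it.
def master_instance(resource_id, instances):
    return instances[zlib.crc32(resource_id.encode()) % len(instances)]

instances = ["inst1", "inst2", "inst3"]
for block in ["file 4 block 120", "file 4 block 121", "file 7 block 9"]:
    print(block, "->", master_instance(block, instances))
```

Because the hash is deterministic, every instance can compute the same answer to "who masters this block?" without any central coordinator.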

28 Monitoring RAC Services
%ORA_CRS_HOME%\bin\crs_stat

Example crs_stat output:

NAME=ora.rac1.gsd
TYPE=application
TARGET=ONLINE
STATE=ONLINE

NAME=ora.rac1.oem
NAME=ora.rac1.ons
NAME=ora.rac1.vip
NAME=ora.rac2.gsd
NAME=ora.rac2.oem
NAME=ora.rac2.ons
NAME=ora.rac2.vip

29 Monitoring RAC Services
Creating a tabular report:

%ORA_CRS_HOME%\bin\crs_stat -t

Name          Type         Target   State    Host
ora.rac1.gsd  application  ONLINE   ONLINE   rac1
ora.rac1.oem  application  ONLINE   ONLINE   rac1
ora.rac1.ons  application  ONLINE   ONLINE   rac1
ora.rac1.vip  application  ONLINE   ONLINE   rac1
ora.rac2.gsd  application  ONLINE   ONLINE   rac2
ora.rac2.oem  application  ONLINE   ONLINE   rac2
ora.rac2.ons  application  ONLINE   ONLINE   rac2
ora.rac2.vip  application  ONLINE   ONLINE   rac2

The crs_stat -t option provides output in a tabular format.
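For scripted monitoring, the tabular crs_stat output can be parsed into records so that resource state can be checked programmatically. A quick sketch (the sample text is hard-coded here; a real script would capture the command's output):

```python
# Sample `crs_stat -t`-style output, embedded for illustration.
sample = """\
Name           Type         Target    State     Host
ora.rac1.gsd   application  ONLINE    ONLINE    rac1
ora.rac1.vip   application  ONLINE    ONLINE    rac1
ora.rac2.vip   application  ONLINE    OFFLINE   rac2
"""

def parse_crs_stat(text):
    rows = text.strip().splitlines()[1:]   # skip the header row
    records = []
    for row in rows:
        name, rtype, target, state, host = row.split()
        records.append({"name": name, "type": rtype,
                        "target": target, "state": state, "host": host})
    return records

# Flag resources whose actual state differs from the target state.
bad = [r["name"] for r in parse_crs_stat(sample) if r["state"] != r["target"]]
print(bad)   # ['ora.rac2.vip']
```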

30 Review What advantages does a service based architecture offer?
What four services comprise Cluster Ready Services? Nodeapps consists of which four applications? True or False: a database instance may be assigned to multiple services

31 Summary
Service Architecture
Cluster Ready Services (CRS): OracleCRSService, OracleCSService, OracleEVMService, OraFenceService
Nodeapps: VIP, Listener, GSD, ONS
User Defined Services
ASM Services
Internally Managed Services: Global Cache Service, Global Enqueue Service, Global Resource Directory
Monitoring Services

