1
Introduction
2
Overview This course focuses on those features of Oracle Database 11g that are applicable to database administration. Previous experience with Oracle databases (particularly Oracle Database 10g) is required for a full understanding of many of the new features. Hands-on practices emphasize functionality rather than test knowledge. Overview This course is designed to introduce you to the new features of Oracle Database 11g that are applicable to the work usually performed by database administrators and related personnel. The course does not attempt to provide every detail about a feature or cover aspects of a feature that were available in previous releases (except when defining the context for a new feature or comparing past behavior with current behavior). Consequently, the course is most useful to you if you have already administered other versions of Oracle databases, particularly Oracle Database 10g. Even with this background, you should not expect to be able to implement all of the features discussed in the course without supplemental reading, especially the Oracle Database 11g documentation. The course consists of instructor-led lessons and demonstrations, plus many hands-on practices and demos that enable you to see for yourself how certain new features behave. As with the course content in general, these practices are designed to introduce you to the fundamental aspects of a feature. They are not intended to test your knowledge of unfamiliar syntax or to provide an opportunity for you to examine every nuance of a new feature. The length of this course precludes such activity. Consequently, you are strongly encouraged to use the provided scripts to complete the practices rather than struggle with unfamiliar syntax.
3
Oracle Database Innovation
Audit Vault Database Vault Secure Enterprise Search Grid Computing Automatic Storage Mgmt Self Managing Database XML Database Oracle Data Guard Real Application Clusters Flashback Query Virtual Private Database Built-in Java VM Partitioning Support Built-in Messaging Object Relational Support Multimedia Support Data Warehousing Optimizations Parallel Operations Distributed SQL & Transaction Support Cluster and MPP Support Multi-version Read Consistency Client/Server Support Platform Portability Commercial SQL Implementation of sustained innovation… Oracle Database Innovation As a result of its early focus on innovation, Oracle has maintained the lead in the industry with a large number of trend-setting products. Continued emphasis on Oracle’s key development areas has led to a number of industry firsts—from the first commercial relational database, to the first portable tool set and UNIX-based client/server applications, to the first multimedia database architecture. … continuing with Oracle Database 11g
4
Enterprise Grid Computing
SMP dominance RAC clusters for availability Managing change across the enterprise Grids of low-cost hardware and storage Enterprise Grid Computing Oracle Database 10g was the first database designed for grid computing. Oracle Database 11g consolidates and extends Oracle’s unique ability to deliver the benefits of grid computing. Oracle Infrastructure grids fundamentally changed the way data centers look and operate, transforming data centers from silos of isolated system resources to shared pools of servers and storage. Oracle’s unique grid architecture enables all types of applications to scale-out server and storage capacity on demand. By clustering low-cost commodity server and storage modules on Infrastructure grids, Oracle Database 11g enables customers to improve their user service levels, reduce their down time, and make more efficient use of their IT resources while still increasing the performance, scalability, and security of their business applications. Oracle Database 11g furthers the adoption of grid computing by offering: Unique scale-out technology with a single database image Lower server and storage costs Increased availability and scalability
5
Oracle Database 11g: Focus Areas
Manageability Availability Performance Business intelligence and data warehousing Security Oracle Database 11g: Focus Areas The Oracle Infrastructure grid technology enables information technology systems to be built out of pools of low-cost servers and storage that deliver the highest quality of service in terms of manageability, high availability, and performance. With Oracle Database 11g, the existing grid capabilities are extended in the areas listed in the slide, thereby making your databases more manageable. Manageability: New manageability features and enhancements increase DBA productivity, reduce costs, minimize errors, and maximize quality of service through change management, additional management automation, and fault diagnosis. Availability: New high-availability features further reduce the risk of down time and data loss, including further disaster recovery offerings, important high-availability enhancements to Automatic Storage Management, support for online database patching, improved online operations, and more. Performance: Many innovative new performance capabilities are available, including SecureFiles, compression for OLTP, Real Application Clusters optimizations, Result Query Caches, TimesTen enhancements, and more.
6
Oracle Database 11g: Focus Areas
Information management Content management XML Oracle Text Spatial Multimedia and medical imaging Application development PL/SQL .NET PHP SQL Developer Oracle Database 11g: Focus Areas (continued) The Oracle Infrastructure grid provides the additional functionality required to manage all the information in the enterprise with robust security, information lifecycle management, and integrated business intelligence analytics to support fast and accurate business decisions at the lowest cost.
7
Management Automation
Auto-tuning Advisory Schema Storage Backup Memory Apps/SQL RAC Recovery Replication Instrumentation Management Automation Oracle Database 11g continues the effort begun in Oracle9i Database and carried on through Oracle Database 10g to dramatically simplify and ultimately, fully automate the tasks that DBAs must perform. What is new in Oracle Database 11g is Automatic SQL Tuning with self-learning capabilities. Other new capabilities include automatic, unified tuning of both SGA and PGA memory buffers, and new advisors for partitioning, database repair, streams performance, and space management. Enhancements to Oracle Automatic Database Diagnostic Monitor (ADDM) give it a better global view of performance in Oracle Real Application Clusters (RAC) environments and improved comparative performance analysis capabilities.
8
Self-Managing Database: The Next Generation
Manage performance and resources Manage change Manage fault Self-Managing Database: The Next Generation Self-management is an ongoing goal for Oracle Database. Oracle Database 10g marked the beginning of a major effort to make the database easy to use. With Oracle Database 10g, the focus for self-managing was on performance and resources. Oracle Database 11g adds two important axes to the overall self-management goal: change management and fault management.
9
Suggested Additional Courses
Oracle Database 11g: Real Application Clusters Oracle Database 11g: Data Guard Administration Oracle Enterprise Manager 11g Grid Control Suggested Additional Courses For more information about the key grid computing technologies used by the Oracle products, you can obtain additional training from Oracle University (courses listed in the slide).
10
Further Information For more information about topics that are not covered in this course, refer to the following: Oracle Database 11g: New Features Overview Seminar Oracle Database 11g: New Features eStudies A comprehensive series of self-paced online courses covering all new features in detail Oracle By Example series: Oracle Database 11g Oracle OpenWorld events
11
Suggested Schedule

Topic                              Lessons           Day
Installation & Upgrade             1                 1
Manage Storage                     2                 1
Manage Change                      3, 4, 5           2
Manage Performance & Resources     6, 7, 8, 9        3
Manage Availability                10, 11, 12, 13    4
Manage Security                    14, 15            5
Miscellaneous                      16                5

Suggested Schedule The lessons in this guide are arranged in the order in which you will probably study them in the class. The lessons are grouped into topic areas, but they are also organized by other criteria, including the following: A feature is introduced in an early lesson and then referenced in later lessons. Topics alternate between difficult and easy to facilitate learning. Lessons are supplemented with hands-on practices throughout the course to provide regular opportunities for you to explore what you are learning. If your instructor teaches the class in the sequence in which the lessons are printed in this guide, the class should run approximately as shown in the schedule. Your instructor, however, may vary the sequence of the lessons for a number of reasons, including: Customizing material for a specific audience Covering a topic in a single day instead of splitting the material across two days Maximizing the use of course resources (such as hardware and software)
12
Installation and Upgrade Enhancements
13
Objectives After completing this lesson, you should be able to:
Install Oracle Database 11g Upgrade your database to Oracle Database 11g Use hot patching
14
Oracle Database 11g Installation: Changes
Minor modifications to the install flow, with new screens for: Turning off secure configuration in the seed database Setting the out-of-box memory target Specifying the database character set Modifications to OS authentication to support SYSASM Addition of new products to the install: Oracle Application Express Oracle Configuration Manager (OCM) SQL Developer Warehouse Builder (server-side pieces) Oracle Database Vault Oracle Database 11g Installation: Changes During an Oracle Database 11g installation, you will notice several changes to the installation process. These changes include standard database out-of-box memory calculations, character set determination, support for the new SYSASM role, and a corresponding operating system privileges group (OSASM) that is intended to secure privileges to perform ASM administration tasks. The following are the new components that are available when you install Oracle Database 11g: Oracle Application Express is installed with Oracle Database 11g. It was previously named HTML DB and was available as a separate companion CD component. Oracle Configuration Manager is offered during installation. It was previously named Customer Configuration Repository (CCR). It is an optional component for database installation. Oracle Configuration Manager gathers and stores details relating to the configuration of the software stored in Oracle Database home directories. Oracle SQL Developer is installed by default with template-based database installations, such as General Purpose/Transaction Processing and Data Warehousing. It is also installed with database client Administrator, Runtime, and Custom installations. Oracle Warehouse Builder is installed with Oracle Database 11g. Oracle Database Vault is installed with Oracle Database 11g. It is an optional component for database installation. It was previously available as a separate companion CD component.
15
Oracle Database 11g Installation: Changes
Move to JDK/JRE 1.5 Removal of certain products and features from the installation: OEM Java Console Raw storage support for data files (installer only) Oracle Data Mining Scoring Engine Oracle Workflow iSQL*Plus Addition of new prerequisite checks (largely feedback from SAP and various other bugs) Changes to the default file permissions Oracle Database 11g Installation: Changes (continued) The following components are part of Oracle Database 10g, Release 2 (10.2) but are not available for installation with Oracle Database 11g: iSQL*Plus Oracle Workflow Oracle Data Mining Scoring Engine Oracle Enterprise Manager Java Console
16
Oracle Database 11g Installation: Changes
Minor changes to the clusterware installation: Support for block devices for storage of OCR and Voting Disks Support for upgrade of XE databases directly to Oracle Database 11g Better conformance to OFA in the installation: Prompt for ORACLE_BASE explicitly Warnings in the alert log when ORACLE_BASE is not set Oracle Database 11g Installation: Changes (continued) In Oracle Database 11g, Oracle Universal Installer (OUI) prompts you to specify the Oracle base. The Oracle base that you provide during installation is logged in the local inventory. You can share this Oracle base across all of the Oracle homes that you create on the system. Oracle recommends that you share an Oracle base for all of the Oracle homes created by a user. Oracle Universal Installer has a list box in which you can edit or select the Oracle base. The installer derives the default Oracle home from the Oracle base location that you provide in the list box. However, you can change the default Oracle home by editing the location. The following are changes made in Oracle Database 11g with respect to the Oracle base to make it compliant with Optimal Flexible Architecture (OFA): ORACLE_BASE is a recommended environment variable. (This variable will be mandatory in future releases.) By default, the Oracle base and the Oracle Clusterware home are at the same directory level during Oracle Clusterware installation. You should not create an Oracle Clusterware home under the Oracle base, because specifying an Oracle Clusterware home under the Oracle base results in an error. Oracle recommends that you create the flash recovery area and the data file location under the Oracle base.
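As a quick illustration of the ORACLE_BASE recommendation above, you can export the variable before starting the installer so that OUI and the instance pick it up; the path shown is only an example of an OFA-compliant location, not a required value:

$ export ORACLE_BASE=/u01/app/oracle     # example OFA-compliant base directory; adjust to your environment
$ echo $ORACLE_BASE                      # verify before starting OUI or the instance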
17
Notes only page Oracle Database 11g Installation Changes (continued)
In Oracle Database 10g, the default flash recovery area and the data file location are one level above the Oracle home directory. However, in Oracle Database 11g, the Oracle base is the starting point to set the default flash recovery area and the data file location. Nevertheless, Oracle recommends that you keep the flash recovery area and data file location on separate disks. Notes only page
18
Oracle Database 11g: Software Installation
The Oracle Database 11g software installation is a straightforward process. As the operating system oracle user, change directory to Disk1 on your installation media or software staging directory, and then start Oracle Universal Installer (OUI): $ ./runInstaller Select Oracle Database 11g on the “Select a Product to Install” page and click Next. On the Select Installation Method page, provide the Oracle base (if the ORACLE_BASE parameter has not been set) and Oracle home locations. The default installation type and UNIX DBA group values are Enterprise Edition and dba, respectively. You must use the Advanced/Custom installation path to install many of the Enterprise Edition options (such as Real Application Testing). Starting with Oracle Database 11g, OUI tries to install its inventory in $ORACLE_BASE/.. As a result, you must ensure that $ORACLE_BASE/.. is writable by any user installing the Oracle software. This translates to the following (as root user): # mkdir -p /u01/app # chgrp oinstall /u01/app # chmod 775 /u01/app However, if you are running previous versions of the Oracle software, OUI uses the inventory that already exists.
19
Oracle Database 11g: Software Installation
Oracle Database 11g: Software Installation (continued) On the Product-Specific Prerequisite Checks page, your system is examined for supported operating systems and kernels, required RPMs, Oracle environment consistencies, and so on. If no discrepancies are discovered, proceed with the installation by clicking Next. Review the information on the Summary page, and then click Install.
20
Practice 1-1: Overview In this practice, you install the Oracle Database 11g software.
21
Oracle Database Upgrade Enhancements
Pre-Upgrade Information Tool Simplified upgrade Upgrade performance enhancement Post-Upgrade Status Tool Oracle Database Upgrade Enhancements Oracle Database 11g, Release 1 continues to make improvements to simplify manual upgrades, upgrades performed using the Database Upgrade Assistant (DBUA), and downgrades. The DBUA provides the following enhancements for single-instance databases: Support for improvements to the pre-upgrade tool in the areas of space estimation, initialization parameters, statistics gathering, and new warnings The catupgrd.sql script performs all upgrades and the catdwgrd.sql script performs all downgrades, for both patch releases and major releases. The DBUA can automatically take into account multi-CPU systems to perform parallel object recompilation. Errors are now collected as they are generated during the upgrade and displayed by the Post-Upgrade Status Tool for each component.
22
Pre-Upgrade Information Tool
SQL script (utlu111i.sql) analyzes the database to be upgraded. Checks for parameter settings that may cause upgrade to fail and generates warnings Utility runs in “old server” and “old database” context. Provides guidance and warnings based on the Oracle Database 11g, Release 1 upgrade requirements Supplies information to the DBUA to automatically perform required actions Pre-Upgrade Information Tool The Pre-Upgrade Information Tool analyzes the database to be upgraded. It is a SQL script that ships with Oracle Database 11g, Release 1 and must be run in the environment of the database being upgraded. This tool displays warnings about possible upgrade issues with the database. It also displays information about the required initialization parameters for Oracle Database 11g, Release 1.
23
Pre-Upgrade Analysis The Pre-Upgrade Information Tool checks for:
Database version and compatibility Redo log size Updated initialization parameters (for example, SHARED_POOL_SIZE) Deprecated and obsolete initialization parameters Components in database (JAVAVM, Spatial, and so on) Tablespace estimates Increase in total size Additional allocation for AUTOEXTEND ON SYSAUX tablespace Pre-Upgrade Information Analysis After installing Oracle Database 11g, you should analyze your database before upgrading it to the new release. This is done by running the Pre-Upgrade Information Tool. This is a necessary step if you are upgrading manually. It is also recommended if you are upgrading with the DBUA so that you can preview the items that the DBUA checks. The Pre-Upgrade Information Tool is a SQL script that ships with Oracle Database 11g and must be copied to and run from the environment of the database being upgraded. To run the tool, follow these steps: 1. Log in to the system as the owner of the Oracle Database 11g Oracle home directory. 2. Copy the Pre-Upgrade Information Tool (utlu111i.sql) from the Oracle Database 11g ORACLE_HOME/rdbms/admin directory to a directory outside of Oracle home (such as /tmp). 3. Log in to the system as the owner of the Oracle home directory of the database to be upgraded. 4. Change to the directory that you copied the files to. Then start SQL*Plus. 5. Connect to the database instance as a user with SYSDBA privileges. 6. Set the system to spool results to a log file for later analysis: SQL> SPOOL upgrade_info.log 7. Run the Pre-Upgrade Information Tool: 8. Turn off the spooling of the script results to the log file and check the output of the Pre-Upgrade Information Tool in upgrade_info.log.
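Putting the numbered steps above together, a minimal SQL*Plus session might look like the following; the 11g home path and /tmp staging directory are only examples, and orcl is a placeholder SID:

$ cp /u01/app/oracle/product/11.1.0/db_1/rdbms/admin/utlu111i.sql /tmp    # copy from the new 11g home (example path)
$ cd /tmp
$ export ORACLE_SID=orcl            # environment of the database being upgraded
$ sqlplus / as sysdba
SQL> SPOOL upgrade_info.log
SQL> @utlu111i.sql
SQL> SPOOL OFF
SQL> EXIT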
24
Practice 1-2: Overview In the first part of this practice, you conduct a pre-upgrade analysis.
25
STARTUP UPGRADE STARTUP UPGRADE suppresses normal upgrade errors:
Only real errors are spooled. Automatically handles setting of system parameters that can otherwise cause problems during upgrade Turns off job queues Disables system triggers Allows AS SYSDBA connections only Replaces STARTUP MIGRATE in Oracle Database 9i, Release 2 To upgrade, start up the database as follows: STARTUP UPGRADE STARTUP UPGRADE enables you to open a database based on an earlier Oracle Database release. It also restricts logons to AS SYSDBA sessions, disables system triggers, and performs additional operations that prepare the environment for the upgrade (some of which are listed in the slide). To upgrade the candidate database, follow the directions below: 1. Set the ORACLE_SID correctly. The oratab file should point to your Oracle Database 11g Oracle home. Ensure that the following environment variables point to the Oracle Database 11g directories: ORACLE_HOME and PATH. 2. Log in to the system as the owner of the Oracle Database 11g Oracle home directory. 3. At the system prompt, change to the ORACLE_HOME/rdbms/admin directory and start SQL*Plus. 4. Connect to the database instance as a user with SYSDBA privileges. 5. Start up the instance by issuing the following command: SQL> STARTUP UPGRADE
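As a sketch of how STARTUP UPGRADE fits into a manual upgrade (the SID and home path are placeholders; consult the Oracle Database Upgrade Guide for the complete procedure):

$ export ORACLE_SID=orcl
$ export ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1    # example 11g home
$ export PATH=$ORACLE_HOME/bin:$PATH
$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus / as sysdba
SQL> SPOOL upgrade.log
SQL> STARTUP UPGRADE
SQL> @catupgrd.sql      -- upgrade script referenced earlier in this lesson
SQL> SPOOL OFF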
26
Upgrade Performance Enhancement
Parallel recompilation of invalid PL/SQL database objects on multiprocessor CPUs: utlrp.sql can now exploit multiple CPUs to reduce the time required to recompile any stored PL/SQL and Java code. UTL_RECOMP automatically determines the level of parallelism based on CPU_COUNT and PARALLEL_THREADS_PER_CPU. Upgrade Performance Enhancement The utlrp.sql script is a wrapper based on the UTL_RECOMP package. UTL_RECOMP provides a more general recompilation interface, including options to recompile objects in a single schema. For details, see the documentation for UTL_RECOMP. By default, utlrp.sql invokes the utlprp.sql script with 0 as the degree of parallelism for recompilation. This means that UTL_RECOMP automatically determines the appropriate level of parallelism based on the Oracle CPU_COUNT and PARALLEL_THREADS_PER_CPU parameters. If the degree of parallelism is set to 1, sequential recompilation is used.
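For example, after the upgrade scripts finish, you might recompile invalid objects as follows; the explicit degree of parallelism is only an illustration:

SQL> @?/rdbms/admin/utlrp.sql                         -- default: degree of parallelism chosen automatically
SQL> EXECUTE UTL_RECOMP.RECOMP_PARALLEL(4);           -- or specify a degree explicitly
SQL> SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID';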
27
Post-Upgrade Status Tool
Run utlu111s.sql to display the results of the upgrade: Error logging now provides more information per component. Reviews the status of each component and lists the elapsed time Provides information about invalid or incorrect component upgrades Run this tool after the upgrade completes, to see errors and check the status of the components. Post-Upgrade Status Tool The Post-Upgrade Status Tool provides a summary of the upgrade at the end of the spool log. It displays the status of the database components in the upgraded database and the time required to complete each component upgrade. Any errors that occur during the upgrade are listed with each component and must be addressed. Run utlu111s.sql to display the results of the upgrade.
28
Rerun the Upgrade

Oracle Database 11.1 Upgrade Status Utility    :48:55

Component                        Status    HH:MM:SS
Oracle Server                    VALID     00:19:31
JServer JAVA Virtual Machine     VALID     00:03:32
Oracle Workspace Manager         VALID     00:01:02
Oracle Enterprise Manager        VALID     00:12:02
Oracle XDK                       VALID     00:00:42
Oracle Text                      VALID     00:01:02
Oracle XML Database              VALID     00:04:24
Oracle Database Java Packages    VALID     00:00:27
Oracle interMedia                VALID     00:05:44
Spatial                          INVALID   00:09:24
  ORA-04031: unable to allocate 4096 bytes of shared memory
  ("java pool","java/awt/FrameSYS","joxlod exec hp",":SGAClass")
  ORA-06512: at "SYS.DBMS_JAVA", line 704

Total Upgrade Time: 02:08:24

Rerun the Upgrade The Post-Upgrade Status Tool should report a status of VALID for all components at the end of the upgrade. Other Status Values As shown in the slide, the report returns INVALID for the Spatial component. This is because of the ORA-04031 error. In this case, you should fix the problem; subsequently running utlrp.sql might change the status to VALID without rerunning the entire upgrade. Check the DBA_REGISTRY view after running utlrp.sql. If that does not fix the problem, or if you see a status of UPGRADING, the component upgrade did not complete. Resolve the problem, shut down the instance, restart it with STARTUP UPGRADE, and rerun catupgrd.sql.
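A quick way to recheck component status after fixing a problem and running utlrp.sql is to rerun the status utility and query DBA_REGISTRY, for example:

SQL> @?/rdbms/admin/utlu111s.sql
SQL> SELECT comp_name, version, status FROM dba_registry ORDER BY comp_name;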
29
Upgrade Process
30
Prepare to Upgrade Become familiar with the features of Oracle Database 11g, Release 1. Determine the upgrade path. Choose an upgrade method. Choose an OFA-compliant Oracle home directory. Prepare a backup and recovery strategy. Develop a test plan to test your database, applications, and reports.
31
Oracle Database 11g Release 1: Upgrade Paths
Direct upgrade to 11g is supported from 9.2.0.4 or higher, 10.1.0.2 or higher, and 10.2.0.1 or higher. If you are not at one of these versions, you need to perform a "double-hop" upgrade. Examples Oracle Database 11g Release 1 Upgrade Paths The path that you must take to upgrade to Oracle Database 11g, Release 1 depends on the release number of your current database. It might not be possible to upgrade directly from the current version of the Oracle Database to the latest version. Depending on your current release, you might be required to upgrade through one or more intermediate releases to upgrade to Oracle Database 11g, Release 1. For example, if the current database is running release 8.1.6, follow these steps: 1. Upgrade release 8.1.6 to release 8.1.7 by using the instructions in the Oracle8i Database Migration, Release 3 (8.1.7). 2. Upgrade release 8.1.7 to release 9.2 by using the instructions in the Oracle9i Database Migration, Release 2 (9.2). 3. Upgrade release 9.2 to Oracle Database 11g, Release 1 by using the instructions in this lesson.
32
Choose an Upgrade Method
Database Upgrade Assistant (DBUA) Automated GUI tool that interactively steps the user through the upgrade process and configures the database to run with Oracle Database 11g, Release 1 Manual upgrade Use SQL*Plus to perform any necessary actions to prepare for the upgrade, run the upgrade scripts, and analyze the upgrade results. Export and Import utilities Use Data Pump or original Export/Import. CREATE TABLE AS SELECT statement Choose an Upgrade Method Oracle Database 11g, Release 1 supports the following tools and methods for upgrading a database to the new release: Database Upgrade Assistant (DBUA) provides a graphical user interface (GUI) that guides you through the upgrade of a database. The DBUA can be launched during installation with Oracle Universal Installer, or you can launch the DBUA as a stand-alone tool at any time in the future. The DBUA is the recommended method for performing a major release upgrade or patch release upgrade. Manual upgrade can be performed using SQL scripts and utilities to provide a command-line upgrade of a database. Export and Import utilities: Use the Oracle Data Pump Export and Import utilities, available as of Oracle Database 10g, Release 1 (10.1), or the original Export and Import utilities to perform a full or partial export from your database, followed by a full or partial import into a new Oracle Database 11g, Release 1 database. Export/Import can copy a subset of the data, leaving the database unchanged. CREATE TABLE AS SELECT statement copies data from a database into a new Oracle Database 11g, Release 1 database. Data copying can copy a subset of the data, leaving the database unchanged.
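As a rough sketch of the Export/Import method using Data Pump (the directory object, dump file name, and credentials are placeholders, and the target 11g database with its directory object must already exist):

$ expdp system FULL=y DIRECTORY=dp_dir DUMPFILE=full_db.dmp LOGFILE=full_exp.log    # run against the source database
$ impdp system FULL=y DIRECTORY=dp_dir DUMPFILE=full_db.dmp LOGFILE=full_imp.log    # run against the new 11g database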
33
Database Upgrade Assistant: Advantages and Disadvantages
Automates all tasks Performs both release and patch set upgrades Supports RAC, Single Instance, and ASM Informs the user and fixes upgrade prerequisites Automatically reports errors found in spool logs Provides complete HTML report of the upgrade process Command-line interface allows ISVs to automate Disadvantages Offers less control over individual upgrade steps Database Upgrade Assistant: Advantages and Disadvantages The Database Upgrade Assistant (DBUA) guides you through the upgrade process and configures a database for the new release. The DBUA automates the upgrade process and makes appropriate recommendations for configuration options such as tablespaces and redo logs. While the upgrade is running, the DBUA shows the upgrade progress for each component. The DBUA writes detailed trace and log files, and produces a complete HTML report for later reference. To enhance security, the DBUA automatically locks new user accounts in the upgraded database. The DBUA then proceeds to create new configuration files (initialization parameter and listener files) in the new Oracle home. If the DBA requires more control over the individual steps in the upgrade process, a manual upgrade is still possible. Usually, however, the manual upgrade method is more error prone, is harder to automate, and involves a greater amount of work than upgrading with the DBUA.
34
Sample Test Plan Use Enterprise Manager to make a clone of your production system. Upgrade the test database to the latest version. Update COMPATIBLE to the latest version. Run your applications, reports, and legacy systems. Ensure adequate performance by comparing metrics gathered before and after the upgrade. Tune queries or problem SQL statements. Update any necessary database parameters. Sample Test Plan A series of carefully designed tests is required to validate all stages of the upgrade process. Executed rigorously and completed successfully, these tests ensure that the process of upgrading the production database is well understood, predictable, and successful. Perform as much testing as possible before upgrading the production database. Do not underestimate the importance of a test program. Testing the upgraded database is just as important as testing the upgrade process. Test the newly upgraded test database with existing applications to verify that they operate properly with a new Oracle database. You might also want to test enhanced functions by adding the available Oracle Database features. However, first make sure that the applications operate in the same manner as they did in the current database.
35
Database Upgrade Assistant
36
Database Upgrade Assistant (DBUA)
The DBUA is a GUI and command-line tool for performing database upgrades. Uses a wizard interface: Automates the upgrade process Simplifies detecting and handling of upgrade issues Supported releases for 11g: 9.2, 10.1 and 10.2 Patchset upgrades: Supported: and later Support the following database types: Single instance Real Application Clusters Automatic Storage Management Database Upgrade Assistant (DBUA) The Database Upgrade Assistant (DBUA) guides you through the upgrade process and configures a database for the new release. The DBUA automates the upgrade process and makes appropriate recommendations for configuration options such as tablespaces and redo logs. The DBUA can be used to upgrade databases that were created with any edition of the Oracle Database software, including the Express Edition (XE) databases. The DBUA supports the following versions of Oracle Database for upgrading to Oracle Database 11g, Release 1: Oracle9i, Release 2 ( ) and later 9i releases Oracle Database 10g, Release 1 (10.1) Oracle Database 10g, Release 2 (10.2) If your database version is not in this list, you must first upgrade to the closest release listed and then upgrade to Oracle Database 11g, Release 1. The Database Upgrade Assistant is fully compliant with Oracle RAC environments. In RAC environments, the DBUA upgrades all database and configuration files on all nodes in the cluster. The DBUA supports upgrades of databases that use Automatic Storage Management (ASM). If an ASM instance is detected, you have the choice of updating both the database and the ASM, or updating only the ASM instance. The Database Upgrade Assistant supports a silent mode of operation in which no user interface is presented to the user. Silent mode enables you to use a single statement for the upgrade.
37
Key DBUA Features Recoverability
Performs a backup of the database before upgrade Can restore the database after upgrade (if needed) Runs all necessary scripts to perform the upgrade Displays upgrade progress at a component level Configuration checks Automatically makes appropriate adjustments to the initialization parameters Checks for adequate resources such as SYSTEM tablespace size, rollback segment size, and redo log size Checks disk space for auto-extended data files Creates mandatory SYSAUX tablespace Key DBUA Features Before starting the upgrade process, Oracle Corporation strongly recommends that you back up your existing database, although the DBUA can perform a backup during the pre-upgrade stage. If you use the DBUA to back up your database, it creates a copy of all your database files in the directory that you specify. The DBUA performs this cold backup automatically after it shuts down the database and before it begins performing the upgrade procedure. However, the cold backup does not compress your database files, and the backup directory must be a valid file system path. In addition, the DBUA creates a batch file in the specified directory. You can use this batch file to restore the database files if needed. During the upgrade, the DBUA automatically modifies or creates new required tablespaces and invokes the appropriate upgrade scripts. While the upgrade is running, DBUA shows the upgrade progress for each component. The DBUA then creates new configuration files (parameter and listener files) in the new Oracle home. The DBUA performs the following checks before the upgrade: Invalid user accounts or roles Invalid data types or invalid objects De-supported character sets Adequate resources (including rollback segments, tablespaces, and free disk space) Missing SQL scripts needed for the upgrade Listener running (if Oracle EM Database Control upgrade or configuration is requested) The DBUA provides a comprehensive summary of the pre-upgrade checks when finished.
38
Key DBUA Features Configuration files Oracle Enterprise Manager
Creates init.ora and spfile in the new ORACLE_HOME Updates network configurations Uses OFA-compliant locations Updates database information on OID Oracle Enterprise Manager DBUA allows you to set up and configure EM DB Control. DBUA allows you to register a database with EM Grid Control. If EM is in use, DBUA enables you to upgrade the EM catalog and make the necessary configuration changes. Logging and tracing Writes detailed trace and logging files (ORACLE_BASE/cfgtoollogs/dbua/<sid>/upgradeNN) Key DBUA Features (continued) During the upgrade, the DBUA automatically modifies or creates new init.ora and spfile in the new ORACLE_HOME directory. In addition, the DBUA updates the network configurations, creates the required tablespaces, and invokes the appropriate upgrade scripts. While the upgrade is running, the DBUA shows the upgrade progress for each component. The DBUA writes detailed trace and log files in $ORACLE_BASE/cfgtoollogs/dbua/<sid>/upgradeNN and produces a complete HTML report for later reference.
39
Key DBUA Features Minimizing down time Security features
Speeds up upgrade by disabling archiving Recompiles packages in parallel Does not require user interaction after upgrade starts Security features Locks new users in the upgraded database Real Application Clusters Upgrades all nodes Upgrades all configuration files Key DBUA Features (continued) After completing the pre-upgrade steps, the DBUA automatically archives redo logs and disables archiving during the upgrade phase. To enhance security, the DBUA automatically locks new user accounts in the upgraded database. The DBUA then proceeds to create new configuration files (initialization parameter and listener files) in the new Oracle home. The Database Upgrade Assistant is fully compliant with Oracle RAC environments. In RAC environments, the DBUA upgrades all database and configuration files on all nodes in the cluster.
40
Command-Line Syntax

Silent mode run:           $ dbua -silent -dbName <Oracle database>
Backup location:           $ dbua -backupLocation
Custom scripts:            $ dbua -postUpgradeScripts
Initialization parameters: $ dbua -initParam
EM configuration:          $ dbua -emConfiguration
Help:                      $ dbua -help

Command-Line Syntax When invoked with the -silent command-line option, the DBUA operates in silent mode. In silent mode, the DBUA does not present a user interface. It also writes any messages (including information, errors, and warnings) to a log file in ORACLE_HOME/cfgtoollogs/dbua/SID/upgradeN, where N is the number of upgrades that the DBUA has performed as of this upgrade. For example, the following command upgrades a database named ORCL in silent mode: $ dbua -silent -dbName ORCL & Here is a list of important options that you can use: -backupLocation directory specifies a directory to back up your database before the upgrade starts. -postUpgradeScripts script [, script ] specifies a comma-delimited list of SQL scripts. Specify complete path names. The scripts are executed at the end of the upgrade. -initParam parameter=value [, parameter=value ] specifies a comma-delimited list of initialization parameter values of the form name=value. -emConfiguration {CENTRAL|LOCAL|ALL|NOBACKUP|NOEMAIL|NONE} specifies the Oracle Enterprise Manager management options. Note: For more information about these options, see the Oracle Database Upgrade Guide 11g.
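Combining several of these options, a single silent-mode run might look like the following; the database name, backup path, and parameter value are placeholders:

$ dbua -silent -dbName ORCL -backupLocation /u01/backup/orcl -initParam processes=300 -emConfiguration NONE &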
41
Using DBUA to Upgrade Your Database
To upgrade a database by using the DBUA graphical user interface: On Linux or UNIX platforms, enter the dbua command at system prompt in the Oracle Database 11g, Release 1 environment. The DBUA Welcome screen appears. Click Next. If an ASM instance is detected on the system, the Upgrade Operations page provides you with the options to upgrade a database or an ASM instance. If no ASM instance is detected, the Databases screen appears. At the Upgrade Operations page, select Upgrade a Database. This operation upgrades a database to Oracle Database 11g, Release 1. Oracle recommends that you upgrade the database and ASM in separate DBUA sessions in separate Oracle homes.
42
Choose Database to Upgrade and Diagnostic Destination
The Databases screen appears. Select the database that you want to upgrade from the Available Databases table. You can select only one database at a time. If you do not see the database that you want, make sure that an entry with the database name exists in the oratab file in the etc directory. If you are running the DBUA from a user account that does not have SYSDBA privileges, you must enter the user name and password credentials to enable SYSDBA privileges for the selected database. Click Next. The DBUA analyzes the database, performing the following pre-upgrade checks and displaying warnings as necessary: Redo log files whose size is less than 4 MB. If such files are found, the DBUA gives the option to drop or create new redo log files. Obsolete or deprecated initialization parameters When the DBUA finishes its checks, the Diagnostic Destination screen appears. Perform one of the following: Accept the default location for your diagnostic destination. Enter the full path to a different diagnostic destination in the Diagnostic Destination field. Click Browse to select a diagnostic destination.
43
Moving Database Files Moving Database Files
If you are upgrading a single-instance database, the Move Database Files page appears. However, if you are upgrading an Oracle Real Application Clusters database, the Move Database Files page does not appear. Select one of the following options: Do Not Move Database Files as Part of Upgrade Move Database Files during Upgrade If you choose to move database files, you must also select one of the following: File System: Your database files are stored on the host file system. Automatic Storage Management (ASM): Your database files are stored on the ASM storage, which must already exist on your system. If you do not have an ASM instance, you can create one using DBCA and then restart the DBUA. Click Next.
44
Database File Locations
The Database File Locations screen appears. Select one of the following options: Use Common Location for All Database Files: If you choose to have all of your database files in one location, you must also perform one of the following: Accept the default location for your database files. Enter the full path to a different location in the Database File Locations field. Click Browse and select a different location for your database files. Use Oracle-Managed Files: If you choose to use Oracle-Managed Files for your database files, you must also perform one of the following: Accept the default database area. Enter the full path to a different database area in the Database Area field. Click Browse and select a different database area. Use a Mapping File to Specify Location of Database Files: This option enables you to specify different locations for your database files. A sample mapping file is available in the logging location. You can edit the property values of the mapping file to specify a different location for each database file. Click Next.
45
Recovery Configuration
The Recovery Configuration page enables you to designate a Flash Recovery Area for your database. If you selected “Move Database Files during Upgrade,” or if an Oracle Express Edition database is being upgraded to Oracle Enterprise Edition, then a Flash Recovery Area must be configured. If a Flash Recovery Area is already configured, the current settings are retained but the screen is displayed to enable you to override these values. Click Next.
46
Management Options and Database Credentials
If no other database is already being monitored with Enterprise Manager, the Management Options page appears. On this page, you have the option of setting up your database so that it can be managed with Enterprise Manager. Before you can register the database with Oracle Enterprise Manager Grid Control, an Oracle Enterprise Manager Agent must be configured on the host computer. To set up your database to be managed with Enterprise Manager, select “Configure the Database with Enterprise Manager” and then select one of the proposed options. Click Next. The Database Credentials page appears. Choose one of the proposed options and click Next.
47
Network Configuration
If the DBUA detects that multiple listeners are configured, the “Network Configuration for the database” page appears. This page has two tabs. The Listeners tab is displayed if you have more than one listener. The Directory Service tab appears if you have the directory services configured. On the Listeners tab, select one of the following options: Register this database with all the listeners Register this database with selected listeners only If you choose to register selected listeners only, you must select the listeners that you want from the Available Listeners list, and then use the arrow buttons to move them to the Selected Listeners list. If you want to register your database with a directory service, click the Directory Service tab. On the Directory Service tab, select one of the following options: Yes, register the database: Selecting this option enables client computers to connect to this database without a local name file (tnsnames.ora) and also enables them to use the Oracle Enterprise User Security feature. No, don’t register the database If you choose to register the database, you must also provide a user distinguished name (DN) in the User DN field and a password for that user in the Password field. An Oracle Wallet is created as part of database registration. It contains credentials suitable for password authentication between this database and the directory service. Enter a password in both the Wallet Password field and the Confirm Password field. Then click Next.
48
Recompile Invalid Objects
The Recompile Invalid Objects page appears. Select “Recompile invalid objects at the end of upgrade” if you want the DBUA to recompile all invalid PL/SQL modules after the upgrade is complete. This ensures that you do not experience any performance issues when you begin using your newly upgraded database. If you have multiple CPUs, you can reduce the time it takes to perform this task by taking advantage of parallel processing on your available CPUs. If you have multiple CPUs, the DBUA automatically adds an additional section to the Recompile Invalid Objects page and automatically determines the number of CPUs you have available. The DBUA also provides a recommended degree of parallelism, which determines how many parallel processes are used to recompile your invalid PL/SQL modules. Specifically, the DBUA sets the degree of parallelism to one less than the number of CPUs you have available. You can adjust this default value by selecting a new value from the “Degree of Parallelism” list. Select “Turn off Archiving and Flashback logging, for the duration of upgrade” to reduce the time required to complete the upgrade. If the database is in the ARCHIVELOG or flashback logging mode, the DBUA gives you the choice of turning them off for the duration of the upgrade. If you choose this option, Oracle recommends that you perform an offline backup immediately after the upgrade. Click Next.
49
Database Backup and Space Checks
The Backup page appears. Select “Backup database” if you want the DBUA to back up your database. Oracle strongly recommends that you back up your database before starting the upgrade. If errors occur during the upgrade, you might be required to restore the database from the backup. If you use the DBUA to back up your database, it makes a copy of all your database files in the directory that you specify in the Backup Directory field. The DBUA performs this cold backup automatically after it shuts down the database, and before it begins performing the upgrade procedure. The cold backup does not compress your database files and the backup directory must be a valid file system path. You cannot specify a raw device for the cold backup files. In addition, the DBUA creates a batch file in the specified directory. You can use this batch file to restore the database files: On Windows operating systems, the file is db_name_restore.bat. On Linux and UNIX platforms, the file is db_name_restore.sh. If you choose not to use the DBUA for your backup, Oracle assumes that you have already backed up your database using your own backup procedures. Click Next. Note: If you decide to use the DBUA to back up your database, the DBUA checks that you have enough space before the backup is taken.
50
Database Upgrade Summary
The Summary page appears. It shows the following information about the upgrade before it starts: Name, version, and Oracle home of the old and new databases Database backup location, available space, and space required Warnings ignored Database components to be upgraded Initialization parameter changes Database file locations Listener registration Check all of the specifications. Then perform one of the following: If anything is incorrect, click Back repeatedly until you reach the screen where you can correct it. Click Finish if everything is correct.
51
Upgrade Progress and Results
The Progress screen appears, and the DBUA begins the upgrade. You might encounter error messages with the Ignore and Abort choices. If other errors appear, you must address them accordingly. If an error is severe and cannot be handled during the upgrade, you have the following choices: Click Ignore to ignore the error and proceed with the upgrade. You can fix the problem, restart the DBUA, and complete the skipped steps. Click Abort to terminate the upgrade process. If a database backup was taken by the DBUA, it asks if you want to restore the database. After the database has been restored, you must correct the cause of the error and restart the DBUA to perform the upgrade again. If you do not want to restore the database, the DBUA leaves the database in its present state so that you can proceed with a manual upgrade. After the upgrade has completed, the following message is displayed: “Upgrade is complete. Click “OK” to see the results of the upgrade.” When you click OK, the Upgrade Results screen appears. The Upgrade Results screen displays a description of the original and upgraded databases, and the changes made to the initialization parameters. The screen also shows the directory where various log files are stored after the upgrade. You can examine these log files to obtain more details about the upgrade process. Click Restore Database if you are not satisfied with the upgrade results.
52
Practice 1-2: Overview In the second part of this practice, you upgrade your ASM instance to an Oracle Database 11g environment.
53
You are now ready to use Oracle Database 11g, Release 1!
Perform any required post-upgrade steps. Make additional post-upgrade adjustments to the initialization parameters. Upgrade the Recovery Catalog. Test your applications and tune performance. Set the COMPATIBLE initialization parameter to 11.1 to make full use of the Oracle Database 11g, Release 1 features. 10.0.0 is the minimum compatibility required for 11.1. You are now ready to use Oracle Database 11g, Release 1! After you have upgraded your database and before you can consider the database operational, you must complete some post-upgrade tasks regardless of whether you performed the upgrade manually or by using the DBUA. Some of the more common tasks include: Update environment variables (Linux and UNIX systems only) Adjust initialization parameters as needed Upgrade the Recovery Catalog Test your applications and tune performance Upgrade the statistics tables created by the DBMS_STATS package Enable Oracle Database Vault Upgrade the TIMESTAMP data Use the latest Time Zone file for clients
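When you are ready to raise compatibility (only after testing, because the setting cannot be lowered again), a minimal sketch assuming the instance uses an spfile is:

SQL> ALTER SYSTEM SET COMPATIBLE = '11.1.0' SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP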
54
Deprecated Features in Oracle Database 11g, Release 1
Oracle Ultra Search Java Development Kit (JDK) 1.4 CTXXPATH index Deprecated Features in Oracle Database 11g, Release 1 The slide lists the Oracle Database features that are deprecated in Oracle Database 11g, Release 1. Although they are supported in this release for backward compatibility, Oracle recommends that you migrate away from these deprecated features: Oracle Ultra Search Java Development Kit (JDK) 1.4: Oracle recommends that you use JDK 5.0; however, JDK 1.4 remains supported in this release. CTXXPATH index: Oracle recommends that you use XMLIndex instead.
55
Important Initialization Parameter Changes
USER_DUMP_DEST BACKGROUND_DUMP_DEST CORE_DUMP_DEST UNDO_MANAGEMENT not set implies AUTO mode. To migrate to automatic undo management: Set UNDO_MANAGEMENT=MANUAL. Execute your workload. Execute the DBMS_UNDO_ADV.RBU_MIGRATION function. Create an undo tablespace based on previous size result. Set UNDO_MANAGEMENT=AUTO. DIAGNOSTIC_DEST Important Initialization Parameter Changes The DIAGNOSTIC_DEST initialization parameter replaces the USER_DUMP_DEST, BACKGROUND_DUMP_DEST, and CORE_DUMP_DEST parameters. Starting with Oracle Database 11g, the default location for all trace information is defined by DIAGNOSTIC_DEST, which defaults to $ORACLE_BASE/diag. Old parameters are ignored if specified. For more information about diagnostics, refer to the lesson titled “Diagnosability Enhancements.” A newly installed Oracle Database 11g instance defaults to automatic undo management mode, and, if the database is created with the DBCA, an undo tablespace is automatically created. A null value for the UNDO_MANAGEMENT initialization parameter now defaults to automatic undo management; in previous releases, it defaulted to manual undo management mode. You must therefore use caution when upgrading a previous release to Oracle Database 11g. Note: The CONTROL_MANAGEMENT_PACK_ACCESS initialization parameter specifies the Server Manageability Packs that should be active. The following packs are available: The DIAGNOSTIC pack includes AWR, ADDM, and so on. The TUNING pack includes SQL Tuning Advisor, SQL Access Advisor, and so on. A license for DIAGNOSTIC is required to enable the TUNING pack. Possible values for this parameter are NONE, DIAGNOSTIC, and DIAGNOSTIC+TUNING (default).
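To see where an upgraded instance now writes its trace and alert files, you can check the new parameter and the diagnostic views, for example:

SQL> SHOW PARAMETER diagnostic_dest
SQL> SELECT name, value FROM v$diag_info WHERE name IN ('Diag Trace', 'Diag Alert');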
56
Notes only page Important Initialization Parameter Changes (continued)
To migrate to automatic undo management, perform the following steps: 1. Set UNDO_MANAGEMENT=MANUAL. 2. Start the instance again and run through a standard business cycle to obtain a representative workload. 3. After the standard business cycle completes, run the following function to collect the undo tablespace size: DECLARE utbsiz_in_MB NUMBER; BEGIN utbsiz_in_MB := DBMS_UNDO_ADV.RBU_MIGRATION; end; / This function runs a PL/SQL procedure that provides information about how to size your new undo tablespace based on the configuration, and use of the rollback segments in your system. The function returns the sizing information directly. 4. Create an undo tablespace of the required size and turn on automatic undo management by setting UNDO_MANAGEMENT=AUTO, or by removing the parameter. Note: For RAC configurations, repeat these steps on all instances. Notes only page
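Continuing step 4 above, a minimal sketch of creating the undo tablespace and switching to automatic undo management (the data file name and size are placeholders based on the value returned by the sizing function):

SQL> CREATE UNDO TABLESPACE undotbs2 DATAFILE '/u01/oradata/orcl/undotbs2_01.dbf' SIZE 500M AUTOEXTEND ON;
SQL> ALTER SYSTEM SET UNDO_TABLESPACE = undotbs2 SCOPE=SPFILE;
SQL> ALTER SYSTEM SET UNDO_MANAGEMENT = AUTO SCOPE=SPFILE;      -- static parameter: requires an instance restart
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP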
57
Direct NFS Client: Overview
[Diagram: In Oracle Database 10g, the Oracle RDBMS kernel reaches NAS storage through the platform-specific kernel NFS driver, which exposes many DBA-tuned configuration parameters and varies across platforms. In Oracle Database 11g, the Direct NFS client inside the Oracle RDBMS kernel accesses NAS storage (NFS v3) directly, with only optional generic configuration parameters, fewer parameters to tune, and easier NFS configuration.]

Direct NFS Client: Overview Direct NFS is implemented as a Direct Network File System client as part of the Oracle RDBMS kernel in the Oracle Disk Manager library. NAS-based storage systems use Network File System to access data. In Oracle Database 10g, NAS storage devices are accessed using the operating system–provided kernel Network File System driver, which requires specific configuration settings to ensure its efficient and correct usage with Oracle Database. The following are the major problems that arise from incorrectly specifying these configuration parameters: NFS clients are very inconsistent across platforms and vary across operating system releases. With more than 20 parameters to tune, manageability is affected. Oracle Direct Network File System implements the NFS version 3 protocol in the Oracle RDBMS kernel. The following are the main advantages of implementing Oracle Direct NFS: It enables complete control over the input/output path to Network File Servers. This results in predictable performance and enables simpler configuration management and a superior diagnosability. Its operations avoid the kernel Network File System layer bottlenecks and resource limitations. However, the kernel is still used for network communication modules. It provides a common Network File System interface for Oracle for potential use on all host platforms and supported Network File System servers. It enables improved performance through load balancing across multiple connections to Network File System servers and deep pipelines of asynchronous input/output operations with improved concurrency.
58
Direct NFS Configuration
1. Mount all expected mount points using the kernel NFS driver.
2. (Optional) Create an oranfstab file. Example entry (multiple path entries provide load balancing and failover):
   server: MyDataServer1
   path:
   path:
   export: /vol/oradata1
   mount: /mnt/oradata1
3. Replace the standard ODM library with the NFS ODM library:
   cp libodm11.so libodm11.so_stub
   ln -s libnfsodm11.so libodm11.so

Mount point lookup order: $ORACLE_HOME/dbs/oranfstab, /etc/oranfstab, /etc/mtab

Direct NFS Configuration By default, Direct NFS attempts to serve mount entries found in /etc/mtab. No other configuration is required. You can optionally use oranfstab to specify additional Oracle-specific options to Direct NFS. For example, you can use oranfstab to specify additional paths for a mount point as shown in the example in the slide. When oranfstab is placed in $ORACLE_HOME/dbs, its entries are specific to a single database. However, when oranfstab is placed in /etc, it is global to all Oracle databases and thus can contain mount points for all Oracle databases. Direct NFS looks for the mount point entries in the following order: $ORACLE_HOME/dbs/oranfstab, /etc/oranfstab, and /etc/mtab. It uses the first matched entry as the mount point. In all cases, Oracle requires that mount points be mounted by the kernel NFS system even when being served through Direct NFS. Oracle verifies kernel NFS mounts by cross-checking entries in oranfstab with the operating system NFS mount points. If a mismatch exists, Direct NFS logs an informational message and does not serve the NFS server.
59
Notes only page Direct NFS Configuration (continued)
Complete the following procedure to enable Direct NFS: 1. Make sure that NFS mount points are mounted by your kernel NFS client. The file systems to be used through ODM NFS should be mounted and available over regular NFS mounts for Oracle to retrieve certain bootstrapping information. The mount options that are used in mounting the file systems are not relevant. 2. (Optional) Create an oranfstab file with the following attributes for each NFS server to be accessed using Direct NFS: Server: The NFS server name Path: Up to four network paths to the NFS server, specified either by IP address or by name, as displayed using the ifconfig command. The Direct NFS client performs load balancing across all specified paths. If a specified path fails, Direct NFS reissues I/Os over any remaining paths. Export: The exported path from the NFS server Mount: The local mount point for the NFS server 3. Oracle Database uses the ODM library, libnfsodm11.so, to enable Direct NFS. To replace this standard ODM library with the ODM NFS library, complete the following steps: Change directory to $ORACLE_HOME/lib. Enter the following commands: cp libodm11.so libodm11.so_stub ln -s libnfsodm11.so libodm11.so Use one of the following methods to disable the Direct NFS client: Remove the oranfstab file. Restore the stub libodm11.so file by reversing the process you completed in step 3. Remove the specific NFS server or export paths in the oranfstab file. Notes If you remove an NFS path that the Oracle Database is using, you must restart the database for the change to be effective. If Oracle Database is unable to open an NFS server using Direct NFS, it uses the platform operating system kernel NFS client. In this case, the kernel NFS mount options must be set up correctly. Additionally, an informational message is logged into the Oracle alert and trace files indicating that Direct NFS could not be established. With the current ODM architecture, there can be only one active ODM implementation for each instance at any given time. Using NFS ODM in an instance precludes any other ODM implementation. The Oracle files resident on the NFS server that are served by the Direct NFS Client are also accessible through the operating system kernel NFS client. The usual considerations for maintaining integrity of the Oracle files apply in this situation. Notes only page
60
Monitoring Direct NFS
Slide diagram: the V$DNFS_SERVERS, V$DNFS_FILES, V$DNFS_CHANNELS, and V$DNFS_STATS views, with SVR_ID and PNUM shown as the join columns relating them.
Monitoring Direct NFS
Use the following views for Direct NFS management:
V$DNFS_SERVERS: Shows a table of servers accessed using Direct NFS
V$DNFS_FILES: Shows a table of files currently open using Direct NFS
V$DNFS_CHANNELS: Shows a table of open network paths (or channels) to servers for which Direct NFS is providing files
V$DNFS_STATS: Shows a table of performance statistics for Direct NFS
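A few example queries against these views follow. The individual SELECT statements are straightforward; the join at the end is a sketch only, with PNUM as the join column taken from the slide diagram rather than verified against the view definitions.
SELECT * FROM v$dnfs_servers;
SELECT * FROM v$dnfs_files;
SELECT * FROM v$dnfs_channels;
-- Per-process Direct NFS statistics matched to open channels on PNUM
-- (PNUM as the join key is an assumption based on the slide diagram)
SELECT c.*, s.*
FROM v$dnfs_channels c, v$dnfs_stats s
WHERE c.pnum = s.pnum;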
61
Hot Patching: Overview
For a bug fix or diagnostic patch on a running Oracle instance, hot patching provides the ability to do the following: Install Enable Disable Hot Patching: Overview Hot patching provides the ability to install, enable, and disable a bug fix or diagnostic patch on a live, running Oracle instance. Using hot patching is the recommended solution for avoiding down time when applying hot patches. Oracle provides the capability to do hot patching with any Oracle database using the opatch command-line utility. Hot patches can be provided when the changed code is small in scope and complexity (for example, with diagnostic patches or small bug fixes).
62
Installing a Hot Patch Applying a hot patch does not require instance shutdown, relinking of the Oracle binary, or instance restart. OPatch can be used to install or uninstall a hot patch. OPatch detects conflicts between two hot patches, as well as between a hot patch and a conventional patch. Installing a Hot Patch Unlike traditional patching mechanisms, applying a hot patch does not require instance shutdown or restart. Similar to traditional patching, you can use OPatch to install a hot patch. You can determine whether a patch is a hot patch by using the following command: opatch query -is_online_patch <patch location> or opatch query <patch location> -all Note: The patched code is shipped as a dynamic/shared library, which is then mapped into memory by each Oracle process.
63
Benefits of Hot Patching
No down time and no interruption of business Extremely fast install and uninstall times Integrated with OPatch: Conflict detection Listed in patch inventory Works in RAC environment Although the on-disk Oracle binary is unchanged, hot patches persist across instance shutdown and startup. Benefits of Hot Patching You do not have to shut down your database instance while you apply the hot patch. Unlike conventional patching, hot patching is extremely fast to install and uninstall. Because hot patching uses OPatch, you get all the benefits that you already have with conventional patching that uses OPatch. It does not matter how long or how many times you shut down your database—a hot patch always persists across instance shutdown and startup.
64
Conventional Patching and Hot Patching
Conventional Patching and Hot Patching
Conventional patches: require down time to apply or remove; installed and uninstalled via OPatch; persist across instance startup and shutdown; take several minutes to install or uninstall.
Hot patches: do not require down time to apply or remove; installed and uninstalled via OPatch; persist across instance startup and shutdown; take only a few seconds to install or uninstall.
Conventional Patching and Hot Patching
Conventional patching basically requires a shutdown of your database instance. Hot patching does not require any down time. Applications can keep running while you install a hot patch. Similarly, hot patches that have been installed can be uninstalled with no down time.
65
Hot Patching Considerations
Hot patches may not be available on all platforms. They are currently available on: Linux x86 Linux x86-64 Solaris SPARC64 Some extra memory is consumed. Exact amount depends on: Size of patch Number of concurrently running Oracle processes Minimum amount of memory: approximately one OS page per running Oracle process Hot Patching Considerations One operating system (OS) page is typically 4 KB on Linux x86 and 8 KB on Solaris SPARC64. With an average of approximately one thousand Oracle processes running at the same time, this represents around 4 MB of extra memory for a small hot patch.
66
Hot Patching Considerations
There may be a small delay (a few seconds) before every Oracle process installs or uninstalls a hot patch. Not all bug fixes and diagnostic patches are available as a hot patch. Use hot patches in situations when down time is not feasible. When down time is possible, you should install all relevant bug fixes as conventional patches. Hot Patching Considerations (continued) A vast majority of diagnostic patches are available as hot patches. For bug fixes, it really depends on their nature. Not every bug fix or diagnostic patch is available as a hot patch. But the long-term goal of the hot-patching facility is to provide hot-patching capabilities for Critical Patch Updates.
67
Practice 1-4: Overview In this practice, you create a database.
68
Summary In this lesson, you should have learned how to:
Install Oracle Database 11g Upgrade your database to Oracle Database 11g Use hot patching
69
Storage Enhancements
70
Objectives After completing this lesson, you should be able to:
Set up ASM fast mirror resync Use ASM preferred mirror read Understand scalability and performance enhancements Set up ASM disk group attributes Use the SYSASM role Use various new manageability options for CHECK, MOUNT, and DROP commands Use the ASMCMD md_backup, md_restore, and repair extensions Note: In this lesson, the term ASM data extent is shortened to extent.
71
Without ASM Fast Mirror Resync
Without ASM Fast Mirror Resync
Slide diagram (Oracle Database 10g and 11g; primary and secondary extents shown): 1. ASM redundancy used; 2. Disk access failure; 3. Disk automatically dropped: all dropped extents re-created; 4. Disk added back: extents rebalanced.
Without ASM Fast Mirror Resync
ASM offlines a disk whenever it is unable to complete a write to an extent allocated to the disk, while writing at least one mirror copy of the same extent on another disk if the corresponding disk group uses ASM redundancy. With Oracle Database 10g, ASM assumes that an offline disk contains only stale data; therefore, it no longer reads from such disks. Shortly after a disk is put offline, ASM drops it from the disk group by re-creating the extents allocated to the disk on the remaining disks in the disk group using redundant extent copies. This process is a relatively costly operation and can take hours to complete. If the disk failure is only a transient failure (such as failure of cables, host bus adapters, controllers, or disk power supply interruptions), you have to add the disk again after the transient failure is fixed. However, adding the dropped disk back to the disk group incurs the additional cost of migrating the extents back to the disk.
72
ASM Fast Mirror Resync: Overview
ASM Fast Mirror Resync: Overview
Slide diagram (Oracle Database 11g; primary and secondary extents shown): 1. ASM redundancy used; 2. Disk access failure; 3. Failure time < DISK_REPAIR_TIME; 4. Disk again accessible: need only to resync modified extents.
ASM Fast Mirror Resync: Overview
ASM fast mirror resync significantly reduces the time required to resynchronize a transient failure of a disk. When a disk goes offline following a transient failure, ASM tracks the extents that are modified during the outage. When the transient failure is repaired, ASM can quickly resynchronize only the ASM disk extents that have been affected during the outage. This feature assumes that the content of the affected ASM disks has not been damaged or modified. When an ASM disk path fails, the ASM disk is taken offline but not dropped if you have set the DISK_REPAIR_TIME attribute for the corresponding disk group. The setting for this attribute determines the duration of disk outage that ASM will tolerate while still being able to resynchronize after you complete the repair. Note: The tracking mechanism uses one bit for each modified extent. This ensures that the tracking mechanism is very efficient.
73
Using EM to Perform Fast Mirror Resync
When you offline an ASM disk in Oracle Enterprise Manager (EM), you are asked to confirm the operation. On the Confirmation page, you can override the default disk repair time. Similarly, you can view by failure group and choose a particular failure group to offline.
74
Using EM to Perform Fast Mirror Resync
Using EM to Perform Fast Mirror Resync (continued) You can also online disks by using Enterprise Manager.
75
Setting Up ASM Fast Mirror Resync
Setting Up ASM Fast Mirror Resync
ALTER DISKGROUP dgroupA SET ATTRIBUTE 'DISK_REPAIR_TIME'='3H';
ALTER DISKGROUP dgroupA OFFLINE DISKS IN FAILGROUP contrl2 DROP AFTER 5H;
ALTER DISKGROUP dgroupA ONLINE DISKS IN FAILGROUP contrl2 POWER 2 WAIT;
ALTER DISKGROUP dgroupA DROP DISKS IN FAILGROUP contrl2 FORCE;
Setting Up ASM Fast Mirror Resync
You set up this feature on a per–disk group basis. You can do so after disk-group creation by using the ALTER DISKGROUP command. Use a command similar to the following to enable ASM fast mirror resync:
ALTER DISKGROUP dgroupA SET ATTRIBUTE 'DISK_REPAIR_TIME'='2D4H30M';
After you repair the disk, run the SQL statement ALTER DISKGROUP ONLINE DISK. This statement brings the repaired disks back online to enable writes so that no new writes are missed. It also starts a procedure to copy all extents that are marked as stale from their redundant copies. You cannot apply the ONLINE statement to already-dropped disks. You can view the current attribute values by querying the V$ASM_ATTRIBUTE view. You can determine the time left before ASM drops an offlined disk by querying the REPAIR_TIMER column of either V$ASM_DISK or V$ASM_DISK_IOSTAT. In addition, a row corresponding to a disk resync operation appears in V$ASM_OPERATION with the OPERATION column set to SYNC.
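For example, the progress of an offline disk and of a resync operation can be watched with queries such as the following (a minimal sketch; REPAIR_TIMER and OPERATION are the columns cited in the notes, and MODE_STATUS is assumed to show which disks are currently offline):
SELECT name, mode_status, repair_timer FROM v$asm_disk;
SELECT group_number, operation, state, est_minutes FROM v$asm_operation;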
76
Notes only page Setting Up ASM Fast Mirror Resync (continued)
You can also use the ALTER DISKGROUP OFFLINE DISK SQL statement to manually bring the ASM disks offline for preventive maintenance. With this command, you can specify a timer to override the timer that is defined at the disk-group level. After you complete maintenance, use the ALTER DISKGROUP ONLINE DISK statement to bring the disk online. If you cannot repair a failure group that is in the offline state, you can use the ALTER DISKGROUP DROP DISKS IN FAILGROUP command with the FORCE option. This ensures that data originally stored on these disks is reconstructed from redundant copies of the data and stored on other disks in the same disk group. Note: The time elapses only when the disk group is mounted. Furthermore, changing the value of DISK_REPAIR_TIME does not affect disks previously offlined. The default setting of 3.6 hours for DISK_REPAIR_TIME should be adequate for most environments. Notes only page
77
ASM Preferred Mirror Read: Overview
ASM Preferred Mirror Read: Overview
Slide diagram: an extended cluster spanning Site A and Site B, with primary (P) and secondary (S) allocation units mirrored across the sites.
ASM Preferred Mirror Read: Overview
When you configure ASM failure groups in Oracle Database 10g, ASM always reads the primary copy of a mirrored extent. It may be more efficient for a node to read from a failure group extent that is closest to the node, even if it is a secondary extent. This is especially true in extended cluster configurations (where nodes are spread across several sites), in which reading from a local copy of an extent provides improved performance. With Oracle Database 11g, you can do this by configuring the preferred mirror read using the new initialization parameter ASM_PREFERRED_READ_FAILURE_GROUPS to specify a list of preferred read failure group names. The disks in those failure groups become the preferred read disks. Thus, every node can read from its local disks. This results in higher efficiency and performance, and reduced network traffic. The setting for this parameter is instance specific.
78
ASM Preferred Mirror Read: Setup
ASM Preferred Mirror Read: Setup
On first instance:  ASM_PREFERRED_READ_FAILURE_GROUPS=DATA.SITEA
On second instance: ASM_PREFERRED_READ_FAILURE_GROUPS=DATA.SITEB
Monitor:
SELECT preferred_read FROM v$asm_disk;
SELECT * FROM v$asm_disk_iostat;
ASM Preferred Mirror Read: Setup
To configure this feature, set the new ASM_PREFERRED_READ_FAILURE_GROUPS initialization parameter. This parameter is a multivalued parameter and should contain a string with a list of failure group names separated by commas. Each failure group name specified should be prefixed with its disk group name and a '.' character. This parameter is dynamic and can be modified using the ALTER SYSTEM command at any time. An example is shown in the slide. However, this initialization parameter is valid only for ASM instances. With the extended cluster, the failure groups specified in this parameter should contain only those disks that are local to the corresponding instance. The new column PREFERRED_READ has been added to the V$ASM_DISK view. Its format is a single character. If the failure group to which the disk belongs is a preferred read failure group, the value of this column is Y. To identify specific performance issues with ASM preferred read failure groups, use the V$ASM_DISK_IOSTAT view. This view displays the disk I/O statistics for each ASM client. If this view is queried from a database instance, only the rows for that instance are shown.
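Because the parameter is dynamic, it can also be set per instance with ALTER SYSTEM. A minimal sketch follows; the SID value +ASM1 is a hypothetical ASM instance name:
ALTER SYSTEM SET asm_preferred_read_failure_groups = 'DATA.SITEA' SID = '+ASM1';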
79
Enterprise Manager ASM Configuration Page
You can specify a set of disks as preferred disks for each ASM instance by using Enterprise Manager. The preferred read attributes are instance specific. In Oracle Database 11g, the Preferred Read Failure Groups field (asm_preferred_read_failure_groups) is added to the configuration page. This parameter takes effect only before the disk group is mounted or when the disk group is created. It applies only to newly opened files or to a newly loaded extent map for a file.
80
ASM Preferred Mirror Read: Best Practice
ASM Preferred Mirror Read: Best Practice
Slide diagram of recommended configurations:
Two sites, normal redundancy: only two failure groups, one for each instance.
Two sites, high redundancy: at most four failure groups, two for each instance.
Three sites, high redundancy: only three failure groups, one for each instance.
ASM Preferred Mirror Read: Best Practice
In practice, there are only a limited number of good disk group configurations in an extended cluster. A good configuration takes into account both performance and availability of a disk group in an extended cluster. Here are some possible examples:
For a two-site extended cluster, a normal redundancy disk group should have only two failure groups; all disks local to one site should belong to the same failure group. Also, no more than one failure group should be specified as a preferred read failure group by each instance. If there are more than two failure groups, ASM may not mirror a virtual extent across both sites. Furthermore, if the site with more than two failure groups were to go down, it would take the disk group down as well.
If the disk group to be created is a high-redundancy disk group, at most two failure groups should be created on each site with its local disks, with both local failure groups specified as preferred read failure groups for the local instance.
For a three-site extended cluster, a high-redundancy disk group with three failure groups should be used. In this way, ASM can guarantee that each virtual extent has a mirror copy local to each site and that the disk group is protected against a catastrophic disaster on any of the three sites.
81
ASM Scalability and Performance Enhancements
ASM Scalability and Performance Enhancements
Extent size grows automatically according to file size. ASM supports variable extent sizes to:
Raise the maximum possible file size
Reduce memory utilization in the shared pool
No administration is needed other than a manual rebalance in the case of significant fragmentation.
ASM Scalability and Performance Enhancements
ASM Variable Size Extents is an automated feature that enables ASM to support larger file sizes while improving memory usage efficiency. In Oracle Database 11g, ASM supports variable sizes for extents of 1, 8, and 64 allocation units (AU). ASM uses a predetermined number of extents of each size. As soon as a file crosses a certain threshold, the next extent size is used. With this feature, fewer extent pointers are needed to describe the file, and less memory is required to manage the extent maps in the shared pool (which would have been prohibitive in large file configurations). Extent size can vary both across files and within files. Variable Size Extents also enables you to deploy Oracle databases using ASM that are several hundred TB (or even several PB) in size. Note: The management of Variable Size Extents is completely automated and does not require manual administration.
82
ASM Scalability and Performance Enhancements (continued)
However, external fragmentation may occur when a large number of noncontiguous small data extents have been allocated and freed, and no additional contiguous large extents are available. A defragmentation operation is integrated as part of any rebalance operation. As a result, as a DBA, you always have the ability to defragment your disk group by executing a rebalance operation, as shown in the example that follows. Nevertheless, this should happen only very rarely because ASM also automatically performs defragmentation during extent allocation if the desired size is unavailable. This can potentially make some allocation operations longer. Note: This feature also enables much faster file opens because of the significant reduction in the amount of memory that is required to store file extents. Notes only page
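As a sketch of the manual defragmentation mentioned above, a rebalance can be requested as follows; the disk group name and power level are illustrative values only:
ALTER DISKGROUP data REBALANCE POWER 4;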
83
ASM Scalability in Oracle Database 11g
ASM imposes the following limits: 63 disk groups 10,000 ASM disks 4 petabytes per ASM disk 40 exabytes of storage 1 million files per disk group Maximum file size: External redundancy: 140 PB Normal redundancy: 42 PB High redundancy: 15 PB ASM Scalability in Oracle Database 11g ASM imposes the following limits: 63 disk groups in a storage system 10,000 ASM disks in a storage system 4 petabytes maximum storage for each ASM disk 40 exabytes maximum storage for each storage system 1 million files for each disk group Maximum file size depends on the redundancy type of the disk groups used: 140 PB for external redundancy (value currently greater than possible database file size), 42 PB for normal redundancy, and 15 PB for high redundancy. Note: In Oracle Database 10g, the maximum ASM file size for external redundancy is 35 TB.
84
SYSASM Role Using the SYSASM role to manage ASM instances avoids overlap between DBAs and storage administrators. SYSDBA will be deprecated: Oracle Database 11g, Release 1 behaves as in 10g. In future releases, SYSDBA privileges will be restricted in ASM instances. SQL> CONNECT / AS SYSASM SQL> CREATE USER username IDENTIFIED by passwd; SQL> GRANT SYSASM TO username; SQL> CONNECT username/passwd AS SYSASM; SQL> DROP USER username; SYSASM Role This feature introduces a new SYSASM role that is specifically intended for performing ASM administration tasks. Using the SYSASM role instead of the SYSDBA role improves security by separating ASM administration from database administration. With Oracle Database 11g, Release 1, the OS group for SYSASM and SYSDBA is the same, and the default installation group for SYSASM is dba. In a future release, separate groups will have to be created, and SYSDBA users will be restricted in ASM instances. As a member of the dba group, you can currently connect to an ASM instance by using the first statement in the slide. You also have the ability to use the combination of CREATE USER and GRANT SYSASM SQL statements from an ASM instance to create a new SYSASM user. This can be useful for remote ASM administration. These commands update the password file of each ASM instance and do not need the instance to be up and running. Similarly, you can revoke the SYSASM role from a user by using the REVOKE command, and you can drop a user from the password file by using the DROP USER command. The V$PWFILE_USERS view integrates a new column called SYSASM, which indicates whether the user can connect with SYSASM privileges (TRUE) or not (FALSE). Note: With Oracle Database 11g, Release 1, if you log in to an ASM instance as SYSDBA, warnings are written in the corresponding alert.log file.
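For example, the new column can be inspected with a simple query against the password file view:
SELECT username, sysdba, sysoper, sysasm FROM v$pwfile_users;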
85
Using EM to Manage ASM Users
Oracle Enterprise Manager 11g enables you to manage the users who access the ASM instance through remote connection (using password file authentication). These users are used exclusively for the ASM instance. However, you have this functionality only when connected as the SYSASM user. It is hidden if you connect as the SYSDBA or SYSOPER user. When you click the Create button, the Create User page is displayed. When you click the Edit button, the Edit User page is displayed. By clicking the Delete button, you can delete the created users. Note: Oracle Database 11g adds the SYSASM role to the ASM instance login page.
86
ASM Disk Group Compatibility
ASM Disk Group Compatibility
The compatibility of each disk group is separately controllable:
ASM compatibility controls the ASM metadata on-disk structure.
RDBMS compatibility controls the minimum consumer client level.
This is useful with heterogeneous environments. Setting disk group compatibility is irreversible.
Slide diagram: DB instance, ASM instance, and ASM disk group, with the relationship: database COMPATIBLE >= COMPATIBLE.RDBMS <= COMPATIBLE.ASM <= ASM instance COMPATIBLE.
ASM Disk Group Compatibility
There are two kinds of compatibility applicable to ASM disk groups:
ASM compatibility: Dealing with the persistent data structures that describe a disk group
RDBMS compatibility: Dealing with the capabilities of the clients (consumers of disk groups)
The compatibility of each disk group is independently controllable. This is required to enable heterogeneous environments with disk groups from both Oracle Database 10g and Oracle Database 11g. These two compatibility settings are attributes of each ASM disk group:
RDBMS compatibility refers to the minimum compatible version of the RDBMS instance that would allow the instance to mount the disk group. This compatibility determines the format of messages that are exchanged between the ASM and database (RDBMS) instances. An ASM instance is capable of supporting different RDBMS clients running at different compatibility settings. The database-compatible version setting of each instance must be greater than or equal to the RDBMS compatibility of all disk groups used by that database. Database instances are typically run from a different Oracle home than the ASM instance. This implies that the database instance may be running a different software version than the ASM instance. When a database instance first connects to an ASM instance, it negotiates the highest version that they both can support. The compatibility parameter setting of the database, the software version of the database, and the RDBMS compatibility setting of a disk group determine whether a database instance can mount a given disk group.
87
Notes only page ASM Disk Group Compatibility (continued)
ASM compatibility refers to the persistent compatibility setting controlling the format of data structures for ASM metadata on disk. The ASM compatibility level of a disk group must always be greater than or equal to the RDBMS compatibility level of the same disk group. ASM compatibility is concerned only with the format of the ASM metadata. The format of the file contents is determined by the database instance. For example, the ASM compatibility of a disk group can be set to 11.0 while its RDBMS compatibility could be 10.1. This implies that the disk group can be managed only by ASM software whose software version is 11.0 or higher, while any database client whose software version is higher than or equal to 10.1 can use that disk group. The compatibility of a disk group needs to be advanced only when there is a change to either persistent disk structures or protocol messaging. However, advancing disk group compatibility is an irreversible operation. You can set the disk group compatibility by using either the CREATE DISKGROUP command or the ALTER DISKGROUP command. Note: In addition to the disk group compatibilities, the compatible parameter (database compatible version) determines the features that are enabled; it applies to the database or ASM instance, depending on the instance_type parameter. For example, setting it to 10.1 would preclude the use of any new features introduced in Oracle Database 11g (disk online/offline, variable extents, and so on). Notes only page
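For an existing disk group, advancing compatibility with ALTER DISKGROUP looks like the following sketch; the disk group name and version values are examples only:
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.asm' = '11.1';
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.rdbms' = '11.1';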
88
ASM Disk Group Attributes
ASM Disk Group Attributes
Attributes (C = settable at CREATE, A = settable at ALTER):
au_size (C): 1|2|4|8|16|32|64 MB. Size of allocation units in the disk group.
compatible.rdbms (A, C): valid database version. Format of messages exchanged between the database and ASM.
compatible.asm (A, C): valid ASM instance version. Format of ASM metadata structures on disk.
disk_repair_time (A, C): 0 M to 2^32 D. Length of time before removing a disk once it is OFFLINE.
template.tname.redundancy (A): UNPROTECTED|MIRROR|HIGH. Redundancy of the specified template.
template.tname.stripe (A): COARSE|FINE. Striping attribute of the specified template.
CREATE DISKGROUP DATA NORMAL REDUNDANCY
DISK '/dev/raw/raw1','/dev/raw/raw2'
ATTRIBUTE 'compatible.asm'='11.1';
ASM Disk Group Attributes
When you create or alter an ASM disk group, you can change its attributes by using the new ATTRIBUTE clause of the CREATE DISKGROUP or ALTER DISKGROUP commands. These attributes are briefly summarized in the table in the slide:
ASM enables the use of different AU sizes that you specify when you create a disk group. The AU size can be 1 MB, 2 MB, 4 MB, 8 MB, 16 MB, 32 MB, or 64 MB.
RDBMS compatibility: See the slide titled “ASM Disk Group Compatibility” for more information.
ASM compatibility: See the slide titled “ASM Disk Group Compatibility” for more information.
You can specify DISK_REPAIR_TIME in units of minutes (M), hours (H), or days (D). If you omit the unit, the default is H. If you omit this attribute, the default is 3.6H. You can override this attribute with an ALTER DISKGROUP statement.
You can specify the redundancy attribute of the specified template.
You can specify the striping attribute of the specified template.
Note: For each defined disk group, you can look at all the defined attributes by using the V$ASM_ATTRIBUTE fixed view, as shown in the query that follows.
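For example, the attributes of all mounted disk groups can be listed by joining V$ASM_ATTRIBUTE to V$ASM_DISKGROUP on the group number:
SELECT g.name AS diskgroup, a.name AS attribute, a.value
FROM v$asm_diskgroup g, v$asm_attribute a
WHERE g.group_number = a.group_number
ORDER BY g.name, a.name;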
89
Using EM to Edit Disk Group Attributes
EM provides a simple way to store and retrieve environment settings related to disk groups. You can now set the compatible attributes from both the Create Disk Group page and the Edit Advanced Attributes for Disk Group page. The disk_repair_time attribute is added to only the Edit Advanced Attributes for Disk Group page. Note: For 11g ASM instances, the default ASM and Database compatibility values are 11.1.
90
Enhanced Disk Group Checks
Disk group check syntax is simplified. FILE and DISK options do the same as ALL. Additional checks performed: Alias Directories ALTER DISKGROUP DATA CHECK; Enhanced Disk Group Checks The CHECK disk group command is simplified to check all the metadata directories by default. The CHECK command lets you verify the internal consistency of the ASM disk group metadata. ASM displays summary errors and writes the details of the detected errors in the alert log. In earlier releases, you could specify this clause for ALL, DISK, DISKS IN FAILGROUP, and FILE. Those clauses have been deprecated because they are no longer needed. In the current release, the CHECK keyword performs the following operations: Checks the consistency of the disk (equivalent to CHECK DISK and CHECK DISK IN FAILGROUP in previous releases) Cross-checks all the file extent maps and allocation tables for consistency (equivalent to CHECK FILE in previous releases) Checks that the alias metadata directory and the file directory are linked correctly Checks that the alias directory tree is linked correctly Checks that ASM metadata directories do not have unreachable allocated blocks The REPAIR | NOREPAIR clause enables you to tell ASM whether or not to attempt to repair the errors found during the consistency check. The default is REPAIR. The NOREPAIR setting is useful when you want to be alerted to any inconsistencies but do not want ASM to take any automatic action to resolve them. Note: Introducing extra checks as part of check disk group slows down the entire check disk group operation.
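For example, to run the full set of checks but report inconsistencies without attempting repairs:
ALTER DISKGROUP data CHECK NOREPAIR;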
91
Restricted Mount Disk Group for Fast Rebalance
Restricted Mount Disk Group for Fast Rebalance
A disk group can be mounted on a single instance only. No database client or other ASM instance can obtain access. Rebalance can proceed without locking overhead.
1. ALTER DISKGROUP data DISMOUNT;
2. ALTER DISKGROUP data MOUNT RESTRICT;
3. Perform the maintenance task: add/remove disks, and so on.
4. ALTER DISKGROUP data DISMOUNT;
5. ALTER DISKGROUP data MOUNT;
Restricted Mount Disk Group for Fast Rebalance
RESTRICTED is a new mode for mounting a disk group in Oracle Database 11g. When a disk group is mounted in RESTRICTED mode, clients cannot access the files in the disk group. When an ASM instance knows that there are no clients, it improves the performance of the rebalance operation by not attempting to message clients to lock and unlock extent maps. A disk group mounted in RESTRICTED mode is mounted exclusively on only one node, and clients of ASM on that node cannot use that disk group. The RESTRICTED mode allows you to perform all maintenance tasks on a disk group in the ASM instance without external interaction. At the end of the maintenance cycle, you must explicitly dismount the disk group and remount it in normal mode. The ALTER DISKGROUP diskgroupname MOUNT command is extended to enable ASM to mount the disk group in RESTRICTED mode, as shown in the example in the slide. When you use the RESTRICTED option to start up an ASM instance, all disk groups defined in the ASM_DISKGROUPS parameter are mounted in RESTRICTED mode.
92
Mount Force Disk Group By default, MOUNT is NOFORCE: MOUNT with FORCE:
All disks must be available. MOUNT with FORCE: Offlines unavailable disks if quorum exists Fails if all disks are available ALTER DISKGROUP data MOUNT [FORCE|NOFORCE]; Mount Force Disk Group This feature alters the behavior of ASM when mounting an incomplete disk group. With Oracle Database 10g, as long as there are enough failure groups to mount a disk group, the mount operation succeeds, even when there are missing or damaged failure groups. This behavior has the potential to automatically drop ASM disks, requiring their addition again later after repair, and thus incurring a long rebalance operation. With Oracle Database 11g, such an operation fails unless you specify the new FORCE option when mounting the damaged disk group. This allows you to correct configuration errors (such as ASM_DISKSTRING set incorrectly) or connectivity issues before trying the mount again. However, disk groups mounted with the FORCE option could potentially have one or more disks offline if they were not available at the time of the mount. You must take corrective action before DISK_REPAIR_TIME expires to restore those devices. Failing to online those devices results in the disks being expelled from the disk group and costly rebalancing being required to restore redundancy for all the files in the disk group. Also, if one or more devices are offlined as a result of MOUNT FORCE, some or all files will not be properly protected until the redundancy is restored in the disk group via rebalance. Therefore, MOUNT with FORCE is useful when you know that some of the disks belonging to a disk group are unavailable. The disk group mount succeeds if ASM finds enough disks to form a quorum.
93
Notes only page Mount Force Disk Group (continued)
MOUNT with NOFORCE is the default option of MOUNT when none is specified. In NOFORCE mode, all disks that belong to a disk group must be accessible for the mount to succeed. Note: Specifying the FORCE option when it is not necessary also results in an error. There is also one special case in a cluster: If an ASM instance is not the first to mount the disk group, MOUNT FORCE fails with an error if disks are determined to be inaccessible locally but accessible by another instance. Notes only page
94
Forcing Disk Group Drop
Allows users to drop disk groups that cannot be mounted Fails if disk group is mounted anywhere DROP DISKGROUP data FORCE INCLUDING CONTENTS; Forcing Disk Group Drop Drop disk group force marks the headers of disks belonging to a disk group that cannot be mounted by the ASM instance as FORMER. However, the ASM instance first determines whether the disk group is being used by any other ASM instance using the same storage subsystem. If it is being used, and if the disk group is in the same cluster or on the same node, the statement fails. If the disk group is in a different cluster, the system checks further to determine whether the disk group is mounted by an instance in the other cluster. If the disk group is mounted elsewhere, the statement fails. However, this latter check is not as definitive as the checks for disk groups in the same cluster. You should therefore use this clause with caution. Note: When executing the DROP DISKGROUP command with the FORCE option, you must also specify the INCLUDING CONTENTS clause.
95
ASMCMD Extensions
New ASMCMD functionality (slide diagram): the md_backup, md_restore (with full, nodg, and newdg modes), repair, and lsdsk commands, visible through $ asmcmd help. Metadata captured by md_backup includes the disk group name, disk names and failure groups, user-created directories, templates, and disk group compatibility.
ASMCMD Extensions
ASMCMD is extended to include ASM metadata backup and restore functionality. This provides the ability to re-create a preexisting ASM disk group with the exact template and alias directory structure. Currently, if an ASM disk group is lost, it is possible to restore the lost files by using RMAN, but you must manually re-create the ASM disk group and any required user directories or templates. ASM metadata backup and restore (AMBR) works in two modes:
In backup mode, AMBR parses ASM fixed tables and views to gather information about existing disks and failure group configurations, templates, and alias directory structures. It then dumps this metadata information to a text file.
In restore mode, AMBR reads the previously generated file to reconstruct the disk group and its metadata. You can control AMBR behavior in restore mode to do a full, nodg, or newdg restore. The difference among the three submodes is whether you want to include the disk group creation and change its characteristics.
The lsdsk command lists ASM disk information. This command can run in two modes:
In connected mode, ASMCMD uses the V$ and GV$ views to retrieve disk information.
In nonconnected mode, ASMCMD scans disk headers to retrieve disk information, using an ASM disk string to restrict the discovery set. The connected mode is always attempted first.
96
Notes only page ASMCMD Extensions (continued)
Bad block repair is a new feature that runs automatically on normal or high redundancy disk groups. When a normal read from an ASM disk group fails with an I/O error, ASM attempts to repair that block by reading from the mirror copy and writing to it, and by relocating it if the copy failed to produce a good read. This whole process happens automatically only on blocks that are read. It is possible that some blocks and extents on an ASM disk group are seldom read. One important example is the secondary extents. The ASMCMD repair command is designed to trigger a read on these extents, so the resulting failure in I/O can start the automatic block repair process. You can use the ASMCMD repair interface if the storage array returns an error on a physical block; ASMCMD repair can then initiate a read on that block to trigger the repair. Note: For more information about the syntax for each of these commands, see the Oracle Database Storage Administrator’s Guide. Notes only page
97
ASMCMD Extensions: Example
ASMCMD Extensions: Example
1. Back up the disk group metadata:
ASMCMD> md_backup -b jfv_backup_file -g data
Disk group to be backed up: DATA#
Current alias directory path: jfv
ASMCMD>
2. The disk group is unintentionally dropped.
3. Restore the disk group and its metadata:
ASMCMD> md_restore -b jfv_backup_file -t full -g data
Disk group to be restored: DATA#
ASMCMDAMBR-09358, Option -t newdg specified without any override options.
Current Diskgroup being restored: DATA
Diskgroup DATA created!
User Alias directory +DATA/jfv created!
ASMCMD>
4. Restore the disk group files by using RMAN.
ASMCMD Extensions: Example
This example describes how to back up ASM metadata by using the md_backup command, and how to restore it by using the md_restore command. The first statement specifies the -b and -g options of the command. These define the name of the generated file containing the backup information and the disk group to be backed up (jfv_backup_file and data, respectively, in the slide). In step 2, it is assumed that there is a problem with the DATA disk group; as a result, it gets dropped. Before you can restore the database files that the disk group contained, you have to restore the disk group itself. In step 3, you initiate the re-creation of the disk group and the restoration of its metadata by using the md_restore command. Here you specify the name of the backup file generated in step 1, the name of the disk group that you want to restore, and the type of restore that you want. In this example, a full restore of the disk group is done because it no longer exists. After the disk group is re-created, you can restore its database files by using (for example) RMAN.
98
Summary In this lesson, you should have learned how to:
Set up ASM fast mirror resync Use ASM preferred mirror read Set up ASM disk group attributes Use the SYSASM role Use various new manageability options for CHECK, MOUNT, and DROP commands Use the ASMCMD md_backup, md_restore, and repair extensions
99
Practice 2: Overview This practice covers the following topics:
Using ASM fast mirror resync Using ASMCMD extensions
100
SQL Performance Analyzer
101
Change Management in Oracle Database 11g
Lesson 3: SQL Performance Analyzer Lesson 4: SQL Plan Management Lesson 5: Database Replay Change Management in Oracle Database 11g This lesson begins with a brief introduction to the Change Management features and benefits in Oracle Database 11g, which are covered in three lessons. Lesson 3 (“SQL Performance Analyzer”) begins on slide 9.
102
Challenges Faced by DBAs When Performing Changes
Maintaining service-level agreements through changes to hardware or software configurations Offering production-level workload environment for testing purposes Effectively forecasting and analyzing impact on SQL performance Challenges Faced by DBAs When Performing Changes Large business-critical applications are complex and have highly varying load and usage patterns. At the same time, these business systems are expected to provide certain service-level guarantees in terms of response time, throughput, uptime, and availability. Any change to a system (such as upgrading the database or modifying the configuration) often necessitates extensive testing and validation before these changes can make it to the production system. To be confident before moving to a production system, the database administrator (DBA) must expose a test system to a workload very similar to the workload to be experienced in a production environment. It is also beneficial for the DBA to have an effective way to analyze the impact of system-level changes on the overall SQL performance so that any required tuning changes can be performed before production.
103
Change Is the Only Constant
Change Is the Only Constant
Change is the most common cause of instability. Enterprise production systems are complex. Actual workloads are difficult to simulate. Realistic testing before production has traditionally been impossible; with Oracle Database 11g, it becomes possible. The consequences of untested change: reluctance to make changes and an inability to adopt new competitive technologies. Preserve order amid change.
Change Is the Only Constant
Oracle Database 11g is designed for data center environments that are rapidly evolving and changing to keep up with business demands, enabling DBAs to manage change effectively and efficiently. Building on the self-managing capabilities of Oracle Database 10g, Oracle Database 11g offers significant advances in the areas of automatic diagnostics, supportability, and change management. Oracle DBAs and information technology managers are leading the key initiatives in data centers today. Some of these data center initiatives are moving to low-cost computing platforms (such as Oracle Enterprise Linux) and simplifying storage management by using ASM. DBAs need to test the databases by using realistic workloads with new operating systems or storage platforms to ensure that migration is successful. Today’s enterprises must make significant investments in hardware and software to perform the infrastructure changes. For example, if the DBA wants to test the storage management of data files for a database, from file system–based storage to ASM for a typical J2EE application, the enterprise would need to invest in duplicate hardware for the entire application stack, including the Web server, application server, and database. The organization would also need to invest in expensive testing software to capture the end-user workload. These purchases make it very expensive for any organization to evaluate and implement changes to their data center infrastructure. Oracle Database 11g addresses this issue with a collection of solutions under the umbrella of “Change Management.”
104
Lifecycle of Change Management
Lifecycle of Change Management
Cycle stages (slide diagram): Test (Database Replay or SQL Performance Analyzer); Diagnose and resolve problems (advisors); Make change; Set up test environments (snapshot standbys); Diagnose problems; Provision for production; Patches and workarounds. Emphasis on this slide: realistic testing.
Lifecycle of Change Management
Oracle Database 11g supports realistic testing through the use of snapshot standbys to set up and test the physical environment. You can open a physical standby database temporarily (that is, activated) for read and write activities such as reporting and testing. Once testing is completed, you can then simply revert to the physical standby mode to allow catch-up to the primary site. This functionality preserves zero data loss and is similar to storage snapshots, but allows for disaster recovery and offers a single copy of storage at the time of testing. For enterprises to be able to perform an accurate test of a database environment, it is vital that they be able to reproduce the production scenarios accurately. Database Replay provides further support for realistic testing in Oracle Database 11g. Database Replay is designed to capture client requests on a given database to be reproduced on other copies of production databases. Oracle Enterprise Manager provides an easy-to-use set of steps to set up the capture of a workload. Some of the changes that a DBA deals with are database upgrades, new tuning recommendations, schema changes, statistics collection, and changes in operating system and hardware. DBAs can use SQL Performance Analyzer to track and forecast SQL performance changes caused by these changes. If SQL performance has regressed in some of the cases, the DBA can then run the SQL Tuning Advisor to tune the SQL statements.
105
Lifecycle of Change Management
Lifecycle of Change Management
Cycle stages (slide diagram): Test; Diagnose and resolve problems; Make change; Set up test environments; Diagnose problems (ADR/Support Workbench); Provision for production (rolling upgrades); Patches and workarounds (Enterprise Manager). Emphasis on this slide: provisioning automation.
Lifecycle of Change Management (continued)
When upgrading from Oracle Database 11g, Release 1, you can use the rolling upgrade functionality to ensure that various versions of the software can still communicate with each other. This allows independent nodes of an ASM cluster to be migrated or patched without affecting the availability of the database, thereby providing higher uptime and problem-free migration to new releases. ASM offers further system capacity planning and workload change enhancements (Fast Disk Resync, Preferred Mirror Read). Numerous enhancements to the online functionality (online index reorganization and online table redefinition) further support application change. Automatic Diagnostic Repository (ADR) is a new system-managed repository for storing and organizing trace files and other error diagnostic data. You get a comprehensive view of all the serious errors encountered by the database, and the relevant data needed for problem diagnosis and eventual resolution. You can also use EM Support Workbench, which provides a simple workflow interface to view and diagnose incident data, and package it for Oracle Support. The Data Recovery Advisor tool can be used to automatically diagnose data failures and report on the appropriate repair option. Oracle Database 11g Enterprise Manager supports end-to-end automation of patch application on single-instance database homes and rolling patches on clusterware. You no longer need to perform manual steps for shutting down your system, invoking OPatch, applying SQL, and other such best-practice steps in the patching procedure.
106
Setting Up a Test Environment by Using the Snapshot Standby Database
Setting Up a Test Environment by Using the Snapshot Standby Database
Slide diagram: a physical standby database receiving the redo stream is opened as a snapshot standby database; testing is performed on the snapshot standby, which continues to receive the redo stream; the testing changes are then backed out.
Setting Up a Test Environment by Using the Snapshot Standby Database
In Oracle Database 11g, a physical standby database can be opened temporarily (that is, activated) for read or write activities such as reporting and testing. A physical standby database in the snapshot standby state still receives redo data from the primary database, thereby providing data protection for the primary database while still in the reporting or testing database role. You convert a physical standby database to a snapshot standby database, and you open the snapshot standby database for writes by applications for testing. When you have completed testing, you discard the testing writes and catch up with the primary database by applying the redo logs. Creating a snapshot standby database was possible with the previous releases. However, Oracle Database 11g greatly simplifies the way you set up a snapshot standby database. For more information about snapshot standby databases, refer to the Oracle Data Guard Concepts and Administration Guide. Note: Another important feature is the real-time query capability of physical standby databases in Oracle Database 11g. This feature makes it possible to query a physical standby database while Redo Apply is active.
107
Benefits of Snapshot Standby
A snapshot standby database is activated from a physical standby database. Redo stream is continually accepted. Provides for disaster recovery Users can continue to query or update. Snapshot standby is open read/write. Benefits reporting applications Reduces storage requirements Benefits of Snapshot Standby A snapshot standby database is a database that is activated from a physical standby database to be used for reporting and testing. The snapshot standby database receives redo from the primary database and continues to provide data protection for the primary database. The snapshot standby database: Is like the primary database in that the users can perform queries or updates Is like a physical standby database in that it continues receiving redo data from the primary database A snapshot standby database provides the combined benefit of disaster recovery and of reporting and testing using a physical standby database. Although similar to storage snapshots, snapshot standby databases provide a single copy of storage while maintaining disaster recovery.
108
SQL Performance Analyzer
109
Objectives After completing this lesson, you should be able to:
Identify the benefits of using SQL Performance Analyzer Describe the SQL Performance Analyzer workflow phases Use SQL Performance Analyzer to ascertain performance gains following a database change
110
SQL Performance Analyzer: Overview
New 11g feature Targeted users: DBAs, QAs, application developers Helps predict the impact of system changes on SQL workload response time Builds different versions of SQL workload performance (that is, SQL execution plans and execution statistics) Executes SQL serially (concurrency not honored) Analyzes performance differences Offers fine-grained performance analysis on individual SQL Integrated with SQL Tuning Advisor to tune regressions SQL Performance Analyzer: Overview Oracle Database 11g introduces SQL Performance Analyzer, which gives you an exact and accurate assessment of the impact of change on the SQL statements that make up the workload. SQL Performance Analyzer helps you forecast the impact of a potential change on the performance of a SQL query workload. This capability provides DBAs with detailed information about the performance of SQL statements, such as before-and-after execution statistics, and statements with performance improvement or degradation. This enables you (for example) to make changes in a test environment to determine whether the workload performance will be improved through a database upgrade.
111
SQL Performance Analyzer: Use Cases
SQL Performance Analyzer is beneficial in the following use cases: Database upgrades Implementation of tuning recommendations Schema changes Statistics gathering Database parameter changes OS and hardware changes SQL Performance Analyzer: Use Cases SQL Performance Analyzer can be used to predict and prevent potential performance problems for any database environment change that affects the structure of the SQL execution plans. The changes can include (but are not limited to) any of the following: Database upgrades Implementation of tuning recommendations Schema changes Statistics gathering Database parameter changes OS and hardware changes DBAs can use SQL Performance Analyzer to foresee SQL performance changes that result from the preceding changes for even the most complex environments. As applications evolve through the development lifecycle, database application developers can test (for example) changes to schemas, database objects, and rewritten applications to mitigate any potential performance impact. SQL Performance Analyzer also enables the comparison of SQL performance statistics.
112
Usage Model: Capture SQL Workload
Usage Model: Capture SQL Workload
A SQL Tuning Set (STS) is used to store the SQL workload. It includes:
SQL text
Bind variables
Execution plans
Execution statistics
Incremental capture is used to populate the STS from the cursor cache over a period of time. The STS filtering and ranking capabilities filter out undesirable SQL.
Slide diagram: production database instance; incremental capture from the cursor cache into the STS.
Usage Model: Capture SQL Workload
The first step to using SQL Performance Analyzer is to capture the SQL statements that represent your workload. This is done using the SQL Tuning Set technology.
113
Usage Model: Transport to a Test System
Usage Model: Transport to a Test System
Copy the SQL Tuning Set to a staging table (“pack”).
Transport the staging table to the test system (Data Pump, DB link, and so on).
Copy the SQL Tuning Set from the staging table (“unpack”).
Slide diagram: production database instance (cursor cache) on one side, test database instance on the other.
Usage Model: Transport to a Test System
The second step is to transport these SQL statements to a similar system that is being tested. Here, the STS can be exported from production and then imported into a test system.
114
Usage Model: Build Before Change Performance
Usage Model: Build Before Change Performance
Before the change, the SQL performance version is the SQL workload performance baseline.
SQL performance = execution plans + execution statistics
Test/execute the SQL in the STS:
Produce execution plans and statistics.
Execute SQL serially (no concurrency).
Every SQL statement is executed only once.
Skip DDL/DML effects.
Alternatively, explain plan the SQL in the STS to generate SQL plans only.
Slide diagram: test database instance, before changes.
Usage Model: Build Before Change Performance
The third step is to capture a baseline of the test system performance consisting of the execution plan and execution statistics.
115
Usage Model: Build After Change Performance
Usage Model: Build After Change Performance
Manually implement the planned change:
Database upgrade
Implementation of tuning recommendations
Schema changes
Statistics gathering
Database parameter changes
OS and hardware changes
Reexecute the SQL after the change:
Test/execute the SQL in the STS to generate SQL execution plans and statistics.
Explain plan the SQL in the STS to generate SQL plans.
Slide diagram: test database instance, after the changes have been implemented.
Usage Model: Build After Change Performance
The fourth step is to make the changes to the test system and then rerun the SQL statements to assess the impact of the changes on the SQL performance.
116
Usage Model: Compare and Analyze Performance
Usage Model: Compare and Analyze Performance
Rely on a user-specified metric to compare SQL performance: elapsed_time, buffer_gets, disk_reads, ...
Calculate the impact of the change on individual SQLs and on the SQL workload:
Overall impact on workload
Net SQL impact on workload
Use SQL execution frequency to define a weight of importance.
Detect improvements, regressions, and unchanged performance.
Detect changes in execution plans.
Recommend running SQL Tuning Advisor to tune regressed SQLs.
Analysis results can be used to seed SQL Plan Management baselines.
Slide diagram: comparison analysis on the test database instance, classifying statements as improvements or regressions and feeding regressed statements to SQL Tuning Advisor.
Usage Model: Compare and Analyze Performance
Enterprise Manager provides the tools to make a full comparison of performance data, including execution statistics such as elapsed time, CPU time, and buffer gets. If the SQL performance has regressed in some of the cases, the DBA must then run SQL Tuning Advisor to tune the SQL statements, either immediately or at a scheduled time. As with any tuning strategy, it is recommended that only one change be implemented at a time and retested before making further changes.
117
SQL Performance Analyzer: Summary
Capture SQL workload on production. Transport the SQL workload to a test system. Build “before-change” performance data. Make changes. Build “after-change” performance data. Compare results from steps 3 and 5. Tune regressed SQL. SQL Performance Analyzer: Summary 1. Gather SQL: In this phase, you collect the set of SQL statements that represent your SQL workload on the production system. You can use SQL Tuning Sets or Automatic Workload Repository (AWR) to capture the information to transport. Because AWR essentially captures high-load SQLs, you should consider modifying the default AWR snapshot settings and captured Top SQL to ensure that AWR captures the maximum number of SQL statements. This ensures more complete SQL workload capture. 2. Transport: Here you transport the resultant workload to the test system. The STS is exported from the production system and the STS is imported into the test system. 3. Compute “before-version” performance: Before any changes take place, you execute the SQL statements, collecting baseline information that is needed to assess the impact that a future change might have on the performance of the workload. The information collected in this stage represents a snapshot of the current state of the system workload. The performance data includes: Execution plans (for example, generated by explain plan) Execution statistics (for example, includes elapsed time, buffer gets, disk reads, and rows processed) 4. Make a change: After you have the before-version data, you can implement your planned change and start viewing the impact on performance.
118
Notes only page SQL Performance Analyzer: Summary (continued)
5. Compute “after-version” performance: This step takes place after the change is made in the database environment. Each statement of the SQL workload runs under a mock execution (collecting statistics only), collecting the same information as captured in step 3. 6. Compare and analyze SQL Performance: After you have both versions of the SQL workload performance data, you can carry out the performance analysis by comparing the after-version data with the before-version data. The comparison is based on the execution statistics, such as elapsed time, CPU time, and buffer gets. 7. Tune regressed SQL: At this stage, you have identified exactly which SQL statements may cause performance problems when the database change is made. Here, you can use any of the database tools to tune the system. For example, you can use SQL Tuning Advisor or Access Advisor against the identified statements and then implement those recommendations. Alternatively, you can seed SQL Plan Management (SPM) with plans captured in step 3 to guarantee that the plans remain the same. After implementing any tuning action, you should repeat the process to create a new after-version and analyze the performance differences to ensure that the new performance is acceptable. Notes only page
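The steps above can also be driven from PL/SQL with the DBMS_SQLPA package. The following is a minimal sketch only; the SQL Tuning Set name MY_STS, the task name, and the execution names are hypothetical, and the comparison metric shown is just one of the supported values.
DECLARE
  v_task VARCHAR2(64);
  v_exec VARCHAR2(64);
  v_rep  CLOB;
BEGIN
  -- Create an analysis task from a previously unpacked SQL Tuning Set
  v_task := DBMS_SQLPA.CREATE_ANALYSIS_TASK(
              sqlset_name => 'MY_STS',
              task_name   => 'MY_SPA_TASK');

  -- Step 3: build the "before-change" performance data
  v_exec := DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
              task_name      => v_task,
              execution_type => 'TEST EXECUTE',
              execution_name => 'before_change');

  -- Step 4: make the planned change (parameter, schema, upgrade, ...) here

  -- Step 5: build the "after-change" performance data
  v_exec := DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
              task_name      => v_task,
              execution_type => 'TEST EXECUTE',
              execution_name => 'after_change');

  -- Step 6: compare the two trials using a chosen metric
  v_exec := DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
              task_name        => v_task,
              execution_type   => 'COMPARE PERFORMANCE',
              execution_params => DBMS_ADVISOR.ARGLIST(
                                    'comparison_metric', 'buffer_gets'));

  -- Generate a text report of the comparison (print the first 4000 characters)
  v_rep := DBMS_SQLPA.REPORT_ANALYSIS_TASK(task_name => v_task, type => 'TEXT');
  DBMS_OUTPUT.PUT_LINE(DBMS_LOB.SUBSTR(v_rep, 4000, 1));
END;
/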
119
Capturing the SQL Workload
Create SQL Tuning Set (STS) on original system. Create a staging table and upload STS in staging table. Export staging table to test system. Unpack staging table to STS on test system. Capturing the SQL Workload Capturing SQL workload is done using SQL Tuning Sets (STS). This concept is not new in Oracle Database 11g, and it follows exactly the same workflow as with previous releases of the database. This workflow is briefly described in the slide. You can use either Enterprise Manager wizards or the DBMS_SQLTUNE PL/SQL package. With Oracle Database 11g, you access the SQL Tuning Sets page from the Performance tab in Database Control. The workload that you capture should reflect a representative period of time (in captured SQL statements) that you wish to test under some changed condition. The following information is captured in this process: The SQL text The execution context (including bind values, parsing schema, and compilation environment), which contains a set of initialization parameters under which the statement is executed The execution frequency, which tells how many times the SQL statement has been executed during the time interval of the workload Normally the capture SQL happens on the production system to capture the workload running on it. The performance data is computed later on the test system by the compute SQL performance processes. SQL Performance Analyzer tracks the SQL performance of the same STS before and after a change is made to the database.
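A minimal PL/SQL sketch of this capture-and-transport workflow using the DBMS_SQLTUNE package is shown below; the STS name MY_STS, the staging table name STS_STAGING, and the capture window values are illustrative only.
BEGIN
  -- 1. Create an empty SQL Tuning Set on the production system
  DBMS_SQLTUNE.CREATE_SQLSET(
    sqlset_name => 'MY_STS',
    description => 'Workload for SQL Performance Analyzer');

  -- Incrementally capture from the cursor cache for 1 hour,
  -- polling every 5 minutes
  DBMS_SQLTUNE.CAPTURE_CURSOR_CACHE_SQLSET(
    sqlset_name     => 'MY_STS',
    time_limit      => 3600,
    repeat_interval => 300);

  -- 2. Create a staging table and pack the STS into it
  DBMS_SQLTUNE.CREATE_STGTAB_SQLSET(table_name => 'STS_STAGING');
  DBMS_SQLTUNE.PACK_STGTAB_SQLSET(
    sqlset_name        => 'MY_STS',
    staging_table_name => 'STS_STAGING');
END;
/

-- 3. Move STS_STAGING to the test system (Data Pump export/import or a
--    database link), then unpack it there:
BEGIN
  DBMS_SQLTUNE.UNPACK_STGTAB_SQLSET(
    sqlset_name        => 'MY_STS',
    replace            => TRUE,
    staging_table_name => 'STS_STAGING');
END;
/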
120
Creating a SQL Performance Analyzer Task
EM helps you manage each component in the SQL Performance Analyzer process and reports the analysis result. The workflow and user interface apply to both EM Database Control and EM Grid Control. You access SQL Performance Analyzer from the “Software and Support” tab of Database Control. Alternatively, select Database Instance > Advisor Central > Advisors > SQL Performance Analyzer. SQL Performance Analyzer offers three workflows for you to test different scenarios: Optimizer Upgrade Simulation: Test the effects of specified optimizer version changes on SQL Tuning Set performance. A SQL Performance Analyzer task is created and an initial trial run is performed with the optimizer_features_enable parameter set to an initial value. A second trial run is performed with the optimizer_features_enable parameter set to the targeted version. A replay trial comparison report is then run for the two trials. Parameter Change: Test and compare an initialization parameter change on SQL Tuning Set performance. A SQL Performance Analyzer task is created and an initial trial run is performed with the parameter set to the base value. A second trial run is performed with the parameter set to the changed value. A replay trial comparison report is then run for the two trials. Guided Workflow: Create a SQL Performance Analyzer task and execute custom experiments by using manually created replay trials.
121
Optimizer Upgrade Simulation
This page enables you to create a task that measures the performance impact on a SQL Tuning Set when the database is upgraded from one version to another. In the example in the slide, the simulated upgrade is done from one optimizer version to a later one. (You can also simulate going back to an earlier version.) To create an analysis task, you must specify the following details: Enter the name of the task and (optionally) a description. Specify the STS to use for this analysis. It must already be created. Select the Per-SQL Time Limit from the list to specify the time limit for the execution of each SQL statement: UNLIMITED: There is no time limit for the execution of each SQL statement. EXPLAIN ONLY: The execution plan is generated but the statement is not executed. CUSTOMIZE: You can customize the execution time limit. Select the Optimizer Versions to indicate the original version of the database and the new version to which the database is being upgraded. Two replay trials are created. The first captures STS performance with the original optimizer version, and the second uses the targeted version. Select the Comparison Metric to be used to evaluate the performance impact due to the database upgrade. Specify the Schedule for the task. Click Submit to proceed with the analysis.
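Outside Enterprise Manager, the same two-trial experiment can be run with DBMS_SQLPA by changing OPTIMIZER_FEATURES_ENABLE between the test executions. The sketch below assumes an analysis task already created on an STS (as in the PL/SQL example later in this lesson); the version strings and execution names are illustrative only.

ALTER SESSION SET optimizer_features_enable = '10.2.0.4';  -- illustrative "original" version
EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => :tname, -
     execution_type => 'TEST EXECUTE', execution_name => 'trial_10g');

ALTER SESSION SET optimizer_features_enable = '11.1.0.6';  -- illustrative "target" version
EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => :tname, -
     execution_type => 'TEST EXECUTE', execution_name => 'trial_11g');

EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => :tname, -
     execution_type => 'COMPARE PERFORMANCE', -
     execution_params => dbms_advisor.arglist('execution_name1', 'trial_10g', -
                                              'execution_name2', 'trial_11g'));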
122
SQL Performance Analyzer: Tasks
After you create your SQL Performance Analyzer task, it might take a long time for it to be executed depending on the number of statements that are contained in your SQL Tuning Set. While your task is executing, you can click Refresh on the SQL Performance Analyzer page until you see a green tick in the Last Run Status column for your task in the SQL Performance Analyzer Tasks table. After execution, you can click the link corresponding to the name of your task in the SQL Performance Analyzer Tasks table. This directs you to the corresponding SQL Performance Analyzer Task page.
123
SQL Performance Analyzer Task Page
A SQL Performance Analyzer Task allows you to execute a specific SQL Tuning Set under changed environmental conditions. After you execute the task, you can assess the impact of these changes on the performance of the SQL Tuning Set. The Comparison Report is useful in assessing the impact of the changed environmental conditions on the performance of the specified SQL Tuning Set. From this page, you can also: Create a Replay Trial to test the performance of a SQL Tuning Set under a specific environment. Click Create Replay Trial. Refer to the Guided Workflow page for detailed information about creating a Replay Trial. Run a Replay Trial Comparison to compare the differences between the Replay Trials that have been created so far. A Comparison Report is generated for each Replay Trial Run. Click Run Replay Trial Comparison. Refer to the Guided Workflow page for detailed information about running a Replay Trial Comparison. Click the eyeglass icon in the Comparison Report column to view the Replay Trial Comparison Report for your task.
124
Comparison Report Comparison Report
Use the SQL Performance Analyzer Task Result page to see the Replay Trial Comparison Report. The following general details are displayed: Task details such as name, owner, and description of the task Name and owner of the SQL Tuning Set Total number of SQL statements and any SQL statements with errors. Click the SQL Statements With Errors link to access the Errors table. The Replay Trials being compared and comparison metric being used In addition to these details, you can view the following: Projected Workload [Comparison Metric]: This chart shows the projected workload for each Replay Trial based on the comparison metric along with the improvement, regression, and overall impact. Click the impact links to drill down to the complete list of SQL statements in each category. SQL Statement Count: This chart shows the number of SQL statements that have improved, regressed, or not changed performance based on the comparison metric. The colors of the bars indicate whether the plan changed between the two trial runs. Click the links or the data buckets to access the SQL Statement Count Details page, where you can see a list of SQL statements, and then click a SQL ID to access the SQL details. “Top 10 SQL Statements Based on Impact on Workload” table: This table allows you to click a specific SQL ID to drill down to the corresponding SQL Details page.
125
Comparison Report Comparison Report (continued)
On the SQL Details page, you can view the SQL statements and a line-by-line comparison between each Replay Trial Run for each statistic. You can also find the explain plan for each trial.
126
Tuning Regressing Statements
From the SQL Performance Analyzer Task Result page, you can directly tune all regressing statements by invoking SQL Tuning Advisor. To do so, click the Run SQL Tuning Advisor button to access the Schedule SQL Tuning Task page, where you can specify the tuning task name and a schedule. When you are finished, click OK. This creates a new tuning task that analyzes all regressing statements found by SQL Performance Analyzer.
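If you prefer the command line, a tuning task can also be created directly against the STS with DBMS_SQLTUNE; the Run SQL Tuning Advisor button does the equivalent work for only the regressed statements. A minimal hedged sketch, reusing the illustrative STS name MYSTS:

variable tune_task varchar2(64)
EXEC :tune_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(sqlset_name => 'MYSTS', -
     task_name => 'MYSPA_TUNE', description => 'Tune statements from the SPA STS');
EXEC DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => :tune_task);
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK(:tune_task) FROM dual;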
127
Tuning Regressing Statements
Tuning Regressing Statements (continued) After the SQL tuning task is created, you return to the SQL Performance Analyzer Task Result page, where you can clearly see (in the Recommendations section) that you now have a tuning task associated with your performance analysis. Click the SQL Tune Report link to access the corresponding SQL Tuning Results page, where you can see the Recommendations table that lists all recommendations for regressing statements. You can also access the SQL Tuning Results page directly from the SQL Performance Analyzer Task page by clicking the eyeglass icon in the SQL Tune Report column of the Replay Trial Comparisons section for your trial comparison.
128
Preventing Regressions
Instead of using SQL Tuning Advisor to tune your regressing statements, you can also prevent regressions by using the SQL plan baselines. You can do so from the SQL Performance Analyzer Task Result page by clicking the Create SQL Plan Baselines button. Note: For more information about SQL plan baselines, refer to the lesson titled “SQL Plan Manageability.”
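A minimal sketch of this approach, assuming the before-change plans were captured into an STS (MYSTS is an illustrative name): load those plans as accepted SQL plan baselines so that the optimizer keeps using them after the change. SQL plan baselines are covered in detail in the next lesson.

variable cnt number
EXEC :cnt := DBMS_SPM.LOAD_PLANS_FROM_SQLSET(sqlset_name => 'MYSTS');
PRINT cnt    -- number of plans loaded as accepted baseline plans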
129
Parameter Change Analysis
Use the Parameter Change page to create a task that allows you to test the performance impact on a SQL Tuning Set when the value of the initialization parameter is changed. This option is very useful because it is difficult to forecast whether changing the parameter value will have a positive or negative impact. To create a task, do the following: Enter the name of the task and a description. Click the Select icon and select a SQL Tuning Set from the list. Select the Per-SQL Time Limit from the list to specify the time limit for the execution of each SQL statement. Click the Select icon to select an initialization parameter from the list. Specify the current value (Base Value) and the new value (Changed Value) for the initialization parameter. Select the comparison metric that will be used to evaluate the performance impact due to the change. Specify the schedule for the task. After the task has been created, an initial trial run is performed with the initialization parameter set to the Base Value. A second trial run is performed with the initialization parameter set to the Changed Value. Finally, a Replay Trial Comparison report is generated for the two trials with the specified comparison metric.
130
Guided Workflow Analysis
You can use the Guided Workflow page to define a sequence of steps to execute a two-trial SQL Performance Analyzer test. The steps are as follows: Create a SQL Performance Analyzer task based on a SQL Tuning Set. Replay the SQL Tuning Set in the initial environment: Any changes to the trial environment that affect the STS must be made manually before the Replay Trial is executed. Such changes may include changing initialization parameters, gathering optimizer statistics, and creating indexes. Create the Replay Trial using the changed environment: You can now create the second Replay Trial using the changed environment by specifying all the necessary information. Performance differences between the trials are attributed to the environmental differences. Create the Replay Trial Comparison using trials from previous steps: This allows you to assess the performance impact on the STS when each Replay Trial is executed. View the Trial Comparison Report: You can now generate the Replay Trial Comparison report. Note: Before submitting a replay trial, you must select the “Trial environment established” option on the corresponding task page. However, you must manually make the necessary changes.
131
SQL Performance Analyzer: PL/SQL Example
variable tname varchar2(64)

exec :tname := dbms_sqlpa.create_analysis_task( -
     sqlset_name => 'MYSTS', task_name => 'MYSPA');

exec dbms_sqlpa.execute_analysis_task(task_name => :tname, -
     execution_type => 'TEST EXECUTE', execution_name => 'before');

select dbms_sqlpa.report_analysis_task(task_name => :tname,
       type => 'text', section => 'summary') FROM dual;

-- Make changes

exec dbms_sqlpa.execute_analysis_task(task_name => :tname, -
     execution_type => 'TEST EXECUTE', execution_name => 'after');

select dbms_sqlpa.report_analysis_task(task_name => :tname,
       type => 'text', section => 'summary') FROM dual;

exec dbms_sqlpa.execute_analysis_task(task_name => :tname, -
     execution_type => 'COMPARE PERFORMANCE');

SQL Performance Analyzer: PL/SQL Example The general example in the slide shows you how to use the DBMS_SQLPA package to invoke SQL Performance Analyzer to assess the SQL performance impact of a change. You could easily adapt this example to run your own analysis. 1. Create the analysis task to run SQL Performance Analyzer. 2. Execute the task once to build the before-change performance data, and produce the before-change report (special report settings: set long, longchunksize, and linesize 90). With this call, you can specify various parameters, some of which are: Set the execution_type parameter in either of the following ways: Set to EXPLAIN PLAN to generate explain plans for all SQL statements in the SQL workload. Set to TEST EXECUTE to execute all SQL statements in the SQL workload. The procedure executes only the query part of the DML statements to prevent side effects to the database or user data. When TEST EXECUTE is specified, the procedure generates execution plans and execution statistics. Specify execution parameters by using the execution_params parameter, which is specified as dbms_advisor.arglist(name, value, …). The time_limit parameter specifies the global time limit to process all SQL statements in a SQL Tuning Set before timing out. The local_time_limit parameter specifies the time limit to process each SQL statement in a SQL Tuning Set before timing out. The report itself is produced with: select dbms_sqlpa.report_analysis_task(task_name => :tname, type => 'text', section => 'summary') FROM dual;
132
Notes only page SQL Performance Analyzer: PL/SQL Example (continued)
3. Make your changes. 4. Execute the task again after making the changes, and get the after-change report. 5. Compare the two executions and get the analysis report. Using different execution parameters, you can run the comparison as follows:

EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => :tname, -
     execution_type => 'COMPARE PERFORMANCE', -
     execution_params => dbms_advisor.arglist( -
         'execution_name1', 'before', -
         'execution_name2', 'after', -
         'comparison_metric', 'buffer_gets'));

Note: For more information about the DBMS_SQLPA package, see the Oracle Database PL/SQL Packages and Types Reference Guide. Notes only page
133
SQL Performance Analyzer: Data Dictionary Views
Modified views in Oracle Database 11g: DBA{USER}_ADVISOR_TASKS: Displays details about the analysis task DBA{USER}_ADVISOR_FINDINGS: Displays analysis findings New views in Oracle Database 11g: DBA{USER}_ADVISOR_EXECUTIONS: Lists metadata information for task execution DBA{USER}_ADVISOR_SQLPLANS: Displays the list of SQL execution plans DBA{USER}_ADVISOR_SQLSTATS: Displays the list of SQL compilation and execution statistics SQL Performance Analyzer: Data Dictionary Views DBA{USER}_ADVISOR_SQLPLANS: Displays the list of all SQL execution plans (or those owned by the current user) DBA{USER}_ADVISOR_SQLSTATS: Displays the list of SQL compilation and execution statistics (or those owned by the current user) DBA{USER}_ADVISOR_TASKS: Displays details about the advisor task created to perform an impact analysis of a system environment change DBA{USER}_ADVISOR_EXECUTIONS: Lists metadata information for a task execution. SQL Performance Analyzer creates a minimum of three executions to perform a change impact analysis on a SQL workload: one execution to collect performance data for the before-change version of the workload, the second execution to collect data for the after-change version of the workload, and a final execution to perform the actual analysis. DBA{USER}_ADVISOR_FINDINGS: Displays analysis findings. The advisor generates four types of findings: performance regression, symptoms, errors, and informative messages.
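You can query these views directly to check execution status and findings; a small example using the illustrative task name MYSPA from the PL/SQL example earlier in this lesson:

SELECT execution_name, execution_type, status
FROM   dba_advisor_executions
WHERE  task_name = 'MYSPA';

SELECT type, impact, message
FROM   dba_advisor_findings
WHERE  task_name = 'MYSPA';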
134
Summary In this lesson, you should have learned how to:
Identify the benefits of using SQL Performance Analyzer Describe the SQL Performance Analyzer workflow phases Use SQL Performance Analyzer to determine performance gains following a database change
135
Practice 3: Overview This practice covers the following topics:
Capturing SQL Tuning Sets Migrating SQL Tuning Sets from Oracle Database 10g to Oracle Database 11g Using SQL Performance Analyzer in an upgrade scenario
136
SQL Plan Management
137
Objectives After completing this lesson, you should be able to:
Set up SQL Plan Management Set up various SQL Plan Management scenarios
138
SQL Plan Management: Overview
SQL Plan Management is automatically controlled SQL plan evolution. The optimizer automatically manages SQL plan baselines. Only known and verified plans are used. Plan changes are automatically verified. Only comparable or better plans are used going forward. Critical SQL can be pre-seeded with an STS from SQL Performance Analyzer. SQL Plan Management: Overview A potential performance risk occurs when the SQL execution plan changes for a SQL statement. A SQL plan change can occur for a variety of reasons, such as a change in optimizer version, optimizer statistics, optimizer parameters, schema definitions, or system settings, or the creation of a SQL profile. Various plan control techniques (such as stored outlines and SQL profiles) have been introduced in past versions of Oracle Database to address performance regressions due to plan changes. However, these techniques are reactive processes that require manual intervention. SQL Plan Management is a new feature introduced with Oracle Database 11g that enables the system to automatically control SQL plan evolution by maintaining what are called SQL plan baselines. With this feature enabled, a newly generated SQL plan is integrated into the corresponding SQL plan baseline only if it has been verified that doing so will not result in a performance regression. So, during execution of a SQL statement, only a plan that is part of the corresponding SQL plan baseline can be used. As described later in this lesson, SQL plan baselines can be loaded automatically or seeded by using SQL Tuning Sets. Various scenarios are covered later in this lesson. The main benefit of the SQL Plan Management feature is the performance stability of the system through the avoidance of plan regressions. In addition, it saves the DBA the time that is often spent identifying and analyzing SQL performance regressions and finding workable solutions.
139
SQL Plan Baseline: Architecture
[Slide diagram: the SQL Management Base in the SYSAUX tablespace holds the statement log, the plan history and plan baseline for each repeatable SQL statement, and SQL profiles; the Automatic SQL Tuning task verifies plans before they are integrated into a baseline.] SQL Plan Baseline: Architecture The SQL Plan Management (SPM) feature introduces necessary infrastructure and services in support of plan maintenance and performance verification of new plans. For SQL statements that are executed more than once, the optimizer maintains a history of plans for individual SQL statements. The optimizer recognizes a repeatable SQL statement by maintaining a statement log. A SQL statement is recognized as repeatable when it is parsed or executed again after it has been logged. After a SQL statement is recognized as repeatable, various plans generated by the optimizer are maintained as a plan history containing relevant information (such as SQL text, outline, bind variables, and compilation environment) that is used by the optimizer to reproduce an execution plan. As an alternative or complement to the automatic recognition of repeatable SQL statements and the creation of their plan history, manual seeding of plans for a set of SQL statements is also supported. A plan history contains different plans generated by the optimizer for a SQL statement over time. However, only some of the plans in the plan history may be accepted for use. For example, a new plan generated by the optimizer is not normally used until it has been verified not to cause a performance regression. Plan verification is done “out of the box” as part of Automatic SQL Tuning running as an automated task in a maintenance window.
140
Notes only page SQL Plan Baseline: Architecture (continued)
An Automatic SQL Tuning task targets only high-load SQL statements. For them, it automatically implements actions such as making a successfully verified plan an accepted plan. A set of acceptable plans constitutes a SQL plan baseline. The very first plan generated for a SQL statement is obviously acceptable for use; therefore, it forms the original plan baseline. Any new plans subsequently found by the optimizer are part of the plan history but not part of the plan baseline initially. The statement log, plan history, and plan baselines are stored in the SQL Management Base (SMB), which also contains SQL profiles. The SMB is part of the database dictionary and is stored in the SYSAUX tablespace. The SMB has automatic space management (for example, periodic purging of unused plans). You can configure the SMB to change the plan retention policy and set space size limits. Note: With Oracle Database 11g, if the database instance is up but the SYSAUX tablespace is OFFLINE, the optimizer is unable to access SQL management objects. This can affect performance on some of the SQL workload. Notes only page
141
Loading SQL Plan Baselines
[Slide diagram: automatic capture with OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES=TRUE, bulk loading with DBMS_SPM (load_plans_from_cursor_cache, load_plans_from_sqlset, alter_sql_plan_baseline), and transport through a *_stgtab_baseline staging table to the plan baseline of another database.] Loading SQL Plan Baselines There are two ways to load SQL plan baselines. On the fly capture: Uses automatic plan capture by setting the initialization parameter OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES to TRUE. This parameter is set to FALSE by default. Setting it to TRUE turns on automatic recognition of repeatable SQL statements and automatic creation of plan history for such statements. This is illustrated in the left graphic in the slide, where you can see the first generated SQL plan automatically integrated into the original SQL plan baseline. Bulk loading: Uses the DBMS_SPM package, which enables you to manually manage SQL plan baselines. With this package, you can load SQL plans into a SQL plan baseline directly from the cursor cache or from an existing SQL Tuning Set (STS). For a SQL statement to be loaded into a SQL plan baseline from an STS, the SQL statement needs to store its SQL plan in the STS. DBMS_SPM enables you to change the status of a baseline plan from accepted to not accepted (and from not accepted to accepted). It also enables you to export baseline plans into a staging table, which can then be used to load SQL plan baselines on other databases.
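A brief hedged sketch of the bulk-loading path, assuming a statement already in the cursor cache (the sql_id value and the staging table name are illustrative):

variable cnt number
-- Load the plan of one cached statement
EXEC :cnt := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => 'abc123def4567');
-- Pack existing baselines into a staging table for transport to another database
EXEC DBMS_SPM.CREATE_STGTAB_BASELINE(table_name => 'SPM_STAGE');
EXEC :cnt := DBMS_SPM.PACK_STGTAB_BASELINE(table_name => 'SPM_STAGE');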
142
Evolving SQL Plan Baselines
variable report clob
exec :report := DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE(sql_handle => 'SYS_SQL_593bc74fca8e6738');
print report

Evolving SQL Plan Baselines During the SQL plan baseline evolution phase, Oracle Database routinely evaluates the performance of new plans and integrates plans with better performance into SQL plan baselines. When the optimizer finds a new plan for a SQL statement, the plan is added to the plan history as a nonaccepted plan. The plan is then verified for performance relative to the SQL plan baseline performance. When it is verified that a nonaccepted plan does not cause a performance regression (either manually or automatically), the plan is changed to an accepted plan and integrated into the SQL plan baseline. Successful verification of a nonaccepted plan consists of comparing its performance to that of one plan selected from the SQL plan baseline and ensuring that it delivers better performance. There are two ways to evolve SQL plan baselines: By using the DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE function. An invocation example is shown in the slide. The function returns a report that tells you whether some of the existing history plans were moved to the plan baseline. You can also specify specific plans in the history to be tested. By running SQL Tuning Advisor: SQL plan baselines can be evolved by manually or automatically tuning SQL statements using SQL Tuning Advisor. When SQL Tuning Advisor finds a tuned plan and verifies its performance to be better than a plan chosen from the corresponding SQL plan baseline, it makes a recommendation to accept a SQL profile. When the SQL profile is accepted, the tuned plan is added to the corresponding SQL plan baseline.
143
Important Baseline SQL Plan Attributes
select signature, sql_handle, sql_text, plan_name, origin, enabled, accepted, fixed, autopurge
from dba_sql_plan_baselines;

SIGNATURE  SQL_HANDLE    SQL_TEXT  PLAN_NAME         ORIGIN        ENA ACC FIX AUT
8.062E+18  SYS_SQL_6fe2  select..  SYS_SQL_PLAN_1ea  AUTO-CAPTURE  YES NO  NO  YES
8.062E+18  SYS_SQL_6fe2  select..  SYS_SQL_PLAN_4be  AUTO-CAPTURE  YES YES NO  YES
…

exec :cnt := dbms_spm.alter_sql_plan_baseline(sql_handle => 'SYS_SQL_37e0168b0…3efe', -
     plan_name => 'SYS_SQL_PLAN_8dfc352f359901ea', attribute_name => 'ENABLED', -
     attribute_value => 'NO');

Important Baseline SQL Plan Attributes When a plan enters the plan history, it is associated with a number of important attributes: SIGNATURE, SQL_HANDLE, SQL_TEXT, and PLAN_NAME are important identifiers for search operations. ORIGIN allows you to determine whether the plan was automatically captured (AUTO-CAPTURE), manually loaded (MANUAL-LOAD), manually evolved by SQL Tuning Advisor (MANUAL-SQLTUNE), or automatically evolved by Automatic SQL Tuning (AUTO-SQLTUNE). ENABLED and ACCEPTED: The ENABLED attribute means that the plan is enabled for use by the optimizer. If ENABLED is not set, the plan is not considered. The ACCEPTED attribute means that the plan was validated as a good plan, either automatically by the system or manually when the user changes it to ACCEPTED. When a plan changes to ACCEPTED, it will become not ACCEPTED only when DBMS_SPM.ALTER_SQL_PLAN_BASELINE() is used to change its status. An ACCEPTED plan can be temporarily disabled by removing the ENABLED setting. A plan must be ENABLED and ACCEPTED for the optimizer to consider using it. FIXED means that the optimizer considers only those plans and not other plans. For example, if you have 10 baseline plans and three of them are marked FIXED, the optimizer uses only the best plan from these three, ignoring all the others. A SQL plan baseline is said to be FIXED if it contains at least one enabled fixed plan. If new plans are added to a fixed SQL plan baseline, these new plans cannot be used until they are manually declared as FIXED.
144
Notes only page Important Baseline SQL Plan Attributes (continued)
You can look at each plan’s attributes by using the DBA_SQL_PLAN_BASELINES view, as shown in the slide. You can then use the DBMS_SPM.ALTER_SQL_PLAN_BASELINE function to change some of them. You can also remove plans or a complete plan history by using the DBMS_SPM.DROP_SQL_PLAN_BASELINE function. The example shown in the slide changes the ENABLED attribute of SYS_SQL_PLAN_8DFC352F359901EA to NO. Note: The DBA_SQL_PLAN_BASELINES view contains additional attributes that enable you to determine when each plan was last used and whether a plan should be automatically purged. Notes only page
145
SQL Plan Selection … > dbms_xplan.display_sql_plan_baseline
[Slide diagram: plan selection flowchart: when OPTIMIZER_USE_SQL_PLAN_BASELINES=TRUE, the best-cost plan is checked against the plan history and plan baseline; if it is not part of the baseline, it is added to the plan history and the baseline plan with the lowest cost is selected instead; whether a baseline plan was used can be seen with dbms_xplan.display(…,'BASIC +NOTE') or in plan_table(other_xml).] SQL Plan Selection If you are using automatic plan capture, the first time that a SQL statement is recognized as repeatable, its best-cost plan is added to the corresponding SQL plan baseline. That plan is then used to execute the statement. The optimizer uses a comparative plan selection policy when a plan baseline exists for a SQL statement and the initialization parameter OPTIMIZER_USE_SQL_PLAN_BASELINES is set to TRUE (default value). Each time a SQL statement is compiled, the optimizer first uses the traditional cost-based search method to build a best-cost plan. Then it tries to find a matching plan in the SQL plan baseline. If a match is found, it proceeds as usual. If no match is found, it first adds the new plan to the plan history, then costs each of the accepted plans in the SQL plan baseline, and picks the one with the lowest cost. The accepted plans are reproduced using the outline that is stored with each of them. So the effect of having a SQL plan baseline for a SQL statement is that the optimizer always selects one of the accepted plans in that SQL plan baseline. With SQL Plan Management, the optimizer can produce a plan that could be either a best-cost plan or a baseline plan. This information is dumped in the other_xml column of the plan_table upon explain plan. In addition, you can use the new dbms_xplan.display_sql_plan_baseline function to display one or more execution plans for the specified sql_handle of a plan baseline. If plan_name is also specified, the corresponding execution plan is displayed.
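A quick hedged example of displaying a baseline plan from SQL*Plus (the sql_handle value is the illustrative one used earlier in this lesson):

SELECT * FROM TABLE(
  DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE(
    sql_handle => 'SYS_SQL_593bc74fca8e6738',
    format     => 'BASIC +NOTE'));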
146
Notes only page SQL Plan Selection (continued)
Note: To preserve backward compatibility, if a stored outline for a SQL statement is active for the user session, the statement is compiled using the stored outline. In addition, a plan generated by the optimizer using a stored outline is not stored in the SMB even if automatic plan capture has been enabled for the session. Although there is no explicit migrate procedure for stored outlines, they can be migrated to SQL plan baselines by using the LOAD_PLANS_FROM_CURSOR_CACHE or LOAD_PLANS_FROM_SQLSET functions of the DBMS_SPM package. When the migration is complete, you should disable or drop the original stored outline. Notes only page
147
Possible SQL Plan Manageability Scenarios
[Slide diagram: two scenarios: a database upgrade from Oracle Database 10g to Oracle Database 11g, where well-tuned plans are loaded into the production plan baseline, and a new application deployment, where baseline plans are shipped in a staging table from the development database; in both cases no plan regressions occur.] Possible SQL Plan Manageability Scenarios Database upgrade: Bulk SQL plan loading is especially useful when the system is being upgraded from an earlier version to Oracle Database 11g. For this, you can capture plans for a SQL workload into a SQL Tuning Set (STS) before the upgrade, and then load these plans from the STS into the SQL plan baseline immediately after the upgrade. This strategy can minimize plan regressions resulting from the use of the new optimizer version. New application deployment: The deployment of a new application module means the introduction of new SQL statements into the system. The software vendor can ship the application software along with the appropriate SQL plan baselines for the new SQL being introduced. Because of the plan baselines, the new SQL statements will initially run with the plans that are known to give good performance under a standard test configuration. However, if the customer system configuration is very different from the test configuration, the plan baselines can be evolved over time to produce better performance. In both scenarios, you can use the automatic SQL plan capture after manual loading to make sure that only better plans will be used for your applications in the future. Note: In all scenarios in this lesson, assume that OPTIMIZER_USE_SQL_PLAN_BASELINES is set to TRUE.
148
SQL Performance Analyzer and SQL Plan Baseline Scenario
[Slide diagram: on Oracle Database 11g, SQL Performance Analyzer runs the STS before the change with optimizer_features_enable set to the 10g value and after the change with the 11g value; the plans of the regressing statements are loaded into the SQL plan baseline so that only well-tuned plans are used and no plan regressions occur.] SQL Performance Analyzer and SQL Plan Baseline Scenario A variation of the first method described in the previous slide is through the use of SQL Performance Analyzer. You can capture pre–Oracle Database 11g plans in an STS and import them into Oracle Database 11g. Then set the initialization parameter optimizer_features_enable to 10g to make the optimizer behave as if this were a 10g Oracle database. Next run SQL Performance Analyzer for the STS. When that is complete, set the initialization parameter optimizer_features_enable back to 11g and rerun SQL Performance Analyzer for the STS. SQL Performance Analyzer produces a report that lists the SQL statements whose plans have regressed from 10g to 11g. For those SQL statements that are shown by SQL Performance Analyzer to incur performance regression due to the new optimizer version, you can capture their plans using an STS and then load them into the SMB. This method represents the best form of the plan-seeding process because it helps prevent performance regressions while preserving performance improvements upon database upgrade.
149
Loading a SQL Plan Baseline Automatically
[Slide diagram: after the upgrade from Oracle Database 10g, the workload first runs on Oracle Database 11g with OPTIMIZER_FEATURES_ENABLE set to the pre-upgrade value and OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES=TRUE, so the pre-upgrade plans are captured as the plan baseline; when OPTIMIZER_FEATURES_ENABLE is raised to the 11g value, any new plan waits for verification and only better plans are accepted, resulting in well-tuned plans and no plan regressions.] Loading a SQL Plan Baseline Automatically: Scenario Another upgrade scenario involves using the automatic SQL plan capture mechanism. In this case, set the initialization parameter OPTIMIZER_FEATURES_ENABLE (OFE) to the pre–Oracle Database 11g version value for an initial period of time such as a quarter, and execute your workload after upgrade by using the automatic SQL plan capture. During this initial time period, because of the OFE parameter setting, the optimizer is able to reproduce pre–Oracle Database 11g plans for a majority of the SQL statements. Because automatic SQL plan capture is also enabled during this period, the pre–Oracle Database 11g plans produced by the optimizer are captured as SQL plan baselines. When the initial time period ends, you can remove the setting of OFE to take advantage of the new optimizer version while incurring minimal or no plan regressions due to the plan baselines. Regressed plans will use the previous optimizer version; nonregressed statements will benefit from the new optimizer version.
150
Purging SQL Management Base Policy
SQL> exec dbms_spm.configure('SPACE_BUDGET_PERCENT', 20);
SQL> exec dbms_spm.configure('PLAN_RETENTION_WEEKS', 105);
SQL> exec :cnt := dbms_spm.drop_sql_plan_baseline('SYS_SQL_37e0168b04e73efe');

Purging SQL Management Base Policy The space occupied by the SQL Management Base (SMB) is checked weekly against a defined limit. A limit based on the percentage size of the SYSAUX tablespace is defined. By default, the space budget limit for the SMB is set to 10 percent of SYSAUX size. However, you can configure SMB and change the space budget to a value between 1 percent and 50 percent by using the DBMS_SPM.CONFIGURE procedure. If SMB space exceeds the defined percent limit, warnings are written to the alert log. Warnings are generated weekly until the SMB space limit is increased, the size of SYSAUX is increased, or the size of SMB is decreased by purging some of the SQL management objects (such as SQL plan baselines or SQL profiles). The space management of SQL plan baselines is done proactively using a weekly purging task. The task runs as an automated task in the maintenance window. Any plan that has not been used for more than 53 weeks is purged. However, you can configure SMB and change the unused plan retention period to a value between 5 weeks and 523 weeks (a little more than 10 years). To do so, use the DBMS_SPM.CONFIGURE procedure. You can look at the current configuration settings for the SMB by examining the DBA_SQL_MANAGEMENT_CONFIG view. In addition, you can manually purge the SMB by using the DBMS_SPM.DROP_SQL_PLAN_BASELINE function (as shown in the example in the slide).
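To see the current retention and space-budget settings described above, you can query the configuration view directly:

SELECT parameter_name, parameter_value
FROM   dba_sql_management_config;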
151
Enterprise Manager and SQL Plan Baselines
Use the SQL Plan Management page to manage SQL profiles, SQL patches, and SQL plan baselines from one location rather than separate locations in Enterprise Manager. You can also enable, disable, drop, pack, unpack, load, and evolve selected baselines. From this page, you can also configure the various SQL plan baseline settings.
152
Summary In this lesson, you should have learned how to:
Set up SQL Plan Management Set up various SQL Plan Management scenarios
153
Practice 4: Overview This practice covers the use of SQL Plan Management.
154
Database Replay
155
Objectives After completing this lesson, you should be able to:
Identify the benefits of using Database Replay List the steps involved in Database Replay Use Enterprise Manager to record and replay workloads
156
Why Use Database Replay?
System changes (such as hardware and software upgrades) are a fact of life. Customers want to identify the full impact of changes before going live. Extensive testing and validation can be expensive in time and money. Despite expensive testing, success rates are low: Many issues go undetected. Changes can negatively affect system availability and performance. Cause of low success rate: Inability to properly test with real-world production workloads, with many issues going undetected. The Database Replay feature makes it possible to do real-world testing. Why Use Database Replay? Large business-critical applications are complex and experience highly varying load and usage patterns. At the same time, these business systems are expected to provide certain service-level guarantees in terms of response time, throughput, uptime, and availability. Often any change to a system, such as upgrading the database or modifying the configuration, necessitates extensive testing and validation before these changes can make it to the production system. To be confident before moving to a production system, the DBA needs to expose a test system to a workload very similar to the workload to be experienced in a production environment. It is also beneficial for the DBA to have an effective means of analyzing the impact of system-level changes on overall SQL performance so that any tuning changes required can be performed before production.
157
Database Replay Re-create actual production database workload in test environment. Identify and analyze potential instabilities before making changes to production. Capture workload in production: Capture full production workload with real load & concurrency Move the captured workload to test system Replay workload in test: Make the desired changes in test system Replay workload with production load & concurrency Honor commit ordering Analyze and report: Errors Data divergence Performance divergence Database Replay Oracle Database 11g provides specific solutions to the challenges described in the preceding slides. Database Replay allows you to test the impact of a system change by replaying real-world workload on the test system before it is exposed to a production system. The production workload (including transaction concurrency and dependency) of the database server is recorded over an illustrative period of time (for example, a peak period). This recorded data is used to replay the workload on a test system that has been appropriately configured. You gain a high degree of confidence in the overall success of the database change by subjecting the database server in a test system to a workload that is practically indistinguishable from a production workload.
158
System Architecture: Capture
[Slide diagram: on the production database, the recording infrastructure in the database stack writes one shadow capture file per client (shadow) session into the capture directory; background processes are not recorded; a database backup is taken for later restore.] System Architecture: Capture Here you see an illustration of a system that is being recorded. You should always record a workload that spans an “interesting” period in a production system. Typically, the replay of the recording is used to determine whether it is safe to upgrade to a new version of the RDBMS server. A special recording infrastructure built into the RDBMS records data about all external client requests while the production workload is running on the system. External requests are any SQL queries, PLSQL blocks, PLSQL remote procedure calls, DML statements, DDL statements, Object Navigation requests, or OCI calls. During the recording, background jobs and, in general, all internal clients continue their work without being recorded. The end product is a workload recording containing all necessary information for replaying the workload as seen by the RDBMS in the form of external requests. The recording infrastructure imposes minimal performance overhead (extra CPU, memory, and I/O) on the recording system. You should, however, plan to accommodate the additional disk space needed for the actual workload recording. RAC Note: Instances in a RAC environment have access to the common database files. However, they do not need to share a common general-purpose file system. In such an environment, the workload recording is written on each instance’s file system during recording. For processing and replay, all parts of the workload recording need to be manually copied into a single directory.
159
System Architecture: Processing the Workload
[Slide diagram: the shadow capture files in the capture directory are preprocessed (“process capture files”) to produce replay-specific metadata.] System Architecture: Processing the Workload The workload capture data is processed, and new workload replay-specific metadata files are created that are required for the replay of the given workload capture. Only new files are created; no files are modified that were created during the workload capture. Because of this, you can run the preprocess multiple times on the same capture directory (for example, when the procedure encounters unexpected errors or is canceled). External client connections are remapped at this stage. Any replay parameters that affect the replay outcome can be modified. Note: Because processing workload capture can be relatively expensive, the best practice is to do that operation on a system other than the production database system.
160
System Architecture: Replay
[Slide diagram: replay clients on the replay system read the processed capture files from the capture directory and drive the test system (with its changes), which was restored from the database backup.] System Architecture: Replay Before replaying the workload on the replay system, be sure to do the following: 1. Restore the replay database on a test system to match the capture database at the start of the workload capture. 2. Make changes (such as performing an upgrade) to the test system as needed. 3. Copy the workload to the test system. The workload recording is consumed by a special application called the replay driver, which sends requests to the RDBMS on which the workload is replayed. The RDBMS on which the workload is replayed is usually a test system. It is assumed that the database of the replay system is suitable for the replay of the workload that was recorded. The internal RDBMS clients are not replayed. The replay driver is a special client that consumes the workload recording and sends appropriate requests to the test system to make it behave as if the external requests were sent by the clients used during the recording of the workload (see the previous example). The use of a special driver that acts as the sole external client to the RDBMS enables the record-and-replay infrastructure to be client agnostic. The replay driver consists of one or more replay clients that connect to the replay system and send requests based on the workload capture. The replay driver equally distributes the workload capture streams among all the replay clients based on the network bandwidth, CPU, and memory capability.
161
The Big Picture Pre-change production system Post-change test system
[Slide diagram: shadow capture files from the pre-change production system are processed and replayed against a post-change test system that was restored from a backup of the production database; a snapshot standby can be used as the test system.] The Big Picture With Oracle Database 11g managing system changes, a significant benefit is the added confidence to the business in the success of performing the change. The record-and-replay functionality offers confidence in the ease of upgrade during a database server upgrade. A useful application of Database Replay is to test the performance of a new server configuration. Consider a customer who is utilizing a single instance database and wants to move to a RAC setup. The customer records the workload of an interesting period and then sets up a RAC test system for replay. During replay, the customer is able to monitor the performance benefit of the new configuration by comparing the performance to the recorded system. This can also help convince the customer to move to a RAC configuration after seeing the benefits of using the Database Replay functionality. Another application is debugging. You can record and replay sessions emulating an environment to make bugs more reproducible. Manageability feature testing is another benefit. Self-managing and self-healing systems need to implement this advice automatically (“autonomic computing model”). Multiple replay iterations allow testing and fine-tuning of the control strategies’ effectiveness and stability. Many Oracle customers have expressed strong interest in this change-assurance functionality. The database administrator, or a user with special privileges granted by the DBA, initiates the record-and-replay cycle and has full control over the entire procedure.
162
Pre-Change Production System
Changes not supported: clients/app servers
Supported changes: database upgrades and patches; schema and parameters; RAC nodes and interconnect; OS platforms and OS upgrades; CPU and memory; storage
Pre-Change Production System Database Replay focuses on recording and replaying the workload that is directed to the RDBMS. Therefore, recording the workload is done at the point indicated in the diagram in the slide. Recording at the RDBMS in the software stack makes it possible to exchange anything below this level and test the new setup using the record-and-replay functionality. While replaying the workload, the RDBMS performs the actions observed during recording. In other words, during the replay phase the RDBMS code is exercised in a way that is very similar to the way it was exercised during the recording phase. This is achieved by re-creating all external client requests to the RDBMS. External client requests include all the requests by all possible external clients of the RDBMS.
163
Supported Workloads Supported Limitations
Supported: all SQL (DML, DDL, PL/SQL) with practically all types of binds; full LOB functionality (cursor-based and direct OCI); local transactions; logins and logoffs; session switching; limited PL/SQL RPCs
Limitations: direct path load, import/export; OCI-based object navigation (ADTs) and REF binds; Streams, non-PL/SQL-based AQ; distributed txns, remote describe/commit operations; Flashback (Database and Query); Shared Server
Supported Workloads The slide shows supported and nonsupported database operations.
164
Capture Considerations
Planning Adequate disk space for captured workload (binary files) Database restart: Only way to guarantee authentic replay Startup restrict Capture will un-restrict May not be necessary depending on the workload A way to restore database for replay purposes: Physical restore (scn/time provided) Logical restore of application data Flashback/snapshot-standby Filters can be specified to capture subset of the workload. SYSDBA or SYSOPER privileges and appropriate OS privileges Overhead Performance overhead for TPCC is 4.5% Memory overhead : 64 KB per session Disk space Capture Considerations You perform the following tasks in the planning phase of the workload recording: Check the database backup strategy, ensuring that the database can be restored to StartSCN when the recording starts. Plan the capture period by selecting it based on the application and the peak periods. You can use existing manageability features such as Automatic Workload Repository (AWR) and Active Session History (ASH) to select an appropriate period based on workload history. The starting time for capture should be carefully planned because it is recommended that you shut down and restart the database before starting the capture. Specify the location of the workload capture data. You must set up a directory that is to be used to store the workload capture data. You should provide ample disk space because the recording stops if there is insufficient disk space. However, everything captured up to that point is usable for replay. Define capture filters for user sessions that are not to be captured. You can specify a recording filter to skip sessions that should not be captured. No new privileges or user roles are introduced with the Database Replay functionality. The recording user and replay user must have either the SYSDBA or SYSOPER privilege. This is because a user having only SYSOPER or SYSDBA can start up or shut down the database to start the recording. Correct operating system (OS) privileges should also be assigned so that the user is able to access the recording, replay directories, and manipulate the files under those directories.
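The capture tasks described above map onto the DBMS_WORKLOAD_CAPTURE package. The following is a minimal hedged sketch; the directory path, the filter, the capture name, and the duration are all illustrative.

-- Directory object for the capture files (path is illustrative)
CREATE DIRECTORY dbreplay_dir AS '/u01/app/oracle/dbreplay';

BEGIN
  -- Exclude sessions that should not be recorded (filter name and value are illustrative)
  DBMS_WORKLOAD_CAPTURE.ADD_FILTER(fname      => 'skip_batch_jobs',
                                   fattribute => 'PROGRAM',
                                   fvalue     => 'perl%');
  -- Record for two hours into the capture directory
  DBMS_WORKLOAD_CAPTURE.START_CAPTURE(name     => 'peak_capture',
                                      dir      => 'DBREPLAY_DIR',
                                      duration => 7200);
END;
/
-- Or stop the capture manually before the duration elapses:
EXEC DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE();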
165
Replay Considerations
Preprocess captured workload One-time action On same DB version as replay Can be performed anywhere (production, test system, or other system) if versions match Restore database, and then perform the change: Upgrade Schema changes OS change Hardware change Add instance Replay Considerations The preprocess phase is a once-only required action for the specified database version. After the necessary metadata has been created, you can replay the workload as many times as required. You must restore the replay database to match the capture database at the start of the workload capture. A successful replay depends on the application transactions accessing the application data identical to that on the capture system. You can choose to restore the application data using point-in-time recovery, flashback, and import/export.
166
Replay Considerations
Manage external interactions Remap connection strings to be used for the workload: One-to-one: For simple instance-to-instance remapping Many-to-one: Use of load balancer (e.g., single node to RAC) Modify DB Links and directory objects that point to production systems Set up one or more replay clients Multithreaded clients that can each drive multiple workload sessions Replay Considerations (continued) A captured workload may contain references to external systems that are meaningful only in the capture environment. Replaying a workload with unresolved references to external systems may cause unexpected problems in the production environment. A replay should be performed in a completely isolated test environment. You should make sure that all references to external systems have been resolved in the replay environment such that replaying a workload will cause no harm to your production environment. You can make one-to-one or many-to-one remappings. For example, database links in a captured production environment may reference external production databases that should not be referenced during replay. Therefore, you should modify any external references that could jeopardize the production environment during replay. The replay client (an executable named wrc) submits a captured session’s workload. You should install one or more replay clients, preferably on systems other than the production host. Each replay client must be able to access the directory that holds the preprocessed workload. You can also modify the replay parameters to change the behavior of the replay.
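On the test system, these steps correspond roughly to the following DBMS_WORKLOAD_REPLAY sequence plus the wrc replay client; the directory, replay name, connection ID, and connect string are illustrative.

-- Preprocess the copied capture files (once per database version)
EXEC DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE(capture_dir => 'DBREPLAY_DIR');

-- Initialize the replay and remap captured connections to the test system
EXEC DBMS_WORKLOAD_REPLAY.INITIALIZE_REPLAY(replay_name => 'replay_1', -
     replay_dir => 'DBREPLAY_DIR');
EXEC DBMS_WORKLOAD_REPLAY.REMAP_CONNECTION(connection_id => 1, -
     replay_connection => 'test-host:1521/testdb');

-- Start one or more replay clients at the operating system level, for example:
--   wrc system/password@testdb replaydir=/u01/app/oracle/dbreplay
-- and then prepare and start the replay (see the replay options that follow).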
167
Replay Options Synchronized replay: Unsynchronized replay:
Ensures minimal data divergence Commit-based synchronization Unsynchronized replay: Useful for load/stress testing Original commit ordering not honored High data divergence Think time options: Auto (default) Adjust think time to maintain the captured request rate: 0%: No think time (highest possible request rate) <100%: Higher request rate 100%: Exact think time >100%: Lower request rate Login time options Percentage (default is 100%) Replay Options The following replay options can be modified while replaying your workload: The synchronization parameter determines whether synchronization will be used during workload replay. If this parameter is set to TRUE, the COMMIT order in the captured workload will be preserved during replay and all replay actions will be executed only after all dependent COMMIT actions have completed. The default value is TRUE. The think_time_scale parameter scales the elapsed time between two successive user calls from the same session; it is interpreted as a percentage value. Use this parameter to increase or decrease the replay speed. Setting this parameter to 0 will send user calls to the database as fast as possible during replay. The default value is 100. The think_time_auto_correct parameter corrects the think time (based on the think_time_scale parameter) between calls, when user calls take longer to complete during replay than during capture. It is interpreted as a percentage value. The connect_time_scale parameter scales the elapsed time from when the workload capture started to when the session connects with the specified value; it is interpreted as a percentage. Use this option to manipulate the session connect time during replay. The default value is 100. Note: During workload capture, elapsed time is measured by user time and user think time. User time is the elapsed time of a user call to the database. User think time is the elapsed time while the user waits between issuing calls. During workload replay, elapsed time is measured by user time, user think time, and synchronization time.
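These options are exposed as parameters of DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY; the following hedged sketch spells out the default values before starting the replay.

BEGIN
  DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY(
    synchronization         => TRUE,   -- honor the captured commit order
    connect_time_scale      => 100,    -- scale session connect times (percent)
    think_time_scale        => 100,    -- scale user think time (percent)
    think_time_auto_correct => TRUE);  -- stretch think time if calls run slower
  DBMS_WORKLOAD_REPLAY.START_REPLAY;
END;
/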
168
Replay Analysis Data divergence Error divergence Performance
Number of rows compared for each call (queries, DML) Error divergence New errors Mutated errors Errors that have disappeared Performance Capture and Replay report ADDM report ASH report for skew analysis AWR report Replay Analysis There may be some divergence of the replay compared to what was recorded. For example, when replaying on a newer version of the RDBMS, a new algorithm may cause specific requests to be faster, resulting in divergence appearing as a faster execution. This is considered a desirable divergence. Another example of a divergence is when a SQL statement returns fewer rows during replay than those returned during recording. This is clearly undesirable. For data divergence, the result of an action can be considered as: The result set of SQL query An update to persistent database state A return code or error code Performance divergence is useful in determining how new algorithms introduced in the replay system may affect overall performance. There are numerous factors that can cause replay divergence. Though some of them cannot be controlled, others can be mitigated. It is the task of the DBA to understand the workload run-time operations and take the necessary actions to reduce the level of record-and-replay divergence. Online divergence should aid the decision to stop a replay that has diverged significantly. The results of the replay before the divergence may still be useful, but further replay would not produce reliable conclusions. Offline divergence reporting is used to determine how successful the replay was after the replay has finished.
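After the replay finishes, the capture and replay reports referred to above can be generated from the corresponding packages; the capture and replay IDs below are illustrative and would normally be looked up first.

-- Find the IDs, then generate text reports
SELECT id, name FROM dba_workload_captures;
SELECT id, name FROM dba_workload_replays;

SELECT DBMS_WORKLOAD_CAPTURE.REPORT(capture_id => 21, format => 'TEXT') FROM dual;
SELECT DBMS_WORKLOAD_REPLAY.REPORT(replay_id => 11, format => 'TEXT') FROM dual;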
169
Notes only page Replay Analysis (continued)
Data divergence of the replay encompasses the results of both queries and errors. That is, errors that occurred during recording are considered proper results and any change during replay is reported. You can use existing tools such as ADDM to measure performance differences between the recording system and the replay system. Additionally, error comparison reports during the replay report on the following: Errors that did not happen during recording Errors that were not reproduced during replay Difference in error type Notes only page
170
Database Replay Workflow in Enterprise Manager
Capture the workload on a database. (Task 1) Optionally export the AWR data. (Task 1) Restore the replay database on a test system. Make changes to the test system as required. Copy the workload to the test system. Preprocess the captured workload. (Task 2) Configure the test system for the replay. Replay the workload on the restored database. (Task 3) Database Replay Workflow in Enterprise Manager The following are the typical steps to perform Database Replay. Steps done with Enterprise Manager (EM) are marked as Task n. Other steps are not part of the EM workflow: 1. Capture the workload on a database. (Task 1) 2. Optionally export the AWR data. (Task 1) 3. Restore the replay database on a test system to match the capture database at the start of the workload capture. 4. Make changes (such as performing an upgrade) to the test system as required. 5. Copy the generated workload files to the test system. 6. Preprocess the captured workload on the test system. (Task 2) 7. Configure the test system for the replay. 8. Replay the workload on the restored database. (Task 3)
171
Capturing Workload with Enterprise Manager
Enterprise Manager (EM) provides you with a user interface to manage each component in the Database Replay process. The workflow and user interface applies to both EM Database Control and EM Grid Control. You access Database Replay on the “Software and Support” tab of Database Control. On the Database Replay page, you can perform the following named tasks: Capture Workload Preprocess Capture Workload Replay Workload View Workload Capture History: Click this link to view or delete the history of all workload captures. Active Capture and Replay: If a capture or replay is currently in progress, this table appears at the bottom of the page even if there are no rows. To view the status of the capture or replay, select the name and click View, or just click the Name link. RAC Note: When an instance goes down during capture of a RAC system, the capture continues normally and is not aborted. The sessions that died as a result of an instance going down will be replayed up to the point at which the instance died. When a dead instance is repaired and comes up again during capture, all new sessions are recorded normally. During replay, the death of instances is not replayed.
172
Using the Capture Wizard
On the “Capture Workload: Plan Environment” page, select each of the Acknowledge check boxes after you have met the prerequisites. If you do not handle the prerequisites, the following problems can occur: Not restarting the database could capture in-flight transactions, which may adversely affect the replay of subsequent captured transactions. The captured workload will be written to the file system and can use up all the available disk space. The chance of error and data divergence increases if application data does not match at the start of capture and replay. When you are finished, click Next to proceed with the rest of the wizard. Note: You can choose to restart the database before capture begins. Be sure that all other database activities can be temporarily stopped.
173
Using the Capture Wizard
Using the Capture Wizard (continued) On the “Capture Workload: Options” page, you can do the following: Choose whether to restart the database before starting capture. You do this in the Database Restart Options section. If the database is not restarted before capture begins, some database activities may not be captured accurately and completely. During replay, these incompletely captured activities may cause the replay database state to diverge from the capture database. The replay reports errors in these cases. Define workload filters in the Workload Filters section. Decide whether you want to include or exclude certain sessions, and then provide filter names, corresponding session attributes, and values. Selecting Inclusion as the filter mode captures only what you specify, and selecting Exclusion captures everything except what you specify. You can either include or exclude certain sessions, but you cannot do both at the same time. The default filter names exclude Enterprise Manager activities. The Value column represents the actual name for the session attribute, whereas you can provide any filter name desired. When you are finished, click Next to proceed with the rest of the wizard.
174
Using the Capture Wizard
Using the Capture Wizard (continued) On the “Capture Workload: Parameters” page, you can either provide your own required capture name or accept the system-supplied name. Select a directory object name from the list of existing directory objects defined in the system, or click Create Directory Object to specify a unique name and path for a new directory object. This directory is used to generate the capture files. Then click Next. This takes you to the “Capture Workload: Schedule” page, where you can either provide your own required job name or accept the system-supplied name. The job system automatically names the job in uppercase. The job and capture names do not have to match. If you schedule the job immediately and if you specified restarting the database in the Options step, the “Information: Restart Database” page appears after you submit the job in the Review step, and then the View Workload Capture page appears. If you schedule the job for a later time with or without a restart, the Database Replay page appears with a message notifying you that a job has been scheduled after you submit the job in the Review step. If you do not specify a capture duration, you must manually stop capture by clicking the Stop button on the Database Replay page while capture is in progress. You can also stop capture on the View Workload Capture page. Click Next to go to the Review page, where you can verify that the settings are as you intended before you submit the job. RAC Note: For RAC, the DBA should define the directory for captured data at a storage location accessible by all instances. Otherwise, the workload capture data needs to be copied from all locations to a single location before starting the processing of the workload recording.
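If you prefer to create the directory object outside the wizard, a standard SQL statement does the same job. The path and grantee below are only examples and the operating system path must already exist on the database host:

-- Directory object that will hold the generated capture files (example path)
CREATE OR REPLACE DIRECTORY jun07 AS '/home/oracle/capture/jun07';

-- Optionally allow another database user (hypothetical name) to access the files
GRANT READ, WRITE ON DIRECTORY jun07 TO capture_admin;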
175
Using the Capture Wizard
Using the Capture Wizard (continued) The slide illustrates the case in which you schedule the capture job immediately, do not specify restarting the database in the Options step, and do not indicate a capture duration. Capture is now in effect, and you must manually stop capture by clicking the Stop Capture button on the View Workload Capture page. At this point, you must run your workload.
176
Using the Capture Wizard
Using the Capture Wizard (continued) Use the View Workload Capture page to see the progress of a workload capture that you have initiated, or to see historical information about the capture after it has completed execution. This page is available from the “Active Capture and Replay” table on the Database Replay home page, from the View Capture History page (for a completed capture), or after a nonscheduled capture job is submitted. This page has the following elements: The Summary section gives you important information regarding the current workload capture. Use the Stop Capture button while capture is in progress. The Workload Profile subpage provides several performance-related totals for the capture. Click View Workload Capture Report to invoke a browser window to display the report. The report contains detailed information about the capture. The Comparison table compares the entire system with what has been captured. The Total column shows cumulative values for the system after you started the capture, and the Capture column shows the portion of these values that the capture yielded during the same period of time. The Workload Filters subpage shows the workload filters that you set up in the Options step of the Capture Workload Wizard. When your workload is finished, click Stop Capture.
177
Using the Capture Wizard
Using the Capture Wizard (continued) When you confirm that you want to stop your workload capture, you are asked whether you want to export AWR data. If you accept, the wizard generates an export job and automatically generates a dump file that contains AWR data for the capture period. This dump file is generated in the capture directory that you previously used to generate workload capture files. If you choose not to export AWR data or if the job is not yet done, the Export AWR Data button becomes available. When you click this button, a confirmation page is displayed. After confirmation, the export runs in the background and you return to the View Workload Capture page.
178
Viewing Workload Capture History
Use the View Workload Capture History page to: Obtain an overview of all completed workload capture jobs ever performed on this database, except for capture jobs that have been deleted Delete one or more entries from the list of completed workload capture jobs Access more detailed information about a workload capture job by selecting the capture name and clicking View (or clicking the Capture Name link) Export AWR data A green check mark appears for a capture name in the AWR Data Exported column if AWR data has already been exported. Otherwise, the column has an X. To export data for a capture name with an X, select the name and click Export AWR Data. When the export is finished, a check mark appears in the column.
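The same AWR export can be requested through PL/SQL. The sketch below assumes that the ID column of DBA_WORKLOAD_CAPTURES identifies the capture you are interested in:

DECLARE
  cap_id NUMBER;
BEGIN
  -- Pick the most recent capture (illustrative lookup)
  SELECT MAX(id) INTO cap_id FROM dba_workload_captures;
  -- Export the AWR data for that capture into its capture directory
  DBMS_WORKLOAD_CAPTURE.EXPORT_AWR(capture_id => cap_id);
END;
/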
179
Processing Captured Workload
Use the Preprocess Captured Workload page to: Select a directory object that contains a captured workload View historical information about the completed capture before you begin to preprocess the workload. This page contains the Capture Summary and Capture Details sections. The Workload Profile and Workload Filters subpages are accessed in the Capture Details section. When you have selected your directory and reviewed the capture details, click Preprocess Workload to invoke the Preprocess Captured Workload Wizard. RAC Note: In a RAC setup, one database instance of the preprocessing system is selected for processing the workload recording. If recorded data was written to a local file system for nodes in the RAC, the recorded data files from all the nodes in the RAC should first be copied to the directory for the instance on which the preprocessing is to be done. If the captured data is stored in a shared file system, copying is not necessary.
180
Using the Preprocess Captured Workload Wizard
The slide describes the preprocessing flow: Database Version: This step reminds you that the current database version must be the same as the database where you want to replay the captured workload. If the database versions are not the same, you can continue to the next step, but the preprocessed workload may not replay correctly on a later major version of the database. Schedule: This page is a typical Enterprise Manager schedule page on which you must specify a job name, credentials, and a start time. Review: On this page, review the information you provided and click Submit to trigger the preprocessing job. This takes you back to the Database Replay home page, where you can see a Confirmation message pointing to the generated job.
181
Using the Replay Workload Wizard
You are now again on the Database Replay home page. Click the Replay Workload icon to access the Replay Workload page, where you can: Specify the directory object that contains the preprocessed workload View capture information about the preprocessed workload View the replay history, if any, of the preprocessed workload This page contains the Capture Summary and Capture Details sections. The Workload Profile and Workload Filters subpages are accessed in the Capture Details section. When ready, click Set Up Replay.
182
Using the Replay Workload Wizard
Using the Replay Workload Wizard (continued) Before entering the replay phase, the following items should be completed: Restore Database: Restore the replay database to match the capture database at the start of the workload capture. A successful workload replay depends on the application transactions accessing the application data identical to that on the capture system. Common ways to restore application data state include point-in-time recovery, flashback, and import/export. Perform System Changes: You should make any desired changes to the replay system, including any database or system upgrade, prior to replay. The primary purpose of Database Replay is to test the effect of system changes on a real captured application workload. Therefore, the changes you make, combined with the captured workload, define the test you are conducting. Resolve References to External Systems: A captured workload may contain references to external systems that may be meaningful only in the capture environment. For example, database links in a captured production environment may reference external production databases that should not be referenced during replay. In such a case, you should modify any external references that could jeopardize the production environment during replay. The “Replay Workload: References to External Systems” page helps you resolve these issues. Set Up Replay Clients: The replay client is a multithreaded program (an executable named wrc) where each thread submits a captured session’s workload. This program is made available as part of the standard Oracle Client as well as the Oracle Instant Client. You should install one or more replay clients, preferably on systems other than the database host. In addition, each replay client must be able to access the directory that holds the preprocessed workload.
183
Using the Replay Workload Wizard
Using the Replay Workload Wizard (continued) On the “Replay Workload: Choose Initial Options” page, accept the default Replay Name, or provide your own. You can either select the default replay or use replay options from a previous replay. Initial replay options refer to the connection mappings and parameters on the Customize Options page. Connections are captured along with the workload. On the “Replay Workload: Customize Options” page, you can use the Connection Mappings sub-page to conveniently use a single descriptor for all connections, either as a prepopulated string or as an alias name that maps to the descriptor string. You can also select a separate connect descriptor option that enables you to specify a unique descriptor for each connection. If you selected the “Use replay options from a previous replay” option in the Choose Initial Options step, the “Use a separate connect descriptor” option is selected and the previous replay system values appear in the table. RAC Note: The remapping of external interactions should include the remapping of instances. In particular, every captured connection string probably needs to be remapped to a connection string in the replay system. If the capture system is a single instance database and the replay system is also a single instance database, the remapping of the connection string is straightforward and involves adding the appropriate entry to the configuration file. The same is valid when both the capture and the replay systems are RAC databases with the same number of nodes. Remapping becomes more complicated if the capture and replay systems have different numbers of nodes.
184
Using the Replay Workload Wizard
Using the Replay Workload Wizard (continued) The Replay Parameters subpage provides advanced parameters that control some aspects of the replay. You saw those parameters on the “Replay Options” slide in this lesson.
185
Using the Replay Workload Wizard
Using the Replay Workload Wizard (continued) Workload is replayed using replay clients connected to the database. When you reach the “Replay Workload: Prepare Replay Clients” page, you should be ready to start the replay clients. Click Next. This takes you to the “Replay Workload: Wait for Client Connections” page. The text below the clock changes if a replay client is connected. The Client Connections table is populated when at least one replay client is connected.
186
Using the Replay Workload Wizard
$ wrc REPLAYDIR=/home/oracle/solutions/dbreplay USERID=system PASSWORD=oracle Workload Replay Client: Release Production on Tue … Copyright (c) 1982, 2007, Oracle. All rights reserved. Wait for the replay to start (21:47:01) $ wrc REPLAYDIR=/home/oracle/solutions/dbreplay USERID=system PASSWORD=oracle Workload Replay Client: Release Production on Tue … Copyright (c) 1982, 2007, Oracle. All rights reserved. Wait for the replay to start (21:47:01) Using the Replay Workload Wizard (continued) The Replay Workload Wizard waits for you to start the replay clients. You open separate terminal windows to start the replay clients by using the wrc executable. You can start multiple replay clients depending on the workload replay size. Each of the clients initiates one or more replay threads with the database, with each replay thread corresponding to a stream from the workload capture. Here is a brief description of the syntax used by wrc. The userid and password parameters are the user ID and password of the replay user for the client. The server parameter is a connection string that connects to the instance of the replay system. The replaydir parameter points to the directory that contains the processed replay files. The workdir parameter defines the client’s working directory; if left unspecified, it defaults to the current directory. Check the following before starting the replay clients: The replay client software is installed on the hosts. The client has access to the replay directory. The replay directory has the replay files that have been preprocessed. The userid and password for the replay user are correct. Furthermore, the user should be able to use the workload replay package and should have the user SWITCH privilege. The Client Connections table is populated when at least one replay client is connected. After you start replay clients, click Next on the “Wait for Client Connections” page. Note: By default, replay is the wrc mode.
187
Using the Replay Workload Wizard
$ wrc REPLAYDIR=/home/oracle/solutions/dbreplay USERID=system PASSWORD=oracle Workload Replay Client: Release Production on Tue … Copyright (c) 1982, 2007, Oracle. All rights reserved. Wait for the replay to start (21:47:01) Replay started (21:48:14) Using the Replay Workload Wizard (continued) At this point, replay clients are waiting for the database to start the replay. On the “Replay Workload: Review” page, click Submit to enter replay PREPARE mode. The replay clients now start replaying the captured workload.
188
Viewing Workload Replay Statistics
A progress window provides comparison statistics as the replay progresses. You can terminate the replay at any stage with the Stop Replay button.
189
Viewing Workload Replay Statistics
$ wrc REPLAYDIR=/home/oracle/solutions/dbreplay USERID=system PASSWORD=oracle Workload Replay Client: Release Production on Tue … Copyright (c) 1982, 2007, Oracle. All rights reserved. Wait for the replay to start (21:47:01) Replay started (21:48:14) Replay finished (21:51:21) $ Viewing Workload Replay Statistics (continued) On successful completion of the replay, the terminal window that started the replay clients displays the message “Replay finished,” followed by a timestamp. The replayed workload is now complete. The Elapsed Time Comparison chart shows how much time the replayed workload has taken to accomplish the same amount of work as the captured workload. The divergence table gives information about both the data and error discrepancies between the replay and capture environments, which can be used as a measure of the replay quality. RAC Note: If a specific captured instance is mapped to a new instance in the replay system, all the captured calls for the captured instances are sent to the new instance. If the replay system is also RAC and a captured instance is mapped to the run-time load balancing of the replay system, all captured calls for that recorded instance are dynamically distributed to instances in the replay RAC system using run-time load balancing.
190
Viewing Workload Replay Statistics
Viewing Workload Replay Statistics (continued) You can click the View Workload Replay Report button (or Report tab after replay has completed) to see a browser window that displays a report containing detailed information about the replay. The Report subpage contains several workload performance reports: Workload Replay Report AWR Compare Period Report AWR Report ASH Report
191
Packages and Procedures
DBMS_WORKLOAD_CAPTURE START_CAPTURE FINISH_CAPTURE ADD_FILTER DELETE_FILTER DELETE_CAPTURE_INFO GET_CAPTURE_INFO() EXPORT_AWR IMPORT_AWR() REPORT() DBMS_WORKLOAD_REPLAY PROCESS_CAPTURE INITIALIZE_REPLAY PREPARE_REPLAY START_REPLAY CANCEL_REPLAY DELETE_REPLAY_INFO REMAP_CONNECTION EXPORT_AWR IMPORT_AWR GET_REPLAY_INFO REPORT Packages and Procedures You need the EXECUTE privilege on the capture and replay packages to execute these packages. These privileges are usually assigned by the DBA. Note: For further details about the DBMS_WORKLOAD packages, see the Oracle Database PL/SQL Packages and Types Reference 11g, Release 1.
192
Data Dictionary Views: Database Replay
DBA_WORKLOAD_CAPTURES: Lists all the workload captures performed in the database DBA_WORKLOAD_FILTERS: Lists all the workload filters defined in the database DBA_WORKLOAD_REPLAYS: Lists all the workload replays that have been performed in the database DBA_WORKLOAD_REPLAY_DIVERGENCE: Is used to monitor workload divergence DBA_WORKLOAD_CONNECTION_MAP: Is used to review all connection strings that are used by workload replays V$WORKLOAD_REPLAY_THREAD: Monitors the status of external replay clients Data Dictionary Views: Database Replay For information about these views, see the Oracle Database Reference.
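As a quick sketch, the capture and replay history can also be checked from SQL*Plus. The column lists below are a small, representative subset; see the Oracle Database Reference for the complete definitions:

-- Workload captures recorded in this database
SELECT id, name, status FROM dba_workload_captures;

-- Workload replays and the capture each one belongs to
SELECT id, name, capture_id, status FROM dba_workload_replays;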
193
Database Replay: PL/SQL Example
exec DBMS_WORKLOAD_CAPTURE.ADD_FILTER(fname => 'sessfilt', fattribute => 'USER', fvalue => 'JFV'); exec DBMS_WORKLOAD_CAPTURE.START_CAPTURE(name => 'june_peak', dir => 'jun07'); Execute your workload exec DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE(); exec DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE(capture_dir => 'jun07'); wrc userid=system password=oracle replaydir=/dbreplay exec DBMS_WORKLOAD_REPLAY.INITIALIZE_REPLAY(replay_name => 'j_r', replay_dir => 'jun07'); Database Replay: PL/SQL Example In this example, the ADD_FILTER procedure adds a filter named sessfilt that will filter out all sessions belonging to the user name JFV. To start the workload capture, use the START_CAPTURE procedure. In this example, a capture named june_peak is captured and stored in a directory named jun07. Because the duration parameter is not specified, the workload capture continues until the FINISH_CAPTURE procedure is called. At this point, you can run your workload. To stop the workload capture, use the FINISH_CAPTURE procedure. This procedure finalizes the workload capture and returns the database to a normal state. You can now generate a capture report by using the REPORT function. To preprocess a captured workload, use the PROCESS_CAPTURE procedure. In this example, the captured workload stored in the jun07 directory is preprocessed. When finished, you can start your replay clients. To initialize replay data, use the INITIALIZE_REPLAY procedure. Initializing replay data loads the necessary metadata into tables required by the workload replay. For example, captured connection strings are loaded into a table where they can be remapped for replay. In this example, the INITIALIZE_REPLAY procedure loads preprocessed workload data from the jun07 directory into the database.
194
Database Replay: PL/SQL Example
exec DBMS_WORKLOAD_REPLAY.REMAP_CONNECTION(connection_id => 101, replay_connection => 'edlin44:3434/bjava21'); exec DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY(synchronization => TRUE, think_time_scale => 2); exec DBMS_WORKLOAD_REPLAY.START_REPLAY (); DECLARE cap_id NUMBER; rep_id NUMBER; rep_rpt CLOB; BEGIN cap_id := DBMS_WORKLOAD_REPLAY.GET_REPLAY_INFO(dir => 'jun07'); /* Get the latest replay for that capture */ SELECT max(id) INTO rep_id FROM dba_workload_replays WHERE capture_id = cap_id; rep_rpt := DBMS_WORKLOAD_REPLAY.REPORT(replay_id => rep_id, format => DBMS_WORKLOAD_REPLAY.TYPE_TEXT); END; Database Replay: PL/SQL Example (continued) To remap connections, use the REMAP_CONNECTION procedure. In this example, the connection that corresponds to the connection ID 101 will use the new connection string defined by the replay_connection parameter. To prepare workload replay on the replay system, use the PREPARE_REPLAY procedure. In this example, the PREPARE_REPLAY procedure prepares the j_r replay to preserve the COMMIT order in the workload capture. To start a workload replay, use the START_REPLAY procedure. To stop a workload replay, use the CANCEL_REPLAY procedure. To generate a workload replay report, use the REPORT function as shown in the slide.
195
Calibrating Replay Clients
$ wrc mode=calibrate replaydir=/dbreplay Workload Replay Client: Release Beta on Tue … Copyright (c) 1982, 2007, Oracle. All rights reserved. Report for Workload in: /dbreplay Recommendation: Consider using at least 1 clients divided among 1 CPU(s). Workload Characteristics: - max concurrency: 4 sessions - total number of sessions: 11 Assumptions: - 1 client process per 50 concurrent sessions - 4 client process per CPU - think time scale = 100 - connect time scale = 100 - synchronization = TRUE $ Calibrating Replay Clients Because one replay client can initiate multiple sessions with the database, it is not necessary to start a replay client for each session that was captured. The number of replay clients that need to be started depends on the number of workload streams, the number of hosts, and the number of replay clients for each host. For example, suppose that a workload capture has 1,000 streams, that the number of average active sessions from the workload capture is about 60, and that one host can drive only 50 concurrent connections to the database. You should use two hosts, each with a replay client. To estimate the number of replay clients and hosts that are required to replay a particular workload, run the wrc executable in calibrate mode. In calibration mode, the wrc executable accepts the following parameters: replaydir specifies the directory that contains the preprocessed workload capture that you want to replay. If unspecified, it defaults to the current directory. process_per_cpu specifies the maximum number of client processes that can run for each CPU. The default value is 4. threads_per_process specifies the maximum number of threads that can run in a client process. The default value is 50. The example in the slide shows how to run the wrc executable in the calibrate mode. In this example, the wrc executable is executed to estimate the number of replay clients and hosts that are required to replay the workload capture stored in the /dbreplay directory. Note: To list the hosts that participated in the capture, use the list_hosts mode.
196
Summary In this lesson, you should have learned how to:
Identify the benefits of using Database Replay List the steps involved in Database Replay Use Enterprise Manager to record and replay workloads
197
Practice 5: Overview This practice covers using Database Replay with Enterprise Manager in the following scenarios: Replaying in synchronous mode without changes Replaying in synchronous mode after changes are applied Replaying in nonsynchronous mode without changes
198
Automatic SQL Tuning
199
Objectives After completing this lesson, you should be able to:
Set up and modify Automatic SQL Tuning Use the PL/SQL interface to perform fine tuning View and interpret reports generated by Automatic SQL Tuning
200
SQL Tuning in Oracle Database 10g
Slide diagram: (1) workload runs on the database, (2) ADDM automatically identifies high-load SQL, (3) the DBA runs SQL Tuning Advisor on that SQL, (4) the DBA accepts the generated SQL profiles. SQL Tuning in Oracle Database 10g Oracle Database 10g introduced SQL Tuning Advisor to help DBAs and application developers improve the performance of SQL statements. The advisor targets the problem of poorly written SQL, in which SQL statements have not been designed in the most efficient fashion. It also targets the (more common) problem in which a SQL statement is performing poorly because the optimizer generated a poor execution plan due to a lack of accurate and relevant data statistics. In all cases, the advisor makes specific suggestions for speeding up SQL performance, but it leaves the responsibility of implementing the recommendations to the user. In addition to SQL Tuning Advisor, Oracle Database 10g has an automated process to identify high-load SQL statements in your system. This is done by the Automatic Database Diagnostic Monitor (ADDM), which automatically identifies high-load SQL statements that are good candidates for tuning. However, major issues still remain: Although it is true that ADDM identifies some SQL that should be tuned, users must manually look at ADDM reports and run SQL Tuning Advisor on those reports for tuning.
201
Automatic SQL Tuning in Oracle Database 11g
Slide diagram: (1) the AWR identifies the top SQL workload, (2)–(3) the Automatic SQL Tuning task runs SQL Tuning Advisor and creates SQL profiles, (4) the DBA reviews the reports. Automatic SQL Tuning in Oracle Database 11g Oracle Database 11g further automates the SQL tuning process by identifying problematic SQL statements, running SQL Tuning Advisor on them, and implementing the resulting SQL profile recommendations to tune the statement without requiring user intervention. Automatic SQL Tuning uses the AUTOTASK framework through a new task called “Automatic SQL Tuning” that runs every night by default. Here is a brief description of the automated SQL tuning process in Oracle Database 11g: Step 1: Based on the AWR Top SQL identification (SQL statements that were at the top in four different time periods: the past week, any day in the past week, any hour in the past week, or by average single execution), Automatic SQL Tuning selects candidate statements for automatic tuning. Steps 2 and 3: While the Automatic SQL Tuning task is executing during the maintenance window, the previously identified SQL statements are automatically tuned by invoking SQL Tuning Advisor. As a result, SQL profiles will be created for them if needed. However, before making any decision, the new profile is carefully tested. Step 4: At any point in time, you can request a report about these automatic tuning activities. You then have the option of checking the tuned SQL statements to validate or remove the automatic SQL profiles that were generated.
202
Summary of Automation in Oracle Database 11g
Task runs automatically (AUTOTASK framework) Workload automatically chosen (no SQL Tuning Set) SQL profiles automatically tested SQL profiles automatically implemented SQLs automatically retuned if they regress Reporting available over any time period
203
Selecting Potential SQL Statements for Tuning
Pull the top queries from the past week into four buckets: Top for the past week Top for any day in the past week Top in any single snapshot Top by average single execution Combine four buckets into one (assigning weights). Cap at 150 queries per bucket. Selecting Potential SQL Statements for Tuning Oracle Database 11g analyzes the statistics in the AWR and generates a list of potential SQL statements that are eligible for tuning. These statements include repeating high-load statements that have a significant impact on the system. Only SQL statements that have an execution plan with a high potential for improvement will be tuned. Recursive SQL and statements that have been tuned recently (in the last month) are ignored, as are parallel queries, DMLs, DDLs, and SQL statements with performance problems that are caused by concurrency issues. The SQL statements that are selected as candidates are then ordered based on their performance impact. The performance impact of a SQL statement is calculated by summing the CPU time and the I/O times captured in the AWR for that SQL statement in the past week.
204
Maintenance Window Timeline
Slide timeline: within the maintenance window, the Automatic SQL Tuning task picks candidate SQL, then tunes S1, tests P1, accepts P1, tunes S2, and so on, for one hour maximum (by default). Maintenance Window Timeline The Automatic SQL Tuning process takes place during the maintenance window. Furthermore, it runs as part of a single AUTOTASK job on a single instance to avoid concurrency issues. This is portrayed in the slide for one scenario. In this scenario, at some time after the beginning of the maintenance window, AUTOTASK starts the Automatic SQL Tuning job (SYS_AUTO_SQL_TUNING_TASK). The first thing that the job does is to generate a list of candidate SQL for tuning, according to the AWR source. When the list is complete, the job tunes each statement in order of importance, one after another, considering only one statement at a time. In this scenario, it first tunes S1, which has a SQL profile recommendation (P1) generated for it by SQL Tuning Advisor. After P1 has been successfully tested, it is accepted and the job moves on to the next statement, S2. By default, Automatic SQL Tuning runs for at most one hour during a maintenance window. You can change this setting by using a call similar to the following: dbms_sqltune.set_tuning_task_parameter('SYS_AUTO_SQL_TUNING_TASK', 'TIME_LIMIT', 7200); Note: The widths of boxes in the slide do not indicate relative execution times. Tuning and test execution should be the most expensive processes by far, with all the others completing relatively quickly.
205
Automatic Tuning Process
Slide flowchart: after a new SQL profile is generated and tested, it is accepted only if it shows at least a 3X benefit; if a profile already exists for the statement, the new profile replaces it only with at least a 3X benefit and is otherwise ignored; restructure-SQL and index findings are reported but not implemented, and stale-statistics findings are passed to GATHER_STATS_JOB. Automatic Tuning Process With the list of candidate SQL already built and ordered, the statements are then tuned using SQL Tuning Advisor. During the tuning process, all the recommendation types are considered and reported, but only SQL profiles can be implemented automatically (when the ACCEPT_SQL_PROFILES task parameter is set to TRUE). Otherwise, only the recommendation to create a SQL profile will be reported in the automatic SQL tuning reports. In Oracle Database 11g, the performance improvement factor has to be at least three before a SQL profile is implemented. As we have already mentioned, the Automatic SQL Tuning process implements only SQL profile recommendations automatically. Other recommendations (to create new indexes, refresh stale statistics, or restructure SQL statements) are generated as part of the SQL tuning process but are not implemented. These are left for the DBA to review and implement manually, as appropriate. Here is a short description of the general tuning process: Tuning is performed on a per-statement basis. Because only SQL profiles can be implemented, there is no need to consider the effect of such recommendations on the workload as a whole. For each statement (in order of importance), the tuning process carries out each of the following steps: 1. Tune the statement by using SQL Tuning Advisor. Look for a SQL profile and, if it is found, verify that the base optimizer statistics are current for it.
206
Notes only page Automatic Tuning Process (continued)
2. If a SQL profile is recommended, perform the following: Test the new SQL profile by executing the statement with and without it. When a SQL profile is generated and it causes the optimizer to pick a different execution plan for the statement, the advisor must decide whether to implement the SQL profile. It makes its decision according to the flowchart in the slide. Although the benefit thresholds here apply to the sum of CPU and I/O time, SQL profiles are not accepted when there is degradation in either statistic. So the requirement is that there is a three-times improvement in the sum of CPU and I/O time, with neither statistic becoming worse. In this way, the statement runs faster than it would without the profile, even with contention in CPU or I/O. 3. If stale or missing statistics are found, make this information available to GATHER_STATS_JOB. Automatic implementation of tuning recommendations is limited to SQL profiles because they have fewer risks. It is easy for a DBA to revert the implementation. Note: All SQL profiles are created in the standard EXACT mode. They are matched and tracked according to the current value of the CURSOR_SHARING parameter. DBAs are responsible for setting CURSOR_SHARING appropriately for their workload. Notes only page
207
DBA Controls Autotask configuration: Task parameters: On/off switch
Maintenance windows running tuning task CPU resource consumption of tuning task Task parameters: SQL profile implementation automatic/manual switch Global time limit for tuning task Per-SQL time limit for tuning task Test-execute mode disabled to save time Maximum number of SQL profiles automatically implemented per execution as well as overall Task execution expiration period DBA Controls Here is a PL/SQL control example for the Automatic SQL Tuning task: BEGIN dbms_sqltune.set_tuning_task_parameter('SYS_AUTO_SQL_TUNING_TASK', 'LOCAL_TIME_LIMIT', 1400); dbms_sqltune.set_tuning_task_parameter('SYS_AUTO_SQL_TUNING_TASK', 'ACCEPT_SQL_PROFILES', 'TRUE'); dbms_sqltune.set_tuning_task_parameter('SYS_AUTO_SQL_TUNING_TASK', 'MAX_SQL_PROFILES_PER_EXEC', 50); dbms_sqltune.set_tuning_task_parameter('SYS_AUTO_SQL_TUNING_TASK', 'MAX_AUTO_SQL_PROFILES', 10002); END; The last three parameters in this example are supported only for the Automatic SQL Tuning task. You can also use parameters such as LOCAL_TIME_LIMIT, or TIME_LIMIT, which are valid parameters for the traditional SQL tuning tasks. One important example is to disable test-execute mode (to save time) and to use only execution plan costs to decide about the performance by using the TEST_EXECUTE parameter. In addition, you can control when the Automatic SQL Tuning task runs and the CPU resources that it is allowed to use.
208
Automatic SQL Tuning Task
As has already been mentioned, Automatic SQL Tuning is implemented as an automated maintenance task that is itself called Automatic SQL Tuning. You can see some high-level information about the last runs of the Automatic SQL Tuning task by going to the Automated Maintenance Tasks page: On your Database Control home page, click the Server tab and, when you are on the Server page, click the Automated Maintenance Tasks link in the Tasks section. On the Automated Maintenance Tasks page, you see the predefined tasks. You then access each task by clicking the corresponding link to get more information about the task itself (illustrated in the slide). When you click either the Automatic SQL Tuning link or the latest execution icon (the green area on the timeline), you go to the Automatic SQL Tuning Result Summary page.
209
Configuring Automatic SQL Tuning
You can configure various Automatic SQL Tuning parameters by using the Automatic SQL Tuning Settings page. To get to that page, click the Configure button on the Automated Maintenance Tasks page. This takes you to the Automated Maintenance Tasks Configuration page, where you can see the various maintenance windows that are delivered with Oracle Database 11g. By default, Automatic SQL Tuning executes on all predefined maintenance windows in the MAINTENANCE_WINDOW_GROUP. You can disable it for specific days in the week. On this page, you can also edit each window to change its characteristics. You can do so by clicking Edit Window Group. To get to the Automatic SQL Tuning Settings page, click the Configure button on the line corresponding to Automatic SQL Tuning in the Task Settings section. On the Automatic SQL Tuning Settings page, you can specify the parameters shown in the slide. By default, “Automatic Implementation of SQL Profiles” is deactivated. Note: Automatic SQL Tuning also stops running if you set STATISTICS_LEVEL to BASIC, turn off AWR snapshots by using DBMS_WORKLOAD_REPOSITORY, or reduce the AWR retention period to less than seven days.
210
Automatic SQL Tuning Result Summary
In addition, the Automatic SQL Tuning Result Summary page contains various summary graphs so that you can control the Automatic SQL Tuning task. An example is given in the slide. The first chart in the Overall Task Statistics section shows you the breakdown by finding types for the designated period of time. You can control the period of time for which you want the report to be generated by selecting a value from the Time Period list. In the example, Customized is used; it shows you the latest run. You can choose All to cover all executions of the task so far. Users can request it for any time period over the past month, since that is the amount of time for which the advisor persists its tuning history. You then generate the report by clicking View Report. On the “Breakdown by Finding Type” chart, you can clearly see that only SQL profiles can be implemented. Although many more profiles were recommended, not all of them were automatically implemented for the reasons that we already explained. Similarly, recommendations for index creation and other types are not implemented. However, the advisor keeps historical information about all the recommendations if you want to implement them later. In the Profile Effect Statistics section, you can see the Tuned SQL DB Time Benefit chart, which shows you the before-and-after DB Time for implemented profiles and other recommendations.
211
Automatic SQL Tuning: Result Details
On the Automatic SQL Tuning Result Details page, you can also see a variety of important information for each automatically tuned SQL statement, including its SQL text and SQL ID, the type of recommendation that was done by SQL Tuning Advisor, the verified benefit percentage, whether a particular recommendation was automatically implemented, and the date of the recommendation. From this page, you can either drill down to the SQL statement itself by clicking its corresponding SQL ID link, or you can select one of the SQL statements and click the View Recommendations button to have more details about the recommendation for that statement. Note: The benefit percentage shown for each recommendation is calculated using the formula bnf% = (time_old - time_new)/(time_old). With this formula, you can see that a three-time benefit (for example, time_old = 100, time_new = 33) corresponds to 66%. So the system implements any profiles with benefits over 66%. According to this formula, 98% is a 50 times benefit.
212
Automatic SQL Tuning Result Details: Drilldown
On the “Recommendations for SQL ID” page, you can see the corresponding recommendations and implement them manually. By clicking the SQL Text link, you access the SQL Details page, where you see the tuning history as well as the plan control associated with your SQL statement. In the slide, you see that the statement was tuned by Automatic SQL Tuning and that the associated profile was automatically implemented.
213
Automatic SQL Tuning: Fine Tune
Use DBMS_SQLTUNE: SET_TUNING_TASK_PARAMETER EXECUTE_TUNING_TASK REPORT_AUTO_TUNING_TASK Use DBMS_AUTO_TASK_ADMIN: ENABLE DISABLE Dictionary views: DBA_ADVISOR_EXECUTIONS DBA_ADVISOR_SQLSTATS DBA_ADVISOR_SQLPLANS Automatic SQL Tuning: Fine Tune You can use the DBMS_SQLTUNE PL/SQL package to control various aspects of SYS_AUTO_SQL_TUNING_TASK. 1. SET_TUNING_TASK_PARAMETER: The following parameters are supported for the automatic tuning task only: ACCEPT_SQL_PROFILES: TRUE/FALSE whether the system should accept SQL profiles automatically REPLACE_USER_SQL_PROFILES: TRUE/FALSE whether the task should replace SQL profiles created by the user MAX_SQL_PROFILES_PER_EXEC: Maximum number of SQL profiles to create for each run MAX_AUTO_SQL_PROFILES: Maximum number of automatic SQL profiles allowed on the system in total EXECUTION_DAYS_TO_EXPIRE: Specifies the number of days to save the task history in the advisor framework schema. By default, the task history is saved for 30 days before it expires. 2. EXECUTE_TUNING_TASK function: Used to manually run a new execution of the task in the foreground (behaves in the same way that it runs in the background) 3. REPORT_AUTO_TUNING_TASK: Get a text report covering a range of task executions You can enable and disable SYS_AUTO_SQL_TUNING_TASK by using the DBMS_AUTO_TASK_ADMIN PL/SQL package.
214
Notes only page Automatic SQL Tuning: Fine Tune (continued)
You can also access Automatic SQL Tuning information through the dictionary views listed in the slide: DBA_ADVISOR_EXECUTIONS: Get data about each execution of the task DBA_ADVISOR_SQLSTATS: See the test-execute statistics generated while testing the SQL profiles DBA_ADVISOR_SQLPLANS: See the plans encountered during test-execute Notes only page
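A minimal sketch of both interfaces follows. The client name 'sql tuning advisor' is an assumption here and should be verified against the DBA_AUTOTASK_CLIENT view on your system:

BEGIN
  -- Disable the Automatic SQL Tuning task in all maintenance windows
  DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'sql tuning advisor',  -- assumed client name; check DBA_AUTOTASK_CLIENT
    operation   => NULL,
    window_name => NULL);
END;
/

-- Review past executions of the automatic tuning task
SELECT execution_name, status, execution_start, execution_end
FROM   dba_advisor_executions
WHERE  task_name = 'SYS_AUTO_SQL_TUNING_TASK';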
215
Using the PL/SQL Interface to Generate Reports
SQL> variable my_rept CLOB; SQL> BEGIN 2 :my_rept :=DBMS_SQLTUNE.REPORT_AUTO_TUNING_TASK(begin_exec=>NULL, end_exec=>NULL, type=>'TEXT', level=>'TYPICAL', section=>'ALL', object_id=>NULL, result_limit=>NULL); END; / … SQL> print :my_rept MY_REPT GENERAL INFORMATION SECTION Tuning Task Name : SYS_AUTO_SQL_TUNING_TASK Tuning Task Owner : SYS Workload Type : Automatic High-Load SQL Workload Scope : COMPREHENSIVE Global Time Limit(seconds) : 3600 Per-SQL Time Limit(seconds) : 1200 … Number of SQLs with Statistic Findings : 1 Number of SQLs with SQL profiles recommended : 2 Number of SQLs with SQL profiles implemented : 2 Number of SQLs with Index Findings : 1 Number of SQLs with Errors : 1 Using the PL/SQL Interface to Generate Reports The example in the slide shows how to generate a text report to display all SQL statements that were analyzed in the most recent execution, including recommendations that were not implemented. All sections of the report are included. Depending on the sections that were included in the report, you can view information about the automatic SQL tuning task in the following sections of the report: The general information section provides a high-level description of the automatic SQL tuning task, including information about the inputs given for the report, the number of SQL statements tuned during the maintenance, and the number of SQL profiles that were created. The summary section lists the SQL statements (by their SQL identifiers) that were tuned during the maintenance window and the estimated benefit of each SQL profile, or their actual execution statistics after test-executing the SQL statement with the SQL profile. The Tuning findings section gives you all findings and statistics that are associated with each SQL statement, as well as whether profiles were accepted (and why). The Explain plans section shows the old and new explain plans used by each SQL statement analyzed by SQL Tuning Advisor. The Errors section lists all errors encountered by the Automatic SQL Tuning task. Note: Obtain the execution list by using the following command: select execution_name,status,execution_start,execution_end from dba_advisor_executions where task_name='SYS_AUTO_SQL_TUNING_TASK';
216
Automatic SQL Tuning Considerations
SQL not considered for Automatic SQL Tuning: Ad hoc or rarely repeated SQL Parallel queries Long-running queries after profiling Recursive SQL statements DML and DDL These statements can still be manually tuned by using SQL Tuning Advisor. Automatic SQL Tuning Considerations Automatic SQL Tuning does not seek to solve every SQL performance issue occurring on a system. It does not consider the following types of SQL. Ad hoc or rarely repeated SQL: If a SQL is not executed multiple times in the same form, the advisor ignores it. SQL that do not repeat within a week are also not considered. Parallel queries. Long-running queries (post-profile): If a query takes too long to run after being SQL profiled, it is not practical to test-execute and, therefore, it is ignored by the advisor. Note that this does not mean that the advisor ignores all long-running queries. If the advisor can find a SQL profile that causes a query that once took hours to run in minutes, it could be accepted because test-execution is still possible. The advisor would execute the old plan just long enough to determine that it is worse than the new one, and then would terminate test-execution without waiting for the old plan to finish, thereby switching the order of execution. Recursive SQL statements DMLs such as INSERT SELECT or CREATE TABLE AS SELECT With the exception of truly ad hoc SQL, these limitations apply to Automatic SQL Tuning only. Such statements can still be tuned by manually running SQL Tuning Advisor.
217
Summary In this lesson, you should have learned how to:
Set up and modify Automatic SQL Tuning Use the PL/SQL interface to perform fine tuning View and interpret reports generated by Automatic SQL Tuning
218
Practice 6: Overview This practice covers using Automatic SQL Tuning.
219
Intelligent Infrastructure Enhancements
220
Objectives After completing this lesson, you should be able to:
Create AWR baselines for future time periods Identify the views that capture foreground statistics Control automated maintenance tasks Use Resource Manager I/O calibration Use lightweight jobs with the Scheduler
221
AWR Baselines
222
Comparative Performance Analysis with AWR Baselines
AWR Baseline contains a set of AWR snapshots for an “interesting or reference” period of time Baseline is key for performance tuning to: Guide setting of alert thresholds Monitor performance Compare advisor reports Slide chart: actual metric values plotted over time against the normal range defined by an AWR baseline. Comparative Performance Analysis with AWR Baselines What is the proper threshold to set on a performance metric? What is it that you want to detect? If you want to know that the performance metric value indicates that the server is nearing capacity, an absolute value is correct. But if you want to know that the performance is different today than it was at this time last week, or last month, the current performance must be compared to a baseline. A baseline is a set of snapshots taken over a period of time. These snapshots are grouped statistically to yield a set of baseline values that vary over time. For example, the number of transactions per second in a certain database varies depending on the time of the day. The values for transactions per second are higher during working hours and lower during nonworking hours. The baseline records this variation and can be set to alert you if the current number of transactions per second is significantly different from the baseline values. Oracle Database 11g baselines provide the data required to calculate time-varying thresholds based on the baseline data. The baseline allows a real-time comparison of performance metrics with baseline data and can be used to produce AWR reports that compare two periods.
223
Automatic Workload Repository Baselines
Oracle Database 11g further enhances the Automatic Workload Repository baselines: Out-of-the-box moving window baselines for which you can specify adaptive thresholds Schedule the creation of a baseline by using baseline templates. Rename baselines. Set expiration dates for baselines. Automatic Workload Repository Baselines Oracle Database 11g consolidates the various concepts of baselines in Oracle (specifically in Enterprise Manager and RDBMS) into the single concept of the Automatic Workload Repository (AWR) baseline. Oracle Database 11g AWR baselines provide powerful capabilities for defining dynamic and future baselines, and considerably simplify the process of creating and managing performance data for comparison purposes. Oracle Database 11g introduces the concept of moving window baselines. By default, a system-defined moving window baseline is created that corresponds to all the AWR data within the AWR retention period. Oracle Database 11g provides the ability to collect two kinds of baselines: moving window and static baselines. Static baselines can be either single or repeating. A single AWR baseline is collected over a single time period. A repeating baseline is collected over a repeating time period (for example, every Monday in June). In Oracle Database 11g, baselines are enabled by default if STATISTICS_LEVEL=TYPICAL or ALL.
224
Moving Window Baseline
There is one moving window baseline: SYSTEM_MOVING_WINDOW: A moving window baseline that corresponds to the last eight days of AWR data Created out-of-the-box in 11g By default, the adaptive thresholds functionality computes statistics on this baseline. Moving Window Baseline Oracle Database automatically maintains a system-defined moving window baseline. The default window size for the system-defined moving window baseline is the current AWR retention period, which by default is eight days. If you are planning to use adaptive thresholds, consider using a larger moving window (such as 30 days) to accurately compute threshold values. You can resize the moving window baseline by changing the number of days in the moving window to a value that is equal to or less than the number of days in the AWR retention period. Therefore, to increase the size of a moving window, you first need to increase the AWR retention period accordingly. This system-defined baseline provides a default out-of-the-box baseline for EM performance screens to compare the performance with the current database performance. Note: The default retention period for snapshot data has been changed from seven days to eight days in Oracle Database 11g to ensure the capture of an entire week of performance data.
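To use a larger moving window, first extend the AWR retention period and then resize the window. The following sketch uses 30 days as an example (retention is specified in minutes); verify the exact parameter names in the PL/SQL Packages and Types Reference:

BEGIN
  -- Extend AWR retention to 30 days (30 x 24 x 60 = 43,200 minutes)
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(retention => 43200);
  -- Resize the moving window baseline to 30 days
  DBMS_WORKLOAD_REPOSITORY.MODIFY_BASELINE_WINDOW_SIZE(window_size => 30);
END;
/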
225
Baselines in Performance Page Settings
The data for any defined baseline in the past is available in Oracle Database 11g. The baseline data may be displayed on the Performance page of Enterprise Manager. You have three display options: Do not show baseline information. Show the information from a specified static baseline. Show the information from the system moving baseline. Note: The system moving window baseline becomes valid after sufficient data has been collected and the statistics calculation occurs. By default, the statistics calculation is scheduled for every Saturday at midnight.
226
Baseline Templates Templates enable you to schedule the creation of baselines for any time period(s) of interest in the future: Single time period in the future Repeating schedule Examples A known holiday weekend Every Monday morning from 10 AM to 2 PM When the end time for a baseline template changes from future to past, MMON detects the change and creates the baseline. Baseline Templates Creating baselines for future time periods allows you to mark time periods that you know will be interesting. For example, you may want the system to automatically generate a baseline for every Monday morning for the whole year, or you can ask the system to generate a baseline for an upcoming holiday weekend if you suspect that it is a high-volume weekend. Previously, you could create baselines only on snapshots that already existed. With Oracle Database 11g, a nightly MMON task goes through all the templates for baseline generation and checks to see if any time ranges have changed from the future to the past within the last day. For the relevant time periods, the MMON task then creates a baseline for the time period.
227
Creating AWR Baselines
You can create two types of AWR baselines: single and repeating. The “Create Baseline: Baseline Interval Type” page gives the following explanations. The single type of baseline has a single and fixed time interval: for example, from Jan 1, 2007, at 10:00 AM, to Jan 1, 2007, at 12:00 PM. The repeating type of baseline has a time interval that repeats over a time period: for example, every Monday from 10:00 AM to 12:00 PM for the year 2007. To view the AWR Baseline page, click the AWR Baselines link on the Server tab of the Database Instance page. On the Baseline page, click Create and follow the wizard to create your baseline. Note: Before you can set up AWR baseline metric thresholds for a particular baseline, you must compute the baseline statistics. Select Schedule Statistics Computation from the actions menu to compute the baseline statistics. There are several other actions available.
228
Single AWR Baseline
Single AWR Baseline If you selected the Single option in the previous step, you access the page shown in this slide. Select the time period corresponding to your interest in one of two ways: Select the Snapshot Range option, and then set the Period Start Time and Period End Time by following the directions on the page. If the icon that you want to select is not shown, you can change the chart time period. Specify the Time Range, with a date and time for start and end times. With Time Range, you can choose times in the future. When you are finished, click Finish to create the static baseline. Note: If the End Time of the baseline is in the future, a baseline template with the same name as the baseline will be created.
229
Creating a Repeating Baseline Template
You can define repeating baselines by using Enterprise Manager. In the wizard, after selecting Repeating in step 1, you can specify the repeat interval as shown in this slide. You specify the start time and the duration of the baseline. Then specify when the baseline statistics will be collected (daily or weekly; if weekly, for which days). Specify the range of dates for which this baseline template will collect statistics. Retention Time sets an expiration value for the baseline; a value of NULL indicates that the baseline never expires.
230
Generate a Baseline Template for a Single Time Period
Slide diagram: AWR snapshots T4, T5, T6, …, Tx, Ty, Tz spanning the interesting time period. BEGIN DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE_TEMPLATE ( start_time => to_date('21-JUN-2008','DD-MON-YYYY'), end_time => to_date('21-SEP-2008','DD-MON-YYYY'), baseline_name => 'FALL08', template_name => 'FALL08', expiration => NULL ) ; END; Generate a Baseline Template for a Single Time Period You can now create a template for how baselines are to be created for different time periods in the future, for predictable schedules. If any part of the period is in the future, use the CREATE_BASELINE_TEMPLATE procedure. For the baseline template, when the end time becomes a time in the past, a task using these inputs automatically creates a baseline for the specified time period when the time comes. The example creates a baseline template that creates a baseline when 0:0:0 21-Sep-2008 is in the past. Using time-based definitions in baseline creation does not require the start-snapshot and end-snapshot identifiers. For the CREATE_BASELINE_TEMPLATE procedure, you can also now specify an expiration duration for the baseline that is created from the template. The expiration duration, specified in days, represents the number of days you want the baselines to be maintained. A value of NULL means that the baselines never expire. To create a baseline over a period in the past, use the CREATE_BASELINE procedure (as in Oracle Database 10g). The CREATE_BASELINE procedure has one new parameter: expiration duration. Expiration duration has the same meaning as it does for CREATE_BASELINE_TEMPLATE.
231
Creating a Repeating Baseline Template
BEGIN
  DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE_TEMPLATE (
    day_of_week          => 'SATURDAY',
    hour_in_day          => 6,
    duration             => 20,
    start_time           => to_date('21-JUN-2007','DD-MON-YYYY'),
    end_time             => to_date('21-JUN-2008','DD-MON-YYYY'),
    baseline_name_prefix => 'SAT_MAINT_WIN',
    template_name        => 'SAT_MAINT_WIN',
    expiration           => 90,
    dbid                 => NULL);
END;

Creating a Repeating Baseline Template Use the CREATE_BASELINE_TEMPLATE procedure to generate baseline templates that automatically create baselines for a contiguous time period based on a repeating time schedule. You can also specify whether you want the baseline to be automatically removed after a specified expiration interval (expiration). The example in the slide generates a template that creates a baseline for the period corresponding to each SATURDAY_MAINTENANCE_WINDOW for a year. Each baseline covers a 20-hour period (duration) that starts at 6:00 AM (hour_in_day) each Saturday (day_of_week). The baseline is named SAT_MAINT_WIN with time information appended to make the name unique. The template is named SAT_MAINT_WIN, and each baseline is kept for 90 days (expiration). This template is created for the local database (dbid => NULL). Use these baselines to compare the resources that are used each Saturday during the maintenance window.
232
DBMS_WORKLOAD_REPOSITORY Package
The following procedures have been added: CREATE_BASELINE_TEMPLATE RENAME_BASELINE MODIFY_BASELINE_WINDOW_SIZE DROP_BASELINE_TEMPLATE The following function has been added: SELECT_BASELINE_METRIC DBMS_WORKLOAD_REPOSITORY Package The slide shows the set of PL/SQL interfaces offered by Oracle Database 11g in the DBMS_WORKLOAD_REPOSITORY package for administration and filtering. MODIFY_BASELINE_WINDOW_SIZE enables you to modify the size of the SYSTEM_MOVING_WINDOW.
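As a minimal sketch of the new calls (the baseline names are placeholders, and the 45-day window size is an arbitrary value; the moving window cannot exceed the AWR retention period):

BEGIN
  -- Rename an existing baseline (placeholder names)
  DBMS_WORKLOAD_REPOSITORY.RENAME_BASELINE(
    old_baseline_name => 'PEAK_LOAD_BL',
    new_baseline_name => 'PEAK_LOAD_Q3');

  -- Resize the SYSTEM_MOVING_WINDOW baseline to 45 days
  DBMS_WORKLOAD_REPOSITORY.MODIFY_BASELINE_WINDOW_SIZE(
    window_size => 45);
END;
/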
233
Baseline Views Data dictionary support for baselines:
Modified DBA_HIST_BASELINE New DBA_HIST_BASELINE_DETAILS New DBA_HIST_BASELINE_TEMPLATE Baseline Views The data dictionary views supporting the AWR baselines have changed. DBA_HIST_BASELINE: Modified View DBA_HIST_BASELINE has been modified to support the SYSTEM_MOVING_WINDOW baseline and the baselines generated from templates. Additional information includes the date created, time of last statistics calculation, and type of baseline. DBA_HIST_BASELINE_DETAILS: New View DBA_HIST_BASELINE_DETAILS displays information that allows you to determine the validity of a given baseline, such as whether there was a shutdown during the baseline period and the percentage of the baseline period that is covered by the snapshot data. DBA_HIST_BASELINE_TEMPLATE: New View DBA_HIST_BASELINE_TEMPLATE holds the baseline templates. This view provides the information needed by MMON to determine when a baseline will be created from a template and when the baseline should be removed. For details, see Oracle Database Reference 11g.
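As a quick sketch (column names as documented for these views; the output is not shown), you can check baseline coverage and validity with queries such as:

SQL> SELECT baseline_name, baseline_type, start_snap_id, end_snap_id
  2  FROM   DBA_HIST_BASELINE;

SQL> SELECT baseline_name, shutdown, pct_total_time
  2  FROM   DBA_HIST_BASELINE_DETAILS;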
234
Thresholds
235
Performance Monitoring and Baselines
Performance alert thresholds are difficult to determine: Expected metric values vary by workload type. Expected metric values vary by system load. Baselines can capture metric value statistics: Automatically computed over the system moving window Manually computed over static baselines Performance Monitoring and Baselines When they are properly set, alert thresholds provide a valuable service by flagging a performance metric that is at an unexpected value. Unfortunately, in many cases the expected value varies with the workload type, system load, time of day, or day of the week. Baselines associated with certain workload types or days of the week capture the metric values of that period. The baseline can then be used to set the threshold values when similar conditions exist. Baselines capture metric values, and the statistics over a baseline are computed in a way that places minimal load on the system. Statistics for static baselines are computed manually; you can schedule statistics computation on the AWR Baselines page. Statistics for the system moving window are automatically computed according to the BSLN_MAINTAIN_STATS_SCHED schedule. By default, this schedule starts the job every week at noon on Saturday.
236
Performance Monitoring and Baselines
Baseline metric statistics determine alert thresholds: Values that are unusual versus the baseline data = significance level thresholds Values close to or exceeding the peak value in the baseline data = percentage-of-maximum thresholds Performance Monitoring and Baselines (continued) Metric statistics computed over a baseline enable you to set thresholds that compare the baseline statistics to the current activity. There are three methods of comparison: significance level, percentage of maximum, and fixed values. Thresholds based on significance level use statistical relevance to determine which current values are unusual. In simple terms, if the significance level is set to .99 for a critical threshold, the threshold is set where 1% of the baseline values fall outside this value, and any current values that exceed this value trigger an alert. A higher significance level of .999 or .9999 causes fewer alerts to be triggered. Thresholds based on percentage of maximum are calculated from the maximum value captured by the baseline. Threshold values based on fixed values are set by the DBA; no baseline is required.
237
Defining Alert Thresholds Using Static Baseline
After AWR baseline statistics are computed for a particular baseline, you can set metric thresholds specific to your baseline. Compute baseline statistics directly from the Baselines page (as previously discussed). Then go to the AWR Baseline Metric Thresholds page and select the type of metrics that you want to set. When done, select a specific metric and click Edit Thresholds. On the corresponding Edit AWR Baseline Metric Thresholds page, specify your thresholds in the Thresholds Settings section, and then click Apply Thresholds. You can specify thresholds based on the statistics computed for your baseline. This is illustrated in the slide. In addition to “Significance Level,” the other possibilities are “Percentage of Maximum” and “Fixed Values.” Note: After a threshold is set using Baseline Metric Thresholds, the previous threshold values are discarded, and the statistics from the associated baseline are used to determine the threshold values until they are cleared (by using the Baseline Metric Threshold UI or the PL/SQL interface).
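The PL/SQL interface referred to here is the DBMS_SERVER_ALERT package. The sketch below is illustrative only: it sets a simple fixed-value threshold (significance-level thresholds are most easily set through EM), and the metric, the values, and the instance name 'orcl' are assumptions.

BEGIN
  -- Fixed-value warning and critical thresholds for CPU time per call
  -- (the numeric values are placeholders)
  DBMS_SERVER_ALERT.SET_THRESHOLD(
    metrics_id              => DBMS_SERVER_ALERT.CPU_TIME_PER_CALL,
    warning_operator        => DBMS_SERVER_ALERT.OPERATOR_GE,
    warning_value           => '8000',
    critical_operator       => DBMS_SERVER_ALERT.OPERATOR_GE,
    critical_value          => '10000',
    observation_period      => 1,
    consecutive_occurrences => 2,
    instance_name           => 'orcl',
    object_type             => DBMS_SERVER_ALERT.OBJECT_TYPE_SYSTEM,
    object_name             => NULL);
END;
/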
238
Using EM to Quickly Configure Adaptive Thresholds
Oracle Database 11g Enterprise Manager provides significant usability improvements in the selection of adaptive thresholds for database performance metrics, with full integration with AWR baselines as the source for the metric statistics. EM offers a quick configuration option: a one-click starter set of thresholds based on OLTP or Data Warehouse workload profiles. Select the appropriate workload profile from the subsequent pop-up window. With this simple selection, the system automatically configures and evolves adaptive thresholds based on the SYSTEM_MOVING_WINDOW baseline for the group of metrics that best corresponds to the chosen workload.
239
Using EM to Quickly Configure Adaptive Thresholds
Using EM to Quickly Configure Adaptive Thresholds (continued) On the OLTP Threshold settings page, configure the desired workload baselines. When the baseline is configured, you can edit the threshold levels by using the Edit Threshold button. The Warning Level and Critical Level columns indicate the type of alert generated. The Significance Level indicates whether the observed value is at or above a certain statistical level. The following significance level thresholds are supported: High: Significant at the 0.95 level (5 in 100) Very High: Significant at the 0.99 level (1 in 100) Severe: Significant at the 0.999 level (1 in 1,000) Extreme: Significant at the 0.9999 level (1 in 10,000) Tip: When editing threshold levels, set the significance level thresholds conservatively and experimentally at first, and then observe the number and significance of alerts. Higher significance levels reduce the number of alerts. The threshold values are determined by examining the statistics for the metric values observed over the baseline time period. The system sets the thresholds based on prior data from the system itself and some metadata provided by you. This is significantly easier in the multitarget case because you no longer need to know system-specific metric values. The statistics to monitor are the maximum value as well as the significance levels. The significance levels let you set the threshold to a value that is statistically significant at the stated level (for example, 1 in 1,000).
240
Changing Adaptive Threshold Settings
[Chart legend: observed value, baseline calculation, and the automatically adapting threshold]
Changing Adaptive Threshold Settings After the adaptive thresholds are set, you can change their values (if necessary) as shown in the slide. On the Edit AWR Baseline Metric Thresholds page corresponding to the metric that you want to modify, you see the graphic history of the observed value for the metric, the materialization of the computed baseline value, and the corresponding adaptive threshold.
241
Practice 7-1: Overview This practice covers creating AWR baselines:
Creating a baseline over a past time interval Applying the baseline to the performance page graphs Creating a repeating period baseline over periods in the future
242
Automated Maintenance Tasks
243
Maintenance Windows 10 p.m. – 2 a.m. Mon to Fri
6 a.m. – 2 a.m. Sat to Sun Maintenance Windows Oracle Database 10g introduced the execution of automated maintenance tasks during a maintenance window. The automated tasks are statistics collection, segment advisor, and Automatic SQL Tuning. With Oracle Database 11g, the Automated Maintenance Tasks feature relies on the Resource Manager being enabled during the maintenance windows. Thus the resource plan associated with the window is automatically enabled when the window opens. The goal is to prevent maintenance work from consuming excessive amounts of system resources. Each maintenance window is associated with a resource plan that specifies how the resources will be allocated during the window duration. In Oracle Database 11g, WEEKNIGHT_WINDOW and WEEKEND_WINDOW (defined in Oracle Database 10g) are replaced with daily maintenance windows. Automated tasks are assigned to specific windows. All daily windows belong to MAINTENANCE_WINDOW_GROUP by default. You are still completely free to define other maintenance windows as well as change start times and durations for the daily maintenance windows. Likewise, any maintenance windows that are deemed unnecessary can be disabled or removed. The operations can be done by using EM or Scheduler interfaces.
244
Default Maintenance Plan
SQL> SELECT name FROM V$RSRC_PLAN
  2  WHERE is_top_plan = 'TRUE';

NAME
--------------------------------
DEFAULT_MAINTENANCE_PLAN

Default Maintenance Plan When a maintenance window opens, the DEFAULT_MAINTENANCE_PLAN in the Resource Manager is automatically set to control the amount of CPU used by the automated maintenance tasks. To be able to give different priorities to each possible task during a maintenance window, various consumer groups are assigned to DEFAULT_MAINTENANCE_PLAN. The hierarchy between groups and plans is shown in the slide. For high-priority tasks: The Optimizer Statistics Gathering automatic task is assigned to the ORA$AUTOTASK_STATS_GROUP consumer group. The Segment Advisor automatic task is assigned to the ORA$AUTOTASK_SPACE_GROUP consumer group. The Automatic SQL Tuning automatic task is assigned to the ORA$AUTOTASK_SQL_GROUP consumer group. Note: If necessary, you can manually change the percentage of CPU resources allocated to the various automated maintenance task consumer groups inside ORA$AUTOTASK_HIGH_SUB_PLAN.
245
Automated Maintenance Task Priorities
[Diagram: at the start of the maintenance window, MMON starts ABP; ABP translates the Stats, Space, and SQL automated maintenance tasks into Scheduler jobs (Job1 … Jobn) that run with urgent, high, or medium priority, recorded in DBA_AUTOTASK_TASK.]
Automated Maintenance Task Priorities The Automated Maintenance Tasks feature is implemented by the Autotask Background Process (ABP). ABP functions as an intermediary between automated tasks and the Scheduler. Its main purpose is to translate automated tasks into AUTOTASK jobs for execution by the Scheduler. Just as important, ABP maintains a history of execution of all tasks. ABP stores its private repository in the SYSAUX tablespace; you view the repository through DBA_AUTOTASK_TASK. ABP is started by MMON at the start of a maintenance window. Only one ABP is required for all instances. The MMON process monitors ABP and restarts it if necessary. ABP determines the list of jobs that need to be created for each maintenance task. This list is ordered by priority: urgent, high, and medium. Within each priority group, jobs are arranged in the preferred order of execution. ABP creates jobs in the following manner: all urgent-priority jobs are created first, all high-priority jobs are created next, and all medium-priority jobs are created last. ABP assigns jobs to various Scheduler job classes. These job classes map the jobs to consumer groups based on priority. Note: With Oracle Database 11g, there is no job that is permanently associated with a specific automated task. Therefore, it is not possible to use DBMS_SCHEDULER procedures to control the behavior of automated tasks. Use the DBMS_AUTO_TASK_ADMIN procedures instead.
246
Controlling Automatic Maintenance Tasks
The Automatic Maintenance Tasks feature determines when—and in what order—tasks are performed. As a DBA, you can control the following: If the maintenance window turns out to be inadequate for the maintenance workload, adjust the duration and start time of the maintenance window. Control the resource plan that allocates resources to the automated maintenance tasks during each window. Enable or disable individual tasks in some or all maintenance windows. In a RAC environment, shift maintenance work to one or more instances by mapping maintenance work to a service. Enabling the service on a subset of instances shifts maintenance work to these instances. As shown in the slide, Enterprise Manager is the preferred way to control Automatic Maintenance Tasks. However, you can also use the DBMS_AUTO_TASK_ADMIN package.
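As a minimal sketch of that package (the client names match those shown in DBA_AUTOTASK_CLIENT; the window name is only an example), disabling and re-enabling a task looks like this:

BEGIN
  -- Disable Automatic SQL Tuning in every maintenance window
  DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => NULL);

  -- Re-enable it in a single window only
  DBMS_AUTO_TASK_ADMIN.ENABLE(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => 'SATURDAY_WINDOW');
END;
/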
247
Important I/O Metrics for Oracle Databases
OLTP (small random I/O): limited by disk bandwidth; key metrics are IOPS and latency; needs high RPM and fast seek time. DW/OLAP (large sequential I/O): limited by channel bandwidth; key metric is MBPS; needs a large I/O channel.
Important I/O Metrics for Oracle Databases We first briefly summarize the types of I/O issued by the Oracle Database processes. The database I/O workload typically consists of small random I/O and large sequential I/O. Small random I/O is more prevalent in an OLTP application environment, in which each foreground process reads a data block into the buffer cache for updates and the changed blocks are written in batches by the DBWR process. Large sequential I/O is common in an OLAP application environment. OLTP application performance depends on how fast a small I/O is serviced, which depends on how fast the disk can spin and find the data. Large I/O performance depends on the capacity of the I/O channel that connects the server to the storage array; large I/O throughput is better when the capacity of the channel is larger. IOPS (I/O per second): This metric represents the number of small random I/O operations that can be serviced in a second. The IOPS rate mainly depends on how fast the disk media can spin. The IOPS rate from a storage array can be increased either by adding more disk drives or by using disk drives with a higher RPM (rotations per minute) rate. MBPS (megabytes per second): The rate at which data can be transferred between the computing server node and the storage array depends on the capacity of the I/O channel that is used to transfer data. More data can be transferred through a wider pipe.
248
Important I/O Metrics for Oracle Databases (continued)
The throughput of a streaming data application depends on how fast this data can be transferred. Throughput is measured by using the MBPS metric. Even though the disks themselves have an upper limit on the amount of sequential data that they can transfer, it is often channel capacity that limits the overall throughput of the system. For example, a host connected to an NAS server through a GigE switch is limited by a transfer capacity of 128 MBPS. There are usually multiple disks in the NAS that can together provide more than 128 MBPS. The channel resource becomes the limiting resource. I/O latency: Latency is another important metric that is used in measuring the performance of an I/O subsystem. Traditionally, latency is simply the time for the disk to access a particular sector on the disk. But from the database point of view, latency represents all the time it takes for a submitted I/O request to be serviced by the storage. In other words, it represents the overhead before the first byte of a transfer arrives after an I/O request has been submitted. Latency values are more commonly used for small random I/O when tuning a system. If there are too many I/O requests queued up against a disk, the latency increases as the wait time in the queue increases. To improve the latency of I/O requests, data is usually striped across multiple spindles so that all I/O requests to a file do not go to the same disk. A higher latency usually indicates an overloaded system. Other than the main resources mentioned above, there are also other storage array components that can affect I/O performance. Array vendors provide caching mechanisms to improve read throughput, but their real benefit is questionable in a database environment because Oracle Database uses caches and read-ahead mechanisms so that data is available directly from RAM rather than from the disks. Notes only slide
249
I/O Calibration and Enterprise Manager
To determine the previously discussed I/O metrics, you can use the I/O Calibration tool from the Enterprise Manager Performance page or PL/SQL. I/O Calibration is a modified version of the ORION tool and is based on the asynchronous I/O library. Because calibration requires issuing enough I/O to saturate the storage system, any performance-critical sessions will be negatively impacted. Thus, you should run I/O calibration only when there is little activity on your system. I/O Calibration takes approximately 10 minutes to run. You can launch an I/O Calibration task directly from Enterprise Manager (as shown in the slide). You do this by clicking the Performance tab. On the Performance page, you can click the I/O tab and then the I/O Calibration button. On the I/O Calibration page, specify the approximate number of physical disks attached to your database storage system as well as the maximum tolerable latency for a single-block I/O request. Then, in the Schedule section of the I/O Calibration page, specify when to execute the calibration operation. Click the Submit button to create a corresponding Scheduler job. On the Scheduler Jobs page, you can see the amount of time it takes for the calibration task to run. When finished, view the results of the calibration operation on the I/O Calibration page: maximum I/O per second, maximum megabytes per second, and average actual latency metrics.
250
I/O Calibration and the PL/SQL Interface
SQL> exec dbms_resource_manager.calibrate_io(
       num_disks=>1, max_latency=>10, max_iops=>:max_iops,
       max_mbps=>:max_mbps, actual_latency=>:actual_latency);

PL/SQL procedure successfully completed.

SQL> SELECT max_iops, max_mbps, max_pmbps, latency
  2  FROM DBA_RSRC_IO_CALIBRATE;

MAX_IOPS   MAX_MBPS   MAX_PMBPS   LATENCY

I/O Calibration and the PL/SQL Interface You can run the I/O Calibration task by using the PL/SQL interface. Execute the CALIBRATE_IO procedure from the DBMS_RESOURCE_MANAGER package. This procedure calibrates the I/O capabilities of the storage. The calibration status and results are available from the V$IO_CALIBRATION_STATUS and DBA_RSRC_IO_CALIBRATE views. Here is a brief description of the parameters you can specify for the CALIBRATE_IO procedure: num_disks: Approximate number of physical disks in the database storage. max_latency: Maximum tolerable latency (in milliseconds) for database-block-sized I/O requests. max_iops: Maximum number of I/O requests per second that can be sustained. The I/O requests are randomly distributed, database-block-sized reads. max_mbps: Maximum throughput of I/O that can be sustained (in megabytes per second). The I/O requests are randomly distributed 1 MB reads. actual_latency: Average latency of database-block-sized I/O requests at the max_iops rate (in milliseconds). Usage notes: Only users with the SYSDBA privilege can run this procedure. Only one calibration can run at a time; if another calibration is initiated at the same time, it fails. For a RAC database, the workload is simultaneously generated from all instances. The latency time is computed only when the TIMED_STATISTICS initialization parameter is set to TRUE (which is the case when STATISTICS_LEVEL is set to TYPICAL or ALL).
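A quick way to check whether a calibration run is still in progress or has completed, assuming the documented STATUS and CALIBRATION_TIME columns of the status view:

SQL> SELECT status, calibration_time
  2  FROM   V$IO_CALIBRATION_STATUS;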
251
I/O Statistics: Overview
V$IOSTAT_FUNCTION, V$IOSTAT_FILE, and V$IOSTAT_CONSUMER_GROUP (feeding AWR and EM)
I/O Statistics: Overview To give a consistent set of statistics for all I/O issued from an Oracle instance, Oracle Database 11g introduces a set of virtual views that collect I/O statistics in three dimensions. RDBMS components: Components are grouped by their functionality into 12 categories (shown in the slide). Consumer groups: When Resource Management is enabled, I/O statistics are collected for all consumer groups that are part of the currently enabled resource plan. Files: I/O statistics are also collected for the individual files that have been opened. Each dimension has statistics for read and write operations. Because reads and writes can occur in single-block or multiblock operations, they are separated into four different operation types (as shown in the slide). For each operation type, the number of requests and the number of megabytes are accumulated. In addition, the total I/O wait time in milliseconds and the total number of waits are accumulated for both component and consumer group statistics. For file statistics, total service time in microseconds is accumulated, in addition to statistics for single-block reads. The virtual views show cumulative values for the statistics. Component and consumer group statistics are transformed into AWR metrics that are sampled regularly and stored in the AWR repository. You can retrieve those metrics across a timeline directly on the Performance page of Enterprise Manager. Note: V$IOSTAT_NETWORK collects network I/O statistics that are related to accessing files on a remote database instance.
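As a small sketch (column names as documented for V$IOSTAT_FUNCTION; no output shown), the per-component dimension can be queried directly:

SQL> SELECT function_name,
  2         small_read_reqs, large_read_reqs,
  3         small_read_megabytes, large_read_megabytes,
  4         number_of_waits, wait_time
  5  FROM   V$IOSTAT_FUNCTION
  6  ORDER  BY function_name;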
252
I/O Statistics and Enterprise Manager
You can retrieve I/O statistics directly on the Performance page in Enterprise Manager. On the Performance page, click the I/O subtab located under the Average Active Session graph. On the I/O subpage, you see a breakdown of I/O statistics on three possible dimensions: I/O Function, I/O Type, and Consumer Group. Select one of the options to look at the corresponding dimension graphs. The slide shows the two graphs corresponding to the I/O function dimension: I/O Megabytes per Second (by RDBMS component) I/O Requests per Second (by RDBMS component) Note: The “Others” RDBMS component category corresponds to everything that is not directly issued from SQL (for example, PL/SQL and Java).
253
I/O Statistics and Enterprise Manager
I/O Statistics and Enterprise Manager (continued) From the I/O Function statistic graphs, you can drill down to a specific component by clicking that component. In the example in the slide, you drill down to the Buffer Cache Reads component. This takes you to the “I/O Rates by I/O Function” page, where you can see three important graphs for that particular component: MBPS, IOPS, and wait time.
254
Practices 7-2 and 7-3: Overview
These practices cover the following topics: Monitoring the performance of AUTOTASK jobs Adjusting the resources and windows for AUTOTASK jobs Calibrating I/O resources
255
Resource Manager: New EM Interface
Using Enterprise Manager, you can access the Resource Manager section from the Server page. The Resource Manager section is organized in the same way as the Database Resource Manager. Clicking the Getting Started link takes you to the “Getting Started with Database Resource Manager” page, which provides a brief description of each step as well as the links to the corresponding pages.
256
Resource Plans Created by Default
When you create an Oracle Database 11g database, the resource plans shown in the slide are created by default. However, none of these plans is active by default.
257
Default Plan
The slide shows the properties of DEFAULT_PLAN. Note that there are no limits for its thresholds. As you can see, Oracle Database 11g introduces two new I/O limits that you can define as thresholds in a Resource Plan.
258
I/O Resource Limit Thresholds
When you create a Resource Plan directive, you can specify the I/O resource limits. The example in the slide shows how to do this in both Enterprise Manager and PL/SQL. You can specify the following two arguments: switch_io_megabytes: Specifies the amount of I/O (in MB) that a session can issue before an action is taken. Default is NULL, which means unlimited. switch_io_reqs: Specifies the number of I/O requests that a session can issue before an action is taken. Default is NULL, which means unlimited.
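A rough PL/SQL sketch of such a directive follows; the plan name, consumer groups, and limit values are placeholders (the plan and groups are assumed to exist already), and a pending area is required as usual:

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

  -- Switch sessions of MY_GROUP to LOW_GROUP after 1,000 MB of I/O
  -- or 100,000 I/O requests (values are illustrative).
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan                => 'DAYTIME_PLAN',
    group_or_subplan    => 'MY_GROUP',
    comment             => 'I/O limits example',
    switch_group        => 'LOW_GROUP',
    switch_io_megabytes => 1000,
    switch_io_reqs      => 100000);

  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/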
259
Resource Manager Statistics
You can also look at the Resource Manager Statistics page, which displays statistics for only the current active plan.
260
New Scheduler Feature: Lightweight Jobs
Persistent lightweight jobs: Created from a job template Recoverable JOB_STYLE => LIGHTWEIGHT New Scheduler Feature: Lightweight Jobs Some customers need to create hundreds of jobs each second. With regular jobs, each job creates a database object describing the job, modifying several tables, and creating redo in the process. In the Oracle Database 11g Scheduler, there is a persistent lightweight job. The goal of a lightweight job is to reduce the overhead and the time required to start a job. A minimal amount of metadata is created for the job. This reduces the time required and the redo created when the job starts. To achieve these goals, the lightweight job has a small footprint on disk for the job metadata and for storing run-time data. The footprint on disk also makes recovery and load balancing possible in RAC environments. The lightweight job is always created from a job template, which can be a stored procedure or a program. The stored procedure holds all the information needed for the job. A few job attributes may be set (such as job arguments). Job templates are created by using the DBMS_SCHEDULER.CREATE_PROGRAM procedures. Oracle Database 11g continues to support the database object-based jobs that have existed since Oracle Scheduler was first introduced in Oracle 10g. Lightweight jobs are not intended to replace these jobs because each job type has its own advantages and gives you the flexibility to choose a job based on your needs.
261
Choosing the Right Job
Regular job: highest overhead, best recovery, most flexible. Persistent lightweight job: less overhead, some recovery, limited changes to attributes.
Choosing the Right Job The advantages and disadvantages of the types of jobs are as follows: A regular job offers maximum flexibility but entails a significant overhead in create/drop performance. A job can be created with a single command. Users have fine-grained control of privileges on the job and can also use programs or stored procedures owned by other users. A regular job requires the job database object to be created and dropped. This action updates several tables and generates the associated redo. Users who are creating a relatively small number of jobs that run relatively infrequently should choose regular jobs. A persistent lightweight job has a significant improvement in create/drop time because it does not have the overhead of creating a database object. Every lightweight job is created from a job template, which is stored as a program. Because persistent lightweight jobs write state information to disk at run time, only a small improvement is expected in execution time. There are several limitations to persistent lightweight jobs: Users cannot set privileges on these jobs; instead, the jobs inherit their privileges from the parent job template. The use of a template is mandatory; it is not possible to create a fully self-contained persistent lightweight job. Only certain job attributes are available to be set, such as JOB_ARGUMENTS. Lightweight jobs are most useful when the user needs to create a large number of jobs in a very short time (from 10 to 100 jobs a second) and has a library of programs (job templates) available for use.
262
Creating a Single Lightweight Job
Specify time and frequency:
DBMS_SCHEDULER.CREATE_JOB (
  job_name        => 'my_lightweight_job1',
  program_name    => 'MY_PROG',
  repeat_interval => 'FREQ=DAILY;BYHOUR=9',
  end_time        => '30-APR AM CST',
  job_style       => 'LIGHTWEIGHT');

Use a predefined schedule:
DBMS_SCHEDULER.CREATE_JOB (
  job_name      => 'my_lightweight_job2',
  program_name  => 'MY_PROG',
  schedule_name => 'MY_SCHED',
  job_style     => 'LIGHTWEIGHT');

Creating a Single Lightweight Job The DBMS_SCHEDULER package has overloaded a few procedures to allow lightweight jobs to be created with a minimum of new syntax. Lightweight jobs have very few parameters that can be specified: job arguments and schedule. The rest of the metadata for the job is inherited from the job template, including privileges. The template for a lightweight job must be a PL/SQL block or a stored procedure. Lightweight jobs must be created in PL/SQL by using the DBMS_SCHEDULER package; the JOB_STYLE argument is not exposed in EM. In the first example in the slide, MY_PROG is the job template, and the schedule is created as part of the CREATE_JOB call. In the second example, the schedule is applied from a named schedule. An example of a simple template is the following:
BEGIN
  DBMS_SCHEDULER.CREATE_PROGRAM(
    program_name   => '"SYSTEM"."MY_PROG"',
    program_action => 'DECLARE
                         time_now DATE;
                       BEGIN
                         SELECT SYSDATE INTO time_now FROM DUAL;
                       END;',
    program_type   => 'PLSQL_BLOCK',
    enabled        => TRUE);
END;
263
Creating an Array of Lightweight Jobs
1. Declare variables of types sys.job and sys.job_array.
2. Initialize the job array.
3. Size the job array to hold the number of jobs needed.

DECLARE
  newjob    sys.job;
  newjobarr sys.job_array;
BEGIN
  newjobarr := SYS.JOB_ARRAY();
  newjobarr.EXTEND(100);

Creating an Array of Lightweight Jobs Using a job array is a more efficient way to create a set of jobs. This also applies to lightweight jobs. The job array type and the CREATE_JOBS procedure are new to the DBMS_SCHEDULER package in Oracle Database 11g. In the example in the slide, 100 job specifications are created in a job array and submitted to the job queue in a single transaction. Notice that, for a lightweight job, only a very limited amount of information is needed. In this example, the start_time parameter defaults to NULL, so the job is scheduled to start immediately. The example continues on the next two pages. 1. Declare a variable to hold a job definition and a job array variable. 2. Initialize the job array by using the SYS.JOB_ARRAY constructor; this creates an empty array of jobs. 3. Set the size of the array to the number of expected jobs by using EXTEND. Note: If the array is very small, performance is not significantly better than submitting the jobs one at a time.
264
Creating an Array of Lightweight Jobs
4. Place jobs in the job array.
5. Submit the job array as one transaction.

FOR i IN 1..100 LOOP
  newjob := SYS.JOB(job_name     => 'LWTJK'||to_char(i),
                    job_style    => 'LIGHTWEIGHT',
                    job_template => 'MY_PROG',
                    enabled      => TRUE);
  newjobarr(i) := newjob;
END LOOP;
DBMS_SCHEDULER.CREATE_JOBS(newjobarr, 'TRANSACTIONAL');

Creating an Array of Lightweight Jobs (continued) 4. Create each job and place it in the array. In this example, the only difference between the jobs is the name. The start_time attribute of the job is omitted and defaults to NULL, indicating that the job will run immediately. 5. Use the CREATE_JOBS procedure to submit all jobs in the array as one transaction. The full code of this example is as follows:
DECLARE
  newjob    sys.job;
  newjobarr sys.job_array;
BEGIN
  -- Create an array of JOB object types
  newjobarr := sys.job_array();
  -- Allocate sufficient space in the array
  newjobarr.extend(100);
  -- Add definitions for jobs
  FOR i IN 1..100 LOOP
    -- Create a JOB object type
    newjob := sys.job(job_name     => 'LWTJK' || to_char(i),
                      job_style    => 'LIGHTWEIGHT',
                      job_template => 'PROG_1',
                      enabled      => TRUE);
265
Notes only page Creating an Array of Lightweight Jobs (continued)
    -- Add job to the array
    newjobarr(i) := newjob;
  END LOOP;
  -- Call CREATE_JOBS to create jobs in one transaction
  DBMS_SCHEDULER.CREATE_JOBS(newjobarr, 'TRANSACTIONAL');
END;
/

Notes only page
266
Viewing Lightweight Jobs in the Dictionary
View lightweight jobs in *_SCHEDULER_JOBS:

SQL> SELECT job_name, job_style, program_name
  2  FROM USER_SCHEDULER_JOBS;

JOB_NAME        JOB_STYLE    PROGRAM_NAME
--------------- ------------ --------------
LWTJX           LIGHTWEIGHT  PROG_3

View job arguments with *_SCHEDULER_JOB_ARGS:

SQL> SELECT job_name, argument_name, argument_type, value
  2  FROM USER_SCHEDULER_JOB_ARGS;

JOB_NAME   ARGUMENT_NAME   ARGUMENT_TYPE   VALUE
---------- --------------- --------------- -----------
LWTJX      ARG             VARCHAR         TEST_VALUE

Viewing Lightweight Jobs in the Dictionary In Oracle Database 11g, changes to dictionary views to support lightweight jobs are minimal. No new views are added. Lightweight jobs are visible through the same views as regular jobs: DBA_SCHEDULER_JOBS, ALL_SCHEDULER_JOBS, and USER_SCHEDULER_JOBS. Arguments to lightweight jobs are visible through the same views as those of regular jobs: DBA_SCHEDULER_JOB_ARGS, ALL_SCHEDULER_JOB_ARGS, and USER_SCHEDULER_JOB_ARGS. Because lightweight jobs are not database objects, they are not visible through the DBA_OBJECTS, ALL_OBJECTS, and USER_OBJECTS views.
267
Practice 7-4 Overview: This practice covers the creation of lightweight Scheduler jobs.
268
Summary In this lesson, you should have learned how to:
Create AWR baselines for future time periods Identify the views that capture foreground statistics Control automated maintenance tasks Use Resource Manager I/O calibration Use lightweight jobs with the Scheduler
269
Performance Enhancements
270
Objectives After completing this lesson, you should be able to:
Use the new features of ADDM Use Automatic Memory Management Use statistics enhancements
271
ADDM Enhancements in Oracle Database 11g
ADDM for RAC Directives (finding suppression) DBMS_ADDM package
272
Oracle Database 11g: Automatic Database Diagnostic Monitor for RAC
[Diagram: the self-diagnostic engine drives both database ADDM and instance ADDM over AWR data collected from instances Inst1 through InstN.]
Oracle Database 11g: Automatic Database Diagnostic Monitor for RAC Oracle Database 11g offers an extension to the set of functionality that increases the database’s manageability by offering clusterwide analysis of performance. A special mode of Automatic Database Diagnostic Monitor (ADDM) analyzes an Oracle Real Application Clusters (RAC) database cluster and reports on issues that are affecting the entire cluster as well as on those that are affecting individual instances. This mode is called database ADDM, as opposed to instance ADDM, which already existed with Oracle Database 10g. Database ADDM for RAC is not just a report of reports; it performs independent analysis that is appropriate for RAC.
273
Automatic Database Diagnostic Monitor for RAC
Identifies the most critical performance problems for the entire RAC cluster database Runs automatically when taking AWR snapshots (the default) Performs database-wide analysis of: Global resources (for example, I/O and global locks) High-load SQL and hot blocks Global cache interconnect traffic Network latency issues Skew in instance response times Is used by DBAs to analyze cluster performance Automatic Database Diagnostic Monitor for RAC In Oracle Database 11g, ADDM offers an analysis mode that examines throughput performance for an entire cluster over a specified period. When the advisor runs in this mode, it is called database ADDM. You can run the advisor for a single instance, which is equivalent to Oracle Database 10g ADDM and is now called instance ADDM. Instance ADDM has access to AWR data generated by all instances, thereby making the analysis of global resources more accurate. Both database and instance ADDM run on continuous time periods that can contain instance startup and shutdown. In the case of database ADDM, there may be several instances that are shut down or started during the analysis period. You must maintain the same database version throughout the entire time period, however. Database ADDM runs automatically after each snapshot is taken. The automatic instance ADDM runs are the same as in Oracle Database 10g. You can also perform analysis on a subset of instances in the cluster; this is called partial analysis ADDM. An I/O capacity finding (the I/O system is overused) is a global finding because it concerns a global resource affecting multiple instances. A local finding concerns a local resource or issue that affects a single instance. For example, a CPU-bound instance results in a local finding about the CPU. Although ADDM can be used during application development to test changes to either the application, the database system, or the hosting machines, database ADDM is targeted at DBAs.
274
EM Support for ADDM for RAC
Cluster Database home page: EM Support for ADDM for RAC Oracle Database 11g Enterprise Manager displays the ADDM analysis on the Cluster Database home page. The Findings table is displayed in the ADDM Performance Analysis section. For each finding, the Affected Instances column displays the number (m of n) of instances affected. The display also indicates the percentage impact for each instance. Drilling down further on the findings takes you to the Performance Findings Detail page.
275
EM Support for ADDM for RAC
Finding History page: EM Support for ADDM for RAC (continued) On the Performance Finding Details page, click the Finding History button to see a page with a chart on the top plotting the impact in active sessions for the findings over time. The default display period is 24 hours. The drop-down list supports viewing for seven days. At the bottom of the display, a table similar to the results section is shown, displaying all the findings for this named finding. On this page, you set filters on the findings results. Different types of findings (CPU, logins, SQL, and so on) have different kinds of criteria for filtering. Note: Only automatic runs of ADDM are considered for the Finding History. These results reflect the unfiltered results only.
276
Using the DBMS_ADDM Package
A database ADDM task is created and executed:

SQL> var tname varchar2(60);
SQL> BEGIN
  2    :tname := 'my database ADDM task';
  3    dbms_addm.analyze_db(:tname, 1, 2);
  4  END;
  5  /

Use the GET_REPORT function to see the result:

SQL> SELECT dbms_addm.get_report(:tname) FROM DUAL;

Using the DBMS_ADDM Package The DBMS_ADDM package eases ADDM management. It consists of the following procedures and functions: ANALYZE_DB: Creates an ADDM task for analyzing the database globally. ANALYZE_INST: Creates an ADDM task for analyzing a local instance. ANALYZE_PARTIAL: Creates an ADDM task for analyzing a subset of instances. DELETE: Deletes a created ADDM task (of any kind). GET_REPORT: Gets the default text report of an executed ADDM task. In the example, the parameters 1 and 2 are the start and end snapshot IDs.
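For partial analysis ADDM, a similar call is used. The sketch below is an assumption-based illustration: instances 1 and 2 are analyzed between snapshots 1 and 2, and both the instance list and the snapshot IDs are placeholders.

SQL> var tname varchar2(60);
SQL> BEGIN
  2    :tname := 'my partial ADDM task';
  3    -- Analyze only instances 1 and 2 between snapshots 1 and 2
  4    dbms_addm.analyze_partial(:tname, '1,2', 1, 2);
  5  END;
  6  /
SQL> SELECT dbms_addm.get_report(:tname) FROM DUAL;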
277
Advisor Named Findings and Directives
Advisor results are now classified and named: Exist in the DBA{USER}_ADVISOR_FINDINGS view You can query all finding names from the DBA_ADVISOR_FINDING_NAMES view:

SQL> select finding_name from dba_advisor_finding_names;

FINDING_NAME
----------------------------------------
Top Segments by I/O
Top SQL by "Cluster" Wait
. . .
Undersized Redo Log Buffer
Undersized SGA
Undersized Shared Pool
Undersized Streams Pool

Advisor Named Findings and Directives Oracle Database 10g introduced the advisor framework and various advisors to help DBAs manage databases efficiently. These advisors provide feedback in the form of findings. Oracle Database 11g now classifies these findings so that you can query the advisor views to understand how often a given type of finding is recurring in the database. A FINDING_NAME column has been added to the following advisor views: DBA_ADVISOR_FINDINGS USER_ADVISOR_FINDINGS A new DBA_ADVISOR_FINDING_NAMES view displays all the finding names.
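A small sketch of how the new classification can be used (assuming the FINDING_NAME column described above):

SQL> SELECT finding_name, COUNT(*) AS occurrences
  2  FROM   dba_advisor_findings
  3  GROUP  BY finding_name
  4  ORDER  BY occurrences DESC;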
278
Using the DBMS_ADDM Package
Create an ADDM directive that filters “Undersized SGA” findings (possible finding names are listed in DBA_ADVISOR_FINDING_NAMES):

SQL> var tname varchar2(60);
SQL> BEGIN
  2    dbms_addm.insert_finding_directive(
  3      NULL,
  4      'My undersized SGA directive',
  5      'Undersized SGA',
  6      2,
  7      10);
  8    :tname := 'my instance ADDM task';
  9    dbms_addm.analyze_inst(:tname, 1, 2);
 10  END;
 11  /
SQL> SELECT dbms_addm.get_report(:tname) FROM dual;

Using the DBMS_ADDM Package You can use possible finding names to query the findings repository to get all occurrences of that specific finding. The slide shows the creation of an instance ADDM task with a finding directive. When the task name is NULL, the directive applies to all subsequent ADDM tasks. The finding name (“Undersized SGA”) must exist in the DBA_ADVISOR_FINDING_NAMES view (which lists all the findings) and is case-sensitive. The result of DBMS_ADDM.GET_REPORT shows the “Undersized SGA” finding only if the finding is responsible for at least two (min_active_sessions) average active sessions during the analysis period, and this is at least 10% (min_perc_impact) of the total database time during that period.
279
Using the DBMS_ADDM Package
Procedures to add directives: INSERT_FINDING_DIRECTIVE INSERT_SQL_DIRECTIVE INSERT_SEGMENT_DIRECTIVE INSERT_PARAMETER_DIRECTIVE Procedures to delete directives: DELETE_FINDING_DIRECTIVE DELETE_SQL_DIRECTIVE DELETE_SEGMENT_DIRECTIVE DELETE_PARAMETER_DIRECTIVE Using the DBMS_ADDM Package (continued) Additional PL/SQL directive procedures: INSERT_FINDING_DIRECTIVE: Creates a directive to limit reporting of a specific finding type. INSERT_SQL_DIRECTIVE: Creates a directive to limit reporting of actions on specific SQL. INSERT_SEGMENT_DIRECTIVE: Creates a directive to prevent ADDM from creating actions to “run Segment Advisor” for specific segments. INSERT_PARAMETER_DIRECTIVE: Creates a directive to prevent ADDM from creating actions to alter the value of a specific system parameter. Directives are reported if you specify ALL. Note: For a complete description of the available procedures, see the Oracle Database PL/SQL Packages and Types Reference.
280
Modified Advisor Views
New column: FILTERED. ‘Y’ means that the row in the view was filtered out by a directive (or a combination of directives); ‘N’ means that the row was not filtered. Found in: DBA_ADVISOR_FINDINGS, USER_ADVISOR_FINDINGS, DBA_ADVISOR_RECOMMENDATIONS, USER_ADVISOR_RECOMMENDATIONS, DBA_ADVISOR_ACTIONS, and USER_ADVISOR_ACTIONS. Modified Advisor Views The views containing advisor findings, recommendations, and actions have been enhanced by adding the FILTERED column.
281
New ADDM Views DBA{USER}_ADDM_TASKS: Displays every executed ADDM task; extensions of the corresponding advisor views DBA{USER}_ADDM_INSTANCES: Displays instance-level information for ADDM tasks that completed DBA{USER}_ADDM_FINDINGS: Extensions of the corresponding advisor views DBA{USER}_ADDM_FDG_BREAKDOWN: Displays the contribution for each finding from the different instances for database and partial ADDM New ADDM Views For a complete description of the available views, see the Oracle Database 11g documentation set.
282
Oracle Database 10g SGA Parameters
With ASMM, five important SGA components can be automatically tuned. Special buffer pools are not auto-tuned. The log buffer is a static component but has a good default. Auto-tuned parameters (under SGA_TARGET): SHARED_POOL_SIZE, DB_CACHE_SIZE, LARGE_POOL_SIZE, JAVA_POOL_SIZE, STREAMS_POOL_SIZE. Manual dynamic parameters: DB_KEEP_CACHE_SIZE, DB_RECYCLE_CACHE_SIZE, DB_nK_CACHE_SIZE. Manual static parameters: LOG_BUFFER, SGA_MAX_SIZE.
Oracle Database 10g SGA Parameters As shown in the slide, the five most important pools are automatically tuned when Automatic Shared Memory Management (ASMM) is activated. These parameters are called auto-tuned parameters. The second category, called manual dynamic parameters, comprises parameters that can be manually resized without having to shut down the instance but are not automatically tuned by the system. The last category, manual static parameters, includes the parameters that are fixed in size and cannot be resized without first shutting down the instance.
283
Oracle Database 10g PGA Parameters
PGA_AGGREGATE_TARGET: Specifies the target aggregate amount of PGA memory available to the instance Can be dynamically modified at the instance level Examples: 100,000 KB, 2,500 MB, 50 GB Default: 10 MB or 20% of SGA size (whichever is greater) WORKAREA_SIZE_POLICY: Optional Can be dynamically modified at the instance or session level Enables fallback to static SQL memory management for a particular session Oracle Database 10g PGA Parameters PGA_AGGREGATE_TARGET specifies the target aggregate PGA memory that is available to all server processes attached to the instance. Setting PGA_AGGREGATE_TARGET to a nonzero value automatically sets the WORKAREA_SIZE_POLICY parameter to AUTO. This means that the SQL working areas used by memory-intensive SQL operators are automatically sized. A nonzero value for this parameter is the default because, unless you specify otherwise, Oracle sets it to 20% of the SGA or 10 MB, whichever is greater. Setting PGA_AGGREGATE_TARGET to 0 automatically sets the WORKAREA_SIZE_POLICY parameter to MANUAL. This means that the SQL work areas are sized using the *_AREA_SIZE parameters. Keep in mind that PGA_AGGREGATE_TARGET is not a fixed value. It is used to help the system better manage PGA memory, but the system will exceed this setting if necessary. WORKAREA_SIZE_POLICY can be altered per database session, allowing manual memory management on a per-session basis if needed. For example, suppose that a session loads a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set WORKAREA_SIZE_POLICY for the account doing the import. If WORKAREA_SIZE_POLICY is AUTO and PGA_AGGREGATE_TARGET is set to 0, an error is raised at startup.
284
Notes only page Oracle Database 10g PGA Parameters (continued)
Note: Until Oracle 9i Database, Release 2, PGA_AGGREGATE_TARGET controlled the sizing of work areas for all dedicated server connections. But it had no effect on the shared server connections, and the *_AREA_SIZE parameters took precedence in this case. In Oracle Database 10g, PGA_AGGREGATE_TARGET controls work areas allocated by dedicated and shared connections. Notes only page
285
Oracle Database 10g Memory Advisors
Buffer Cache Advice (introduced in 9i R1): V$DB_CACHE_ADVICE Predicts physical read times for different cache sizes Shared Pool Advice (in 9i R2): V$SHARED_POOL_ADVICE Predicts parse times for different sizes of shared pool Java Pool Advice (in 9i R2): V$JAVA_POOL_ADVICE Predicts Java class load time for Java pool sizes Streams Pool Advice (10g R2): V$STREAMS_POOL_ADVICE Predicts spill and unspill activity time for various sizes Oracle Database 10g Memory Advisors To help you size the most important SGA components, the advisors in the slide were introduced in the releases noted: V$DB_CACHE_ADVICE contains rows that predict the number of physical reads and time for the cache size corresponding to each row. V$SHARED_POOL_ADVICE displays information about the estimated parse time in the shared pool for different pool sizes. V$JAVA_POOL_ADVICE displays information about the estimated class load time into the Java pool for different pool sizes. V$STREAMS_POOL_ADVICE displays information about the estimated count of spilled or unspilled messages, and the associated time spent in the spill or unspill activity for different streams pool sizes. Note: For more information about these views, see the Oracle Database Reference.
286
Oracle Database 10g Memory Advisors
SGA Target Advice (introduced in 10g R2): V$SGA_TARGET_ADVICE view Estimates the DB time for different SGA target sizes based on current size PGA Target Advice (introduced in 9i R1): V$PGA_TARGET_ADVICE view Predicts the PGA cache hit ratio for different PGA sizes ESTD_TIME time column added in 11g R1 For all advisors, STATISTICS_LEVEL must be set to at least TYPICAL. Oracle Database 10g Memory Advisors (continued) In Oracle Database 10g, the SGA Advisor shows the improvement in DB time that can be achieved for a particular setting of the total SGA size. This advisor allows you to reduce trial and error in setting the SGA size. The advisor data is stored in the V$SGA_TARGET_ADVICE view. V$PGA_TARGET_ADVICE predicts how the PGA cache hit percentage displayed by the V$PGASTAT performance view is impacted when the value of the PGA_AGGREGATE_TARGET parameter is changed. The prediction is performed for various values of the PGA_AGGREGATE_TARGET parameter, selected around its current value. The advice statistic is generated by simulating the past workload run by the instance. In 11g, a new column (ESTD_TIME) is added, corresponding to the CPU and I/O time required to process the bytes.
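As a minimal sketch of reading the advisor data from SQL*Plus (standard columns of these advisor views; STATISTICS_LEVEL must be at least TYPICAL, as noted above):

SQL> SELECT sga_size, sga_size_factor, estd_db_time
  2  FROM   V$SGA_TARGET_ADVICE
  3  ORDER  BY sga_size;

SQL> SELECT pga_target_for_estimate, estd_pga_cache_hit_percentage, estd_time
  2  FROM   V$PGA_TARGET_ADVICE
  3  ORDER  BY pga_target_for_estimate;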
287
Automatic Memory Management: Overview
[Diagram: in Oracle Database 10g, SGA memory (buffer cache, shared pool, large pool, Java pool, Streams pool, other SGA) is sized by the SGA target while PGA memory (SQL areas, untunable PGA, free PGA) is sized by the PGA target; in Oracle Database 11g, a single memory target covers both, and memory moves between SGA and PGA as the OLTP and batch workloads demand.]
Automatic Memory Management: Overview With Automatic Memory Management, the system causes an indirect transfer of memory from SGA to PGA (and vice versa). It automates the sizing of PGA and SGA according to your workload. This indirect memory transfer relies on the OS mechanism of freeing shared memory. After memory is released to the OS, the other components can allocate memory by requesting memory from the OS. Currently, this is implemented on Linux, Solaris, HP-UX, AIX, and Windows. Set your memory target for the database instance, and the system then tunes to the target memory size, redistributing memory as needed between the system global area (SGA) and the aggregate program global area (PGA). The slide displays the differences between the Oracle Database 10g mechanism and the new Automatic Memory Management with Oracle Database 11g.
288
Automatic Memory Management: Overview
[Diagram: the memory max target stays fixed at 350 MB while the memory target is raised from 250 MB to 300 MB.]

ALTER SYSTEM SET MEMORY_TARGET=300M;

Automatic Memory Management: Overview (continued) The simplest way to manage memory is to allow the database to automatically manage and tune it for you. To do so (on most platforms), you only have to set a target memory size initialization parameter (MEMORY_TARGET) and a maximum memory size initialization parameter (MEMORY_MAX_TARGET). Because the target memory initialization parameter is dynamic, you can change the target memory size at any time without restarting the database. The maximum memory size serves as an upper limit so that you do not accidentally set the target memory size too high. Because certain SGA components either cannot easily shrink or must remain at a minimum size, the database also prevents you from setting the target memory size too low.
289
Oracle Database 11g Memory Parameters
[Diagram: parameter hierarchy — MEMORY_MAX_TARGET caps MEMORY_TARGET, which covers SGA_TARGET (itself capped by SGA_MAX_SIZE) and PGA_AGGREGATE_TARGET; SGA_TARGET in turn covers SHARED_POOL_SIZE, DB_CACHE_SIZE, LARGE_POOL_SIZE, JAVA_POOL_SIZE, STREAMS_POOL_SIZE, and others such as DB_KEEP_CACHE_SIZE, DB_RECYCLE_CACHE_SIZE, DB_nK_CACHE_SIZE, LOG_BUFFER, and RESULT_CACHE_SIZE.]
Oracle Database 11g Memory Parameters The slide displays the memory initialization parameters hierarchy. Although you only have to set MEMORY_TARGET to trigger Automatic Memory Management, you still have the ability to set lower-bound values for various caches. So if the child parameters are set by the user, they will be the minimum values below which that component is not auto-tuned.
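A minimal command-line sketch of switching an instance to Automatic Memory Management, assuming an spfile is in use and that the 808M/600M sizes are placeholder values (MEMORY_MAX_TARGET is static, so a restart is required):

-- Run as SYSDBA; sizes are placeholders
ALTER SYSTEM SET MEMORY_MAX_TARGET=808M SCOPE=SPFILE;
ALTER SYSTEM SET MEMORY_TARGET=600M SCOPE=SPFILE;
-- Zero out the old targets so SGA and PGA have no user-set minimums
ALTER SYSTEM SET SGA_TARGET=0 SCOPE=SPFILE;
ALTER SYSTEM SET PGA_AGGREGATE_TARGET=0 SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP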
290
Automatic Memory Parameter Dependency
[Flowchart: how the settings of MEMORY_TARGET (MT), MEMORY_MAX_TARGET (MMT), SGA_TARGET (ST), PGA_AGGREGATE_TARGET (PAT), and SGA_MAX_SIZE (SMS) determine whether SGA and PGA are auto-tuned, as summarized in the notes below.]
Automatic Memory Parameter Dependency The slide illustrates the relationships among the various memory sizing parameters. If MEMORY_TARGET is set to a nonzero value: If SGA_TARGET and PGA_AGGREGATE_TARGET are set, they are considered to be minimum values for the sizes of SGA and PGA, respectively. MEMORY_TARGET can take values from SGA_TARGET + PGA_AGGREGATE_TARGET up to MEMORY_MAX_TARGET. If SGA_TARGET is set and PGA_AGGREGATE_TARGET is not set, both parameters are still auto-tuned; PGA_AGGREGATE_TARGET is initialized to a value of (MEMORY_TARGET - SGA_TARGET). If PGA_AGGREGATE_TARGET is set and SGA_TARGET is not set, both parameters are still auto-tuned; SGA_TARGET is initialized to min(MEMORY_TARGET - PGA_AGGREGATE_TARGET, SGA_MAX_SIZE (if set by the user)), and its subcomponents are auto-tuned. If neither is set, they are auto-tuned without minimum or default values, and the total server memory is distributed in a fixed ratio during initialization: 60% to the SGA and 40% to the PGA at startup. Both SGA and PGA can grow and shrink automatically.
291
Notes only page Automatic Memory Parameter Dependency (continued)
If MEMORY_TARGET is not set, or if it is explicitly set to 0 (default value is 0 for 11g): If SGA_TARGET is set, the system auto-tunes only the sizes of the subcomponents of the SGA. PGA is auto-tuned independently of whether it is explicitly set or not. However, the whole SGA (SGA_TARGET) and PGA (PGA_AGGREGATE_TARGET) are not auto-tuned (do not grow or shrink automatically). If neither SGA_TARGET nor PGA_AGGREGATE_TARGET is set, we follow the same policy as we have now: PGA is auto-tuned and the SGA is not auto-tuned, and parameters for some of the subcomponents must be set explicitly (for SGA_TARGET). If only MEMORY_MAX_TARGET is set, MEMORY_TARGET defaults to 0 in manual setup using the text initialization file. Auto-tuning defaults to 10g R2 behavior for SGA and PGA. If SGA_MAX_SIZE is not user set, it is set internally to MEMORY_MAX_TARGET if user sets MEMORY_MAX_TARGET (independent of SGA_TARGET being user set). In a text initialization parameter file, if you omit the line for MEMORY_MAX_TARGET and include a value for MEMORY_TARGET, the database automatically sets MEMORY_MAX_TARGET to the value of MEMORY_TARGET. If you omit the line for MEMORY_TARGET and include a value for MEMORY_MAX_TARGET, the MEMORY_TARGET parameter defaults to zero. After startup, you can then dynamically change MEMORY_TARGET to a nonzero value if it does not exceed the value of MEMORY_MAX_TARGET. Legend: In the slide, use the following list to translate abbreviations to parameter names: MT = MEMORY_TARGET MMT = MEMORY_MAX_TARGET ST = SGA_TARGET PAT = PGA_AGGREGATE_TARGET SMS = SGA_MAX_SIZE Notes only page
292
Enabling Automatic Memory Management
You can enable Automatic Memory Management by using Enterprise Manager, as shown in the slide. From the Database home page, click the Server tab. On the Server page, click the Memory Advisors link in the Database Configuration section. This takes you to the Memory Advisors page. On this page, you can click the Enable button to enable Automatic Memory Management. The value in the “Total Memory Size for Automatic Memory Management” field is set by default to the current SGA + PGA size. You can set it to anything more than this but less than the value in Maximum Memory Size. Note: On the Memory Advisors page, you can also specify the Maximum Memory Size. If you change this field, the database must be automatically restarted for your change to take effect.
293
Monitoring Automatic Memory Management
When Automatic Memory Management is enabled, you see a new graphical representation of the history of your memory size components in the Allocation History section of the Memory Parameters page. The green part in the first graphic represents the PGA, and the brownish-orange part is all of the SGA. The dark blue in the lower histogram is the Shared Pool size; light blue corresponds to the Buffer Cache. The change shown in the slide illustrates a possible repartitioning of memory after various demanding queries execute; as a result, the SGA and PGA components might shrink or grow. Note that when the SGA shrinks, its subcomponents shrink at around the same time. On this page, you can also access the memory target advisor by clicking the Advice button. This advisor gives you the possible DB time improvement for various total memory sizes. Note: V$MEMORY_TARGET_ADVICE displays the tuning advice for the MEMORY_TARGET initialization parameter.
294
Monitoring Automatic Memory Management
If you want to monitor the decisions made by Automatic Memory Management from the command line: V$MEMORY_DYNAMIC_COMPONENTS has the current status of all memory components. V$MEMORY_RESIZE_OPS has a circular history buffer of the last 800 completed memory resize requests. V$MEMORY_CURRENT_RESIZE_OPS has the memory resize operations currently in progress. All SGA and PGA equivalents are still in place for backward compatibility. Monitoring Automatic Memory Management (continued) The following views provide information about dynamic resize operations: V$MEMORY_DYNAMIC_COMPONENTS displays information about the current sizes of all dynamically tuned memory components, including the total sizes of the SGA and PGA. V$MEMORY_RESIZE_OPS displays information about the last 800 completed memory resize operations (both automatic and manual). This does not include in-progress operations. V$MEMORY_CURRENT_RESIZE_OPS displays information about the memory resize operations (both automatic and manual) that are currently in progress. V$SGA_CURRENT_RESIZE_OPS displays information about SGA resize operations that are currently in progress. An operation can be a grow or a shrink of a dynamic SGA component. V$SGA_RESIZE_OPS displays information about the last 800 completed SGA resize operations. This does not include operations currently in progress. V$SGA_DYNAMIC_COMPONENTS displays information about the dynamic components in SGA. This view summarizes information based on all completed SGA resize operations since startup. V$SGA_DYNAMIC_FREE_MEMORY displays information about the amount of SGA memory that is available for future dynamic SGA resize operations.
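For example, a query along these lines (an illustrative sketch) shows the current size of each auto-tuned component and the range it has covered since startup:
SELECT component,
       current_size/1024/1024 AS current_mb,
       min_size/1024/1024     AS min_mb,
       max_size/1024/1024     AS max_mb,
       oper_count
FROM   v$memory_dynamic_components
WHERE  current_size > 0
ORDER BY current_size DESC;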
295
DBCA and Automatic Memory Management
With Oracle Database 11g, DBCA has new options to accommodate Automatic Memory Management (AMM). Use the Memory tab of the Initialization Parameters page to set the initialization parameters that control how the database manages its memory usage. You can choose from two basic approaches to memory management: Typical: Requires very little configuration and allows the database to manage how it uses a percentage of your overall system memory. Select Typical to create a database with minimal configuration or user input. This option is sufficient for most environments and for DBAs who are inexperienced with advanced database creation procedures. Enter a value in megabytes in the Memory Size field. To use AMM, select the corresponding option in the Typical section of the page. Click Show Memory Distribution to see how much memory the DBCA assigns to both SGA and PGA when you do not select the AMM option. Custom (uses ASMM or not): Requires more configuration but provides you with more control over how the database uses available system memory. To allocate specific amounts of memory to the SGA and PGA, select Automatic. To customize how the SGA memory is distributed among the SGA memory structures (buffer cache, shared pool, and so on), select Manual and enter specific values for each SGA subcomponent. Review and modify these initialization parameters later in DBCA. Note: When you use DBUA or manual DB creation, the MEMORY_TARGET parameter defaults to 0.
296
Statistic Preferences: Overview
[Slide diagram: the statistic preference levels, statement level, table level (DBA_TAB_STAT_PREFS), schema level, database level, and global level, feeding the optimizer statistics gathering task and the gather_*_stats procedures. The DBA manages the preferences CASCADE, DEGREE, ESTIMATE_PERCENT, METHOD_OPT, NO_INVALIDATE, GRANULARITY, PUBLISH, INCREMENTAL, and STALE_PERCENT through the DBMS_STATS procedures set_table_prefs, set_schema_prefs, set_database_prefs, and set_global_prefs, along with set | get | delete and export | import operations.] Statistic Preferences: Overview The automated statistics-gathering feature was introduced in Oracle Database 10g, Release 1 to reduce the burden of maintaining optimizer statistics. However, there were cases where you had to disable it and run your own scripts instead. One reason was the lack of object-level control. Whenever you found a small subset of objects for which the default gather statistics options did not work well, you had to lock the statistics and analyze them separately by using your own options. For example, the feature that automatically tries to determine an adequate sample size (ESTIMATE_PERCENT=AUTO_SAMPLE_SIZE) does not work well against columns that contain data with very high frequency skews. The only way to get around this issue was to manually specify the sample size in your own script. The Statistic Preferences feature in Oracle Database 11g introduces flexibility so that you can rely more on the automated statistics-gathering feature to maintain the optimizer statistics when some objects require settings that differ from the database default. This feature allows you to associate statistics-gathering options that override the default behavior of the GATHER_*_STATS procedures and the automated Optimizer Statistics Gathering task at the object or schema level. As a DBA, you can use the DBMS_STATS package to manage the gathering options shown in the slide. For example: exec dbms_stats.set_table_prefs('SH','SALES','STALE_PERCENT','13');
297
Notes only page Statistic Preferences: Overview (continued)
Basically, you can set, get, delete, export, and import those preferences at the table, schema, database, and global levels. Global preferences are used for tables that do not have their own preferences, whereas database preferences are used to set preferences on all tables. The preference values that are specified in various ways take precedence from the outer circles to the inner ones (as shown in the slide). The last three highlighted options are new in Oracle Database 11g, Release 1: PUBLISH is used to decide whether to publish the statistics to the dictionary or to store them in a pending area first. STALE_PERCENT is used to determine the threshold level at which an object is considered to have stale statistics. The value is a percentage of rows modified since the last statistics gathering. The example changes the 10 percent default to 13 percent for SH.SALES only. INCREMENTAL is used to gather global statistics on partitioned tables in an incremental way. Note: You can view all the effective statistics preference settings for the relevant tables by using the DBA_TAB_STAT_PREFS view. Notes only page
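To check which value will actually be used for a given table, you can read the effective preference back; the following sketch reuses the SH.SALES example from the previous page:
SELECT dbms_stats.get_prefs('STALE_PERCENT','SH','SALES') AS stale_percent FROM dual;
SELECT owner, table_name, preference_name, preference_value
FROM   dba_tab_stat_prefs
WHERE  owner = 'SH' AND table_name = 'SALES';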
298
Setting Global Preferences with Enterprise Manager
It is possible to control global preference settings by using Enterprise Manager. You do so on the Manage Optimizer Statistics page, which you access from the Database home page by clicking the Server tab, then the Manage Optimizer Statistics link, and then the Global Statistics Gathering Options link. On the Global Statistics Gathering Options page, change the global preferences in the Gather Optimizer Statistics Default Options section. When finished, click the Apply button. Note: To change the statistics gathering options at the object level or schema level, click the Object Level Statistics Gathering Preferences link on the Manage Optimizer Statistics page.
299
Partitioned Tables and Incremental Statistics: Overview
[Slide diagram: two views of a table partitioned by quarter, from Q1 1970 through Q1 2007. With GRANULARITY=GLOBAL% & INCREMENTAL=FALSE, the global statistics are obtained by scanning every partition; with GRANULARITY=GLOBAL% & INCREMENTAL=TRUE, the global statistics are maintained by scanning only the partitions that have changed.] Partitioned Tables and Incremental Statistics: Overview For a partitioned table, the system maintains both the statistics on each partition and the overall statistics for the table. Generally, if the table is partitioned using a date range value, very few partitions go through data modifications (DML). For example, suppose we have a table that stores sales transactions. We partition the table on the sales date, with each partition containing the transactions for a quarter. Most of the DML activity happens on the partition that stores the transactions of the current quarter; the data in the other partitions remains unchanged. The system currently keeps track of DML monitoring information at the table and (sub)partition levels. Statistics are gathered only for those partitions (in the example in the slide, the partition for the current quarter) that have changed significantly (the current threshold is 10%) since the last statistics gathering. However, global statistics are gathered by scanning the entire table, which makes them very expensive on partitioned tables, especially when some partitions are stored on slow devices and are not modified often. Oracle Database 11g can expedite the gathering of certain global statistics, such as the number of distinct values. Instead of scanning the entire table in the traditional way, a new mechanism derives global statistics by scanning only the partitions that have changed and reusing the statistics gathered earlier for the partitions that are unchanged. In short, these global statistics can be maintained incrementally.
300
Partitioned Tables and Incremental Statistics: Overview (continued)
The DBMS_STATS package currently allows you to specify the granularity on a partitioned table. For example, you can specify auto, global, global and partition, all, partition, and subpartition. If the granularity specified includes GLOBAL and the table is marked as INCREMENTAL for its gathering options, the global statistics are gathered using the incremental mechanism. Moreover, statistics for changed partitions are gathered as well, whether you specified PARTITION in the granularity or not. Note: The new mechanism does not incrementally maintain histograms and density global statistics. Notes only page
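As a sketch of how this is typically enabled (using the SH.SALES table from earlier examples), you mark the table as incremental and then gather with a global granularity; the exact options you need may differ in your environment:
exec dbms_stats.set_table_prefs('SH','SALES','INCREMENTAL','TRUE');
exec dbms_stats.gather_table_stats('SH','SALES', granularity => 'GLOBAL AND PARTITION');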
301
Hash-Based Sampling for Column Statistics
Computing column statistics is the most expensive step in statistics gathering. The row-sampling technique gives inaccurate results with skewed data distribution. A new approximate counting technique is used when ESTIMATE_PERCENT is set to AUTO_SAMPLE_SIZE. You are encouraged to use AUTO_SAMPLE_SIZE. Otherwise, the old row sample technique is used. Hash-Based Sampling for Column Statistics For query optimization, it is essential to have a good estimate of the number of distinct values. By default, and without histograms, the optimizer uses the number of distinct values to evaluate the selectivity of a predicate of a column. The algorithm used in Oracle Database 10g computes the number of distinct values with a SQL statement counting the number of distinct values found on a sample of the underlying table. With Oracle Database 10g, you have two choices when gathering column statistics: 1. Use a small sample size, which leads to less accurate results but a short execution time. 2. Use a large sample or full scan, which leads to very accurate results but a very long execution time. In Oracle Database 11g, there is a new method for gathering column statistics that provides accuracy similar to a scan with the execution time of a small sample (1% to 5%). This new technique is used when you invoke a procedure from DBMS_STATS with the ESTIMATE_PERCENT gathering option set to AUTO_SAMPLE_SIZE, which is the default value. The row sampling–based algorithm is used for the collection of a number of distinct values if you specify any value other than AUTO_SAMPLE_SIZE. This preserves the old behavior when you specify sampling percentage.
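A minimal example of the recommended setting (AUTO_SAMPLE_SIZE is already the default, so the parameter is shown here only for clarity):
exec dbms_stats.gather_table_stats('SH','CUSTOMERS', estimate_percent => dbms_stats.auto_sample_size);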
302
Notes only page Hash-Based Sampling for Column Statistics (continued)
Note: With Oracle Database 11g, you are encouraged to use AUTO_SAMPLE_SIZE. The new evaluation mechanism fixes the following most encountered issues in Oracle Database 10g: The auto option stops too early and generates inaccurate statistics, and the user would specify a higher sample size than the one used by auto. The auto option stops too late and the performance is bad, and the user would specify a lower sample size than the one used by auto. Notes only page
303
Multicolumn Statistics: Overview
[Slide diagram: the VEHICLE table with its MAKE and MODEL columns, illustrating four steps: (1) without extended statistics, the optimizer assumes independence, S(MAKE Λ MODEL) = S(MAKE) x S(MODEL); (2) create the column group with select dbms_stats.create_extended_stats('jfv','vehicle','(make,model)') from dual; (3) gather statistics on it with exec dbms_stats.gather_table_stats('jfv','vehicle', method_opt=>'for all columns size 1 for columns (make,model) size 3'); (4) the column group appears in DBA_STAT_EXTENSIONS as a virtual column such as SYS_STUF3GLKIOP5F4B0BTTCFTMX0W, so the optimizer can use S(MAKE Λ MODEL) = S(MAKE,MODEL).] Multicolumn Statistics: Overview With Oracle Database 10g, the query optimizer takes into account correlation between columns when computing the selectivity of multiple predicates in the following limited cases: If all the columns of a conjunctive predicate match all the columns of a concatenated index key, and the predicates are equalities used in equijoins, then the optimizer uses the number of distinct keys (NDK) in the index for estimating selectivity, as 1/NDK. When DYNAMIC_SAMPLING is set to level 4, the query optimizer uses dynamic sampling to estimate the selectivity of complex predicates involving several columns from the same table. However, the sample size is very small and increases parsing time. As a result, the sample is likely to be statistically inaccurate and may cause more harm than good. In all other cases, the optimizer assumes that the values of the columns used in a complex predicate are independent of each other, and it estimates the selectivity of a conjunctive predicate by multiplying the selectivity of the individual predicates. This approach always results in an underestimation of the selectivity. To circumvent this issue, Oracle Database 11g allows you to collect, store, and use the following statistics to capture functional dependency between two or more columns (also called groups of columns): number of distinct values, number of nulls, frequency histograms, and density.
304
Notes only page Multicolumn Statistics: Overview (continued)
For example, consider a VEHICLE table in which you store information about cars. Columns MAKE and MODEL are highly correlated in that MODEL determines MAKE. This is a strong dependency, and both columns should be considered by the optimizer as highly correlated. You can signal that correlation to the optimizer by using the CREATE_EXTENDED_STATS function as shown in the example in the slide, and then compute the statistics for all columns (including the ones for the correlated groups that you created). Notes The CREATE_EXTENDED_STATS function returns a virtual hidden column name such as SYS_STUW_5RHLX443AN1ZCLPE_GLE4. Based on the example in the slide, the name can be determined by using the following SQL: select dbms_stats.show_extended_stats_name('jfv','vehicle','(make,model)') from dual After creation, you can retrieve the statistics extensions by using the ALL|DBA|USER_STAT_EXTENSIONS views. Notes only page
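After creation, a quick way to confirm that the extension exists is to query the dictionary; this sketch reuses the JFV.VEHICLE example:
SELECT extension_name, extension
FROM   dba_stat_extensions
WHERE  owner = 'JFV' AND table_name = 'VEHICLE';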
305
Expression Statistics: Overview
[Slide diagram: the VEHICLE table and its MODEL column. Creating a function-based index with CREATE INDEX upperidx ON VEHICLE(upper(MODEL)) is still possible, but the recommended approach is to create expression statistics with select dbms_stats.create_extended_stats('jfv','vehicle','(upper(model))') from dual; which records a virtual column such as SYS_STU3FOQ$BDH0S_14NGXFJ3TQ50 in DBA_STAT_EXTENSIONS, instead of relying on the fixed assumption S(upper(MODEL))=0.01.] Expression Statistics: Overview Predicates involving expressions on columns are a significant issue for the query optimizer. When computing selectivity on predicates of the form function(Column) = constant, the optimizer assumes a static selectivity value of 1 percent. Obviously, this approach is wrong and causes the optimizer to produce suboptimal plans. The query optimizer has been extended to better handle such predicates in limited cases where functions preserve the data distribution characteristics of the column and thus allow the optimizer to use the column statistics. An example of such a function is TO_NUMBER. Further enhancements have been made to evaluate built-in functions during query optimization to derive better selectivity using dynamic sampling. Finally, the optimizer collects statistics on virtual columns created to support function-based indexes. However, these solutions are either limited to a certain class of functions or work only for expressions used to create function-based indexes. By using expression statistics in Oracle Database 11g, you can use a more general solution that includes arbitrary user-defined functions and does not depend on the presence of function-based indexes. As shown in the example in the slide, this feature relies on the virtual column infrastructure to create statistics on expressions of columns. You then gather the statistics with: exec dbms_stats.gather_table_stats('jfv','vehicle', method_opt=>'for all columns size 1 for columns (upper(model)) size 3');
306
Deferred Statistics Publishing: Overview
[Slide diagram: the statistics publishing flow between a PROD and a TEST database. Gathering with PUBLISH=FALSE and GATHER_*_STATS writes to the pending statistics area (visible in DBA_TAB_PENDING_STATS) instead of the dictionary statistics. From there you can test with OPTIMIZER_USE_PENDING_STATISTICS=TRUE (versus the default FALSE), move the statistics to a test system with EXPORT_PENDING_STATS, expdp/impdp, and IMPORT_TABLE_STATS, and finally make them current with PUBLISH_PENDING_STATS.] Deferred Statistics Publishing: Overview By default, the statistics gathering operation automatically stores the new statistics in the data dictionary each time it completes the iteration for one object (table, partition, subpartition, or index). The optimizer sees them as soon as they are written to the data dictionary, and these new statistics are called current statistics. This automatic publishing can be frustrating to the DBA, who is never sure of the aftermath of the new statistics until days or even weeks later. In addition, the statistics used by the optimizer can be inconsistent if, for example, table statistics are published before the statistics of its indexes, partitions, or subpartitions. To avoid these potential issues, in Oracle Database 11g, Release 1, you can separate the gathering step from the publication step for optimizer statistics. There are two benefits in separating the two steps: It supports the statistics gathering operation as an atomic transaction: the statistics of all tables and dependent objects (indexes, partitions, subpartitions) in a schema are published at the same time. This new model has two beneficial properties: the optimizer always has a consistent view of the statistics, and if for some reason the gathering step fails during the gathering process, it can resume from where it left off when it is restarted by using the DBMS_STATS.RESUME_GATHER_STATS procedure. It allows DBAs to validate the new statistics by running all or part of the workload using the newly gathered statistics on a test system and, when satisfied with the test results, to proceed to the publishing step to make them current in the production environment.
307
Notes only page Deferred Statistics Publishing: Overview (continued)
When you set the PUBLISH gathering option to FALSE, gathered statistics are stored in the pending statistics tables instead of becoming current. These pending statistics are accessible from a number of views: {ALL|DBA|USER}_{TAB|COL|IND|TAB_HISTGRM}_PENDING_STATS. To test the pending statistics, you have two options: Transfer the pending statistics to your own statistics table by using the new DBMS_STATS.EXPORT_PENDING_STATS procedure, export your statistics table to a test system from where you can import it back, and then render the pending statistics current by using the DBMS_STATS.IMPORT_TABLE_STATS procedure. Enable session-pending statistics by setting the session initialization parameter OPTIMIZER_USE_PENDING_STATISTICS to TRUE. By default, this new initialization parameter is set to FALSE, which means that in your session, SQL statements are parsed by using the current optimizer statistics. By setting it to TRUE in your session, you switch to the pending statistics instead. When you have tested the pending statistics and are satisfied with them, you can publish them as current in your production environment by using the new DBMS_STATS.PUBLISH_PENDING_STATS procedure. Note: For more information about the DBMS_STATS package, see the PL/SQL Packages and Types Reference. Notes only page
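While the statistics are pending, you can inspect them from the dictionary; for example (a sketch using the SH schema that appears in the next example):
SELECT table_name, partition_name, num_rows, last_analyzed
FROM   dba_tab_pending_stats
WHERE  owner = 'SH';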
308
Deferred Statistics Publishing: Example
The slide shows the five steps of the example:
1. exec dbms_stats.set_table_prefs('SH','CUSTOMERS','PUBLISH','false');
2. exec dbms_stats.gather_table_stats('SH','CUSTOMERS');
3. alter session set optimizer_use_pending_statistics = true;
4. Execute your workload from the same session.
5. exec dbms_stats.publish_pending_stats('SH','CUSTOMERS');
Deferred Statistics Publishing: Example 1. Use the SET_TABLE_PREFS procedure to set the PUBLISH option to FALSE. This prevents the next statistics gathering operation from automatically publishing statistics as current. According to the first statement, this is true for the SH.CUSTOMERS table only. 2. Gather statistics for the SH.CUSTOMERS table in the pending area of the dictionary. 3. Test the new set of pending statistics from your session by setting OPTIMIZER_USE_PENDING_STATISTICS to TRUE. 4. Issue queries against SH.CUSTOMERS. 5. If you are satisfied with the test results, use the PUBLISH_PENDING_STATS procedure to render the pending statistics for SH.CUSTOMERS current. Note: To analyze the differences between the pending statistics and the current ones, you can export the pending statistics to your own statistics table and then use the new DBMS_STATS.DIFF_TABLE_STATS function.
309
Summary In this lesson, you should have learned how to:
Use the new features of ADDM Use Automatic Memory Management Use statistics enhancements
310
Practice 8: Overview This practice covers the following topics:
Using Automatic Memory Management Using deferred optimizer statistics
311
Partitioning and Storage-Related Enhancements
312
Objectives After completing this lesson, you should be able to:
Implement the new partitioning methods Employ data compression Create a SQL Access Advisor analysis session using Enterprise Manager Create a SQL Access Advisor analysis session using PL/SQL Set up a SQL Access Advisor analysis to get partition recommendations
313
Oracle Partitioning
The slide traces the core functionality, performance, and manageability features added in each release:
Oracle8: Range partitioning; global range indexes; static partition pruning; basic maintenance operations (add, drop, exchange)
Oracle8i: Hash and composite range-hash partitioning; partitionwise joins; dynamic pruning; merge operation
Oracle9i: List partitioning; global index maintenance
Oracle9i R2: Composite range-list partitioning; fast partition split
Oracle10g: Global hash indexes; local index maintenance
Oracle10g R2: 1M partitions per table; multidimensional pruning; fast drop table
Oracle Database 11g: More composite choices; REF partitioning; virtual column partitioning; interval partitioning; Partition Advisor
Oracle Partitioning The slide summarizes the ten years of partitioning development at Oracle. Note: REF partitioning enables pruning and partitionwise joins against child tables. While performance seems to be the most visible improvement, do not forget about the rest. Partitioning must address all business-relevant areas of performance, manageability, and availability.
314
Partitioning Enhancements
Interval partitioning System partitioning Composite partitioning enhancements Virtual column-based partitioning Reference partitioning Partitioning Enhancements Partitioning is an important tool for managing large databases. Partitioning allows the DBA to employ a “divide and conquer” methodology for managing database tables, especially as those tables grow. Partitioned tables allow a database to scale for very large data sets while maintaining consistent performance, without unduly impacting administrative or hardware resources. Partitioning enables faster data access within an Oracle database. Whether a database has 10 GB or 10 TB of data, partitioning can speed up data access by orders of magnitude. With the introduction of Oracle Database 11g, the DBA will find a useful assortment of partitioning enhancements. These enhancements include: Addition of interval partitioning Addition of system partitioning Composite partitioning enhancements Addition of virtual column-based partitioning Addition of reference partitioning
315
Interval Partitioning
Interval partitioning is an extension of range partitioning. Partitions of a specified interval are created when inserted data exceeds all of the range partitions. At least one range partition must be created. Interval partitioning automates the creation of range partitions. Interval Partitioning Before the introduction of interval partitioning, the DBA was required to explicitly define the range of values for each partition. The problem is explicitly defining the bounds for each partition does not scale as the number of partitions grow. Interval partitioning is an extension of range partitioning, which instructs the database to automatically create partitions of a specified interval when data inserted into the table exceeds all of the range partitions. You must specify at least one range partition. The range partitioning key value determines the high value of the range partitions, which is called the transition point, and the database creates interval partitions for data beyond that transition point. Interval partitioning fully automates the creation of range partitions. Managing the creation of new partitions can be a cumbersome and highly repetitive task. This is especially true for predictable additions of partitions covering small ranges, such as adding new daily partitions. Interval partitioning automates this operation by creating partitions on demand. When using interval partitioning, consider the following restrictions: You can specify only one partitioning key column, and it must be of NUMBER or DATE type. Interval partitioning is not supported for index-organized tables. You cannot create a domain index on an interval-partitioned table.
316
Interval Partitioning: Example
CREATE TABLE SH.SALES_INTERVAL
PARTITION BY RANGE (time_id)
INTERVAL (NUMTOYMINTERVAL(1,'month'))
STORE IN (tbs1,tbs2,tbs3,tbs4)
( PARTITION P1 values less than (TO_DATE(' ','dd-mm-yyyy')),
  PARTITION P2 values less than (TO_DATE(' ','dd-mm-yyyy')),
  PARTITION P3 values less than (TO_DATE(' ','dd-mm-yyyy')))
AS SELECT * FROM SH.SALES WHERE TIME_ID < TO_DATE(' ','dd-mm-yyyy');
[Slide diagram: partitions P1, P2, and P3 form the range section; partitions Pi1 … Pin are created automatically in the interval section when you insert data beyond the transition point.] Interval Partitioning: Example Consider the example in the slide, which illustrates the creation of an interval-partitioned table. The CREATE TABLE statement specifies three range partitions with varying widths; this portion of the table is range-partitioned. It also specifies that above the transition point (the high bound of the last range partition), partitions are created with a width of one month. These partitions are interval-partitioned. Partition Pi1 is automatically created using this information when a row with a value corresponding to January 2004 is inserted into the table. The high bound of partition P3 represents the transition point. P3 and all partitions below it (P1 and P2 in this example) are in the range section, while all partitions above it fall into the interval section. The only argument to the INTERVAL clause is a constant of the interval type. Currently, you can specify only one partitioning key column, and it must be of DATE or NUMBER type. You can use the optional STORE IN clause of the INTERVAL clause to specify one or more tablespaces into which the database stores interval partition data in a round-robin fashion.
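After a few inserts beyond the transition point, you can check which interval partitions the database has generated; a query sketch, assuming the SH.SALES_INTERVAL table above and a session connected as SH:
SELECT partition_name, high_value, interval
FROM   user_tab_partitions
WHERE  table_name = 'SALES_INTERVAL'
ORDER BY partition_position;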
317
Moving the Transition Point: Example
[Slide diagram: the ORDERS_INTERVAL table before and after the merge. Initially, the range partition PREVIOUS is followed by interval partitions SYS_Px, SYS_Py, SYS_Pz, and SYS_Pt, created automatically as statements such as INSERT INTO orders_interval (…); add data, plus interval partitions that are not yet materialized; the transition point sits at the high bound of PREVIOUS. After the MERGE operation shown below, SYS_Py and SYS_Pz are combined into a single partition and the transition point moves up, so the range section contains PREVIOUS, SYS_Px, and the merged partition.]
alter table orders_interval
  merge partitions for(TO_DATE(' ','dd-mm-yyyy')), for(TO_DATE(' ','dd-mm-yyyy'))
  into partition sys_p5z;
Moving the Transition Point: Example The graphic in the slide shows a typical Information Lifecycle Management (ILM) scenario where, after one year of automated partition creation, you merge two of the created partitions (SYS_Py and SYS_Pz in the example) to move the transition point. You can then move the resulting partition to different storage for ILM purposes. The example assumes that you created a table called ORDERS_INTERVAL with initially one range partition called PREVIOUS, which holds all older orders, and that the interval is defined to be one month. During 2007 and 2008, orders are inserted and four interval partitions are created; they are shown in the graphic and are named automatically according to system rules. You then decide to merge the last two partitions of 2007 by using the ALTER TABLE statement shown in the slide. You must merge two adjacent partitions. Note the new extended partition syntax that can be used to designate a partition without knowing its name. The syntax uses an expression that must represent a possible value for the partition in question. This syntax works in all cases where you have to reference a partition, whether it is range, list, interval, or hash partitioned, and it supports all operations such as drop, merge, split, and so on. As a result of your MERGE operation, you can see that the transition point has moved. The bottom part of the graphic shows the new range section, which now contains three partitions. Note: You can change the interval of an interval-partitioned table; the existing intervals remain unaffected.
318
System Partitioning System partitioning:
Enables application-controlled partitioning for selected tables Provides the benefits of partitioning but the partitioning and data placement are controlled by the application Does not employ partitioning keys like other partitioning methods Does not support partition pruning in the traditional sense System Partitioning System partitioning enables application-controlled partitioning for arbitrary tables. This is mainly useful when you develop your own partitioned domain indexes. The database simply provides the ability to break down a table into meaningless partitions. All other aspects of partitioning are controlled by the application. System partitioning provides the well-known benefits of partitioning (scalability, availability, and manageability), but the partitioning and actual data placement are controlled by the application. The most fundamental difference between system partitioning and other methods is that system partitioning does not have any partitioning keys. Consequently, the distribution or mapping of the rows to a particular partition is not implicit. Instead, the user specifies the partition to which a row maps by using partition-extended syntax when inserting a row. Because system-partitioned tables do not have a partitioning key, the usual performance benefits of partitioned tables are not available for system-partitioned tables. Specifically, there is no support for traditional partition pruning, partitionwise joins, and so on. Partition pruning is achieved by accessing the same partitions in the system-partitioned tables as those that were accessed in the base table. System-partitioned tables provide the manageability advantages of equipartitioning. For example, a nested table can be created as a system-partitioned table that has the same number of partitions as the base table. A domain index can be backed up by a system-partitioned table that has the same number of partitions as the base table.
319
System Partitioning: Example
CREATE TABLE systab (c1 integer, c2 integer) PARTITION BY SYSTEM ( PARTITION p1 TABLESPACE tbs_1, PARTITION p2 TABLESPACE tbs_2, PARTITION p3 TABLESPACE tbs_3, PARTITION p4 TABLESPACE tbs_4 ); INSERT INTO systab PARTITION (p1) VALUES (4,5); INSERT INTO systab PARTITION (p2) VALUES (150,2); alter table systab merge partitions p1,p2 into partition p1; System Partitioning: Example The syntax in the slide example creates a table with four partitions. Each partition can have different physical attributes. INSERT and MERGE statements must use partition-extended syntax to identify a particular partition that a row should go into. For example, the value (4,5) can be inserted into any one of the four partitions given in the example. Deletes and updates do not require the partition-extended syntax. However, because there is no partition pruning, if the partition-extended syntax is omitted, the entire table is scanned to execute the operation. Again, this example highlights the fact that there is no implicit mapping from rows to any partition.
320
System Partitioning: Guidelines
The following operations are supported for system-partitioned tables: Partition maintenance operations and other DDL operations Creation of local indexes Creation of local bitmapped indexes Creation of global indexes All DML operations INSERT SELECT with partition-extended syntax: INSERT INTO <table_name> PARTITION(<partition-name>) <subquery> System Partitioning: Guidelines The following operations are supported for system-partitioned tables: Partition maintenance operations and other DDLs (see exceptions below) Creation of local indexes Creation of local bitmapped indexes Creation of global indexes All DML operations INSERT SELECT with partition-extended syntax. Because of the peculiar requirements of system partitioning, the following operations are not supported for system partitioning: Unique local indexes are not supported because they require a partitioning key. CREATE TABLE AS SELECT is not supported because there is no partitioning method. It is not possible to distribute rows to partitions. Instead, you should first create the table and then insert rows into each partition. SPLIT PARTITION operations
321
Virtual Column–Based Partitioning
Virtual column values are derived by the evaluation of a function or expression. Virtual columns can be defined within a CREATE or ALTER table operation. Virtual column values are not physically stored in the table row on disk, but are evaluated on demand. Virtual columns can be indexed, and used in queries, DML, and DDL statements like other table column types. Tables and indexes can be partitioned on a virtual column and even statistics can be gathered upon them. CREATE TABLE employees (employee_id number(6) not null, … total_compensation as (salary *( 1+commission_pct)) Virtual Column–Based Partitioning Columns of a table whose values are derived by computation of a function or an expression are known as virtual columns. These columns can be specified during a CREATE or ALTER table operation. Virtual columns share the same SQL namespace as other real table columns and conform to the data type of the underlying expression that describes it. These columns can be used in queries like any other table columns, thereby providing a simple, elegant, and consistent mechanism of accessing expressions in a SQL statement. The values for virtual columns are not physically stored in the table row on disk, rather they are evaluated on demand. The functions or expressions describing the virtual columns should be deterministic and pure, meaning the same set of input values should return the same output values. Virtual columns can be used like any other table columns. They can be indexed, and used in queries, DML, and DDL statements. Tables and indexes can be partitioned on a virtual column and even statistics can be gathered upon them. You can use virtual column partitioning to partition key columns defined on virtual columns of a table. Frequently, business requirements to logically partition objects do not match existing columns in a one-to-one manner. With the introduction of Oracle Database 11g, partitioning has been enhanced to allow a partitioning strategy defined on virtual columns, thus enabling a more comprehensive match of the business requirements.
322
Virtual Column–Based Partitioning: Example
CREATE TABLE employees
(employee_id    number(6) not null,
 first_name     varchar2(30),
 last_name      varchar2(40) not null,
 email          varchar2(25),
 phone_number   varchar2(20),
 hire_date      date not null,
 job_id         varchar2(10) not null,
 salary         number(8,2),
 commission_pct number(2,2),
 manager_id     number(6),
 department_id  number(4),
 total_compensation as (salary *( 1+commission_pct))
)
PARTITION BY RANGE (total_compensation)
( PARTITION p1 VALUES LESS THAN (50000),
  PARTITION p2 VALUES LESS THAN (100000),
  PARTITION p3 VALUES LESS THAN (150000),
  PARTITION p4 VALUES LESS THAN (MAXVALUE)
);
Virtual Column-Based Partitioning: Example Consider the example in the slide. The EMPLOYEES table is created using the standard CREATE TABLE syntax. The total_compensation column is a virtual column calculated by multiplying salary by commission_pct plus one. The PARTITION BY RANGE clause then declares total_compensation (a virtual column) to be the partitioning key of the EMPLOYEES table. Partition pruning takes place for virtual column partition keys when the predicates on the partitioning key are of the following types: equality or LIKE, in-lists, range predicates, and partition-extended names. Given a join operation between two tables, the optimizer recognizes when a partitionwise join (full or partial) is applicable, decides whether to use it, and annotates the join properly when it decides to use it. This applies to both serial and parallel cases. In order to recognize full partitionwise joins, the optimizer relies on the definition of equipartitioning of two objects; this definition includes the equivalence of the virtual expression on which the tables were partitioned.
323
Reference Partitioning
A table can now be partitioned based on the partitioning method of a table referenced in its referential constraint. The partitioning key is resolved through an existing parent/child relationship. The partitioning key is enforced by active primary key or foreign key constraints. Tables with a parent/child relationship can be equipartitioned by inheriting the partitioning key from the parent table without duplicating the key columns. Partitions are automatically maintained. Reference Partitioning Reference partitioning provides the ability to partition a table based on the partitioning scheme of the table referenced in its referential constraint. The partitioning key is resolved through an existing parent/child relationship, which is enforced by active primary key or foreign key constraints. The benefit of this is that tables with a parent/child relationship can be logically equipartitioned by inheriting the partitioning key from the parent table without duplicating the key columns. The logical dependency also automatically cascades partition maintenance operations, making application development easier and less error prone.
324
Reference Partitioning: Benefit
[Slide diagram: two versions of the ORDERS and ORDER_ITEMS tables. Without reference partitioning, both tables carry the ORDER_DATE column so that each can be range-partitioned on ORDER_DATE, with the primary key (ORDER_ID) in ORDERS and the foreign key (ORDER_ID) in ORDER_ITEMS, which means redundant storage and maintenance of ORDER_DATE. With reference partitioning, only ORDERS carries ORDER_DATE, and ORDER_ITEMS inherits the partition key through the PK/FK relationship.] Reference Partitioning: Benefit As illustrated in the slide, you can see the benefit of using reference partitioning. The left part of the graphic shows the situation where you have two tables, ORDERS and ORDER_ITEMS, that are equipartitioned on the ORDER_DATE column. In that case, both tables need to define the ORDER_DATE column. However, defining ORDER_DATE in the ORDER_ITEMS table is redundant because there is a primary key/foreign key relationship between the two tables. The right part of the graphic shows the situation where you use reference partitioning. This time, you no longer need to define the ORDER_DATE column in the ORDER_ITEMS table; its partition key is automatically inherited from the primary key/foreign key relationship. When used for pruning and partitionwise joins, reference partitioning has the benefit that query predicates can be different and partitionwise joins still work, for example, partitioning on ORDER_DATE and searching on ORDER_ID. In previous releases, both the partitioning and the predicates had to be identical for a partitionwise join to work. Note: This partitioning method can be useful for nested table partitioning.
325
Reference Partitioning: Example
CREATE TABLE orders ( order_id NUMBER(12) , order_date DATE, order_mode VARCHAR2(8), customer_id NUMBER(6), order_status NUMBER(2) , order_total NUMBER(8,2), sales_rep_id NUMBER(6) , promotion_id NUMBER(6), CONSTRAINT orders_pk PRIMARY KEY(order_id) ) PARTITION BY RANGE(order_date) (PARTITION Q105 VALUES LESS THAN (TO_DATE(' ','DD-MM-YYYY')), PARTITION Q205 VALUES LESS THAN (TO_DATE(' ','DD-MM-YYYY')), PARTITION Q305 VALUES LESS THAN (TO_DATE(' ','DD-MM-YYYY')), PARTITION Q405 VALUES LESS THAN (TO_DATE(' ','DD-MM-YYYY'))); CREATE TABLE order_items ( order_id NUMBER(12) NOT NULL, line_item_id NUMBER(3) NOT NULL, product_id NUMBER(6) NOT NULL, unit_price NUMBER(8,2), quantity NUMBER(8), CONSTRAINT order_items_fk FOREIGN KEY(order_id) REFERENCES orders(order_id) ) PARTITION BY REFERENCE(order_items_fk); Reference Partitioning: Example The example in the slide creates two tables: ORDERS: Range-partitioned table partitioned on order_date. It is created with four partitions, Q105, Q205, Q305, and Q405. ORDER_ITEMS: Reference-partitioned child table: This table is created with four partitions—Q105, Q205, Q305, and Q405—with each containing rows corresponding to ORDERS in the respective parent partition. If partition descriptors are provided, the number of partitions described must be exactly equal to the number of partitions or subpartitions in the referenced table. If the parent table is a composite-partitioned table, then the table will have one partition for each subpartition of its parent. Partition bounds cannot be specified for the partitions of a reference-partitioned table. The partitions of a reference-partitioned table can be named unless there is a conflict with inherited names. In this case, the partition will have a system-generated name. Partitions of a reference-partitioned table will collocate with the corresponding partition of the parent table, if no explicit tablespace is specified. As with other partitioned tables, you can specify object-level default attributes, and partition descriptors that override object-level defaults. It is not possible to disable the foreign key constraint of a reference-partitioned table. It is not permitted to add or drop partitions of a reference-partitioned table. However, performing partition maintenance operations on the parent table is automatically cascaded to the child table.
326
Composite Partitioning Enhancements
[Slide diagram: a matrix of composite partitioning methods. With a range, list, or interval top level, the subpartitions (SP1 through SP4) can themselves be partitioned by list, range, or hash, giving range-range, list-list, list-hash, list-range, interval-range, interval-list, and interval-hash in addition to the existing range-hash and range-list methods.] Composite Partitioning Enhancements Before the release of Oracle Database 11g, the only composite partitioning methods supported were range-list and range-hash. With this new release, list partitioning can be a top-level partitioning method for composite partitioned tables, giving us the list-list, list-hash, list-range, and range-range composite methods. With the introduction of interval partitioning, interval-range, interval-list, and interval-hash are now supported composite partitioning methods. Range-Range Partitioning Composite range-range partitioning enables logical range partitioning along two dimensions; for example, range partition by order_date and range subpartition by shipping_date. List-Range Partitioning Composite list-range partitioning enables logical range subpartitioning within a given list partitioning strategy; for example, list partition by country_id and range subpartition by order_date. List-Hash Partitioning Composite list-hash partitioning enables hash subpartitioning of a list-partitioned object; for example, to enable partitionwise joins. List-List Partitioning Composite list-list partitioning enables logical list partitioning along two dimensions; for example, list partition by country_id and list subpartition by sales_channel.
327
Range-Range Partitioning: Example
CREATE TABLE sales ( prod_id NUMBER(6) NOT NULL, cust_id NUMBER NOT NULL, time_id DATE NOT NULL, channel_id char(1) NOT NULL, promo_id NUMBER (6) NOT NULL, quantity_sold NUMBER(3) NOT NULL, amount_sold NUMBER(10,2) NOT NULL ) PARTITION BY RANGE (time_id) SUBPARTITION BY RANGE (cust_id) SUBPARTITION TEMPLATE ( SUBPARTITION sp1 VALUES LESS THAN (50000), SUBPARTITION sp2 VALUES LESS THAN (100000), SUBPARTITION sp3 VALUES LESS THAN (150000), SUBPARTITION sp4 VALUES LESS THAN (MAXVALUE) ) ( PARTITION VALUES LESS THAN (TO_DATE(' ','DD-MM-YYYY')), PARTITION VALUES LESS THAN (TO_DATE(' ','DD-MM-YYYY')), PARTITION VALUES LESS THAN (TO_DATE(' ','DD-MM-YYYY')), PARTITION VALUES LESS THAN (TO_DATE(' ','DD-MM-YYYY')) ); Composite Range-Range Partitioning: Example Composite range-range partitioning enables logical range partitioning along two dimensions. In the example in the slide, the SALES table is created and range-partitioned on time_id. Using a subpartition template, the SALES table is subpartitioned by range using cust_id as the subpartition key. Because of the template, all partitions have the same number of subpartitions with the same bounds as defined by the template. If no template is specified, a single default partition bound by MAXVALUE (range) or DEFAULT value (list) is created. Although the example illustrates the range-range methodology, the other new composite partitioning methods use similar syntax and statement structure. All of the composite partitioning methods fully support the existing partition pruning methods for queries involving predicates on the subpartitioning key.
328
Table Compression: Overview
Oracle Database 11g extends compression for OLTP data. Support for conventional DML operations (INSERT, UPDATE, DELETE) New algorithm significantly reduces write overhead. Batched compression ensures no impact for most OLTP transactions. No impact on reads Reads may actually see improved performance due to fewer I/Os and enhanced memory efficiency. Table Compression: Overview The Oracle database was the pioneer in terms of compression technology for databases with the introduction of table compression for bulk load operations in Oracle9i. Using this feature, you could compress data at the time of performing bulk load using operations such as direct loads, or Create Table As Select (CTAS). However, until now, compression was not available for regular data manipulation operations such as INSERT, UPDATE, and DELETE. Oracle Database 11g extends the compression technology to support these operations as well. Consequently, compression in Oracle Database 11g can be used for all kinds of workload, be it online transaction processing (OLTP) or data warehousing. It is important to mention that table compression enhancements introduced in Oracle database 11g are not just incremental changes. An enormous amount of work has gone into making sure that the new compression technology has negligible impact on updates because any noticeable write time penalty due to compression will not be acceptable in an OLTP environment. As a result, compression technology in Oracle Database 11g is very efficient and could reduce the space consumption by 50–75%. And while you do that, not only your write performance does not degrade, but also your read performance or queries improve. This is because unlike desktop-based compression techniques where you have to wait for data to be uncompressed, Oracle technology reads the compressed data (less fetches needed) directly and does not require any uncompress operation. Note: Compression technology is completely application transparent. This means that you can use this technology with any homegrown or packaged application such as SAP, Siebel, EBS, and so on.
329
Table Compression Concepts
[Slide diagram: the life cycle of a data block in a compressed table, showing the block header, the PCTFREE limit, free space, uncompressed data, and compressed data. Inserts are initially uncompressed; reaching PCTFREE triggers compression; further inserts are again uncompressed until PCTFREE is reached and compression is triggered again.] Table Compression Concepts The slide shows the evolution of a data block that is part of a compressed table; read it from left to right. At the start, the block is empty and available for inserts. When you start inserting into this block, data is stored in an uncompressed format (as for uncompressed tables). However, as soon as you reach the PCTFREE limit of that block, the data is automatically compressed, potentially reducing the space it originally occupied. This allows new uncompressed inserts to take place in the same block until PCTFREE is reached again, at which point compression is triggered once more to reduce space consumption in the block. Note: Compression eliminates holes created by deletions and maximizes contiguous free space in blocks.
330
Using Table Compression
Requires a database compatibility level of 11.1 or greater. New syntax extends the COMPRESS keyword: COMPRESS [FOR {ALL | DIRECT_LOAD} OPERATIONS]. FOR DIRECT_LOAD is the default and refers to the bulk load operations of prior releases; FOR ALL OPERATIONS covers OLTP as well as direct loads. Enable compression for a new table: CREATE TABLE t1 COMPRESS FOR ALL OPERATIONS; Enable compression on an existing table (this does not trigger compression of existing rows): ALTER TABLE t2 COMPRESS FOR ALL OPERATIONS; Using Table Compression To use the new compression algorithm, you must flag your table with the COMPRESS FOR ALL OPERATIONS clause. You can do so at table creation or after creation, as illustrated in the examples in the slide. If you use the COMPRESS clause without specifying any FOR option, or if you use the COMPRESS FOR DIRECT_LOAD OPERATIONS clause, you fall back to the compression mechanism that was available in earlier releases. You can also enable compression at the partition or tablespace level. For example, you can use the DEFAULT storage clause of the CREATE TABLESPACE command to optionally specify a COMPRESS FOR clause. Note: You can view the compression flags for your tables by using the COMPRESS and COMPRESS_FOR columns in views such as DBA_TABLES and DBA_TAB_PARTITIONS.
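As a quick check (a sketch using the t1 and t2 tables from the slide), the dictionary shows which tables are flagged and for which kinds of operations:
SELECT table_name, compression, compress_for
FROM   user_tables
WHERE  table_name IN ('T1','T2');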
331
SQL Access Advisor: Overview
[Slide diagram: a DBA asking “What partitions, indexes, and MVs do I need to optimize my entire workload?”; SQL Access Advisor takes the workload as input and provides the solution: no expertise required, it is a component of the CBO, and it provides an implementation script.] SQL Access Advisor: Overview Defining appropriate access structures to optimize SQL queries has always been a concern for an Oracle DBA. As a result, many papers and scripts have been written, and high-end tools developed, to address the matter. In addition, with the development of partitioning and materialized view technology, deciding on access structures has become even more complex. As part of the manageability improvements in Oracle Database 10g and 11g, SQL Access Advisor has been introduced to address this very critical need. SQL Access Advisor identifies and helps resolve performance problems relating to the execution of SQL statements by recommending which indexes, materialized views, materialized view logs, or partitions to create, drop, or retain. It can be run from Database Control or from the command line by using PL/SQL procedures. SQL Access Advisor takes an actual workload as input, or the advisor can derive a hypothetical workload from the schema. It then recommends the access structures for a faster execution path. It provides the following advantages: Does not require you to have expert knowledge. Bases decision making on rules that actually reside in the cost-based optimizer. Is synchronized with the optimizer and Oracle database enhancements. Is a single advisor covering all aspects of SQL access methods. Provides simple, user-friendly GUI wizards. Generates scripts for implementation of recommendations.
332
SQL Access Advisor: Usage Model
[Slide diagram: workload sources (SQL cache, hypothetical workload, SQL Tuning Set) pass through filter options into SQL Access Advisor, which produces recommendations for indexes, materialized views, materialized view logs, and partitioned objects.] SQL Access Advisor: Usage Model SQL Access Advisor takes as input a workload that can be derived from multiple sources: the SQL cache, to take the current content of V$SQL; a hypothetical workload, generated from your dimensional model, which is interesting when your system is being initially designed; and SQL Tuning Sets from the workload repository. SQL Access Advisor also provides powerful workload filters that you can use to target the tuning. For example, a user can specify that the advisor should look at only the 30 most resource-intensive statements in the workload, based on optimizer cost. For the given workload, the advisor then does the following: Simultaneously considers index solutions, materialized view solutions, partition solutions, or combinations of all three. Considers storage for creation and maintenance costs. Does not generate drop recommendations for partial workloads. Optimizes materialized views for maximum query rewrite usage and fast refresh. Recommends materialized view logs for fast refresh. Recommends partitioning for tables, indexes, and materialized views. Combines similar indexes into a single index. Generates recommendations that support multiple workload queries.
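Although the following pages walk through the Enterprise Manager wizard, the same advisor can be driven from PL/SQL. A minimal sketch using the DBMS_ADVISOR quick-tune entry point for a single statement; the task name and SQL text are illustrative, not part of the course practice:
BEGIN
  DBMS_ADVISOR.QUICK_TUNE(
    DBMS_ADVISOR.SQLACCESS_ADVISOR,
    'MY_QUICK_ACCESS_TASK',
    'SELECT COUNT(*) FROM sh.sales s JOIN sh.customers c ON s.cust_id = c.cust_id');
END;
/
-- Review the generated implementation script:
SELECT DBMS_ADVISOR.GET_TASK_SCRIPT('MY_QUICK_ACCESS_TASK') FROM dual;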
333
Possible Recommendations
[Slide table: the possible recommendations and whether each is produced in Comprehensive and in Limited mode. The recommendation types are: add a new (partitioned) index on a table or materialized view; drop an unused index; modify an existing index by changing the index type; modify an existing index by adding columns at the end; add a new (partitioned) materialized view; drop an unused materialized view (log); add a new materialized view log; modify an existing materialized view log to add new columns or clauses; and partition an existing unpartitioned table or index. For example, adding a new (partitioned) index is recommended in both modes (YES), whereas dropping an unused index is not recommended in Limited mode (NO).] Possible Recommendations SQL Access Advisor carefully considers the overall impact of recommendations and makes recommendations by using only the known workload and supplied information. Two workload analysis methods are available: Comprehensive: With this approach, SQL Access Advisor addresses all aspects of tuning partitions, materialized views, indexes, and materialized view logs. It assumes that the workload contains a complete and representative set of application SQL statements. Limited: Unlike the comprehensive workload approach, a limited workload approach assumes that the workload contains only problematic SQL statements. Thus, advice is sought for improving the performance of a portion of an application environment. When comprehensive workload analysis is chosen, SQL Access Advisor forms a better set of global tuning adjustments, but the effect may be a longer analysis time. As shown in the table, the chosen workload approach determines the type of recommendations made by the advisor. Note: Partition recommendations work only on tables that have at least 10,000 rows, and on workloads that have some predicates and joins on columns of NUMBER or DATE type. Partitioning advice can be generated only on these types of columns. In addition, partitioning advice can be generated only for single-column interval and hash partitions. Interval partitioning recommendations can be output as range syntax, but interval is the default. Hash partitioning is done only to leverage partitionwise joins.
334
SQL Access Advisor Session: Initial Options
The next few slides describe a typical SQL Access Advisor session. You can access the SQL Access Advisor wizard through the Advisor Central link on the Database Home page or through individual alerts or performance pages that may include a link to facilitate solving a performance problem. The SQL Access Advisor wizard consists of several steps during which you supply the SQL statements to tune and the types of access methods you want to use. Use the SQL Access Advisor: Initial Options page to select a template or task from which to populate default options before starting the wizard. You can click Continue to start the wizard or Cancel to go back to the Advisor Central page. Note: The SQL Access Advisor may be interrupted while generating recommendations, thereby allowing the results to be reviewed. For general information about using SQL Access Advisor, see the “Overview of the SQL Access Advisor” section in the “SQL Access Advisor” lesson of the Oracle Data Warehousing Guide.
335
SQL Access Advisor Session: Initial Options
SQL Access Advisor Session: Initial Options (continued) If you choose the “Inherit Options from a Task or Template” option on the Initial Options page, you are able to select an existing task, or an existing template to inherit SQL Access Advisor’s options. By default, the SQLACCESS_EMTASK template is used. You can view the various options defined by a task or a template by selecting the corresponding object and clicking View Options.
336
SQL Access Advisor: Workload Source
You can choose your workload source from three different sources: Current and Recent SQL Activity: This source corresponds to SQL statements that are still cached in your SGA. Use an existing SQL Tuning Set: You also have the possibility to create and use a SQL Tuning Set that holds your statements. Hypothetical Workload: This option provides a schema that allows the advisor to search for dimension tables and produce a workload. This is very useful to initially design your schema. Using the Filter Options section, you can further filter your workload source. Filter options are: Resource Consumption: Number of statements ordered by Optimizer Cost, Buffer Gets, CPU Time, Disk Reads, Elapsed Time, Executions. Users Tables SQL Text Module IDs Actions
337
SQL Access Advisor: Recommendation Options
Use the Recommendations Options page to choose whether to limit the SQL Access Advisor to recommendations based on a single access method. You can choose the type of structures to be recommended by the advisor. If none of the three possible ones are chosen, the advisor evaluates existing structures instead of trying to recommend new ones. You can use the Advisor Mode section to run the advisor in one of two modes. These modes affect the quality of recommendations as well as the length of time required for processing. In Comprehensive mode, the Advisor searches a large pool of candidates resulting in recommendations of the highest quality. In Limited mode, the advisor performs quickly, limiting the candidate recommendations by working on the highest-cost statements only.
338
SQL Access Advisor: Recommendation Options
SQL Access Advisor: Recommendation Options (continued) You can choose Advanced Options to show or hide options that allow you to set space restrictions, tuning options, and default storage locations. Use the Workload Categorization section to set options for workload volatility and scope. For workload volatility, you can choose to favor read-only operations or you can consider the volatility of referenced objects when forming recommendations. For workload scope, you can select Partial Workload, which does not include recommendations to drop unused access structures, or Complete Workload, which includes recommendations to drop unused access structures. Use the Space Restrictions section to specify a hard space limit, which forces the advisor to produce only recommendations with total space requirements that do not exceed the specified limit. Use the Tuning Options section to specify options that customize the recommendations made by the advisor. The “Prioritize Tuning of SQL Statements by” drop-down list allows you to prioritize by Optimizer Cost, Buffer Gets, CPU Time, Disk Reads, Elapsed Time, and Execution Count. Use the Default Storage Locations section to override the defaults defined for schema and tablespace locations. By default, indexes are placed in the schema and tablespace of the table they reference. Materialized views are placed in the schema and tablespace of the user who executed one of the queries that contributed to the materialized view recommendation. Note: Oracle highly recommends that you specify the default schema and tablespaces for materialized views.
339
SQL Access Advisor: Schedule and Review
You can then schedule and submit your new analysis by specifying various parameters to the scheduler. The possible options are shown in the screenshots in the slide.
340
SQL Access Advisor: Results
From the Advisor Central page, you can retrieve the task details for your analysis. By selecting the task name in the Results section of the Advisor Central page, you can access the Results for Task Summary page from where you can see an overview of the Access Advisor findings. The page shows you charts and statistics that provide overall workload performance and potential for improving query execution time for the recommendations. You can use the page to show statement counts and recommendation action counts.
341
SQL Access Advisor: Results
SQL Access Advisor: Results (continued) To see other aspects of the results for the Access Advisor task, choose one of the three other tabs on the page: Recommendations, SQL Statements, or Details. On the Recommendations page, you can drill down to each of the recommendations. For each of them, you can see important information in the Select Recommendations for Implementation table. You can then select one or more recommendations and schedule their implementation. If you click the ID for a particular recommendation, you are taken to the Recommendation page, which displays all actions for the specified recommendation and, optionally, lets you modify the tablespace name for a statement. When you complete any changes, click OK to apply them. From that page, you can view the full text of an action by choosing the link in the Action field for the specified action. You can view the SQL for all actions in the recommendation by clicking Show SQL.
342
SQL Access Advisor: Results
SQL Access Advisor: Recommendation Implementation Most of these recommendations can be executed on a production system by using simple SQL DDL statements. For those cases, SQL Access Advisor produces executable SQL statements. In some instances, for example, repartitioning existing partitioned tables or existing dependent indexes, simple SQL is not sufficient. SQL Access Advisor then generates a script calling external packages such as DBMS_REDEFINITION in order to enable the user to implement the recommended change. In the slide example, SQL Access Advisor recommends partitioning the SH.CUSTOMERS table on the CUST_CREDIT_LIMIT column. The recommendation uses the INTERVAL partitioning scheme and defines the first range of values as being less than a specified boundary value. Interval partitions are partitions based on a numeric range or datetime interval. They extend range partitioning by instructing the database to create partitions of the specified interval automatically when data inserted into the table exceeds all of the range partitions.
343
SQL Access Advisor: Results
The SQL Statements page shows you a chart and a corresponding table that list SQL statements initially ordered by the largest cost improvement. The top SQL statement is improved the most by implementing its associated recommendation.
344
SQL Access Advisor: Results
SQL Access Advisor: Results (continued) The Details page shows you the workload and task options that were used when the task was created. This page also gives you all journal entries that were logged during the task execution.
345
SQL Access Advisor: PL/SQL Procedure Flow
Step 1 (SQL Access Advisor task): CREATE_TASK, UPDATE_TASK_ATTRIBUTES, DELETE_TASK, QUICK_TUNE
Step 2 (task parameters): SET_TASK_PARAMETER, RESET_TASK
Step 3 (data, reports, and scripts): ADD_STS_REF, DELETE_STS_REF, EXECUTE_TASK, INTERRUPT_TASK/CANCEL_TASK, MARK_RECOMMENDATION, UPDATE_REC_ATTRIBUTES, GET_TASK_REPORT, GET_TASK_SCRIPT
SQL Access Advisor: PL/SQL Procedure Flow
The graphic shows the typical operational flow of the SQL Access Advisor procedures from the DBMS_ADVISOR package. You can find a complete description of each of these procedures in the Oracle Database PL/SQL Packages and Types Reference guide.
Step 1: Create and manage tasks and data. This step uses a SQL Access Advisor task.
Step 2: Prepare tasks for various operations. This step uses SQL Access Advisor parameters.
Step 3: Prepare and analyze data. This step uses SQL Tuning Sets and SQL Access Advisor tasks.
With Oracle Database 11g R1, GET_TASK_REPORT can report back using HTML or XML in addition to plain text.
Note: The DBMS_ADVISOR.QUICK_TUNE procedure is a shortcut that performs all the necessary operations to analyze a single SQL statement. The operation creates a task for which all parameters are defaulted. The workload consists of the specified statement only. Finally, the task is executed and the results are saved in the repository. You can also instruct the procedure to implement the final recommendations.
346
SQL Access Advisor: PL/SQL Example
1. BEGIN
     dbms_advisor.create_task(dbms_advisor.sqlaccess_advisor, 'MYTASK');
   END;
2. BEGIN
     dbms_advisor.set_task_parameter('MYTASK', 'ANALYSIS_SCOPE', 'ALL');
     dbms_advisor.set_task_parameter('MYTASK', 'MODE', 'COMPREHENSIVE');
   END;
3. BEGIN
     dbms_advisor.add_sts_ref('MYTASK', 'SH', 'MYSTS');
     dbms_advisor.execute_task('MYTASK');
     dbms_output.put_line(dbms_advisor.get_task_script('MYTASK'));
   END;
SQL Access Advisor: PL/SQL Example Matching the order shown in the previous slide, the examples in this slide show a possible SQL Access Advisor tuning session using PL/SQL code. The first PL/SQL block creates a new tuning task called MYTASK. This task uses the SQL Access Advisor. The second PL/SQL block sets SQL Access Advisor parameters for MYTASK. In the example, you set ANALYSIS_SCOPE to ALL, which means you generate recommendations for indexes, materialized views, and partitions. Then, you set MODE to COMPREHENSIVE to include all SQL statements that are part of the SQL Tuning Set associated with the task. The third PL/SQL block associates a workload with MYTASK. Here, you use an existing SQL Tuning Set called MYSTS. You can now execute the tuning task. After its execution completes, you can generate the corresponding recommendation script, as shown in the third example in the slide. Note: For a complete list of SQL Access Advisor parameters (around 50), refer to the Oracle Database PL/SQL Packages and Types Reference guide.
347
Summary In this lesson, you should have learned how to:
Implement the new partitioning methods Employ data compression Create a SQL Access Advisor analysis session using Enterprise Manager Create a SQL Access Advisor analysis session using PL/SQL Set up a SQL Access Advisor analysis to get partition recommendations
348
Practice 9: Overview This practice covers the following topics:
Using new partitioning schemes Using table compression Getting partitioning advice with SQL Access Advisor
349
Using RMAN Enhancements
350
Objectives After completing this lesson, you should be able to:
Describe the new and enhanced RMAN features in Oracle Database 11g Configure archivelog deletion policies Duplicate active databases by using the Oracle network (without backups) Back up large files in multiple sections Create archival backups for long-term storage Manage the recovery catalog—for example, merge multiple catalog versions Describe the use of virtual private catalogs
351
Recovery Manager (RMAN)
RMAN: New Features
Improved performance by:
- Fast incremental backups on a physical standby database
- Improved block media recovery
(Diagram: target database with change tracking file and data files, auxiliary database, and the Flash Recovery Area holding image copies, backup pieces, and backup data.)
New Features of RMAN
Backup and recovery operations are a critical part of securing the availability of information when an organization needs it, despite various levels of potential failures and errors. With Oracle Database 11g, the Recovery Manager (RMAN) enhancements provide the following benefits:
Fast Incremental Backups on a Physical Standby Database
You can enable block change tracking on a physical standby database (use the existing ALTER DATABASE ENABLE/DISABLE BLOCK CHANGE TRACKING SQL statement). RMAN then tracks changed blocks during standby managed recovery. This allows the off-loading of block change tracking to the standby database, and allows the same fast incremental backups that use the change tracking file, which were previously available only on the primary. This feature enables faster incremental backups on a physical standby database than in previous releases.
Improved Block Media Recovery Performance
You can use the RECOVER command (formerly the BLOCKRECOVER command) to recover individual data blocks. If flashback logging is enabled and contains older, uncorrupted versions of the corrupt blocks, then RMAN can use these blocks, thereby speeding up block media recovery.
352
Optimized Backups Increased speed of compression by using the ZLIB algorithm Improved protection by enhanced block corruption detection CONFIGURE COMPRESSION ALGORITHM TO ZLIB; Optimized Backups Increased Speed of Compression When Preserving Backup Space You can use the CONFIGURE command to choose between the BZIP2 and ZLIB compression algorithms for RMAN backups. The new ZLIB backup compression algorithm can be 40% faster than the previous BZIP2 algorithm. The real-world data warehousing database from one large pharmaceutical company had a compression ratio 2.0:1 with the BZIP2 algorithm, and 1.68:1 with the ZLIB algorithm. Configure the backup compression algorithm with the following command (replace alg_name with either BZIP2 or ZLIB): CONFIGURE COMPRESSION ALGORITHM TO 'alg_name'; Note: For more details, see the Oracle Database Backup and Recovery Reference. Improved Block Corruption Detection In addition to the RMAN-detected block corruptions, Oracle Database 11g also records live block corruptions in the V$DATABASE_BLOCK_CORRUPTION view. The Oracle database automatically updates this view when block corruptions are detected or repaired. The VALIDATE command is enhanced with many new options such as VALIDATE ... BLOCK and VALIDATE DATABASE.
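As a sketch, switching the configured compression algorithm and using it for a compressed backup could look like the following; this follows the CONFIGURE syntax quoted in these notes, so verify the exact clause against the Backup and Recovery Reference for your release:
RMAN> CONFIGURE COMPRESSION ALGORITHM TO 'ZLIB';
RMAN> SHOW COMPRESSION ALGORITHM;
RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG;
The configured algorithm is then used by any backup that requests a compressed backup set, without further per-command options.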
353
Optimized Backups Optimized undo backup for automatically reduced backup time and storage Flexibility to use VSS-enabled software Allows the database to participate in snapshots coordinated by VSS-compliant backup management tools and storage products Database automatically recovered upon snapshot restore via RMAN Optimized Backups (continued) Optimized Undo Backup Undo data that is not needed for transaction recovery (for example, for committed transactions), is not backed up. The benefit is reduced overall backup time and storage by not backing up undo that applies to committed transactions. This optimization is automatically enabled. Integration with VSS-Enabled Applications The Volume Shadow Copy Service (VSS) is an infrastructure on Windows. The Oracle VSS Writer is integrated with VSS-enabled applications. So you can use VSS-enabled software and storage systems to back up and restore an Oracle database. A key benefit is the ability to make a shadow copy of an open database. You can also use the BACKUP INCREMENTAL LEVEL FROM SCN command in RMAN to make an incremental backup of a VSS shadow copy.
354
Optimized Backups
Simplified archive log management in a multiple-component environment
Increased availability by failover of backup to optional destinations
(Diagram: target database with data files, redundant archive log files, and the Flash Recovery Area holding image copies, backup pieces, and backup data; an inaccessible archive log is marked with an X.)
Optimized Backups (continued)
Simplified Archive Log Management in a Multiple-Component Environment
This feature simplifies archive log management when archive logs are used by multiple components. It also increases availability when backing up archive logs, if an archive log in the Flash Recovery Area is missing or inaccessible.
Enhanced Configuration of Deletion Policies
Archived redo logs are eligible for deletion only when they are not needed by any required component, such as Data Guard, Streams, Flashback Database, and so on. In a Data Guard environment, all standby destinations are considered (instead of just mandatory destinations) before archive logs are marked for deletion. This configuration is specified using the CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY command. When you configure an archived log deletion policy, the configuration applies to all archiving destinations, including the Flash Recovery Area. Both BACKUP ... DELETE INPUT and DELETE ... ARCHIVELOG use this configuration, as does the Flash Recovery Area. When you back up the recovery area, RMAN can fail over to other archived redo log destinations if the archived redo log in the Flash Recovery Area is inaccessible or corrupted.
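A minimal sketch of the deletion-policy configuration described above; the second CONFIGURE line shows an alternative, backup-based policy for comparison:
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
# Alternative policy: keep archived logs until they are backed up twice to tape
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 2 TIMES TO DEVICE TYPE sbt;
# DELETE honors the configured policy, including logs in the Flash Recovery Area
RMAN> DELETE ARCHIVELOG ALL;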
355
RMAN: New Features The following pages have more details about:
Faster and optimized backup through intrafile parallel backup and restore Simplified active database duplication Simplified archival backups for long-term storage Simplified information infrastructure by merging catalogs Enhanced security by restricting DBA backup catalog access to owned databases “virtual private catalog” New Features of RMAN Intrafile Parallel Backup and Restore for Very Large Files Backups of a single large data file can now use multiple parallel server processes and “channels” to efficiently distribute the workload. The use of multiple sections improves the performance of backups. Simplified Active Database Duplication You can use the “network-aware” DUPLICATE command to create a duplicate or a standby database over the network without a need for preexisting database backups. The ease-of-use is especially apparent through the Enterprise Manager GUI. Archival Backups for Long-Term Storage Long-term backups, created with the KEEP option, no longer require all archived logs to be retained, when the backup is online. Instead, archive logs needed to recover the specified data files to a consistent point in time are backed up (along with specified data files and a control file). This functionality reduces archive log backup storage needed for online, long-term KEEP backups, and simplifies the command by using a single format string for all the files needed to restore and recover the backup.
356
RMAN Notes Only New Features of RMAN (continued)
Simplified Information Infrastructure by Merging RMAN Catalogs The new IMPORT CATALOG command allows one catalog schema to be merged into another, either the whole schema or just the metadata for specific databases in the catalog. This simplifies catalog management for you by allowing separate catalog schemas, created in different versions, to be merged into a single catalog schema. Restricting DBA Backup Catalog Access to Owned Databases The owner of a recovery catalog can grant or revoke access to a subset of the catalog to database users. This subset is called a virtual private catalog. RMAN Notes Only
357
Parallel Backup and Restore for Very Large Files
Multisection backups of a single file: Are created by RMAN, with your specified size value Are processed independently (serially or in parallel) Produce multipiece backup sets Improve performance of the backup Parallel Backup and Restore for Very Large Files Oracle data files can be up to 128 TB in size. In prior versions, the smallest unit of RMAN backup was an entire file. This is not practical with such large files. In Oracle Database 11g, the workload for each file is distributed among multiple parallel server processes. RMAN can break up a single large file into sections and back up and restore these sections independently, if you specify the SECTION SIZE option. In other words, RMAN can use multiple channels per file. Each channel backs up one file section. Each file section is a contiguous range of blocks in a file. Each file section can be processed independently, either serially or in parallel. Backing up a file in separate sections can both improve the performance and allow large file backups to be restarted. A multisection backup job produces a multipiece backup set. Each piece contains one section of the file. All sections of a multisection backup, except perhaps for the last section, are of the same size. There are a maximum of 256 sections per file. Tip: You should not apply large values of parallelism to back up a large file that resides on a small number of disks. This feature is built into RMAN. No installation is required beyond the normal installation of the Oracle Database 11g. COMPATIBLE must be set to at least 11.0, because earlier releases cannot restore multisection backups. In Enterprise Manager select Availability > Backup Settings > Backup Set (tabbed page).
358
Using RMAN Multisection Backups
The BACKUP and VALIDATE DATAFILE commands option: SECTION SIZE <integer> [M | K | G]
(Diagram: channels 1 through 4 each back up one of sections 1 through 4 of one large data file.)
Using RMAN Multisection Backups
The BACKUP and VALIDATE DATAFILE commands accept a new option: SECTION SIZE <integer> [M | K | G]. Specify your planned size for each backup section. The option is both a backup-command and backup-spec level option, so that you can apply different section sizes to different files in the same backup job.
Viewing metadata about your multisection backup:
- The V$BACKUP_SET and RC_BACKUP_SET views have a MULTI_SECTION column, which indicates whether this is a multisection backup or not.
- The V$BACKUP_DATAFILE and RC_BACKUP_DATAFILE views have a SECTION_SIZE column, which specifies the number of blocks in each section of a multisection backup. Zero means a whole-file backup.
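A minimal sketch of a multisection backup and validation, assuming a hypothetical data file number 6 and a 500 MB section size (adjust both to your environment):
RMAN> BACKUP SECTION SIZE 500M DATAFILE 6;
RMAN> VALIDATE DATAFILE 6 SECTION SIZE 500M;

-- Check whether the resulting backup sets are multisection
SQL> SELECT recid, multi_section FROM v$backup_set;
With two or more channels allocated, the sections are processed in parallel; with a single channel, they are simply processed one after another.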
359
Duplicating a Database
With network (no backups required)
Including customized SPFILE
Via Enterprise Manager or RMAN command line
(Diagram: the active source database is copied over TCP/IP to the destination, or AUXILIARY, database.)
Duplicating a Database
Prior to Oracle Database 11g, you could create a duplicate database with RMAN for testing or for standby. It required the source database, a copy of a backup on the destination system, and the destination database itself. Oracle Database 11g greatly simplifies this process. You can instruct the source database to perform online image copies and archived log copies directly to the auxiliary instance, by using Enterprise Manager or the FROM ACTIVE DATABASE clause of the RMAN DUPLICATE command. Preexisting backups are no longer needed. The database files are copied from the source to a destination or AUXILIARY instance via an inter-instance network connection. RMAN then uses a “memory script” (one that is contained only in memory) to complete recovery and open the database.
360
Active Database Duplication: Selecting the Source
Usage Notes for Active Database Duplication Oracle Net must be aware of the source and destination databases. The FROM ACTIVE DATABASE clause implies network action. If the source database is open, it must have archive logging enabled. If the source database is in mounted state (and not a standby), the source database must have been shut down cleanly. Availability of the source database is not affected by active database duplication. But the source database instance provides CPU cycles and network bandwidth. Enterprise Manager Interface In Enterprise Manager, select Data Movement > Clone Database.
361
Selecting the Destination
Usage Notes for Active Database Duplication Password files are copied to the destination. The destination must have the same SYS user password as the source. In other words, at the beginning of the active database duplication process, both databases (source and destination) must have password files. When creating a standby database, the password file from the primary database overwrites the current (temporary) password file on the standby database. When you use the command line and do not duplicate for a standby database, then you need to use the PASSWORD clause (with the FROM ACTIVE DATABASE clause of the RMAN DUPLICATE command).
362
Customizing Destination Options
Prior to Oracle Database 11g, the SPFILE was not copied, because it requires alterations appropriate for the destination environment. You had to copy the SPFILE into the new location, edit it, and specify it when starting the instance in NOMOUNT mode or on the RMAN command line to be used before opening the newly copied database. With Oracle Database 11g, you provide your list of parameters and desired values, and the system sets them. The most obvious parameters are those whose value contains a directory specification. All parameter values that match your choice (with the exception of the DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT parameters) are replaced. Note the case-sensitivity of parameters: The case must match for PARAMETER_VALUE_CONVERT. With the FILE_NAME_CONVERT parameters, pattern matching is operating system specific. This functionality is equivalent to pausing the database duplication after restoring the SPFILE and issuing ALTER SYSTEM SET commands to modify the parameter file (before the instance is mounted). The example in the slide shows how to clone a database on the same host and in the same Oracle Home, using different top-level disk locations. The source directories are under u01, and the destination directories are under u02. You need to confirm your choice.
363
Choosing Database Configuration
Note how the information you enter is used for the new database.
364
Scheduling Job Execution
Following the steps of the wizard, you can now schedule the job, so that it becomes active according to your specifications.
365
Reviewing the Job
Scroll down to review more details and submit the job. Note that the ORCL database will create the DBTEST database on the same host in the /u02 directory structure.
366
Database Duplication: Job Run
The example of the Job Run page shows the following steps:
1. Source Preparation
2. Create Control File
3. Destination Directories Creation
4. Copy Initialization and Password Files
   * Skip Copy or Transfer Controlfile
5. Destination Preparation
6. Duplicate Database
   * Skip Creating Standby Controlfile
   * Skip Switching Clone Type
7. Recover Database
8. Add Temporary Files
9. Check Database and Run Post Cloning Scripts
10. Add EM Target
11. Cleanup Source Temporary Directory
367
The RMAN DUPLICATE Command
DUPLICATE TARGET DATABASE TO dbtest
FROM ACTIVE DATABASE
SPFILE
  PARAMETER_VALUE_CONVERT '/u01', '/u02'
  SET SGA_MAX_SIZE = '200M'
  SET SGA_TARGET = '125M'
  SET LOG_FILE_NAME_CONVERT = '/u01','/u02'
DB_FILE_NAME_CONVERT = '/u01','/u02';
The RMAN DUPLICATE Command
The example assumes that you have previously connected to both the source and the destination instances, which have a common directory structure but different top-level disks. The destination instance uses automatically configured channels. This RMAN DUPLICATE command duplicates an open database. The FROM ACTIVE DATABASE clause indicates that you are not using backups (it implies network action), and that the target is either open or mounted. The SPFILE clause indicates that the SPFILE will be restored and modified before opening the database. Each repeating SET clause essentially issues an ALTER SYSTEM SET param = value SCOPE=SPFILE command. You can provide as many of these as necessary.
Prerequisites
The AUXILIARY instance is at the NOMOUNT state, having been started with a minimal PFILE. The PFILE requires only the DB_NAME and REMOTE_LOGIN_PASSWORDFILE parameters. The password file must exist and have the same SYS user password as the target. The directory structure must be in place with the proper permissions. Connect to the AUXILIARY instance by using its net service name as the SYS user.
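Preparing the auxiliary instance before running the DUPLICATE command above could look like the following sketch; the file locations and the orcl/dbtest service names are placeholders, and the SYS password must match the source database:
# Minimal PFILE for the auxiliary instance (for example, initdbtest.ora):
#   db_name=dbtest
#   remote_login_passwordfile=EXCLUSIVE

$ orapwd file=$ORACLE_HOME/dbs/orapwdbtest password=<same_SYS_password_as_source>

SQL> STARTUP NOMOUNT PFILE='<path_to>/initdbtest.ora'

$ rman TARGET sys@orcl AUXILIARY sys@dbtest
From the RMAN prompt, you then run the DUPLICATE command shown above.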
368
Creating a Standby Database with the DUPLICATE Command
DUPLICATE TARGET DATABASE FOR STANDBY
FROM ACTIVE DATABASE
SPFILE
  PARAMETER_VALUE_CONVERT '/u01', '/u02'
  SET "DB_UNIQUE_NAME"="FOO"
  SET SGA_MAX_SIZE = "200M"
  SET SGA_TARGET = "125M"
  SET LOG_FILE_NAME_CONVERT = '/u01','/u02'
DB_FILE_NAME_CONVERT = '/u01','/u02';
Duplicating a Standby Database
The example in the slide assumes that you are connected to the target and auxiliary instances and that the two environments have the same disk and directory structure. The FOR STANDBY FROM ACTIVE DATABASE clause initiates the creation of a standby database without using backups. The example uses u01 as the disk of the source and u02 as the top-level destination directory. All parameter values that match your choice (with the exception of the DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT parameters) are replaced in the SPFILE. If PARAMETER_VALUE_CONVERT sets the file name specified by a parameter, and if SET sets the file name specified by the same parameter, then the SET value overrides the PARAMETER_VALUE_CONVERT setting. If the DB_FILE_NAME_CONVERT clause is specified in the DUPLICATE command, then its file name settings override competing settings specified by SPFILE SET.
369
Creating Archival Backups with EM
If you have business requirements to keep records for a long time, you can use RMAN to create a self-contained archival backup of the database or tablespaces. RMAN does not apply the regular retention policies to this backup. Place your archival backup in a different long-term storage area, other than in the Flash Recovery Area. To keep a backup for a long time, perform the following steps in Enterprise Manager: 1. Select Availability > Schedule Backup > Schedule Customized Backup. 2. Follow the steps of the Schedule Customized Backup wizard until you are on the Settings page. 3. Click Override Current Settings > Policy. In the Override Retention Policy section, you can select to keep a backup for a specified number of days. A restore point is generated based on the backup job name. Backups created with the KEEP option include the SPFILE, control files, and archive redo log files required to restore this backup. This backup is a snapshot of the database at a point in time, and can be used to restore the database to another host.
370
Creating Archival Backups with RMAN
Specifying the KEEP clause when the database is online includes both data file and archive log backup sets:
KEEP {FOREVER | UNTIL TIME [=] 'date_string'} [RESTORE POINT rsname]
NOKEEP
List all restore points known to the RMAN repository:
LIST RESTORE POINT ALL;
Display a specific restore point:
LIST RESTORE POINT 'rsname';
Creating Archival Backups with RMAN
Prior to Oracle Database 11g, if you needed to preserve an online backup for a specified amount of time, RMAN assumed you might want to perform point-in-time recovery for any time within that period, and it retained all the archived logs for that time period unless you specified NOLOGS. However, you may have a requirement to simply keep the backup (and what is necessary to keep it consistent and recoverable) for a specified amount of time. With Oracle Database 11g, you can use the KEEP option to generate archival database backups that satisfy business or legal requirements. The KEEP option is an attribute of the backup set or copy (not of the individual backup piece). The KEEP option overrides any configured retention policy for this backup. You can retain archival backups so that they are considered obsolete after a specified time (KEEP UNTIL) or never (KEEP FOREVER). The KEEP FOREVER clause requires the use of a recovery catalog. The RESTORE POINT clause creates a “consistency” point in the control file. It assigns a name to a specific SCN. The SCN is captured just after the data file backup completes. The archival backup can be restored and recovered to this point in time, enabling the database to be opened. In contrast, the UNTIL TIME clause specifies the date until which the backup must be kept. RMAN includes the data files, archived log files (only those needed to recover an online backup), and the relevant autobackup files. All these files must go to the same media family (or group of tapes) and have the same KEEP attributes.
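As a sketch, an online archival backup kept for one year under a named restore point might look like this; the tag, restore point name, and format string are placeholders, and KEEP FOREVER would additionally require a recovery catalog connection:
RMAN> BACKUP DATABASE FORMAT '/archival/db_%U'
      TAG quarterly
      KEEP UNTIL TIME 'SYSDATE + 365'
      RESTORE POINT FY08Q4;
RMAN> LIST RESTORE POINT ALL;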
371
Managing Archival Database Backups
1. Archiving a database backup:
CONNECT TARGET /
CONNECT CATALOG
CHANGE BACKUP TAG 'consistent_db_bkup' KEEP FOREVER;
2. Changing the status of a database copy:
CHANGE COPY OF DATABASE CONTROLFILE NOKEEP;
Managing Archival Database Backups
The CHANGE command changes the exemption status of a backup or copy in relation to the configured retention policy. For example, you can specify CHANGE ... NOKEEP, to make a backup that is currently exempt from the retention policy eligible for the OBSOLETE status. The first example in the slide changes a consistent backup into an archival backup, which you plan to store offsite. Because the database is consistent and, therefore, requires no recovery, you do not need to save archived redo logs with the backup. The second example specifies that any long-term image copies of data files and control files should lose their exempt status and so become eligible to be obsolete according to the existing retention policy.
Deprecated clauses: KEEP [LOGS | NOLOGS]
Preferred syntax: KEEP RESTORE POINT <rsname>
Note: The RESTORE POINT option is not valid with CHANGE. You cannot use CHANGE ... UNAVAILABLE or KEEP for files stored in the Flash Recovery Area.
372
Managing Recovery Catalogs
1. Create the recovery catalog. 2. Register your target databases in the recovery catalog. 3. If desired, merge recovery catalogs. 4. If needed, catalog any older backups. 5. If needed, create virtual recovery catalogs for specific users. 6. Protect the recovery catalog. Managing Recovery Catalogs The basic workflow of managing recovery catalogs is not new. However, it has been enhanced by two important features: the consolidation of RMAN repositories and virtual private catalogs, which allow a separation of responsibilities. 1. Create the recovery catalog. 2. Register your target databases in the recovery catalog. This step enables RMAN to store metadata for the target databases in the recovery catalog. 3. If desired, you can also use the IMPORT CATALOG command to merge recovery catalogs (new in Oracle Database 11g). 4. If needed, catalog any older backups, whose records are no longer stored in the target control file. 5. If needed, create virtual recovery catalogs for specific users and determine the metadata to which they are permitted access (new in Oracle Database 11g). 6. Protect the recovery catalog by including it in your backup and recovery strategy.
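Steps 1 and 2 could look like the following sketch, reusing the catowner schema and catdb database mentioned in these notes; the cattbs tablespace and the password are placeholders:
SQL> CREATE USER catowner IDENTIFIED BY <password>
     DEFAULT TABLESPACE cattbs QUOTA UNLIMITED ON cattbs;
SQL> GRANT RECOVERY_CATALOG_OWNER TO catowner;

RMAN> CONNECT CATALOG catowner@catdb
RMAN> CREATE CATALOG;
RMAN> CONNECT TARGET / CATALOG catowner@catdb
RMAN> REGISTER DATABASE;
After registration, RMAN resynchronizes the catalog with the target control file during subsequent operations, so the catalog metadata stays current.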
373
Notes Only Managing Recovery Catalogs (continued)
What You Already Know and What Is New The recovery catalog contains metadata about RMAN operations for each registered target database. The catalog includes the following types of metadata: Data file and archived redo log backup sets and backup pieces Data file copies Archived redo logs and their copies Tablespaces and data files on the target database Stored scripts, which are named user-created sequences of RMAN commands Persistent RMAN configuration settings The enrolling of a target database in a recovery catalog for RMAN use is called registration. The recommended practice is to register all your target databases in a single recovery catalog. For example, you can register the prod1, prod2, and prod3 databases in a single catalog owned by the catowner schema in the catdb database. The owner of a centralized recovery catalog, which is also called the base recovery catalog, can grant or revoke restricted access to the catalog to other database users. All metadata is stored in the base catalog schema. Each restricted user has full read-write access to his or her own metadata, which is called a virtual private catalog. The recovery catalog obtains crucial RMAN metadata from the control file of each registered target database. The resynchronization of the recovery catalog ensures that the metadata that RMAN obtains from the control files is current. You can use a stored script as an alternative to a command file for managing frequently used sequences of RMAN commands. The script is stored in the recovery catalog rather than on the file system. A local stored script is associated with the target database to which RMAN is connected when the script is created, and can be executed only when you are connected to this target database. A global stored script can be run against any database registered in the recovery catalog. You can use a recovery catalog in an environment in which you use or have used different versions of the database. As a result, your environment can have different versions of the RMAN client, recovery catalog database, recovery catalog schema, and target database. In Oracle Database 11g, you can merge one recovery catalog (or metadata for specific databases in the catalog) into another recovery catalog for ease of management. Notes Only
374
Managing Catalogs: Using EM
Managing Catalogs: Using EM
In Enterprise Manager, select Availability > Recovery Catalog Settings and then the activity you want to perform.
375
The IMPORT CATALOG Command
1. Connecting to the destination recovery catalog:
CONNECT CATALOG
2. Importing metadata for all registered databases:
IMPORT CATALOG
3. Importing metadata for two registered databases:
IMPORT CATALOG DBID = , ;
4. Importing metadata from multiple catalogs:
IMPORT CATALOG
IMPORT CATALOG
IMPORT CATALOG NO UNREGISTER;
The IMPORT CATALOG Command
With the IMPORT CATALOG command, you can import the metadata from one recovery catalog schema into a different catalog schema. If you created catalog schemas of different versions to store metadata for multiple target databases, then this command enables you to maintain a single catalog schema for all databases.
1. RMAN must be connected to the destination recovery catalog—for example, the cat111 schema—which is the catalog into which you want to import catalog data. This is the first step in all examples given in the slide.
IMPORT CATALOG <connectStringSpec> [DBID = <dbid> [, <dbid>,…]] [DB_NAME = <dbname> [, <dbname>,…]] [NO UNREGISTER];
<connectStringSpec> is the source recovery catalog connect string. The version of the source recovery catalog schema must be equal to the current version of the RMAN executable. If needed, upgrade the source catalog to the current RMAN version.
DBID: You can specify the list of database IDs whose metadata should be imported from the source catalog schema. When not specified, RMAN merges metadata for all database IDs from the source catalog schema into the destination catalog schema. RMAN issues an error if a database whose metadata is merged is already registered in the recovery catalog schema.
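A filled-in sketch of the slide examples, using hypothetical connect strings and DBIDs (cat111@destdb is the destination schema; cat102 and cat92 are source schemas, as in the notes on the next page):
RMAN> CONNECT CATALOG cat111@destdb
RMAN> IMPORT CATALOG cat102@srcdb;                          # all databases, unregistered from the source
RMAN> IMPORT CATALOG cat92@srcdb DBID = 1423241, 1423242;   # only the two listed databases
RMAN> IMPORT CATALOG cat92@srcdb NO UNREGISTER;             # keep databases registered in the source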
376
The IMPORT CATALOG Command Notes Page
The IMPORT CATALOG Command (continued)
DB_NAME: You can specify the list of database names whose metadata should be imported. If a database name is ambiguous, RMAN issues an error.
NO UNREGISTER: By default, the imported database IDs are unregistered from the source recovery catalog schema after a successful import. By using the NO UNREGISTER option, you can force RMAN to keep the imported database IDs in the source catalog schema.
Import Examples (continued)
2. In this example, the cat102 user owns an RMAN catalog (version 10.2) in the srcdb database. You want RMAN to import all registered databases and unregister them in the source catalog.
3. The cat92 user owns an RMAN catalog (version 9.2) in the srcdb database. You want RMAN to import the two databases with the specified DBIDs and unregister them in the source catalog.
4. The srcdb database contains three different recovery catalogs. RMAN imports metadata for all database IDs (registered in these catalogs) into the cat111 schema in the destdb database. All imported target databases are unregistered from their source catalogs except for the databases registered in the cat92 schema.
Additional Usage Details
Ensure that no target database is registered in both the source catalog schema and the destination catalog schema. If a target database is registered in both schemas, then unregister this database from the source catalog and retry the import. If the operation fails in the middle of the import, then the import is rolled back; there is never a state of partial import. When stored scripts in the source and destination catalog schemas have name conflicts, RMAN renames the stored script of the source catalog schema.
377
Creating and Using Virtual Private Catalogs
Enhancing security by restricting access to metadata
(Diagram: databases registered in the RMAN base catalog, with virtual private catalogs (VPCs) exposing subsets of the metadata.)
Creating and Using Virtual Private Catalogs (VPCs)
This feature allows a consolidation of RMAN repositories and maintains a separation of responsibilities, which is a basic security requirement. The RMAN catalog has been enhanced to create virtual private RMAN catalogs for groups of databases and users. The catalog owner creates the base catalog and grants the RECOVERY_CATALOG_OWNER privilege to the owner of the virtual catalog. The catalog owner can either grant access to a registered database or grant the REGISTER privilege to the virtual catalog owner. The virtual catalog owner can then connect to the catalog for a particular target or register a target database. After this configuration, the VPC owner uses the virtual private catalog just like a standard base catalog.
As catalog owner, you can access all the registered database information in the catalog. You can list all registered databases with the SQL*Plus command:
SELECT DISTINCT db_name FROM DBINC;
As virtual catalog owner, you can see only the databases to which you have been granted access.
Note: If a catalog owner has not been granted SYSDBA or SYSOPER on the target database, then most RMAN operations cannot be performed.
378
Using RMAN Virtual Private Catalogs
1. Create an RMAN base catalog:
RMAN> CONNECT CATALOG
RMAN> CREATE CATALOG;
2. Grant RECOVERY_CATALOG_OWNER to the VPC owner:
SQL> CONNECT AS SYSDBA
SQL> GRANT RECOVERY_CATALOG_OWNER TO vpcowner;
3a. Grant REGISTER to the VPC owner, or:
RMAN> CONNECT CATALOG
RMAN> GRANT REGISTER DATABASE TO vpcowner;
3b. Grant CATALOG FOR DATABASE to the VPC owner:
RMAN> GRANT CATALOG FOR DATABASE db10g TO vpcowner;
Using RMAN Virtual Private Catalogs
You create virtual private RMAN catalogs for groups of databases and users.
1. The catalog owner creates the base catalog.
2. The DBA on the catalog database creates the user that will own the virtual private catalog (VPC) and grants him or her the RECOVERY_CATALOG_OWNER privilege.
3. The base catalog owner can grant access for previously registered databases to the VPC owner or grant REGISTER to the VPC owner. The GRANT CATALOG command is:
GRANT CATALOG FOR DATABASE prod1, prod2 TO vpcowner;
The GRANT REGISTER command is:
GRANT REGISTER DATABASE TO vpcowner;
The virtual catalog owner can then connect to the catalog for a particular target or register a target database. After the VPC is configured, the VPC owner uses it just like a standard base catalog.
379
Using RMAN Virtual Private Catalogs
4a. Create a virtual catalog for 11g clients, or:
RMAN> CONNECT CATALOG
RMAN> CREATE VIRTUAL CATALOG;
4b. Create a virtual catalog for pre-11g clients:
SQL> CONNECT
SQL> exec catowner.dbms_rcvcat.create_virtual_catalog;
5. Register a new database in the catalog:
RMAN> CONNECT TARGET / CATALOG
RMAN> REGISTER DATABASE;
6. Use the virtual catalog:
RMAN> CONNECT TARGET / CATALOG
RMAN> BACKUP DATABASE;
Using RMAN Virtual Private Catalogs (continued)
4. Create a virtual private catalog.
a. If the target database is an Oracle Database 11g database and the RMAN client is an 11g client, you can use the RMAN command: CREATE VIRTUAL CATALOG;
b. If the target database is Oracle Database 10g Release 2 or earlier (using a compatible client), you must execute the supplied procedure from SQL*Plus: base_catalog_owner.dbms_rcvcat.create_virtual_catalog;
5. Connect to the catalog using the VPC owner login, and use it as a normal catalog.
6. The virtual catalog owner can see only those databases that have been granted. For most RMAN operations, you additionally need the SYSDBA or SYSOPER privileges on the target database.
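Putting the steps together, an end-to-end sketch with hypothetical names (a catowner base catalog in catdb, a vpcowner virtual catalog, and a prod1 target database) might look like this:
# As the base catalog owner: grant access to one registered database
RMAN> CONNECT CATALOG catowner@catdb
RMAN> GRANT CATALOG FOR DATABASE prod1 TO vpcowner;

# As the VPC owner: create and use the virtual catalog
RMAN> CONNECT CATALOG vpcowner@catdb
RMAN> CREATE VIRTUAL CATALOG;

RMAN> CONNECT TARGET sys@prod1 CATALOG vpcowner@catdb
RMAN> BACKUP DATABASE;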
380
Summary In this lesson, you should have learned how to:
Describe the new and enhanced RMAN features in Oracle Database 11g Configure archivelog deletion policies Duplicate active databases by using the Oracle network (without backups) Back up large files in multiple sections Create archival backups for long-term storage Manage recovery catalog—for example, merge multiple catalog versions Describe the use of virtual private catalogs
381
Practice 10: Overview Using RMAN Enhancements
This practice covers the following topics: Duplicating an active database Merging catalogs
382
Using Flashback and LogMiner
383
Objectives After completing this lesson, you should be able to:
Describe new features for flashback and LogMiner Use Flashback Data Archive to create, protect, and use history data Prepare your database Create, change, and drop a flashback data archive View flashback data archive metadata Use Flashback Transaction Backout Set up Flashback Transaction prerequisites Query transactions with and without dependencies Choose backout options and flash back transactions Use EM LogMiner
384
New and Enhanced Features for Flashback and LogMiner
Ease-of-use in the Oracle Database 11g: Flashback Data Archive for automatic tracking and secure storing of all transactional changes to a record during its lifetime (instead of application logic) Flashback Transaction and dependent transactions or Job Backout for increased agility to undo logical errors Browser-based Enterprise Manager (EM) LogMiner interface for integration with Flashback Transaction New and Enhanced Features for Flashback and LogMiner Organizations often have the requirement to track and store all transactional changes to a record for the duration of its lifetime. You no longer need to build this intelligence into the application. Flashback Data Archive satisfies long-retention requirements (that exceed the undo retention) in a secure way. Oracle Database 11g allows you to flash back selected transactions and all the dependent transactions. This recovery operation uses undo data to create and execute the corresponding compensating transactions that revert the affected data back to its original state. Flashback Transaction or “Job Backout” increases availability during logical recovery by easily and quickly backing out a specific transaction or a set of transactions, and their dependent transactions, with one command, while the database remains online. In prior releases, administrators were required to install and use the stand-alone Java Console for LogMiner. With the Enterprise Manager interface, administrators have one installation task less and an integrated interface with Flashback Transaction. These enhancements increase ease-of-use and time savings because they provide a task-based, intuitive approach (via the EM graphical user interface), or reduce the complexity of applications.
385
Flashback Data Archive Overview: “Oracle Total Recall”
Transparently tracks historical changes to all Oracle data in a highly secure and efficient manner “Secure” No possibility to modify historical data Retained according to your specifications Automatically purged based on your retention policy “Efficient” Special kernel optimizations to minimize performance overhead of capturing historical data Stored in compressed form in tablespaces to minimize storage requirements Completely transparent to applications Easy to set up Flashback Data Archive: Overview A new database object, a flashback data archive is a logical container for storing historical information. It is stored in one or more tablespaces and tracks the history for one or more tables. You specify a retention duration for each flashback data archive. You can group the historical table data by your retention requirements in a flashback data archive. Multiple tables can share the same retention and purge policies. With the “Oracle Total Recall” option, Oracle Database 11g has been specifically enhanced to track history with minimal performance impact and to store historical data in compressed form. This efficiency cannot be duplicated by your own triggers, which also cost time and effort to set up and maintain. Operations that invalidate history or prevent historical capture are not allowed, for example, dropping or truncating a table.
386
Flashback Data Archive Comparison
Main benefit: Flashback Data Archive provides access to data at any point in time without changing the current data; Flashback Database physically moves the entire database back in time.
Operation: Flashback Data Archive is an online operation with tracking enabled and minimal resource usage; Flashback Database is an offline operation that requires preconfiguration and resources.
Granularity: Flashback Data Archive works per table; Flashback Database works on the whole database.
Access point in time: Flashback Data Archive allows any number of points in time per table; Flashback Database allows one per database.
Flashback Data Archive Comparison
How the Flashback Data Archive technology compares with Flashback Database: Flashback Data Archive offers the ability to access the data as of any point in time without actually changing the current data. This is in contrast with Flashback Database, which takes the database physically back in time. Tracking has to be enabled for historical access, while Flashback Database requires preconfiguration. Flashback Database is an offline operation, which requires resources. Flashback Data Archive is an online operation (historical access seamlessly coexists with current access). Because a new background process is used, it has almost no effect on the existing processes. Flashback Data Archive is enabled at the granularity of a table, whereas Flashback Database works only at the database level. With Flashback Data Archive, you can go back to different points in time for different rows of a table or for different tables, whereas with Flashback Database, you can go back to only one point in time for a particular invocation.
387
Flashback Data Archive: Overview
For long-retention requirements exceeding undo
(Diagram: DML operations generate original data in the buffer cache and undo data; the FBDA process writes history into flashback data archives stored in tablespaces. Example: three flashback data archives with retentions of 1 year, 2 years, and 5 years.)
Flashback Data Archive: Overview
A flashback data archive is a historical data store. Oracle Database 11g automatically tracks and archives the data in tables enabled for Flashback Data Archive with a new Flashback Data Archive background process, FBDA. You use this feature to satisfy long-retention requirements that exceed the undo retention. Flashback data archives ensure that flashback queries obtain SQL-level access to the versions of database objects without getting a snapshot-too-old error.
A flashback data archive consists of one or more tablespaces or parts thereof. You can have multiple flashback data archives. Each is configured with a specific retention duration. Based on your retention duration requirements, you should create different flashback data archives—for example, one for all records that must be kept for one year, another for all records that must be kept for two years, and so on.
FBDA asynchronously collects and writes original data to a flashback data archive. It does not include the original indexes, because your retrieval pattern of historical information might be quite different than your retrieval pattern of current information.
Note: You might want to create appropriate indexes just for the duration of historical queries.
388
Flashback Data Archive: Architecture
(Diagram: DML changes produce old values in the buffer cache (1) and in the undo segments (2); the FBDA process (3) writes them to the history or archive tables of the flashback data archives, which use compressed storage with automatic digital shredding.)
Flashback Data Archive: Architecture
The Flashback Data Archive background process, FBDA, starts with the database.
1. FBDA operates first on the undo in the buffer cache.
2. In case the undo has already left the buffer cache, FBDA could also read the required values from the undo segments.
3. FBDA consolidates the modified rows of flashback archive–enabled tables and writes them into the appropriate history tables, which make up the flashback data archive.
You can find the internally assigned names of the history tables by querying the *_FLASHBACK_ARCHIVE_TABLES view. History tables are compressed and internally partitioned.
The database automatically purges all historical information on the day after the retention period expires. (It deletes the data, but does not destroy the flashback data archive.) For example, if the retention period is 10 days, then every day after the tenth day, the oldest information is deleted; thus leaving only 10 days of information in the archive. This is a way to implement digital shredding.
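To see which internal history table backs each tracked table, and how each archive is configured, you can query the dictionary views mentioned above; column names below are as documented for 11g, so verify them in your release:
SQL> SELECT owner_name, table_name, flashback_archive_name, archive_table_name
     FROM   dba_flashback_archive_tables;

SQL> SELECT flashback_archive_name, retention_in_days, status
     FROM   dba_flashback_archive;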
389
Preparing Your Database
To satisfy long-retention requirements, use flashback data archives. Begin with the following steps: For your archive administrator: Create one or more tablespaces for data archives and grant QUOTA on the tablespaces. Grant the ARCHIVE ADMINISTER system privilege to create and maintain flashback archives. For archive users: Grant the FLASHBACK ARCHIVE object privilege (to enable history tracking for specific tables in the given flashback archives). Grant FLASHBACK and SELECT privileges to query specific objects. Preparing Your Database To enable Flashback Data Archive features, ensure that the following tasks are performed: Create one or more tablespaces for the data archives and grant access and the appropriate quota to your “archive administrator.” Also, grant the ARCHIVE ADMINISTER system privilege to your archive administrator, to allow execution of the following statements: CREATE FLASHBACK ARCHIVE ALTER FLASHBACK ARCHIVE DROP FLASHBACK ARCHIVE To allow a specific user to use a specific flashback data archive, grant the FLASHBACK ARCHIVE object privilege on that flashback data archive to the archive user. The archive user can then enable flashback archive on tables, by using the specific flashback data archive. Example executed as archive administrator: GRANT FLASHBACK ARCHIVE ON FLA1 TO HR; Most likely, your users will use other Flashback functionality. To allow access to specific objects during queries, grant the FLASHBACK and SELECT privileges on all objects involved in the query. If your users need access to the DBMS_FLASHBACK package, then you need to grant them the SELECT privilege for this package. Users can then use the DBMS_FLASHBACK.ENABLE and DBMS_FLASHBACK.DISABLE procedures to enable and disable the flashback data archives.
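A minimal sketch of these grants, assuming a hypothetical fda_admin archive administrator, an fda_ts tablespace, and app_user as a querying user; the full name of the archive administration privilege in the data dictionary is FLASHBACK ARCHIVE ADMINISTER:
SQL> GRANT FLASHBACK ARCHIVE ADMINISTER TO fda_admin;
SQL> ALTER USER fda_admin QUOTA UNLIMITED ON fda_ts;

-- As the archive administrator, after creating the FLA1 archive:
SQL> GRANT FLASHBACK ARCHIVE ON fla1 TO hr;

-- For users who run historical (AS OF) queries against a tracked table:
SQL> GRANT FLASHBACK, SELECT ON hr.inventory TO app_user;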
390
Preparing Your Database
Configuring undo:
- Creating an undo tablespace (default: automatically extensible tablespace)
- Enabling Automatic Undo Management (11g default)
Understanding automatic tuning of undo:
- Fixed-size tablespace: automatic tuning for best retention
- Automatically extensible undo tablespace: automatic tuning for the longest-running query
Recommendation for Flashback: fixed-size undo tablespace
Preparing Your Database (continued)
Oracle Database 11g uses the following default database initialization parameter settings:
UNDO_MANAGEMENT='AUTO'
UNDO_TABLESPACE='UNDOTBS1'
UNDO_RETENTION=900
In other words, Automatic Undo Management is now enabled by default. If needed, enable Automatic Undo Management, as explained in the Oracle Database Administrator’s Guide. An automatically extensible undo tablespace is created upon database installation.
For a fixed-size undo tablespace, the Oracle database automatically tunes the system to give the undo tablespace the best possible undo retention. For an automatically extensible undo tablespace (the default), the Oracle database retains undo data to satisfy, at a minimum, the retention period needed by the longest-running query and the undo retention threshold specified by the UNDO_RETENTION parameter.
Automatic tuning of undo retention generally achieves better results with a fixed-size undo tablespace. If you want to change the undo tablespace to a fixed size for this or other reasons, the Undo Advisor can help you determine the proper fixed size to allocate. If you are uncertain about your space requirements and you do not have access to the Undo Advisor, follow these steps:
1. Start with an automatically extensible undo tablespace.
2. Observe it through one business cycle (for example, 1 or 2 days, or longer).
391
Preparing Your Database for Flashback Notes Page
Preparing Your Database (continued) 3. Collect undo block information with the V$UNDO_STAT view, calculate your space requirements, and use them to create an appropriately sized fixed undo tablespace. (The calculation formula is given in the Oracle Database Administrator’s Guide.) 4. You can query V$UNDOSTAT.TUNED_UNDORETENTION to determine the amount of time for which undo is retained for the current undo tablespace. Setting the UNDO_RETENTION parameter does not guarantee that unexpired undo data is not overwritten. If the system needs more space, the Oracle database can overwrite unexpired undo with more recently generated undo data. Specify the RETENTION GUARANTEE clause for the undo tablespace to ensure that unexpired undo data is not discarded. To satisfy long-retention requirements that exceed the undo retention, create a flashback data archive. Preparing Your Database for Flashback Notes Page
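As a sketch, switching to a fixed-size undo tablespace with guaranteed retention and checking the tuned retention could look like the following; the tablespace name and size are placeholders, and the DATAFILE clause without a file name assumes Oracle Managed Files:
SQL> CREATE UNDO TABLESPACE undotbs2 DATAFILE SIZE 4G;
SQL> ALTER SYSTEM SET undo_tablespace = undotbs2;
SQL> ALTER TABLESPACE undotbs2 RETENTION GUARANTEE;

-- The first row of V$UNDOSTAT describes the most recent statistics interval
SQL> SELECT tuned_undoretention FROM v$undostat WHERE ROWNUM = 1;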
392
Flashback Data Archive: Workflow
Create the flashback data archive. Optionally, specify the default flashback data archive. Enable the flashback data archive. View flashback data archive data. Flashback Data Archive: Workflow The first step is to create a flashback data archive. A flashback data archive consists of one or more tablespaces. You can have multiple flashback data archives. Second, you can optionally specify a default flashback data archive for the system. A flashback data archive is configured with retention time. Data archived in the flashback data archive is retained for the retention time. Third, you can enable flashback archiving (and then disable it again) for a table. While flashback archiving is enabled for a table, some DDL statements are not allowed on that table. By default, flashback archiving is off for any table. Fourth, when you query data past your possible undo retention, your query is transparently rewritten to use historical tables in the flashback data archive.
393
Using Flashback Data Archive
Basic workflow to access historical data:
1. Create the flashback data archive:
CREATE FLASHBACK ARCHIVE fla1
  TABLESPACE tbs1
  QUOTA 10G
  RETENTION 5 YEAR;
2. Enable history tracking for a table in the FLA1 archive:
ALTER TABLE inventory FLASHBACK ARCHIVE fla1;
3. View the historical data:
SELECT product_number, product_name, count
FROM   inventory
AS OF TIMESTAMP TO_TIMESTAMP(' :00:00', 'YYYY-MM-DD HH24:MI:SS');
Flashback Data Archive: Scenario
You create a flashback data archive with the CREATE FLASHBACK ARCHIVE statement. You can optionally specify the default flashback data archive for the system. If you omit this option, you can still make this flashback data archive the default later. You need to provide the name of the flashback data archive. You need to provide the name of the first tablespace of the flashback data archive. You can identify the maximum amount of space that the flashback data archive can use in the tablespace. The default is unlimited. Unless your space quota on the first tablespace is unlimited, you must specify this value, or else an ORA- error is raised. You need to provide the retention time (the number of days that flashback data archive data for the table is guaranteed to be stored).
The basic workflow to create and use a flashback data archive has only three steps:
1. The archive administrator creates a flashback data archive named fla1, which uses up to 10 GB of the tbs1 tablespace, and whose data will be retained for five years.
2. In the second step, the archive user enables the Flashback Data Archive. If Automatic Undo Management is disabled, you receive an ORA- error when you try to modify the table.
3. The third step shows the access of historical data with an AS OF query.
394
Configuring a Default Flashback Data Archive
Using a default flashback archive:
1. Create a default flashback data archive:
CREATE FLASHBACK ARCHIVE DEFAULT fla2 TABLESPACE tbs1 QUOTA 10G RETENTION 2 YEAR;
2. Enable history tracking for a table:
ALTER TABLE stock_data FLASHBACK ARCHIVE;
Note: The name of the flashback data archive is not needed because the default one is used.
3. Disable history tracking:
ALTER TABLE stock_data NO FLASHBACK ARCHIVE;
Configuring a Default Flashback Data Archive
In the FLASHBACK ARCHIVE clause, you can specify the flashback data archive in which the historical data for the table is stored. By default, the system has no default flashback data archive. In the preceding example, the default flashback data archive is specified for the system. You can create a default flashback data archive in one of two ways: 1. Specify the name of an existing flashback data archive in the SET DEFAULT clause of the ALTER FLASHBACK ARCHIVE statement. 2. Include DEFAULT in the CREATE FLASHBACK ARCHIVE statement when you create the flashback data archive. You enable and disable flashback archiving for a table with the ALTER TABLE command. You can assign the internal archive table to a specific flashback data archive by specifying the flashback data archive name; if the name is omitted, the default flashback data archive is used. Specify NO FLASHBACK ARCHIVE to disable archiving for a table.
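A hedged example of the first method, reusing the archive created above (changing the system default is typically done by an administrative user):
ALTER FLASHBACK ARCHIVE fla2 SET DEFAULT;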
395
Filling the Flashback Data Archive Space
What happens when your flashback data archive gets full?
At 90% space usage:
Errors are raised, for example ORA-55623 "Flashback Archive \"%s\" is blocking and tracking on all tables is suspended"
An alert log entry is generated
Tracking is suspended
Filling the Flashback Data Archive Space
When you are out of space in a flashback data archive, the FBDA process and all foreground processes that generate tracked undo raise an ORA error. An alert log entry is added, stating that "Flashback archive fla1 is full, and archiving is suspended." By default, this occurs when 90% of the assigned space has been used. Example from the error message file:
55623, 00000, "Flashback Archive \"%s\" is blocking and tracking on all tables is suspended"
// *Cause: Flashback archive tablespace has run out of space.
// *Action: Add tablespace or increase tablespace quota for the flashback archive.
A related error, with the same recommended action, is raised as a warning when the flashback archive tablespace quota is running out.
396
Maintaining Flashback Data Archives
1. Adding space:
ALTER FLASHBACK ARCHIVE fla1 ADD TABLESPACE tbs3 QUOTA 5G;
2. Changing retention time:
ALTER FLASHBACK ARCHIVE fla1 MODIFY RETENTION 2 YEAR;
3. Purging data:
ALTER FLASHBACK ARCHIVE fla1 PURGE BEFORE TIMESTAMP(SYSTIMESTAMP - INTERVAL '1' DAY);
4. Dropping a flashback data archive:
DROP FLASHBACK ARCHIVE fla1;
Maintaining Flashback Data Archives
1. Example 1 adds up to 5 GB of the TBS3 tablespace to the FLA1 flashback data archive. (The archive administrator cannot exceed the tablespace quota granted by the DBA.) 2. Example 2 changes the retention time for the FLA1 flashback data archive to two years. 3. Example 3 purges all historical data older than one day from the FLA1 flashback data archive. Normally, purging is done automatically on the day after your retention time expires; you can also override this for ad hoc cleanup. 4. Example 4 drops the FLA1 flashback data archive and its historical data, but not its tablespaces. With the ALTER FLASHBACK ARCHIVE command, you can: change the retention time of a flashback data archive; purge some or all of its data; and add, modify, and remove tablespaces. Note: Removing all tablespaces of a flashback data archive causes an error.
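As a hedged counterpart to example 1 (same illustrative names), a tablespace can also be removed from an archive, as long as it is not the only one left:
ALTER FLASHBACK ARCHIVE fla1 REMOVE TABLESPACE tbs3;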
397
Flashback Data Archive: Examples
1. To enforce digital shredding:
CREATE FLASHBACK ARCHIVE tax7_archive TABLESPACE tbs1 RETENTION 7 YEAR;
2. To access historical data:
SELECT symbol, stock_price FROM stock_data AS OF TIMESTAMP TO_TIMESTAMP (' :59:00', 'YYYY-MM-DD HH24:MI:SS');
3. To recover data:
INSERT INTO employees SELECT * FROM employees AS OF TIMESTAMP TO_TIMESTAMP(' :30:00','YYYY-MM-DD HH24:MI:SS') WHERE name = 'JOE';
Flashback Data Archive: Examples
Organizations require historical data stores for several purposes. Flashback Data Archive provides seamless access to historical data with "as of" queries. You can use Flashback Data Archive for compliance reporting, audit reports, data analysis, and decision support. If you want information in TAX7_ARCHIVE to be deleted automatically the day after seven years are complete, specify a command as shown in example 1. To retrieve the stock price at the close of business on December 31, 2006, use a query as shown in example 2. If you discover that JOE's employee record was deleted by error, and that it still existed at 11:30 on the 12th of June, you can insert it again as shown in example 3.
398
Flashback Data Archive: DDL Restrictions
Using any of the following DDL statements on a table enabled for Flashback Data Archive causes the error ORA-55610:
ALTER TABLE statement that does any of the following:
  Drops, renames, or modifies a column
  Performs partition or subpartition operations
  Converts a LONG column to a LOB column
  Includes an UPGRADE TABLE clause, with or without an INCLUDING DATA clause
DROP TABLE statement
RENAME TABLE statement
TRUNCATE TABLE statement
Flashback Data Archive: DDL Restrictions
For the sake of security and legal compliance, the preceding restrictions ensure that data in a flashback data archive cannot be invalidated.
399
Viewing Flashback Data Archives
Viewing the results:
View Name                      Description
*_FLASHBACK_ARCHIVE            Displays information about flashback data archives
*_FLASHBACK_ARCHIVE_TS         Displays tablespaces of flashback data archives
*_FLASHBACK_ARCHIVE_TABLES     Displays information about tables that are enabled for flashback archiving
Viewing Flashback Data Archives
You can use the data dictionary views to view tracked tables and flashback data archive metadata. To access the USER_FLASHBACK views, you need table ownership privileges. For the others, you need SYSDBA privileges. Examples:
To query the time when the flashback data archives were created:
SELECT FLASHBACK_ARCHIVE_NAME, CREATE_TIME, STATUS FROM DBA_FLASHBACK_ARCHIVE;
To list the tablespaces that are used for flashback data archives:
SELECT * FROM DBA_FLASHBACK_ARCHIVE_TS;
To list the archive table name for a specific table:
SELECT ARCHIVE_TABLE_NAME FROM USER_FLASHBACK_ARCHIVE_TABLES WHERE TABLE_NAME = 'EMPLOYEES';
You cannot retrieve past data from a dynamic performance (V$) view; a query on such a view always returns current data. However, you can perform queries on past data in static data dictionary views, such as *_TABLES.
400
Guidelines and Usage Tips
COMMIT or ROLLBACK before querying past data.
Use of current session settings.
Obtain the SCN with the DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER function.
Compute a past time with: (SYSTIMESTAMP - INTERVAL '10' MINUTE)
Use the System Change Number (SCN) where precision is needed (time stamps have a three-second granularity).
Guidelines and Usage Tips
To ensure database consistency, always perform a COMMIT or ROLLBACK operation before querying past data. Remember that all flashback processing uses the current session settings, such as national language and character set, not the settings that were in effect at the time being queried. To obtain an SCN to use later with a flashback feature, you can use the DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER function. To compute or retrieve a past time to use in a query, use a function return value as a time stamp or an SCN argument; for example, add or subtract an INTERVAL value to or from the value of the SYSTIMESTAMP function. To query past data at a precise time, use an SCN. If you use a time stamp, the actual time queried might be up to three seconds earlier than the time you specify, because the Oracle database uses SCNs internally and maps them to time stamps at a granularity of three seconds.
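A short hedged illustration of these tips (the HR.REGIONS table is borrowed from the workflow example later in this lesson; the ten-minute interval is arbitrary):
-- Capture the current SCN for later use with a flashback feature:
SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM dual;
-- Query data as it was ten minutes ago, computing the time from SYSTIMESTAMP:
SELECT * FROM hr.regions AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '10' MINUTE);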
401
Flashback Transaction Backout
Logical recovery option to roll back a specific transaction and all its dependent transactions
Using undo, redo logs, and supplemental logging
Creating and executing compensating transactions
You finalize the changes with a commit, or roll them back
Faster and easier than the laborious manual approach
Dependent transactions include: write-after-write (WAW) and primary key constraint dependencies, but not foreign key constraints
Flashback Transaction Backout
Flashback Transaction Backout is a logical recovery option to roll back a specific transaction and its dependent transactions while the database remains online. A dependent transaction is related by either a write-after-write (WAW) relationship, in which a transaction modifies the same data that was changed by the target transaction, or a primary key constraint relationship, in which a transaction reinserts the same primary key value that was deleted by the target transaction. Flashback Transaction uses the undo and the redo generated for undo blocks to create and execute a compensating transaction that reverts the affected data to its original state.
402
Flashback Transaction
Setting up Flashback Transaction prerequisites
Stepping through a possible workflow
Using the Flashback Transaction Wizard
Querying transactions with and without dependencies
Choosing backout options and flashing back transactions
Reviewing the results
Flashback Transaction
You can use the Flashback Transaction functionality from within Enterprise Manager or on the command line with the DBMS_FLASHBACK.TRANSACTION_BACKOUT PL/SQL procedure.
403
Prerequisites
… and the database must be in ARCHIVELOG mode
Prerequisites
In order to use this functionality, supplemental logging must be enabled and the correct privileges established. For example, the HR user in the HR schema decides to use Flashback Transaction for the REGIONS table. The SYSDBA ensures that the database is in ARCHIVELOG mode and performs the following setup steps in SQL*Plus:
alter database add supplemental log data;
alter database add supplemental log data (primary key) columns;
grant execute on dbms_flashback to hr;
grant select any transaction to hr;
The HR user needs to either own the tables (as is the case in the preceding example) or have the SELECT, UPDATE, DELETE, and INSERT privileges on them, to allow execution of the compensating undo SQL code.
404
Flashing Back a Transaction
You can flash back a transaction with Enterprise Manager or the command line. EM uses the Flashback Transaction Wizard, which calls the DBMS_FLASHBACK.TRANSACTION_BACKOUT procedure with the NOCASCADE option. If the PL/SQL call finishes successfully, the transaction does not have any dependencies and the single transaction is backed out successfully.
Flashing Back a Transaction
Security Privileges: To flash back or back out a transaction, that is, to create a compensating transaction, you must have the SELECT, FLASHBACK, and DML privileges on all affected tables.
Conditions of Use: Transaction backout is not supported across conflicting DDL. Transaction backout inherits data-type support from LogMiner; see the Oracle Database 11g documentation for supported data types.
Recommendation: When you discover the need for a transaction backout, performance is better if you start the backout operation sooner. Large redo logs and high transaction rates result in slower transaction backout operations. Provide a transaction name for the backout operation to facilitate later auditing; if you do not provide a transaction name, one is generated automatically for you.
405
Possible Workflow Viewing data in a table
Discovering a logical problem
Using Flashback Transaction:
Performing a query
Selecting a transaction
Flashing back a transaction (with no conflicts)
Choosing other backout options (if conflicts exist)
Reviewing Flashback Transaction results
Possible Workflow
Assume that several transactions occurred as indicated below:
connect hr/hr
INSERT INTO hr.regions VALUES (5,'Pole');
COMMIT;
UPDATE hr.regions SET region_name='Poles' WHERE region_id = 5;
UPDATE hr.regions SET region_name='North and South Poles' WHERE region_id = 5;
INSERT INTO hr.countries VALUES ('TT','Test Country',5);
connect sys/<password> as sysdba
ALTER SYSTEM ARCHIVE LOG CURRENT;
406
Viewing Data
To view the data in a table in Enterprise Manager, select Schema > Tables. While viewing the content of the HR.REGIONS table, you discover a logical problem. Region 20 is misnamed. You decide to immediately address this issue.
407
Flashback Transaction Wizard
In Enterprise Manager, select Schema > Tables > HR.REGIONS, then select "Flashback Transaction" from the Actions drop-down list, and click Go. This invokes the Flashback Transaction Wizard for your selected table. The Flashback Transaction: Perform Query page is displayed. Select the appropriate time range and add query parameters. (The more specific you can be, the shorter the search performed by the Flashback Transaction Wizard.) In Enterprise Manager, Flashback Transaction and LogMiner are seamlessly integrated (as this page demonstrates). Without Enterprise Manager, use the DBMS_FLASHBACK.TRANSACTION_BACKOUT procedure, which is described in the PL/SQL Packages and Types Reference. Essentially, you pass an array of transaction IDs as the starting point of the dependency search. For example:
CREATE TYPE XID_ARRAY AS VARRAY(100) OF RAW(8);
CREATE OR REPLACE PROCEDURE TRANSACTION_BACKOUT(
  numberOfXIDs NUMBER,                     -- number of transactions passed as input
  xids         XID_ARRAY,                  -- the list of transaction IDs
  options      NUMBER default NOCASCADE,   -- back out dependent transactions
  timeHint     TIMESTAMP default MINTIME   -- time hint on the transaction start
);
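As a hedged PL/SQL sketch of such a call (the transaction ID is a placeholder, and the use of the SYS.XID_ARRAY type and the DBMS_FLASHBACK option constants is assumed rather than taken from the course material), a NOCASCADE backout might look like this:
DECLARE
  -- Placeholder XID of the offending transaction (16 hex digits = RAW(8))
  txns sys.xid_array := sys.xid_array(HEXTORAW('09000B00C7050000'));
BEGIN
  -- Back out one transaction; this fails if dependent transactions exist
  DBMS_FLASHBACK.TRANSACTION_BACKOUT(1, txns, DBMS_FLASHBACK.NOCASCADE);
  COMMIT;   -- make the compensating transaction permanent (or ROLLBACK to discard it)
END;
/
If the call fails because of dependent transactions, the same call can be retried with DBMS_FLASHBACK.NONCONFLICT_ONLY, DBMS_FLASHBACK.NOCASCADE_FORCE, or DBMS_FLASHBACK.CASCADE, whose behavior is described later in this lesson.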
408
Flashback Transaction Wizard
Flashback Transaction Wizard (continued) The Flashback Transaction: Select Transaction page displays the transactions according to your previously entered specifications. First, display the transaction details to confirm that you are flashing back the correct transaction. Then select the offending transaction and continue with the wizard.
409
Flashback Transaction Wizard
Flashback Transaction Wizard (continued)
The Flashback Transaction Wizard now generates the undo script and flashes back the transaction, but it gives you control over whether to COMMIT this flashback. Click the Transaction ID to review its compensating SQL statements.
410
Flashback Transaction Wizard
Flashback Transaction Wizard (continued)
Before you commit the transaction, you can use the Execute SQL area at the bottom of the Flashback Transaction: Review page to view what the result of your COMMIT will be.
411
Flashback Transaction Wizard
Finishing Up
On the Flashback Transaction: Review page, click the "Show Undo SQL Script" button to view the compensating SQL commands. Click Finish to commit your compensating transaction.
412
Choosing Other Backout Options
The TRANSACTION_BACKOUT procedure checks dependencies, such as:
Write-after-write (WAW)
Primary and unique key constraints
A transaction can have a WAW dependency, which means that a dependent transaction updates or deletes a row that was inserted or updated by the transaction being backed out. This can occur, for example, in a master/detail relationship of primary (or unique) key and mandatory foreign key constraints. To understand the difference between the NONCONFLICT_ONLY and the NOCASCADE_FORCE options, assume that the T1 transaction changes rows R1, R2, and R3 and the T2 transaction changes rows R1, R4, and R5. In this scenario, both transactions update row R1, so it is a "conflicting" row, and the T2 transaction has a WAW dependency on the T1 transaction. With the NONCONFLICT_ONLY option, R2 and R3 are backed out, because there is no conflict on them and it is assumed that you know what to do with the R1 row. With the NOCASCADE_FORCE option, all three rows (R1, R2, and R3) are backed out. Note: This screenshot is not part of the workflow example, but shows additional details of a more complex situation.
413
Choosing Other Backout Options
Choosing Other Backout Options (continued) The Flashback Transaction Wizard works as follows: If the DBMS_FLASHBACK.TRANSACTION_BACKOUT procedure with the NOCASCADE option fails (because there are dependent transactions), you can change the recovery options. With the NONCONFLICT_ONLY option, nonconflicting rows within a transaction are backed out, which implies that database consistency is maintained (although the transaction atomicity is broken for the sake of data repair). If you want to forcibly back out the given transactions, without paying attention to the dependent transactions, use the NOCASCADE_FORCE option. The server just executes the compensating DML commands for the given transactions in reverse order of their commit times. If no constraints break, you can proceed to commit the changes, otherwise roll back. To initiate the complete removal of the given transactions and all their dependents in a post order fashion, use the CASCADE option.
414
Final Steps Without EM
After choosing your backout option, the dependency report is generated in the DBA_FLASHBACK_TXN_STATE and DBA_FLASHBACK_TXN_REPORT views. Review the dependency report, which shows all transactions that were backed out, and then either commit the changes to make them permanent or roll back to discard them.
Final Steps Without EM
The DBA_FLASHBACK_TXN_STATE view contains the current state of a transaction: whether it is alive in the system or effectively backed out. This view is atomically maintained with the compensating transaction. For each compensating transaction, there can be multiple rows, where each row provides the dependency relation between the transactions that have been compensated by the compensating transaction. The DBA_FLASHBACK_TXN_REPORT view provides detailed information about all compensating transactions that have been committed in the database. Each row in this view is associated with one compensating transaction. For a detailed description of these views, see the Oracle Database Reference.
415
Viewing Flashback Transaction Metadata
View Name                  Description
*_FLASHBACK_TXN_REPORT     Displays related XML information
*_FLASHBACK_TXN_STATE      Displays the transaction identifiers for backed-out transactions
SQL> SELECT * FROM DBA_FLASHBACK_TXN_STATE;
COMPENSATING_XID   XID   BACKOUT_MODE   DEPENDENT_XID   USER#
…
Viewing Flashback Transaction Metadata
You can use the data dictionary views to view information about Flashback Transaction Backouts. Sample content of DBA_FLASHBACK_TXN_REPORT:
COMPENSATING_XID   COMPENSATING_TXN_NAME   COMMIT_TI   XID_REPORT   USER#
…   26-JUN-07   <?xml version="1.0" encoding="ISO…"?> <COMP_XID_REPORT XID="…
416
Using LogMiner Powerful audit tool for Oracle databases
Direct access to redo logs
User interfaces: SQL command line, and a graphical user interface (GUI) integrated with Enterprise Manager
Using LogMiner
What you already know: LogMiner is a powerful audit tool for Oracle databases, which allows you to easily locate changes in the database, enabling sophisticated data analyses and providing undo capabilities to roll back logical data corruptions or user errors. LogMiner directly accesses the Oracle redo logs, which are complete records of all activities performed on the database, and the associated data dictionary. The tool offers two interfaces: a SQL command line and a GUI. What is new: Enterprise Manager Database Control now has an interface for LogMiner. In prior releases, administrators were required to install and use the stand-alone Java console for LogMiner. With this new interface, administrators have a task-based, intuitive approach to using LogMiner, which improves its manageability. In Enterprise Manager, select Availability > View and Manage Transactions. LogMiner supports the following activities:
Specifying query parameters
Stopping the query and showing partial results, if the query takes a long time
Partial querying, then showing the estimated complete query time
Saving the query result
Re-mining or refining the query based on initial results
Showing transaction details, dependencies, and the compensating "undo" SQL script
Flashing back and committing the transaction
For more details, see the High-Availability eStudy and documentation.
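For the SQL command-line interface, here is a minimal hedged sketch (the HR.REGIONS table comes from the workflow example earlier in this lesson; the one-hour window and the use of the online catalog with CONTINUOUS_MINE are assumptions about your configuration):
BEGIN
  -- Mine the last hour of redo, using the online catalog as the LogMiner dictionary
  DBMS_LOGMNR.START_LOGMNR(
    STARTTIME => SYSDATE - 1/24,
    ENDTIME   => SYSDATE,
    OPTIONS   => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE);
END;
/
-- Show the redo and undo SQL for changes made to HR.REGIONS
SELECT username, sql_redo, sql_undo
FROM   v$logmnr_contents
WHERE  seg_owner = 'HR' AND seg_name = 'REGIONS';
-- End the LogMiner session
EXECUTE DBMS_LOGMNR.END_LOGMNR;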
417
Summary In this lesson, you should have learned how to:
Describe new and enhanced features for Flashback and LogMiner
Prepare your database for flashback
Create, change, and drop a flashback data archive
View flashback data archive metadata
Set up Flashback Transaction prerequisites
Query transactions with and without dependencies
Choose backout options and flash back transactions
Use EM LogMiner
418
Practice 11: Overview Using Flashback Technology
This practice covers the following topics: Using Flashback Data Archive Using Flashback Transaction Backout
419
Diagnosability Enhancements
420
Objectives After completing this lesson, you should be able to:
Set up Automatic Diagnostic Repository
Use Support Workbench
Run health checks
Use SQL Repair Advisor
421
Oracle Database 11g R1 Fault Management
Goal: Reduce Time to Resolution
Change assurance and automatic health checks
Automatic Diagnostic Workflow
Intelligent resolution
Proactive patching
Diagnostic Solution Delivery
(The slide groups these capabilities under the headings Prevention and Resolution.)
Oracle Database 11g R1 Fault Management
The goals of the fault diagnosability infrastructure are the following:
Detecting problems proactively
Limiting damage and interruptions after a problem is detected
Reducing problem diagnostic time
Reducing problem resolution time
Simplifying customer interaction with Oracle Support
422
Ease Diagnosis: Automatic Diagnostic Workflow
(Diagram: first-failure capture and automatic incident creation feed the Automatic Diagnostic Repository; a critical error raises a DBA alert, targeted health checks run, and the EM Support Workbench assists with SR filing, packaging incident information, checking whether the problem is a known bug, and applying a patch or data repair.)
Ease Diagnosis: Automatic Diagnostic Workflow
An always-on, in-memory tracing facility enables database components to capture diagnostic data upon first failure for critical errors. A special repository, called the Automatic Diagnostic Repository, is automatically maintained to hold diagnostic information about critical error events. This information can be used to create incident packages to be sent to Oracle Support Services for investigation. Here is a possible workflow for a diagnostic session:
1. An incident causes an alert to be raised in Enterprise Manager (EM).
2. The DBA views the alert via the EM Alert page.
3. The DBA drills down to the incident and problem details.
4. The DBA or Oracle Support Services decides or asks for that information to be packaged and sent to Oracle Support Services via MetaLink. The DBA can add files to the data to be packaged automatically.
423
Automatic Diagnostic Repository
(Diagram: the ADR base, set by DIAGNOSTIC_DEST (defaulting to $ORACLE_BASE or $ORACLE_HOME/log), replaces the BACKGROUND_DUMP_DEST, USER_DUMP_DEST, and CORE_DUMP_DEST locations. Under the ADR base, diag/rdbms/<DB Name>/<SID> is the ADR home, with subdirectories such as alert (log.xml), cdump, hm, incident/incdir_1 … incdir_n, incpkg, metadata, and trace (alert_SID.log); the data is accessed through ADRCI, V$DIAG_INFO, and the Support Workbench.)
Automatic Diagnostic Repository (ADR)
ADR is a file-based repository for database diagnostic data such as traces, incident dumps and packages, the alert log, Health Monitor reports, core dumps, and more. It has a unified directory structure across multiple instances and multiple products, stored outside of any database; it is therefore available for problem diagnosis when the database is down. Beginning with Oracle Database 11g R1, the database, Automatic Storage Management (ASM), Cluster Ready Services (CRS), and other Oracle products or components store all diagnostic data in ADR. Each instance of each product stores diagnostic data underneath its own ADR home directory. For example, in a Real Application Clusters environment with shared storage and ASM, each database instance and each ASM instance has a home directory within ADR. ADR's unified directory structure uses consistent diagnostic data formats across products and instances, and a unified set of tools enables customers and Oracle Support to correlate and analyze diagnostic data across multiple instances. Starting with Oracle Database 11g R1, the traditional …_DUMP_DEST initialization parameters are ignored. The ADR root directory is known as the ADR base. Its location is set by the DIAGNOSTIC_DEST initialization parameter. If this parameter is omitted or left null, the database sets DIAGNOSTIC_DEST upon startup as follows: if the environment variable ORACLE_BASE is set, DIAGNOSTIC_DEST is set to $ORACLE_BASE; if ORACLE_BASE is not set, DIAGNOSTIC_DEST is set to $ORACLE_HOME/log.
424
Notes only slide Automatic Diagnostic Repository (ADR) (continued)
Within the ADR base, there can be multiple ADR homes, where each ADR home is the root directory for all diagnostic data for a particular instance of a particular Oracle product or component. The location of an ADR home for a database is shown in the graphic on the preceding slide. Also, two alert files are now generated. One is textual, exactly like the alert file used with previous releases of the Oracle database, and is located under the TRACE directory of each ADR home. In addition, an alert message file conforming to the XML standard is stored in the ALERT subdirectory inside the ADR home. You can view the alert log in text format (with the XML tags stripped) with Enterprise Manager and with the ADRCI utility. The graphic in the slide shows you the directory structure of an ADR home. The INCIDENT directory contains multiple subdirectories, where each subdirectory is named for a particular incident and contains dumps pertaining only to that incident. The HM directory contains the checker run reports generated by the Health Monitor. There is also a METADATA directory that contains important files for the repository itself. You can compare this to a database dictionary; this dictionary can be queried using ADRCI. The ADR Command Interpreter (ADRCI) is a utility that you can use to perform all of the tasks permitted by the Support Workbench, but in a command-line environment. The ADRCI utility also enables you to view the names of the trace files in ADR, and to view the alert log with XML tags stripped, with and without content filtering. In addition, you can use V$DIAG_INFO to list some important ADR locations.
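A hedged example of pointing the ADR base at a specific directory (the path is illustrative; this assumes the parameter may be changed dynamically in your release, and new diagnostic data is then written under the new location):
ALTER SYSTEM SET diagnostic_dest = '/u01/app/oracle' SCOPE=BOTH;   -- illustrative path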
425
ADRCI: The ADR Command-Line Tool
Allows interaction with ADR from the OS prompt
Can invoke IPS from the command line instead of EM
DBAs should still prefer the EM Support Workbench, which:
Leverages the same toolkit/libraries that ADRCI is built upon
Provides an easy-to-follow GUI
ADRCI> show incident
ADR Home = /u01/app/oracle/product/11.1.0/db_1/log/diag/rdbms/orcl/orcl:
*****************************************************************************
INCIDENT_ID   PROBLEM_KEY                      CREATE_TIME
…             ORA-600_dbgris01:1,_addr=0xa…    …-JAN-…
…             ORA-600_dbgris01:12,_addr=0xa…   …-JAN-…
2 incident info records fetched
ADRCI>
ADRCI: The ADR Command-Line Tool
ADRCI is a command-line tool that is part of the fault diagnosability infrastructure introduced in Oracle Database 11g. ADRCI enables you to:
View diagnostic data within the Automatic Diagnostic Repository (ADR).
Package incident and problem information into a zip file for transmission to Oracle Support. This is done using a service called the Incident Package Service (IPS).
ADRCI has a rich command set, and can be used in interactive mode or within scripts. In addition, ADRCI can execute scripts of ADRCI commands in the same way that SQL*Plus executes scripts of SQL and PL/SQL commands. There is no need to log in to ADRCI, because the data in ADR is not intended to be secure; ADR data is secured only by operating system permissions on the ADR directories. The easiest way to package and otherwise manage diagnostic data is with the Support Workbench of Oracle Enterprise Manager. ADRCI provides a command-line alternative to most of the functionality of the Support Workbench, and adds capabilities such as listing and querying trace files. The example shows an ADRCI session that lists all open incidents stored in ADR. Note: For more information about ADRCI, refer to the Oracle Database Utilities guide.
426
V$DIAG_INFO
SQL> SELECT * FROM V$DIAG_INFO;
NAME                   VALUE
Diag Enabled           TRUE
ADR Base               /u01/app/oracle
ADR Home               /u01/app/oracle/diag/rdbms/orcl/orcl
Diag Trace             /u01/app/oracle/diag/rdbms/orcl/orcl/trace
Diag Alert             /u01/app/oracle/diag/rdbms/orcl/orcl/alert
Diag Incident          /u01/app/oracle/diag/rdbms/orcl/orcl/incident
Diag Cdump             /u01/app/oracle/diag/rdbms/orcl/orcl/cdump
Health Monitor         /u01/app/oracle/diag/rdbms/orcl/orcl/hm
Default Trace File     /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_11424.trc
Active Problem Count   3
Active Incident Count  8
V$DIAG_INFO
The V$DIAG_INFO view lists all important ADR locations:
ADR Base: path of the ADR base
ADR Home: path of the ADR home for the current database instance
Diag Trace: location of the text alert log and background/foreground process trace files
Diag Alert: location of an XML version of the alert log
…
Default Trace File: path to the trace file for your session; SQL trace files are written here
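A hedged one-liner using the same view to locate your session's trace file (the NAME value is the one shown in the sample output above):
SELECT value FROM v$diag_info WHERE name = 'Default Trace File';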
427
Location for Diagnostic Traces
Diagnostic Data             Previous Location            ADR Location
Foreground process traces   USER_DUMP_DEST               $ADR_HOME/trace
Background process traces   BACKGROUND_DUMP_DEST         $ADR_HOME/trace
Alert log data              BACKGROUND_DUMP_DEST         $ADR_HOME/alert & $ADR_HOME/trace
Core dumps                  CORE_DUMP_DEST               $ADR_HOME/cdump
Incident dumps              USER|BACKGROUND_DUMP_DEST    $ADR_HOME/incident/incdir_n
Location for Diagnostic Traces
The table describes the different classes of trace data and dumps that reside both in Oracle Database 10g and in Oracle Database 11g. With Oracle Database 11g, there is no distinction between foreground and background trace files: both types of files go into the $ADR_HOME/trace directory. All nonincident traces are stored inside the TRACE subdirectory. This is the main difference compared with previous releases, where critical error information was dumped into the corresponding process trace files; starting with Oracle Database 11g, incident dumps are placed in files separate from the normal process trace files. Note: The main difference between a trace and a dump is that a trace is more of a continuous output, such as when SQL tracing is turned on, whereas a dump is a one-time output in response to an event such as an incident. Also, a core is a binary memory dump that is port specific. In this table, $ADR_HOME is used to denote the ADR home directory; however, there is no official environment variable called ADR_HOME.
ADR trace = Oracle Database 10g trace – critical error trace
428
Viewing the Alert Log Using Enterprise Manager
You can view the alert log with a text editor, with Enterprise Manager, or with the ADRCI utility. To view the alert log with Enterprise Manager: 1. Access the Database Home page in Enterprise Manager. 2. Under Related Links, click Alert Log Contents. The View Alert Log Contents page appears. 3. Select the number of entries to view, and then click Go.
429
Viewing the Alert Log Using ADRCI
adrci>> show alert -tail
ADR Home = /u01/app/oracle/diag/rdbms/orcl/orcl:
*************************************************************************
ORA-1654: unable to extend index SYS.I_H_OBJ#_COL# by 128 in tablespace SYSTEM
Thread 1 advanced to log sequence 400
Current log# 3 seq# 400 mem# 0: +DATA/orcl/onlinelog/group_…
Current log# 3 seq# 400 mem# 1: +DATA/orcl/onlinelog/group_…
…
Thread 1 advanced to log sequence 401
Current log# 1 seq# 401 mem# 0: +DATA/orcl/onlinelog/group_…
Current log# 1 seq# 401 mem# 1: +DATA/orcl/onlinelog/group_…
DIA-48223: Interrupt Requested - Fetch Aborted - Return Code [1]
adrci>>
adrci>> SHOW ALERT -P "MESSAGE_TEXT LIKE '%ORA-600%'"
ADR Home = /u01/app/oracle/diag/rdbms/orcl/orcl:
*************************************************************************
adrci>>
Viewing the Alert Log Using ADRCI
You can also use ADRCI to view the content of your alert log file. Optionally, you can change the current ADR home: use the SHOW HOMES command to list all ADR homes and the SET HOMEPATH command to change the current ADR home. Ensure that operating system environment variables such as ORACLE_HOME are set properly, and then enter the following command at the operating system command prompt: adrci. The utility starts and displays its prompt as shown in the example. Then use the SHOW ALERT command. To limit the output, you can look at the last records using the -TAIL option. This displays the last portion of the alert log (about 20 to 30 messages) and then waits for more messages to arrive in the alert log; as each message arrives, it is appended to the display. This command enables you to perform live monitoring of the alert log. Press CTRL+C to stop waiting and return to the ADRCI prompt. You can also specify the number of lines to be printed if you want. You can also filter the output of the SHOW ALERT command, as shown in the second example, to display only those alert log messages that contain the string ORA-600. Note: ADRCI allows you to spool the output to a file, exactly as in SQL*Plus.
430
Problems and Incidents
(Diagram: a critical error creates a problem, identified by a problem ID and a problem key; each occurrence of the problem is an incident with its own incident ID and traces stored in ADR. Incidents transition automatically through the states Collecting, Ready, Tracking, Data-Purged, and Closed; flood control limits dumping, MMON auto-purges expired data, and incidents can also be created manually by the DBA for noncritical errors. The resulting package is sent to Oracle Support.)
Problems and Incidents
To facilitate diagnosis and resolution of critical errors, the fault diagnosability infrastructure introduces two concepts for the Oracle database: problems and incidents. A problem is a critical error in the database. Problems are tracked in ADR. Each problem is identified by a unique problem ID and has a problem key, which is a set of attributes that describe the problem. The problem key includes the ORA error number, error parameter values, and other information. Here is a possible list of critical errors:
All internal errors (ORA-60x errors)
All system access violations (SEGV, SIGBUS)
ORA-4020 (Deadlock on library object), ORA-8103 (Object no longer exists), ORA-1410 (Invalid ROWID), ORA-1578 (Data block corrupted), ORA (Node eviction), ORA-255 (Database is not mounted), ORA-376 (File cannot be read at this time), ORA-4030 (Out of process memory), ORA-4031 (Unable to allocate more bytes of shared memory), ORA-355 (The change numbers are out of order), ORA-356 (Inconsistent lengths in change description), ORA-353 (Log corruption), ORA-7445 (Operating system exception)
An incident is a single occurrence of a problem. When a problem occurs multiple times, as is often the case, an incident is created for each occurrence. Incidents are tracked in ADR. Each incident is identified by a numeric incident ID, which is unique within an ADR home.
431
Notes only page Problems and Incidents (continued)
When an incident occurs, the database makes an entry in the alert log, gathers diagnostic data about the incident (a stack trace, the process state dump, and other dumps of important data structures), tags the diagnostic data with the incident ID, and stores the data in an ADR subdirectory created for that incident. Each incident has a problem key and is mapped to a single problem. Two incidents are considered to have the same root cause if their problem keys match. Large amounts of diagnostic information can be created very quickly if a large number of sessions stumble across the same critical error. Having the diagnostic information for more than a small number of the incidents is not required. That is why ADR provides flood control, so that only a certain number of incidents under the same problem can be dumped in a given time interval. Note that flood-controlled incidents are still recorded as incidents; they only skip the dump actions. By default, only five dumps per hour are allowed for a given problem. You can view a problem as a set of incidents that are perceived to have the same symptoms. The main reason to introduce this concept is to make it easier for users to manage errors on their systems. For example, a symptom that occurs 20 times should be reported to Oracle only once. Mostly, you will manage problems instead of incidents, using IPS to package a problem to be sent to Oracle Support. Most commonly, incidents are created automatically when a critical error occurs. However, you can also create an incident manually, via the GUI provided by the EM Support Workbench. Manual incident creation is mostly done when you want to report problems that are not accompanied by critical errors raised inside the Oracle code. As time goes by, more and more incidents accumulate in ADR. A retention policy allows you to specify how long to keep the diagnostic data. ADR incidents are controlled by two different policies: The incident metadata retention policy controls how long the metadata is kept around; this policy has a default setting of one year. The incident files and dumps retention policy controls how long generated dump files are kept around; this policy has a default setting of one month. You can change these settings by using the Incident Package Configuration link on the EM Support Workbench page. Inside the RDBMS component, MMON is responsible for automatically purging expired ADR data.
432
Notes only page Problems and Incidents (continued)
The status of an incident reflects the state of the incident. An incident can be in any one of the following states:
Collecting: The incident has been newly created and is in the process of collecting diagnostic information. In this state, the incident data can be incomplete and should not be packaged, and should be viewed with discretion.
Ready: The data collection phase has completed. The incident is now ready to be used for analysis, or to be packaged to be sent to Oracle Support.
Tracking: The DBA is working on the incident, and prefers the incident to be kept in the repository indefinitely. You have to manually change the incident status to this value.
Closed: The incident is now in a done state. In this state, ADR can elect the incident to be purged after it passes its retention policy.
Data-Purged: The associated files have been removed from the incident. In some cases, even if the incident files are still physically present, it is not advisable to look at them because they can be in an inconsistent state. Note that the incident metadata itself is still valid for viewing.
You can view an incident status by using either ADRCI (show incident -mode detail) or directly in the Support Workbench. If an incident has been in either the Collecting or the Ready state for over twice its retention length, the incident automatically moves to the Closed state. You can manually purge incident files. For simplicity, problem metadata is internally maintained by ADR. Problems are automatically created when the first incident (of the problem key) occurs. The problem metadata is removed after its last incident is removed from the repository. Note: It is not possible to disable automatic incident creation for critical errors.
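A hedged ADRCI sketch of the housekeeping tasks mentioned above (the 43200-minute age equals 30 days; the PURGE options are assumed from the ADRCI command syntax):
adrci> show incident -mode detail
adrci> purge -age 43200 -type incident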
433
Incident Packaging Service (IPS)
Uses rules to correlate all relevant dumps and traces from ADR for a given problem and allow you to package them to ship to Oracle Support Rules can involve files that were generated around the same time, and associated with the same client, same error codes, and so on. DBAs can explicitly add/edit or remove files before packaging. Access IPS through either EM or ADRCI. Incident Packaging Service With the Incident Packaging Service (IPS), you can automatically and easily gather all diagnostic data (traces, dumps, health check reports, SQL test cases, and more) pertaining to a critical error and package the data into a zip file suitable for transmission to Oracle Support. Because all diagnostic data relating to a critical error are tagged with that error’s incident number, you do not have to search through trace files, dump files, and so on to determine the files that are required for analysis; the Incident Packaging Service identifies all required files automatically and adds them to the package.
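A hedged command-line sketch of packaging a problem with IPS from ADRCI (the problem number, the package number reported back by the first command, and the output path are all placeholders):
adrci> ips create package problem 1 correlate basic
adrci> ips generate package 1 in /tmp complete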
434
Incident Packages
An incident package is a logical structure inside ADR representing one or more problems. A package is a zip file containing dump information related to an incident package, for example:
Pkg_database_ORA_600__qksdie_-_feature_QKSFM_CVM__ _COM_1.zip
By default, only the first three and last three incidents of each problem are included in an incident package. You can generate complete or incremental zip files. (Diagram: the ADR home directory tree, with package directories pkg_1 … pkg_n under the incpkg subdirectory.)
Incident Packages
To upload diagnostic data to Oracle Support Services, you first collect the data in an incident package. When you create an incident package, you select one or more problems to add to the incident package. The Support Workbench then automatically adds to the incident package the incident information, trace files, and dump files associated with the selected problems. Because a problem can have many incidents (many occurrences of the same problem), by default only the first three and last three incidents for each problem are added to the incident package. You can change this default number on the Incident Packaging Configuration page accessible from the Support Workbench page. After the incident package is created, you can add any type of external file to the incident package, remove selected files from the incident package, or edit selected files in the incident package to remove sensitive data. An incident package is a logical construct only, until you create a physical file from the incident package contents. That is, an incident package starts out as a collection of metadata in ADR. As you add and remove incident package contents, only the metadata is modified. When you are ready to upload the data to Oracle Support Services, you invoke either a Support Workbench or an ADRCI function that gathers all the files referenced by the metadata, places them into a zip file, and then uploads the zip file to MetaLink.
435
EM Support Workbench: Overview
Wizard that guides you through the process of handling problems You can perform the following tasks with the Support Workbench: View details on problems and incidents. Run health checks. Generate additional diagnostic data. Run advisors to help resolve problems. Create and track service requests through MetaLink. Generate incident packages. Close problems when resolved. EM Support Workbench: Overview The Support Workbench is an Enterprise Manager wizard that helps you through the process of handling critical errors. It displays incident notifications, presents incident details, and enables you to select incidents for further processing. Further processing includes running additional health checks, invoking the IPS to package all diagnostic data about the incidents, adding SQL test cases and selected user files to the package, filing a technical assistance request (TAR) with Oracle Support, shipping the packaged incident information to Oracle Support, and tracking the TAR through its life cycle. You can perform the following tasks with the Support Workbench: View details on problems and incidents. Manually run health checks to gather additional diagnostic data for a problem. Generate additional dumps and SQL test cases to add to the diagnostic data for a problem. Run advisors to help resolve problems. Create and track a service request through MetaLink, and add the service request number to the problem data. Collect all diagnostic data relating to one or more problems into an incident package and then upload the incident package to Oracle Support Services. Close the problem when the problem is resolved.
436
Oracle Configuration Manager
Enterprise Manager Support Workbench uses Oracle Configuration Manager to upload the physical files generated by IPS to MetaLink. If Oracle Configuration Manager is not installed or properly configured, the upload may fail. In this case, a message is displayed with a path to the incident package zip file and a request that you upload the file to Oracle Support manually. You can upload manually with MetaLink. During an Oracle Database 11g installation, the Oracle Universal Installer has a special Oracle Configuration Manager Registration screen, shown in the slide. On that screen, you need to select the Enable Oracle Configuration Manager check box and accept the license agreement before you can enter your Customer Identification Number (CSI), your MetaLink account username, and your country code. If you do not configure Oracle Configuration Manager, you are still able to manually upload incident packages to MetaLink. Note: For more information about Oracle Configuration Manager, see the "Oracle Configuration Manager Installation and Administration Guide."
437
EM Support Workbench Roadmap
1. View critical error alerts in Enterprise Manager.
2. View problem details.
3. Gather additional diagnostic information.
4. Create a service request.
5. Package and upload diagnostic data to Oracle Support.
6. Track the SR and implement repairs.
7. Close incidents.
EM Support Workbench Roadmap
The graphic gives a summary of the tasks that you complete to investigate, report, and in some cases, resolve a problem using Enterprise Manager Support Workbench:
1. Start by accessing the Database Home page in Enterprise Manager and reviewing critical error alerts. Select an alert for which to view details.
2. Examine the problem details and view a list of all incidents that were recorded for the problem. Display findings from any health checks that were automatically run.
3. Optionally, run additional health checks and invoke the SQL Test Case Builder, which gathers all required data related to a SQL problem and packages the information in a way that enables the problem to be reproduced by Oracle Support. The type of information that the SQL Test Case Builder gathers includes the query being executed, table and index definitions (but no data), optimizer statistics, and initialization parameter settings.
4. Create a service request with MetaLink and optionally record the service request number with the problem information.
5. Invoke a wizard that automatically packages all gathered diagnostic data for a problem and uploads the data to Oracle Support. Optionally, edit the data to remove sensitive information before uploading.
6. Optionally, maintain an activity log for the service request in the Support Workbench. Run Oracle advisors to help repair SQL failures or corrupted data.
7. Set the status for one, some, or all incidents for the problem to Closed.
438
View Critical Error Alerts in Enterprise Manager
You begin the process of investigating problems (critical errors) by reviewing critical error alerts on the Database Home page. To view critical error alerts, access the Database Home page in Enterprise Manager. From the Home page, you can look at the Diagnostic Summary section from where you can click the Active Incidents link if there are incidents. You can also use the Alerts section and look for critical alerts flagged as Incidents. When you click the Active Incidents link, you access the Support Workbench page on which you can retrieve details about all problems and corresponding incidents. From there, you can also retrieve all Health Monitor checker run and created packages. Note: The tasks described in this section are all Enterprise Manager based. You can also accomplish all of these tasks with the ADRCI command-line utility. See Oracle Database Utilities for more information about the ADRCI utility.
439
View Problem Details
On the Problems subpage on the Support Workbench page, click the ID of the problem you want to investigate. This takes you to the corresponding Problem Details page. On this page, you can see all incidents that are related to your problem. You can associate your problem with a MetaLink service request and bug number. In the Investigate and Resolve section of the page, you have a Self Service subpage that has direct links to the operation you can perform on this problem. In the same section, the Oracle Support subpage has direct links to MetaLink. The Activity Log subpage shows you the system-generated operations that have occurred on your problem so far. This subpage allows you to add your own comments while investigating your problem. From the Incidents subpage, you can click a related incident ID to get to the corresponding Incident Details page.
440
View Incident Details
After the Incident Details page opens, the Dump Files subpage appears and lists all corresponding dump files. You can then click the eyeglass icon for a particular dump file to visualize the file content with its various sections.
441
View Incident Details (continued)
On the Incident Details page, click Checker Findings to view the Checker Findings subpage. This page displays findings from any health checks that were automatically run when the critical error was detected. Most of the time, you have the option to select one or more findings, and invoke an advisor to fix the issue.
442
Create a Service Request
Before you can package and upload diagnostic information for the problem to Oracle Support, you must create a service request. To create a service request, you need to go to MetaLink first. MetaLink can be accessed directly from the Problem Details page when you click the Go to Metalink button in the Investigate and Resolve section of the page. When MetaLink opens, log in and create a service request in the usual manner. When done, you have the option to enter that service request for your problem. This is entirely optional and is for your reference only. In the Summary section, click the Edit button that is adjacent to the SR# label, and in the window that opens, enter the SR#, and then click OK.
443
Package and Upload Diagnostic Data to Oracle Support
The Support Workbench provides two methods for creating and uploading an incident package: the Quick Packaging method and the Custom Packaging method. The example in the slide shows you how to use Quick Packaging. Quick Packaging is a more automated method with a minimum of steps. You select a single problem, provide an incident package name and description, and then schedule the incident package upload, either immediately or at a specified date and time. The Support Workbench automatically places diagnostic data related to the problem into the incident package, finalizes the incident package, creates the zip file, and then uploads the file. With this method, you do not have the opportunity to add, edit, or remove incident package files or add other diagnostic data such as SQL test cases. To package and upload diagnostic data to Oracle Support: 1. On the Problem Details page, in the Investigate and Resolve section, click Quick Package. The Create New Package page of the Quick Packaging wizard appears. 2. Enter a package name and description. 3. Enter the service request number to identify your problem. 4. Click Next, and then proceed with the remaining pages of the Quick Packaging wizard. Click Submit on the Review page to upload the package.
444
Track the SR and Implement Repairs
After uploading diagnostic information to Oracle Support, you may perform various activities to track the service request and implement repairs. Among these activities are the following: Add an Oracle bug number to the problem information. To do so, on the Problem Details page, click the Edit button that is adjacent to the Bug# label. This is for your reference only. Add comments to the problem activity log. To do so, complete the following steps: 1. Access the Problem Details page for the problem. 2. Click Activity Log to display the Activity Log subpage. 3. In the Comment field, enter a comment, and then click Add Comment. Your comment is recorded in the activity log. Respond to a request by Oracle Support to provide additional diagnostics. Your Oracle Support representative may provide instructions for gathering and uploading additional diagnostics.
445
Track the SR and Implement Repairs
Track the SR and Implement Repairs (continued) On the Incident Details page, you can run an Oracle advisor to implement repairs. Access the suggested advisor in one of the following ways: In the Self-Service tab of the Investigate and Resolve section of the Problem Details page On the Checker Findings subpage of the Incident Details page as shown in the slide The advisors that help you repair critical errors are: Data Recovery Advisor: Corrupted blocks, corrupted or missing files, and other data failures SQL Repair Advisor: SQL statement failures
446
Close Incidents and Problems
When a particular incident is no longer of interest, you can close it. By default, closed incidents are not displayed on the Problem Details page. All incidents, whether closed or not, are purged after 30 days. You can disable purging for an incident on the Incident Details page. To close incidents: 1. Access the Support Workbench home page. 2. Select the desired problem, and then click View. The Problem Details page appears. 3. Select the incidents to close and then click Close. A Confirmation page appears. 4. Click Yes on the Confirmation page to close your incident.
447
Incident Packaging Configuration
As already seen, you can configure various aspects of retention rules and package generation. Using the Support Workbench, you can access the Incident Packaging Configuration page from the Related Links section of the Support Workbench page by clicking the Incident Packaging Configuration link. Here are the parameters that you can change:
Incident Metadata Retention Period: Metadata is basically information about the data. For incidents, it is the incident time, ID, size, problem, and so forth. Data is the actual contents of an incident, such as traces.
Cutoff Age for Incident Inclusion: Incidents are included for packaging only if they fall within the range from the cutoff age to now. If the cutoff age is 90, for instance, the system includes only the incidents from the last 90 days.
Leading Incidents Count: For every problem included in a package, the system selects a certain number of incidents from the problem from the beginning (leading) and the end (trailing). For example, if the problem has 30 incidents, and the leading incident count is 5 and the trailing incident count is 4, the system includes the first 5 incidents and the last 4 incidents.
Trailing Incidents Count: See above.
448
Notes Only Page Incident Packaging Configuration (continued)
Correlation Time Proximity: This parameter is the exact time interval that defines "happened at the same time." There is a concept of incidents/problems correlated to a given incident/problem, that is, problems that seem to have a connection with that problem. One criterion for correlation is time correlation: find the incidents that happened at the same time as the incidents in a problem.
Time Window for Package Content: The time window for content inclusion runs from x hours before the first included incident to x hours after the last included incident (where x is the number specified in this field).
Note: You have access to more parameters if you are using the ADRCI interface. For a complete description of all configurable parameters, issue the ips show configuration command in ADRCI.
449
Custom Packaging: Create New Package
Custom Packaging is a more manual method than Quick Packaging, but gives you greater control over the incident package contents. You can create a new incident package with one or more problems, or you can add one or more problems to an existing incident package. You can then perform a variety of operations on the new or updated incident package, including: Adding or removing problems or incidents Adding, editing, or removing trace files in the incident package Adding or removing external files of any type Adding other diagnostic data such as SQL test cases Manually finalizing the incident package and then viewing incident package contents to determine whether you must edit or remove sensitive data or remove files to reduce incident package size. With the Custom Packaging method, you create the zip file and request upload to Oracle Support as two separate steps. Each of these steps can be performed immediately or scheduled for a future date and time. To package and upload a problem with Custom Packaging: 1. In the Problems subpage at the bottom of the Support Workbench home page, select the first problem that you want to package, and then click Package. 2. On the “Package: Select packaging mode” subpage, select the Custom Packaging option, and then click Continue.
450
Notes only page Custom Packaging: Create New Package (continued)
3. The Custom Packaging: Select Package page appears. To create a new incident package, select the Create New Package option, enter an incident package name and description, and then click OK. To add the selected problems to an existing incident package, select the “Select from Existing Packages” option, select the incident package to update, and then click OK. In the example given in the preceding slide, you decide to create a new package. Notes only page
451
Custom Packaging: Manipulate Incident Package
On the Customize Package page, you get confirmation that your new package has been created. This page displays the incidents that are contained in the incident package, plus a selection of packaging tasks to choose from. You run these tasks against the new incident package or the updated existing incident package. As you can see from the slide, you can exclude or include incidents or files, as well as perform many other tasks.
452
Custom Packaging: Finalize Incident Package
Finalizing an incident package adds correlated files from other components, such as Health Monitor, to the package. Recent trace files and log files are also included in the package. You can finalize a package by clicking the Finish Contents Preparation link in the Packaging Tasks section as shown in the slide. A confirmation page is displayed that lists all files that will be part of the physical package.
453
Custom Packaging: Generate Package
After your incident package has been finalized, you can generate the package file. You need to go back to the corresponding package page and click Generate Upload File. The Generate Upload File page appears. On this page, select the Full or Incremental option to generate a full incident package zip file or an incremental incident package zip file. For a full incident package zip file, all the contents of the incident package (original contents and all correlated data) are always added to the zip file. For an incremental incident package zip file, only the diagnostic information that is new or modified since the last time you created a zip file for the same incident package is added to the zip file. When done, select a Schedule and click Submit. If you scheduled the generation to run immediately, a Processing page appears until packaging is finished. This is followed by the Confirmation page, where you can click OK. Note: The Incremental option is unavailable if a physical file was never created for the incident package.
454
Custom Packaging: Upload Package
After you have generated the physical package, you can go back to the Customize Package page on which you can click the View/Send Uploaded Files link in the Packaging Tasks section. This takes you to the View/Send Upload Files page from where you can select your package, and click the “Send to Oracle” button. The “Send to Oracle” page appears. There, you can enter the service request number for your problem and choose a Schedule. You can then click Submit.
455
Viewing and Modifying Incident Packages
After a package is created, you can always modify it through customization. For example, go to the Support Workbench page and click the Packages tab. This takes you to the Packages subpage. From this page, you can select a package and delete it, or click the package link to go to the Package Details page. There, you can click Customize to go to the Customize Package page from where you can manipulate your package by adding/removing problems, incidents, or files.
456
Creating User-Reported Problems
Critical errors generated internally to the database are automatically added to the Automatic Diagnostic Repository (ADR) and tracked in the Support Workbench. However, there may be a situation in which you want to manually add a problem that you noticed to the ADR so that you can put that problem through the Support Workbench workflow. An example of such a situation would be if the performance of the database or of a particular query suddenly and noticeably degraded. The Support Workbench includes a mechanism for you to create and work with such a user-reported problem. To create a user-reported problem, open the Support Workbench page and click the Create User-Reported Problem link in the Related Links section. This takes you to the Create User-Reported Problem page, which asks you to run a corresponding advisor before continuing. This is necessary only if you are not sure about your problem. If you already know exactly what is going on, select the issue that best describes the type of problem you are encountering and click “Continue with Creation of Problem.” By clicking this button, you basically create a pseudo-problem inside the Support Workbench. This allows you to manipulate this problem using the previously seen Support Workbench workflow for handling critical errors. You end up on a Problem Details page for your issue. Note that at first the problem does not have any diagnostic data associated with it. At this point, you need to create a package and upload the necessary trace files by customizing that package, as described previously.
457
Invoking IPS Using ADRCI
Slide summary (IPS commands): IPS CREATE PACKAGE (PROBLEM | PROBLEMKEY | INCIDENT | SECONDS | TIME), IPS ADD INCIDENT, IPS ADD FILE, IPS COPY IN FILE, IPS COPY OUT FILE, IPS REMOVE FILE, IPS FINALIZE PACKAGE, IPS GENERATE PACKAGE, IPS SET CONFIGURATION. Invoking IPS Using ADRCI Creating a package is a two-step process: you first create the logical package, and then generate the physical package as a zip file. Both steps can be performed using ADRCI commands. To create a logical package, the IPS CREATE PACKAGE command is used. There are several variants of this command that allow you to choose the contents: IPS CREATE PACKAGE creates an empty package. IPS CREATE PACKAGE PROBLEMKEY creates a package based on a problem key. IPS CREATE PACKAGE PROBLEM creates a package based on a problem ID. IPS CREATE PACKAGE INCIDENT creates a package based on an incident ID. IPS CREATE PACKAGE SECONDS creates a package containing all incidents generated from the specified number of seconds ago until now. IPS CREATE PACKAGE TIME creates a package based on the specified time range. It is also possible to add contents to an existing package. For instance: IPS ADD INCIDENT PACKAGE adds an incident to an existing package. IPS ADD FILE PACKAGE adds a file located inside ADR to an existing package. IPS GENERATE PACKAGE generates the physical package file, as described on the following notes page.
458
Notes Only Slide Invoking IPS Using ADRCI (continued)
IPS COPY copies files between ADR and the external file system. It has two forms: IN FILE, to copy an external file into ADR, associating it with an existing package and, optionally, an incident; and OUT FILE, to copy a file from ADR to a location outside ADR. IPS COPY is essentially used to copy a file out of ADR, edit it, and copy it back into ADR. IPS FINALIZE is used to finalize a package for delivery, which means that other components, such as the Health Monitor, are called to add their correlated files to the package. Recent trace files and log files are also included in the package. If required, this step is run automatically when a package is generated. To generate the physical file, the IPS GENERATE PACKAGE command is used. The syntax is: IPS GENERATE PACKAGE <package_id> IN <path> [COMPLETE | INCREMENTAL] It generates a physical zip file for an existing logical package. The file name contains either COM for complete or INC for incremental, followed by a sequence number that is incremented each time a zip file is generated. IPS SET CONFIGURATION is used to set IPS rules. Note: Refer to the Oracle Database Utilities guide for more information about ADRCI. A short example session is sketched below. Notes Only Slide
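The following is a minimal ADRCI packaging session built from the commands above. The problem ID (1), package ID (1), incident ID (12345), and target directory (/tmp) are placeholders for illustration; substitute the identifiers reported by SHOW PROBLEM and SHOW INCIDENT in your own ADR home.
adrci> show problem
adrci> ips create package problem 1
adrci> ips add incident 12345 package 1
adrci> ips finalize package 1
adrci> ips generate package 1 in /tmp complete
The generated zip file can then be uploaded to Oracle Support through the Support Workbench, as described earlier.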
459
Health Monitor: Overview
Slide summary: Health Monitor checks can be run reactively (after a critical error) or manually by the DBA through Enterprise Manager or DBMS_HM; available checks are listed in V$HM_CHECK, runs are recorded in ADR and exposed through V$HM_RUN, and reports can be viewed with DBMS_HM, ADRCI, or EM. Example DB-online checks: Logical Block Check, Undo Segment Check, Table Row Check, Data Block Check, Transaction Check, Table Check, Table-Index Row Mismatch, Table-Index Cross Check, Database Dictionary Check. Example DB-offline checks: Redo Check, Database Cross Check. Health Monitor: Overview Beginning with Release 11g, the Oracle database includes a framework called Health Monitor for running diagnostic checks on various components of the database. Health Monitor checkers examine various components of the database, including files, memory, transaction integrity, metadata, and process usage. These checkers generate reports of their findings as well as recommendations for resolving problems. Health Monitor checks can be run in two ways: Reactive: The fault diagnosability infrastructure can run Health Monitor checks automatically in response to critical errors. Manual: As a DBA, you can manually run Health Monitor checks by using either the DBMS_HM PL/SQL package or the Enterprise Manager interface. In the slide, you can see some of the checks that Health Monitor can run. For a complete description of all possible checks, look at V$HM_CHECK. These health checks fall into one of two categories: DB-online: These checks can be run while the database is open (that is, in OPEN mode or MOUNT mode). DB-offline: In addition to being “runnable” while the database is open, these checks can also be run when the instance is available and the database itself is closed (that is, in NOMOUNT mode).
460
Note only page Health Monitor: Overview (continued)
After a checker has run, it generates a report of its execution. This report contains information about the checker’s findings, including the priorities (low, high, or critical) of the findings, descriptions of the findings and their consequences, and basic statistics about the execution. Health Monitor generates reports in XML and stores the reports in ADR. You can view these reports by using V$HM_RUN, DBMS_HM, ADRCI, or Enterprise Manager. Note: Redo Check and Database Cross Check are DB-offline checks. All other checks are DB-online checks. There are around 25 checks you can run. Note only page
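To see the exact list of checks available in your release, you can query V$HM_CHECK directly from SQL*Plus. The sketch below assumes the NAME, OFFLINE_CAPABLE, and INTERNAL_CHECK columns documented for 11g; verify the column names against your release before relying on it.
SQL> select name, offline_capable from v$hm_check where internal_check = 'N';
Checks flagged as offline capable correspond to the DB-offline category described above.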
461
Running Health Checks Manually: EM Example
Enterprise Manager provides an interface for running Health Monitor checkers. You can find this interface in the Checkers tab on the Advisor Central page. The page lists each checker type, and you can run a checker by clicking it and then OK on the corresponding checker page after you have entered the parameters for the run. The slide shows how you can run the Data Block Checker manually. After a check is completed, you can view the corresponding checker run details by selecting the checker run from the Results table and clicking Details. Checker runs can be reactive or manual. On the Findings subpage you can see the various findings and corresponding recommendations extracted from V$HM_RUN, V$HM_FINDING, and V$HM_RECOMMENDATION. If you click View XML Report on the Runs subpage, you can view the run report in XML format. Viewing the XML report in Enterprise Manager generates the report for the first time if it is not yet generated in your ADR. You can then view the report using ADRCI without needing to generate it.
462
Running Health Checks Manually: PL/SQL Example
SQL> exec dbms_hm.run_check('Dictionary Integrity Check', 'DicoCheck',0,'TABLE_NAME=tab$'); SQL> set long 100000 SQL> select dbms_hm.get_run_report('DicoCheck') from dual; DBMS_HM.GET_RUN_REPORT('DICOCHECK') Basic Run Information (Run Name,Run Id,Check Name,Mode,Status) Input Parameters for the Run TABLE_NAME=tab$ CHECK_MASK=ALL Run Findings And Recommendations Finding Finding Name : Dictionary Inconsistency Finding ID : 22 Type : FAILURE Status : OPEN Priority : CRITICAL Message : SQL dictionary health check: invalid column number 8 on object TAB$ failed Message : Damaged rowid is AAAAACAABAAAS7PAAB - description: Object SCOTT.TABJFV is referenced Running Health Checks Manually: PL/SQL Example You can use the DBMS_HM.RUN_CHECK procedure for running a health check. To call RUN_CHECK, supply the name of the check found in V$HM_CHECK, the name for the run (this is just a label used to retrieve reports later), and the corresponding set of input parameters for controlling its execution. You can view these parameters by using V$HM_CHECK_PARAM. In the example in the slide, you want to run a Dictionary Integrity Check for the TAB$ table. You call this run DICOCHECK, and you do not want to set any timeout for this check. After DICOCHECK is executed, you execute the DBMS_HM.GET_RUN_REPORT function to get the report extracted from V$HM_RUN, V$HM_FINDING, and V$HM_RECOMMENDATION. The output clearly shows you that a critical error was found in TAB$. This table contains an entry for a table with an invalid number of columns. Furthermore, the report gives you the name of the damaged table in TAB$. When you call the GET_RUN_REPORT function, it generates the XML report file in the HM directory of your ADR. For this example, the file is called HMREPORT_DicoCheck.hm. Note: Refer to the Oracle Database PL/SQL Packages and Types Reference for more information about DBMS_HM.
463
Viewing HM Reports Using the ADRCI Utility
adrci> show hm_run
…
ADR Home = /u01/app/oracle/diag/rdbms/orcl/orcl:
*************************************************************************
HM RUN RECORD 1
**********************************************************
RUN_ID
RUN_NAME          HM_RUN_1
CHECK_NAME        DB Structure Integrity Check
NAME_ID
MODE
START_TIME
RESUME_TIME       <NULL>
END_TIME
MODIFIED_TIME
TIMEOUT
FLAGS
STATUS
SRC_INCIDENT_ID
NUM_INCIDENTS
ERR_NUMBER
REPORT_FILE       <NULL>
adrci> create report hm_run HM_RUN_1
adrci> show report hm_run HM_RUN_1
Viewing HM Reports Using the ADRCI Utility You can create and view Health Monitor checker reports using the ADRCI utility. To do that, ensure that operating system environment variables such as ORACLE_HOME are set properly, and then enter the following command at the operating system command prompt: adrci. The utility starts and displays its prompt as shown in the slide. Optionally, you can change the current ADR home. Use the SHOW HOMES command to list all ADR homes, and the SET HOMEPATH command to change the current ADR home. You can then enter the SHOW HM_RUN command to list all the checker runs registered in ADR and visible from V$HM_RUN. Locate the checker run for which you want to create a report and note the checker run name using the corresponding RUN_NAME field. The REPORT_FILE field contains a file name if a report already exists for this checker run. Otherwise, you can generate the report using the CREATE REPORT HM_RUN command as shown in the slide. To view the report, use the SHOW REPORT HM_RUN command.
464
SQL Repair Advisor: Overview
Slide summary (workflow): a SQL statement crashes during execution, an incident with trace files is generated automatically in ADR, the DBA gets alerted and runs the SQL Repair Advisor, the advisor investigates and generates a SQL patch, the DBA accepts the SQL patch, and the statement executes successfully again. SQL Repair Advisor: Overview You run the SQL Repair Advisor after a SQL statement fails with a critical error that generates a problem in ADR. The advisor analyzes the statement and in many cases recommends a patch to repair the statement. If you implement the recommendation, the applied SQL patch circumvents the failure by causing the query optimizer to choose an alternate execution plan for future executions. This is done without changing the SQL statement itself. Note: In case no workaround is found by the SQL Repair Advisor, you are still able to package the incident files and send the corresponding diagnostic data to Oracle Support.
465
Accessing the SQL Repair Advisor Using EM
There are basically two ways to access the SQL Repair Advisor from Enterprise Manager. The first and the easiest way is when you get alerted in the Diagnostic Summary section of the database home page. Following a SQL statement crash that generates an incident in ADR, you are automatically alerted through the Active Incidents field. You can click the corresponding link to get to the Support Workbench Problems page from where you can click the corresponding problem ID link. This takes you to the Problem Details page from where you can click the SQL Repair Advisor link in the “Investigate and Resolve” section of the page.
466
Accessing the SQL Repair Advisor Using EM
Accessing the SQL Repair Advisor Using EM (continued) If the SQL statement crash incident is no longer active, you can always go to the Advisor Central page, where you can click the SQL Advisors link and choose the “Click here to go to Support Workbench” link in the SQL Advisor section of the SQL Advisors page. This takes you directly to the Problem Details page, where you can click the SQL Repair Advisor link in the “Investigate and Resolve” section of the page. Note: To access the SQL Repair Advisor in case of nonincident SQL failures, you can go either to the SQL Details page or to SQL Worksheet.
467
Using the SQL Repair Advisor from EM
On the SQL Repair Advisor: SQL Incident Analysis page, specify a Task Name, a Task Description, and a Schedule. When done, click Submit to schedule a SQL diagnostic analysis task. If you specify Immediately, you end up on the Processing: SQL Repair Advisor Task page that shows you the various steps of the task execution.
468
Using the SQL Repair Advisor from EM
Using the SQL Repair Advisor from EM (continued) After the SQL Repair Advisor task executes, you are sent to the SQL Repair Results page for that task. On this page, you can see a corresponding Recommendations section, especially if a SQL patch was generated to fix your problem. As shown in the slide, you can select the statement for which you want to apply the generated SQL patch and click View. This takes you to the “Repair Recommendations for SQL ID” page, from where you can ask the system to implement the SQL patch by clicking Implement after selecting the corresponding findings. You then get a confirmation of the implementation, and you can execute your SQL statement again.
469
Using SQL Repair Advisor from PL/SQL: Example
declare rep_out clob; t_id varchar2(50); begin t_id := dbms_sqldiag.create_diagnosis_task( sql_text => 'delete from t t1 where t1.a = ''a'' and rowid <> (select max(rowid) from t t2 where t1.a= t2.a and t1.b = t2.b and t1.d=t2.d)', task_name => 'sqldiag_bug_ ', problem_type => DBMS_SQLDIAG.PROBLEM_TYPE_COMPILATION_ERROR); dbms_sqltune.set_tuning_task_parameter(t_id,'_SQLDIAG_FINDING_MODE', dbms_sqldiag.SQLDIAG_FINDINGS_FILTER_PLANS); dbms_sqldiag.execute_diagnosis_task (t_id); rep_out := dbms_sqldiag.report_diagnosis_task (t_id, DBMS_SQLDIAG.TYPE_TEXT); dbms_output.put_line ('Report : ' || rep_out); end; / execute dbms_sqldiag.accept_sql_patch(task_name => 'sqldiag_bug_ ', task_owner => 'SCOTT', replace => TRUE); Using the SQL Repair Advisor from PL/SQL: Example You can also invoke the SQL Repair Advisor directly from PL/SQL. After you get alerted about an incident SQL failure, you can execute a SQL Repair Advisor task by using the DBMS_SQLDIAG.CREATE_DIAGNOSIS_TASK function as illustrated in the slide. You need to specify the SQL statement for which you want the analysis to be done, as well as a task name and a problem type you want to analyze (possible values are PROBLEM_TYPE_COMPILATION_ERROR and PROBLEM_TYPE_EXECUTION_ERROR). You can then set parameters for the created task by using the DBMS_SQLTUNE.SET_TUNING_TASK_PARAMETER procedure. When you are ready, you can execute the task by using the DBMS_SQLDIAG.EXECUTE_DIAGNOSIS_TASK procedure. Finally, you can get the task report by using the DBMS_SQLDIAG.REPORT_DIAGNOSIS_TASK function. In the example given in the slide, it is assumed that the report asks you to implement a SQL Patch to fix the problem. You can then use the DBMS_SQLDIAG.ACCEPT_SQL_PATCH procedure to implement the SQL Patch.
470
Viewing, Disabling, or Removing a SQL Patch
After you apply a SQL patch with the SQL Repair Advisor, you may want to view it to confirm its presence, disable it, or remove it. One reason to remove a patch is if you install a later release of the Oracle database that fixes the problem that caused the failure in the nonpatched SQL statement. To view, disable/enable, or remove a SQL Patch, access the Server page in Enterprise Manager and click the SQL Plan Control link in the Query Optimizer section of the page. This takes you to the SQL Plan Control page. On this page, click the SQL Patch tab. From the resulting SQL Patch subpage, locate the desired patch by examining the associated SQL statement. Select it, and perform the corresponding task: Disable, Enable, or Delete.
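If you prefer a command-line alternative to the Enterprise Manager pages just described, a sketch along the following lines queries the DBA_SQL_PATCHES view (the NAME, STATUS, and CREATED columns are assumed here) and drops a patch with the DBMS_SQLDIAG.DROP_SQL_PATCH procedure; the patch name 'my_sql_patch' is only a placeholder for a name returned by the query.
SQL> select name, status, created from dba_sql_patches;
SQL> exec dbms_sqldiag.drop_sql_patch('my_sql_patch');
Dropping a patch removes it permanently, so prefer disabling it from the SQL Plan Control page if you may need it again.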
471
Using the SQL Test Case Builder
The SQL Test Case Builder automates the somewhat difficult and time-consuming process of gathering as much information as possible about a SQL-related problem and the environment in which it occurred, so that the problem can be reproduced and tested by Oracle Support Services. The information gathered by the SQL Test Case Builder includes the query being executed, table and index definitions (but not the actual data), PL/SQL functions, procedures and packages, optimizer statistics, and initialization parameter settings. From the Support Workbench page, to access the SQL Test Case Builder: 1. Click the corresponding Problem ID to open the problem details page. 2. Click the Oracle Support tab. 3. Click “Generate Additional Dumps and Test Cases.” 4. On the “Additional Dumps and Test Cases” page, click the icon in the Go To Task column to run the SQL Test Case Builder against your particular Incident ID. The output of the SQL Test Case Builder is a SQL script that contains the commands required to re-create all the necessary objects and the environment. Note: You can also invoke the SQL Test Case Builder by using the DBMS_SQLDIAG.EXPORT_SQL_TESTCASE_DIR_BY_INC function. This function takes the incident ID as well as a directory object. It generates its output for the corresponding incident in the specified directory.
472
Data Recovery Advisor The Oracle database provides outstanding tools for repairing problems such as lost files and corrupt blocks, but analyzing the underlying problem and choosing the right solution is often the biggest component of down time. The Data Recovery Advisor analyzes failures based on symptoms (for example, “open failed” because data files are missing) and intelligently determines repair strategies: it aggregates failures for efficient repair (for example, for many bad blocks, restore the entire file) and presents only feasible repair options (Are there backups? Is there a standby database?), ranked by repair time and data loss. The advisor can also automatically perform repairs. Intelligent Resolution: Data Recovery Advisor Data Recovery Advisor: Enterprise Manager integrates with database health checks and RMAN to display data corruption problems, assess the extent of the problem (critical, high priority, or low priority), describe the impact of the problem, recommend repair options, conduct a feasibility check of the customer-chosen option, and automate the repair process. Note: For more information about the Data Recovery Advisor, refer to the corresponding lesson in this course.
473
Summary In this lesson, you should have learned how to:
Set up Automatic Diagnostic Repository Use the Support Workbench Run health checks Use the SQL Repair Advisor
474
Practice 12: Overview This practice covers the following topics:
Using the Health Monitor and the Support Workbench Using the SQL Repair Advisor
475
Using the Data Recovery Advisor
476
Objectives After completing this lesson, you should be able to:
Describe your options for repairing data failures Use the new RMAN data repair commands to: List failures Receive repair advice Repair failures Perform proactive failure checks Query the Data Recovery Advisor views
477
Repairing Data Failures
Data Guard provides failover to a standby database, so that your operations are not affected by down time. Data Recovery Advisor, a new feature in Oracle Database 11g, analyzes failures based on symptoms and determines repair strategies: Aggregating multiple failures for efficient repair Presenting a single, recommended repair option Performing automatic repairs at your request The Flashback technology protects the life cycle of a row and assists in repairing logical problems. Repairing Data Failures A “data failure” is a missing, corrupted, or inconsistent data file, log file, control file, or other file whose content the Oracle instance cannot access. When your database has a problem, analyzing the underlying cause and choosing the correct solution is often the biggest component of down time. Oracle Database 11g offers several new and enhanced tools for analyzing and repairing database problems. Data Guard, by allowing you to fail over to a standby database (which has its own copy of the data), enables you to continue operation if the primary database experiences a data failure. Then, after failing over to the standby, you can take the time to repair the failed database (the old primary) without worrying about the impact on your applications. There are many enhancements to Data Guard. Data Recovery Advisor is a built-in tool that automatically diagnoses data failures and reports the appropriate repair option. If, for example, the Data Recovery Advisor discovers many bad blocks, it recommends restoring the entire file rather than repairing individual blocks. It therefore helps you perform the correct repair for a failure. You can either repair a data failure manually or request the Data Recovery Advisor to execute the repair for you. This decreases the amount of time needed to recover from a failure.
478
Full Notes Page Repairing Data Failures (continued)
You can use the Flashback technology to repair logical problems. Flashback Archive maintains persistent changes of table data for a specified period of time, allowing you to access the archived data. Flashback Transaction allows you to back out of a transaction and all conflicting transactions with a single click. For more details, see the lesson titled “Using Flashback and LogMiner.” What you already know: RMAN automates data file media recovery (a common form of recovery that protects against logical and physical failures) and block media recovery (that recovers individual blocks rather than a whole data file). For more details, see the lesson titled “Using RMAN Enhancements.” Automatic Storage Management (ASM) protects against storage failures. Full Notes Page
479
Data Recovery Advisor Fast detection, analysis, and repair of failures
Minimizing disruptions for users Down-time and run-time failures User interfaces: EM GUI interface (several paths) RMAN command line Supported database configurations: Single-instance Not RAC Supporting failover to standby, but not analysis and repair of standby databases Functionality of the Data Recovery Advisor The Data Recovery Advisor automatically gathers data failure information when an error is encountered. In addition, it can proactively check for failures. In this mode, it can potentially detect and analyze data failures before a database process discovers the corruption and signals an error. (Note that repairs are always under human control.) Data failures can be very serious. For example, if your current log files are missing, you cannot start your database. Some data failures (such as block corruptions in data files) are not catastrophic, in that they do not take the database down or prevent you from starting the Oracle instance. The Data Recovery Advisor handles both cases: the one when you cannot start up the database (because some required database files are missing, inconsistent, or corrupted) and the one when file corruptions are discovered during run time. The preferred way to address serious data failures is to first fail over to a standby database, if you are in a Data Guard configuration. This allows users to come back online as soon as possible. Then you need to repair the primary cause of the data failure, but fortunately, this does not impact your users.
480
Full Notes Page User Interfaces
The Data Recovery Advisor is available from Enterprise Manager (EM) Database Control and Grid Control. When failures exist, there are several ways to access the Data Recovery Advisor. The following examples all begin on the Database Instance home page: Availability tabbed page > Perform Recovery > Advise and Recover Active Incidents link > on the Support Workbench “Problems” page: Checker Findings tabbed page > Launch Recovery Advisor Database Instance Health > click the specific link, for example, ORA 1578 in the Incidents section > Support Workbench, Problems Detail page > Data Recovery Advisor Database Instance Health > Related Links section: Support Workbench > Checker Findings tabbed page: Launch Recovery Advisor Related Link: Advisor Central > Advisors tabbed page: Data Recovery Advisor Related Link: Advisor Central > Checkers tabbed page: Details > Run Detail tabbed page: Launch Recovery Advisor You can also use it via the RMAN command-line. For example: rman target / nocatalog rman> list failure all; Supported Database Configurations In the current release, the Data Recovery Advisor supports single-instance databases. Oracle Real Application Clusters (RAC) databases are not supported. The Data Recovery Advisor cannot use blocks or files transferred from a standby database to repair failures on a primary database. Also, you cannot use the Data Recovery Advisor to diagnose and repair failures on a standby database. However, the Data Recovery Advisor does support failover to a standby database as a repair option (as mentioned above). Full Notes Page
481
Data Recovery Advisor Reducing down time by eliminating confusion:
Slide summary (workflow actors: Health Monitor, Data Recovery Advisor, DBA): 1. Assess data failures. 2. List failures by severity. 3. Advise on repair. 4. Choose and execute repair. 5. Perform proactive checks. Data Recovery Advisor The automatic diagnostic workflow in Oracle Database 11g performs the workflow steps for you. With the Data Recovery Advisor, you only need to initiate the advice and the repair. 1. The Health Monitor automatically executes checks and logs failures and their symptoms as “findings” into the Automatic Diagnostic Repository (ADR). For more details about the Health Monitor, see the eStudy titled Diagnosability. 2. The Data Recovery Advisor consolidates findings into failures. It lists the results of previously executed assessments with failure severity (critical or high). 3. When you ask for repair advice on a failure, the Data Recovery Advisor maps failures to automatic and manual repair options, checks basic feasibility, and presents you with the repair advice. 4. You can choose to manually execute a repair or request the Data Recovery Advisor to do it for you. 5. In addition to the automatic, primarily “reactive” checks of the Health Monitor and Data Recovery Advisor, Oracle recommends that you additionally use the VALIDATE command as a “proactive” check.
482
Assessing Data Failures
Slide: screenshots showing navigation from (1) the Database Instance Health page, through (2) an error link, to (3) the Problem Details page. Assessing Data Failures This slide illustrates different access routes that you can use to navigate between the Health Monitor and the Data Recovery Advisor. It also demonstrates the interaction of the Health Monitor and the Data Recovery Advisor.
483
Data Failures Data Failures
Data failures are detected by checks, which are diagnostic procedures that assess the health of the database or its components. Each check can diagnose one or more failures, which are mapped to a repair. Checks can be reactive or proactive. When an error occurs in the database, “reactive checks” are automatically executed. You can also initiate “proactive checks”, for example, by executing the VALIDATE DATABASE command. In Enterprise Manager, select Availability > Perform Recovery, or click the Perform Recovery button, if you find your database in a “down” or “mounted” state.
484
Data Failure: Examples
Inaccessible components, for example: Missing data files at the OS level Incorrect access permissions Offline tablespace, and so on Physical corruptions, such as block checksum failures or invalid block header field values Logical corruptions, such as an inconsistent dictionary, corrupt row piece, corrupt index entry, or corrupt transaction Inconsistencies, such as a control file that is older or newer than the data files and online redo logs I/O failures, such as the limit on the number of open files being exceeded, inaccessible channels, or network or I/O errors Data Failure: Examples The Data Recovery Advisor can analyze failures and suggest repair options for issues such as those outlined in the slide.
485
Listing Data Failures Listing Data Failures
On the Perform Recovery page, click Advise and Repair. The “View and Manage Failures” page is the home page of the Data Recovery Advisor. The example in the screenshot shows how the Data Recovery Advisor lists data failures and details. Activities that you can initiate include advising, setting priorities, and closing failures. The underlying RMAN LIST FAILURE command can also display data failures and details. Failure assessments are not initiated here; they are previously executed and stored in ADR. Failures are listed in decreasing priority order: CRITICAL, HIGH, and LOW. Failures with the same priority are listed in increasing time-stamp order.
486
Advising on Repair (slide callouts: (1) after manual repair; (2) automatic repair, steps 2a and 2b)
Advising on Repair On the “View and Manage Failures” page, after you click the Advise button, the Data Recovery Advisor generates a manual checklist. Two types of failures could appear: Failures that require human intervention. An example is a connectivity failure, when a disk cable is not plugged in. Failures that are repaired faster if you can undo a previous erroneous action. For example, if you renamed a data file by error, it is faster to rename it back than to initiate RMAN restoration from backup. You can initiate the following actions: Click Re-assess Failures after you have performed a manual repair. Failures that are resolved are implicitly closed; any remaining ones are displayed on the “View and Manage Failures” page. Click Continue with Advise to initiate an automated repair. When the Data Recovery Advisor generates an automated repair option, it generates a script that shows you how RMAN plans to repair the failure. Click Continue if you want to execute the automated repair. If you do not want the Data Recovery Advisor to automatically repair the failure, then you can use this script as a starting point for your manual repair.
487
Executing Repairs (slide: a repair executed in less than one second)
In the preceding example, the Data Recovery Advisor executes a successful repair in less than one second.
488
Data Recovery Advisor RMAN Command-Line Interface
Command and action:
LIST FAILURE: Lists previously executed failure assessments.
ADVISE FAILURE: Displays recommended repair options.
REPAIR FAILURE: Repairs and closes failures (after ADVISE in the same RMAN session).
CHANGE FAILURE: Changes or closes one or more failures.
Data Recovery Advisor: RMAN Command-Line Interface If you suspect or know that a database failure has occurred, then use the LIST FAILURE command to obtain information about these failures. You can list all or a subset of failures and restrict output in various ways. Failures are uniquely identified by failure numbers. Note that these numbers are not consecutive, so gaps between failure numbers have no significance. The ADVISE FAILURE command displays a recommended repair option for the specified failures. It prints a summary of the input failure and implicitly closes all open failures that are already fixed. The default behavior when no option is used is to advise on all the CRITICAL and HIGH priority failures that are recorded in ADR. The REPAIR FAILURE command is used after an ADVISE FAILURE command within the same RMAN session. By default, the command uses the single, recommended repair option of the last ADVISE FAILURE execution in the current session. If none exists, the REPAIR FAILURE command initiates an implicit ADVISE FAILURE command. After completing the repair, the command closes the failure. The CHANGE FAILURE command changes the failure priority or closes one or more failures. You can change a failure priority only between HIGH and LOW. Open failures are closed implicitly when a failure is repaired. However, you can also explicitly close a failure.
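Putting these commands together, a typical command-line session follows the sequence sketched below against a mounted or open target database; output is omitted, and the PREVIEW step is optional but lets you inspect the generated repair script first.
RMAN> list failure;
RMAN> advise failure;
RMAN> repair failure preview;
RMAN> repair failure;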
489
Listing Data Failures The RMAN LIST FAILURE command lists previously executed failure assessment. Including newly diagnosed failures Removing closed failures (by default) Syntax: LIST FAILURE [ ALL | CRITICAL | HIGH | LOW | CLOSED | failnum[,failnum,…] ] [ EXCLUDE FAILURE failnum[,failnum,…] ] [ DETAIL ] Listing Data Failures The RMAN LIST FAILURE command lists failures. If the target instance uses a recovery catalog, it can be in STARTED mode, otherwise it must be in MOUNTED mode. The LIST FAILURE command does not initiate checks to diagnose new failures; rather, it lists the results of previously executed assessments. Repeatedly executing the LIST FAILURE command revalidates all existing failures. If the database diagnoses new ones (between command executions), they are displayed. If a user manually fixes failures, or if transient failures disappear, then the Data Recovery Advisor removes these failures from the LIST FAILURE output. The following is a description of the syntax: failnum: Number of the failure to display repair options for ALL: List failures of all priorities. CRITICAL: List failures of CRITICAL priority and OPEN status. These failures require immediate attention, because they make the whole database unavailable (for example, a missing control file). HIGH: List failures of HIGH priority and OPEN status. These failures make a database partly unavailable or unrecoverable; so they should be repaired quickly (for example, missing archived redo logs). LOW: List failures of LOW priority and OPEN status. Failures of a low priority can wait, until more important failures are fixed. CLOSED: List only closed failures.
490
Listing of Data Failures Full Notes Page
Listing Data Failures (continued) EXCLUDE FAILURE: Exclude the specified list of failure numbers from the list. DETAIL: List failures by expanding the consolidated failure. For example, if there are multiple block corruptions in a file, the DETAIL option lists each one of them. See the Oracle Database Backup and Recovery Reference for details of the command syntax. Example of Listing Data Failures orcl]$ rman Recovery Manager: Release Beta on Thu Jun 21 13:33: Copyright (c) 1982, 2007, Oracle. All rights reserved. RMAN> connect target connected to target database: ORCL (DBID= ) RMAN> RMAN> LIST FAILURE; List of Database Failures ========================= Failure ID Priority Status Time Detected Summary HIGH OPEN JUN One or more non-system datafiles are missing RMAN> LIST FAILURE DETAIL; List of child failures for parent failure ID 142 HIGH OPEN JUN Datafile 5: '/u01/app/oracle/oradata/orcl/example01.dbf' is missing Impact: Some objects in tablespace EXAMPLE might be unavailable HIGH OPEN JUN Datafile 4: '/u01/app/oracle/oradata/orcl/users01.dbf' is missing Impact: Some objects in tablespace USERS might be unavailable Listing of Data Failures Full Notes Page
491
Advising on Repair The RMAN ADVISE FAILURE command:
Displays a summary of input failure list Includes a warning, if new failures appeared in ADR Displays a manual checklist Lists a single recommended repair option Generates a repair script (for automatic or manual repair) . . . Repair script: /u01/app/oracle/diag/rdbms/orcl/orcl/hm/reco_ hm RMAN> Advising on Repair The RMAN ADVISE FAILURE command displays a recommended repair option for the specified failures. If this command is executed from within Enterprise Manager, then Data Guard is presented as a repair option. (This is not the case, if the command is executed directly from the RMAN command line). The ADVISE FAILURE command prints a summary of the input failure. The command implicitly closes all open failures that are already fixed. The default behavior (when no option is used) is to advise on all the CRITICAL and HIGH priority failures that are recorded in Automatic Diagnostic Repository (ADR). If a new failure has been recorded in ADR since the last LIST FAILURE command, this command includes a WARNING before advising on all CRITICAL and HIGH failures. Two general repair options are implemented: no-data-loss and data-loss repairs. When the Data Recovery Advisor generates an automated repair option, it generates a script that shows you how RMAN plans to repair the failure. If you do not want the Data Recovery Advisor to automatically repair the failure, then you can use this script as a starting point for your manual repair. The operating system (OS) location of the script is printed at the end of the command output. You can examine this script, customize it (if needed), and also execute it manually if, for example, your audit trail requirements recommend such an action.
492
Advising on Repair Full Notes Page
Advising on Repair (continued) Syntax ADVISE FAILURE [ ALL | CRITICAL | HIGH | LOW | failnum[,failnum,…] ] [ EXCLUDE FAILURE failnum [,failnum,…] ] Command Line Example RMAN> ADVISE FAILURE; List of Database Failures ========================= Failure ID Priority Status Time Detected Summary HIGH OPEN JUN One or more non-system datafiles are missing List of child failures for parent failure ID 142 HIGH OPEN JUN Datafile 5: '/u01/app/oracle/oradata/orcl/example01.dbf' is missing Impact: Some objects in tablespace EXAMPLE might be unavailable HIGH OPEN JUN Datafile 4: '/u01/app/oracle/oradata/orcl/users01.dbf' is missing Impact: Some objects in tablespace USERS might be unavailable analyzing automatic repair options; this may take some time allocated channel: ORA_DISK_1 channel ORA_DISK_1: SID=152 device type=DISK analyzing automatic repair options complete Mandatory Manual Actions ======================== no manual actions available Optional Manual Actions ======================= 1. If file /u01/app/oracle/oradata/orcl/users01.dbf was unintentionally renamed or moved, restore it 2. If file /u01/app/oracle/oradata/orcl/example01.dbf was unintentionally renamed or moved, restore it Automated Repair Options Option Repair Description Restore and recover datafile 4; Restore and recover datafile 5 Strategy: The repair includes complete media recovery with no data loss Repair script: /u01/app/oracle/diag/rdbms/orcl/orcl/hm/reco_ hm RMAN> 16 Advising on Repair Full Notes Page
493
Executing Repairs The RMAN REPAIR FAILURE command:
Follows the ADVISE FAILURE command Repairs the specified failure Closes the repaired failure Syntax: REPAIR FAILURE [PREVIEW] [NOPROMPT] Example: RMAN> repair failure; Executing Repairs This command should be used after an ADVISE FAILURE command in the same RMAN session. By default (with no option), the command uses the single, recommended repair option of the last ADVISE FAILURE execution in the current session. If none exists, the REPAIR FAILURE command initiates an implicit ADVISE FAILURE command. By default, you are asked to confirm the command execution, because you may be requesting substantial changes that take time to complete. During execution of a repair, the output of the command indicates what phase of the repair is being executed. After completing the repair, the command closes the failure. You cannot run multiple concurrent repair sessions. However, concurrent REPAIR … PREVIEW sessions are allowed. PREVIEW means: Do not execute the repair(s); instead, display the previously generated RMAN script with all repair actions and comments. NOPROMPT means: Do not ask for confirmation.
494
Repair Full Notes Page Example of Repairing a Failure
RMAN> REPAIR FAILURE PREVIEW; Strategy: The repair includes complete media recovery with no data loss Repair script: /u01/app/oracle/diag/rdbms/orcl/orcl/hm/reco_ hm contents of repair script: # restore and recover datafile restore datafile 4; recover datafile 4; RMAN> REPAIR FAILURE; Do you really want to execute the above repair (enter YES or NO)? YES executing repair script Starting restore at 21-JUN-07 using channel ORA_DISK_1 channel ORA_DISK_1: starting datafile backup set restore channel ORA_DISK_1: specifying datafile(s) to restore from backup set channel ORA_DISK_1: restoring datafile to /u01/app/oracle/oradata/orcl/users01.dbf channel ORA_DISK_1: reading from backup piece /u01/app/oracle/flash_recovery_area/ORCL/backupset/2007_06_21/o1_mf_nnndf_TAG T043615_37m7gpfp_.bkp channel ORA_DISK_1: piece handle=/u01/app/oracle/flash_recovery_area/ORCL/backupset/2007_06_21/o1_mf_nnndf_TAG T043615_37m7gpfp_.bkp tag=TAG T043615 channel ORA_DISK_1: restored backup piece 1 channel ORA_DISK_1: restore complete, elapsed time: 00:00:01 Finished restore at 21-JUN-07 Starting recover at 21-JUN-07 starting media recovery archived log for thread 1 with sequence 20 is already on disk as file /u01/app/oracle/flash_recovery_area/ORCL/archivelog/2007_06_21/o1_mf_1_20_37m7lhgx_.arc archived log for thread 1 with sequence 21 is already on disk as file /u01/app/oracle/flash_recovery_area/ORCL/archivelog/2007_06_21/o1_mf_1_21_37m7llgp_.arc archived log for thread 1 with sequence 22 is already on disk as file /u01/app/oracle/flash_recovery_area/ORCL/archivelog/2007_06_21/o1_mf_1_22_37m7logv_.arc archived log for thread 1 with sequence 23 is already on disk as file /u01/app/oracle/flash_recovery_area/ORCL/archivelog/2007_06_21/o1_mf_1_23_37n046y3_.arc Repair Full Notes Page
495
Repair Full Notes Page (cont)
Example of Repairing a Failure (continued) channel ORA_DISK_1: starting archived log restore to default destination channel ORA_DISK_1: restoring archived log archived log thread=1 sequence=16 archived log thread=1 sequence=17 archived log thread=1 sequence=18 archived log thread=1 sequence=19 channel ORA_DISK_1: reading from backup piece /u01/app/oracle/flash_recovery_area/ORCL/backupset/2007_06_21/o1_mf_annnn_TAG T043805_37m7l46t_.bkp channel ORA_DISK_1: piece handle=/u01/app/oracle/flash_recovery_area/ORCL/backupset/2007_06_21/o1_mf_annnn_TAG T043805_37m7l46t_.bkp tag=TAG T043805 channel ORA_DISK_1: restored backup piece 1 channel ORA_DISK_1: restore complete, elapsed time: 00:00:01 archived log file name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2007_06_21/o1_mf_1_16_37n7ptq0_.arc thread=1 sequence=16 channel default: deleting archived log(s) archived log file name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2007_06_21/o1_mf_1_16_37n7ptq0_.arc RECID=20 STAMP= archived log file name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2007_06_21/o1_mf_1_17_37n7ptrv_.arc thread=1 sequence=17 archived log file name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2007_06_21/o1_mf_1_17_37n7ptrv_.arc RECID=22 STAMP= archived log file name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2007_06_21/o1_mf_1_18_37n7ptqo_.arc thread=1 sequence=18 archived log file name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2007_06_21/o1_mf_1_18_37n7ptqo_.arc RECID=21 STAMP= archived log file name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2007_06_21/o1_mf_1_19_37n7ptsh_.arc thread=1 sequence=19 archived log file name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2007_06_21/o1_mf_1_19_37n7ptsh_.arc RECID=23 STAMP= archived log file name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2007_06_21/o1_mf_1_20_37m7lhgx_.arc thread=1 sequence=20 archived log file name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2007_06_21/o1_mf_1_21_37m7llgp_.arc thread=1 sequence=21 media recovery complete, elapsed time: 00:00:01 Finished recover at 21-JUN-07 repair failure complete Do you want to open the database (enter YES or NO)? YES database opened RMAN> Repair Full Notes Page (cont)
496
Classifying (and Closing) Failures
The RMAN CHANGE FAILURE command: Changes the failure priority (except for CRITICAL) Closes one or more failures Example: RMAN> change failure 5 priority low; List of Database Failures ========================= Failure ID Priority Status Time Detected Summary HIGH OPEN DEC one or more datafiles are missing Do you really want to change the above failures (enter YES or NO)? yes changed 1 failures to LOW priority Classifying (and Closing) Failures The CHANGE FAILURE command is used to change the failure priority or close one or more failures. Syntax CHANGE FAILURE { ALL | CRITICAL | HIGH | LOW | failnum[,failnum,…] } [ EXCLUDE FAILURE failnum[,failnum,…] ] { PRIORITY {CRITICAL | HIGH | LOW} | CLOSE } – change status of the failure(s) to closed [ NOPROMPT ] – do not ask user for a confirmation A failure priority can be changed only from HIGH to LOW and from LOW to HIGH. It is an error to change the priority level of CRITICAL. (One reason why you may want to change a failure from HIGH to LOW is to avoid seeing it on the default output list of the LIST FAILURE command. For example, if a block corruption has HIGH priority, you may want to temporarily change it to LOW if the block is in a little-used tablespace.) Open failures are closed implicitly when a failure is repaired. However, you can also explicitly close a failure. This involves a reevaluation of all other open failures, because some of them may become irrelevant as the result of the closure of the failure. By default, the command asks the user to confirm a requested change.
497
Data Recovery Advisor Views
Querying dynamic data dictionary views: V$IR_FAILURE: List of all failures, including closed ones (result of the LIST FAILURE command) V$IR_MANUAL_CHECKLIST: List of manual advice (result of the ADVISE FAILURE command) V$IR_REPAIR: List of repairs (result of the ADVISE FAILURE command) V$IR_REPAIR_SET: Cross-reference of failure and advice identifiers Data Recovery Advisor Views A usage example: Assume that you need to display all failures that were detected on June 21, 2007. SELECT * FROM v$ir_failure WHERE trunc (time_detected) = '21-JUN-2007'; (Output formatted to fit page) FAILURE_ID PARENT_ID CHILD_COUNT CLASS_NAME TIME_DETE MODIFIED DESCRIPTION IMPACTS PRIORITY STATUS PERSISTENT_DATA JUN JUN-07 One or more non-system datafiles are missing See impact for individual child failures HIGH CLOSED PERSISTENT_DATA JUN JUN-07 Datafile 4: '/u01/app/oracle/oradata/orcl/users01.dbf' is missing Some objects in tablespace USERS might be unavailable HIGH CLOSED PERSISTENT_DATA JUN JUN-07 Datafile 5: '/u01/app/oracle/oradata/orcl/example01.dbf' is missing Some objects in tablespace EXAMPLE might be unavailable HIGH CLOSED See the Oracle Database Reference for details of the dynamic data dictionary views that the Data Recovery Advisor uses.
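For routine monitoring, a narrower query than the SELECT * example above is often more convenient; the columns used here (FAILURE_ID, PRIORITY, STATUS, DESCRIPTION, TIME_DETECTED) all appear in the sample output above.
SQL> select failure_id, priority, status, description
     from v$ir_failure
     where status = 'OPEN'
     order by priority, time_detected;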
498
Best Practice: Proactive Checks
Invoking proactive health check of the database and its components: Health Monitor or RMAN VALIDATE DATABASE command Checking for logical and physical corruption Findings logged in ADR Best Practice: Proactive Checks For very important databases, you may want to execute additional proactive checks (possibly daily during low peak interval periods). You can schedule periodic health checks through the Health Monitor or by using the RMAN VALIDATE command. In general, when a reactive check detects failure(s) in a database component, you may want to execute a more complete check of the affected component. The RMAN VALIDATE DATABASE command is used to invoke health checks for the database and its components. It extends the existing VALIDATE BACKUPSET command. Any problem detected during validation is displayed to you. Problems initiate the execution of a failure assessment. If a failure is detected, it is logged into ADR as a finding. You can use the LIST FAILURE command to view all failures recorded in the repository. The VALIDATE command supports validation of individual backup sets and data blocks. In a physical corruption, the database does not recognize the block at all. In a logical corruption, the contents of the block are logically inconsistent. By default, the VALIDATE command checks for physical corruption only. You can specify CHECK LOGICAL to check for logical corruption as well.
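As a sketch of such a proactive check from the RMAN prompt, the first command below validates the database for physical corruption, and the second adds the logical checks mentioned above; any failures found are logged to ADR and can then be listed with LIST FAILURE.
RMAN> validate database;
RMAN> validate check logical database;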
499
Full Notes Page Best Practice: Proactive Checks (continued)
Block corruptions can be divided into interblock corruption and intrablock corruption. In intrablock corruption, the corruption occurs within the block itself and can be either physical or logical corruption. In interblock corruption, the corruption occurs between blocks and can be only logical corruption. The VALIDATE command checks for intrablock corruptions only. Example RMAN> validate database; Starting validate at 21-DEC-06 using channel ORA_DISK_1 channel ORA_DISK_1: starting validation of datafile channel ORA_DISK_1: specifying datafile(s) for validation input datafile file number=00001 name=/u01/app/oracle/oradata/orcl/system01.dbf input datafile file number=00002 name=/u01/app/oracle/oradata/orcl/sysaux01.dbf input datafile file number=00005 name=/u01/app/oracle/oradata/orcl/example01.dbf input datafile file number=00003 name=/u01/app/oracle/oradata/orcl/undotbs01.dbf input datafile file number=00004 name=/u01/app/oracle/oradata/orcl/users01.dbf channel ORA_DISK_1: validation complete, elapsed time: 00:00:15 List of Datafiles ================= File Status Marked Corrupt Empty Blocks Blocks Examined High SCN 1 OK File Name: /u01/app/oracle/oradata/orcl/system01.dbf Block Type Blocks Failing Blocks Processed Data Index Other 2 OK File Name: /u01/app/oracle/oradata/orcl/sysaux01.dbf Data Index Other Full Notes Page
500
Full Notes Page Best Practice: Proactive Checks (continued)
Example (continued) File Status Marked Corrupt Empty Blocks Blocks Examined High SCN 3 OK File Name: /u01/app/oracle/oradata/orcl/undotbs01.dbf Block Type Blocks Failing Blocks Processed Data Index Other 4 OK File Name: /u01/app/oracle/oradata/orcl/users01.dbf Data Index Other 5 OK File Name: /u01/app/oracle/oradata/orcl/example01.dbf Data Index Other channel ORA_DISK_1: starting validation of datafile channel ORA_DISK_1: specifying datafile(s) for validation including current control file for validation including current SPFILE in backup set channel ORA_DISK_1: validation complete, elapsed time: 00:00:01 List of Control File and SPFILE =============================== File Type Status Blocks Failing Blocks Examined SPFILE OK Control File OK Finished validate at 21-DEC-06 RMAN> Full Notes Page
501
Setting Parameters to Detect Corruption
Slide summary: DB_BLOCK_CHECKING helps prevent memory and data corruption; DB_BLOCK_CHECKSUM helps detect I/O, storage, and disk corruption; DB_LOST_WRITE_PROTECT detects non-persistent writes on a physical standby; the new DB_ULTRA_SAFE parameter specifies defaults for corruption detection (EM > Server > Initialization Parameters). Setting Parameters to Detect Corruption You can use the DB_ULTRA_SAFE parameter for easy manageability. It affects the default values of the following parameters: DB_BLOCK_CHECKING, which initiates checking of database blocks. This check can often prevent memory and data corruption. (Default: FALSE, recommended: FULL) DB_BLOCK_CHECKSUM, which initiates the calculation and storage of a checksum in the cache header of every data block when writing it to disk. Checksums assist in detecting corruption caused by underlying disks, storage systems, or I/O systems. (Default: TYPICAL, recommended: TYPICAL) DB_LOST_WRITE_PROTECT, which initiates checking for “lost writes.” Data block lost writes occur on a physical standby database when the I/O subsystem signals the completion of a block write that has not yet been completely written to persistent storage. Of course, the write operation has been completed on the primary database. (Default: TYPICAL, recommended: TYPICAL) If you set any of these parameters explicitly, then your values remain in effect. The DB_ULTRA_SAFE parameter (which is new in Oracle Database 11g) changes only the default values for these parameters.
502
Setting Parameters to Detect Corruption
DB_ULTRA_SAFE setting and resulting defaults:
OFF: DB_BLOCK_CHECKING = OFF (or FALSE), DB_BLOCK_CHECKSUM = TYPICAL, DB_LOST_WRITE_PROTECT = TYPICAL
DATA_ONLY: DB_BLOCK_CHECKING = MEDIUM, DB_BLOCK_CHECKSUM = FULL, DB_LOST_WRITE_PROTECT = TYPICAL
DATA_AND_INDEX: DB_BLOCK_CHECKING = FULL (or TRUE), DB_BLOCK_CHECKSUM = FULL, DB_LOST_WRITE_PROTECT = TYPICAL
Setting Parameters to Detect Corruption (continued) Depending on your system’s tolerance for block corruption, you can intensify the checking for block corruption. Enabling the DB_ULTRA_SAFE parameter (default: OFF) results in increased system overhead because of these more intensive checks. The amount of overhead is related to the number of blocks changed per second, so it cannot be easily quantified. For a “high-update” application, you can expect a significant increase in CPU usage, likely in the ten to twenty percent range, but possibly higher. This overhead can be alleviated by allocating additional CPUs. When the DB_ULTRA_SAFE parameter is set to DATA_ONLY, the DB_BLOCK_CHECKING parameter is set to MEDIUM. This checks that data in a block is logically self-consistent. Basic block header checks are performed after block contents change in memory (for example, after UPDATE or INSERT commands, on-disk reads, or inter-instance block transfers in Oracle RAC). This level of checking includes semantic block checking for all non-index-organized table blocks. When the DB_ULTRA_SAFE parameter is set to DATA_AND_INDEX, the DB_BLOCK_CHECKING parameter is set to FULL. In addition to the preceding checks, semantic checks are executed for index blocks (that is, blocks of subordinate objects that can actually be dropped and reconstructed when faced with corruption). When the DB_ULTRA_SAFE parameter is set to DATA_ONLY or DATA_AND_INDEX, the DB_BLOCK_CHECKSUM parameter is set to FULL and the DB_LOST_WRITE_PROTECT parameter is set to TYPICAL.
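Because DB_ULTRA_SAFE cannot be changed in the running instance, a sketch for enabling the most intensive level sets it in the SPFILE and restarts; the DATA_AND_INDEX value is one of the three settings in the table above, and the final SHOW PARAMETER simply confirms the derived defaults after the restart.
SQL> alter system set db_ultra_safe = DATA_AND_INDEX scope = spfile;
SQL> shutdown immediate
SQL> startup
SQL> show parameter db_block_checking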
503
Summary In this lesson, you should have learned how to:
Describe your options for repairing data failure Use the new RMAN data repair commands to: List failures Receive repair advice Repair failures Perform proactive failure checks Query the Data Recovery Advisor views
504
Practice 13: Overview Repairing Failures
This practice covers the following topics: Repairing a “down” database with Enterprise Manager Repairing block corruption with Enterprise Manager Repairing a “down” database with the RMAN command line
505
Security: New Features
506
Objectives After completing this lesson, you should be able to:
Configure the password file to use case-sensitive passwords Encrypt a tablespace Configure fine-grained access to network services
507
Secure Password Support
Passwords in Oracle Database 11g: Are case-sensitive Contain more characters Use a more secure hash algorithm Use salt in the hash algorithm Usernames are still Oracle identifiers (up to 30 characters, non-case-sensitive). Secure Password Support You must use more secure passwords to meet the demands of compliance with various security and privacy regulations. Passwords that are very short and passwords that are formed from a limited set of characters are susceptible to brute force attacks. Longer passwords drawn from a larger set of allowed characters are much more difficult to guess or crack. In Oracle Database 11g, the password is handled differently than in previous versions: Passwords are case-sensitive. Uppercase and lowercase characters are now different characters when used in a password. A password may contain multibyte characters without being enclosed in quotation marks. A password must be enclosed in quotation marks if it contains any special characters apart from $, _, or #. Passwords are always passed through a hash algorithm, and then stored as a user credential. When the user presents a password, it is hashed and then compared to the stored credential. In Oracle Database 11g, the hash algorithm is SHA-1, a public algorithm, instead of the proprietary algorithm used in previous versions of the database. SHA-1 is a stronger algorithm, producing a 160-bit hash. Passwords always use salt. A hash function always produces the same output, given the same input. Salt is a unique (random) value that is added to the input to ensure that the output credential is unique.
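The case-sensitive behavior is easy to demonstrate with a throwaway account (a sketch only; the user name and password are hypothetical, and the default SEC_CASE_SENSITIVE_LOGON=TRUE is assumed):

SQL> CREATE USER pwtest IDENTIFIED BY "CaseSensit1ve";
SQL> GRANT CREATE SESSION TO pwtest;
SQL> CONNECT pwtest/casesensit1ve
ERROR: ORA-01017: invalid username/password; logon denied
SQL> CONNECT pwtest/CaseSensit1ve
Connected.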
508
Automatic Secure Configuration
Default password profile Default auditing Built-in password complexity checking Automatic Secure Configuration Oracle Database 11g installs and creates the database with certain security features recommended by the Center for Internet Security (CIS) benchmark. The CIS recommended configuration is more secure than the 10gR2 default installation; yet open enough to allow the majority of applications to be successful. Many customers have adopted this benchmark already. There are some recommendations of the CIS benchmark that may be incompatible with some applications.
509
Password Configuration
By default: Default password profile is enabled Account is locked after 10 failed login attempts In upgrade: Passwords are not case-sensitive until changed Passwords become case-sensitive when the ALTER USER command is used On creation: Passwords are case-sensitive Secure Default Configuration When creating a custom database using the Database Configuration Assistant (DBCA), you can specify the Oracle Database 11g default security configuration. By default, if a user tries to connect to an Oracle instance multiple times using an incorrect password, the instance delays each login after the third try. This protection applies for attempts made from different IP addresses or multiple client connections. Later, it gradually increases the time before the user can try another password, up to a maximum of about ten seconds. The default password profile is enabled with these settings at database creation:
PASSWORD_LIFE_TIME 180
PASSWORD_GRACE_TIME 7
PASSWORD_REUSE_TIME UNLIMITED
PASSWORD_REUSE_MAX UNLIMITED
FAILED_LOGIN_ATTEMPTS 10
PASSWORD_LOCK_TIME 1
PASSWORD_VERIFY_FUNCTION NULL
When an Oracle Database 10g database is upgraded, passwords are not case-sensitive until the ALTER USER… command is used to change the password. When the database is created, the passwords will be case-sensitive by default.
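To check the limits that the default profile actually received at creation time, a simple dictionary query (not part of the course practice) can be used:

SQL> SELECT resource_name, limit
  2  FROM   dba_profiles
  3  WHERE  profile = 'DEFAULT'
  4  AND    resource_type = 'PASSWORD';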
510
Enable Built-in Password Complexity Checker
Execute the utlpwdmg.sql script to create the password verify function, and then alter the default profile:

SQL> CONNECT / AS SYSDBA
SQL> ALTER PROFILE DEFAULT LIMIT
  2    PASSWORD_VERIFY_FUNCTION verify_function_11g;

Enable Built-in Password Complexity Checker verify_function_11g is a sample PL/SQL function that can be easily modified to enforce the password complexity policies at your site. This function does not require special characters to be embedded in the password. Both verify_function_11g and the older verify_function are included in the utlpwdmg.sql file. To enable the password complexity checking, create a verification function owned by SYS. Use one of the supplied functions or modify one of them to meet your requirements. The example shows how to use the utlpwdmg.sql script. If there is an error in the password complexity check function named in the profile, or if it does not exist, you cannot change passwords, nor can you create users. The solution is to set the PASSWORD_VERIFY_FUNCTION to NULL in the profile until the problem is solved. The verify_function_11g function checks that the password contains at least eight characters; contains at least one number and one alphabetic character; and differs from the previous password by at least three characters. The function also checks that the password is not a username or username appended with any number 1–100; a username reversed; a server name or server name appended with 1–100; or one of a set of well-known and common passwords such as “welcome1,” “database1,” “oracle123,” or “oracle (appended with 1–100),” and so on.
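Once the verification function is attached to the profile, a noncompliant password is rejected; for example (a sketch using an illustrative account):

SQL> ALTER USER hr IDENTIFIED BY hr;
ALTER USER hr IDENTIFIED BY hr
*
ERROR at line 1:
ORA-28003: password verification for the specified password failed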
511
Managing Default Audits
Review audit logs: Default audit options cover important security privileges. Archive audit records: Export Copy to another table Remove archived audit records. Managing Default Audits Review the audit logs. By default, auditing is enabled in Oracle Database 11g for certain privileges that are very important to security. The audit trail is recorded in the database AUD$ table by default; the AUDIT_TRAIL parameter is set to DB. These audits should not have a large impact on database performance, for most sites. Oracle recommends the use of OS audit trail files. Archive audit records. To retain audit records, export them using Oracle Data Pump Export, or use the SELECT statement to capture a set of audit records into a separate table. Remove archived audit records. Remove audit records from the SYS.AUD$ table after reviewing and archiving them. Audit records take up space in the SYSTEM tablespace. If the SYSTEM tablespace cannot grow, and there is no more space for audit records, errors will be generated for each audited statement. Because CREATE SESSION is one of the audited privileges, no new sessions may be created except by a user connected as SYSDBA. Archive the audit table with the export utility, using the QUERY option to specify the WHERE clause with a range of dates or SCNs. Then delete the records from the audit table by using the same WHERE clause. When AUDIT_TRAIL=OS, separate files are created for each audit record in the directory specified by AUDIT_FILE_DEST. All files as of a certain time can be copied, and then removed. Note: The SYSTEM tablespace is created with the autoextend on option. So the SYSTEM tablespace grows as needed until there is no more space available on the disk.
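The archive-and-purge cycle described above might look like the following sketch (run as a suitably privileged user; the archive table name, tablespace, and cutoff date are placeholders):

SQL> CREATE TABLE aud_archive_q1 TABLESPACE users AS
  2  SELECT * FROM sys.aud$
  3  WHERE  ntimestamp# < TO_DATE('01-APR-2007','DD-MON-YYYY');

SQL> DELETE FROM sys.aud$
  2  WHERE  ntimestamp# < TO_DATE('01-APR-2007','DD-MON-YYYY');

SQL> COMMIT;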
512
Managing Default Audits (continued)
The following privileges are audited for all users on success and failure, and by access: CREATE EXTERNAL JOB, CREATE ANY JOB, GRANT ANY OBJECT PRIVILEGE, EXEMPT ACCESS POLICY, CREATE ANY LIBRARY, GRANT ANY PRIVILEGE, DROP PROFILE, ALTER PROFILE, DROP ANY PROCEDURE, ALTER ANY PROCEDURE, CREATE ANY PROCEDURE, ALTER DATABASE, GRANT ANY ROLE, CREATE PUBLIC DATABASE LINK, DROP ANY TABLE, ALTER ANY TABLE, CREATE ANY TABLE, DROP USER, ALTER USER, CREATE USER, CREATE SESSION, AUDIT SYSTEM, ALTER SYSTEM
513
Adjust Security Settings
When you create a database using the DBCA tool, you are offered a choice of security settings: Keep the enhanced 11g default security settings (recommended). These settings include enabling auditing and the new default password profile. Revert to pre-11g default security settings. To disable a particular category of enhanced settings for compatibility purposes, choose from the following: Revert audit settings to pre-11g defaults Revert password profile settings to pre-11g defaults These settings can also be changed after the database is created, using the DBCA. Some applications may not work properly under the 11g default security settings. Secure permissions on the software are always set; they are not affected by the choice made for the “Security Settings” option.
514
Setting Security Parameters
Use case-sensitive passwords: SEC_CASE_SENSITIVE_LOGON Protect against DoS attacks: SEC_PROTOCOL_ERROR_FURTHER_ACTION SEC_PROTOCOL_ERROR_TRACE_ACTION Protect against brute force attacks: SEC_MAX_FAILED_LOGIN_ATTEMPTS Setting Security Parameters A set of new parameters has been added to Oracle Database 11g to enhance the default security of the database. These parameters are systemwide and static. Use Case-Sensitive Passwords to Improve Security A new parameter, SEC_CASE_SENSITIVE_LOGON, allows you to set the case-sensitivity of user passwords. Oracle recommends that you retain the default setting of TRUE. You can specify non-case-sensitive passwords for backward compatibility by setting this parameter to FALSE: ALTER SYSTEM SET SEC_CASE_SENSITIVE_LOGON = FALSE; Note: Disabling case-sensitivity increases vulnerability to brute force attacks. Protect Against Denial of Service (DoS) Attacks The two parameters listed in the slide specify the actions to be taken when the database receives bad packets from a client. The assumption is that the bad packets are from a possible malicious client. The SEC_PROTOCOL_ERROR_FURTHER_ACTION parameter specifies what action is to be taken with the client connection: continue, drop the connection, or delay accepting requests. The other parameter, SEC_PROTOCOL_ERROR_TRACE_ACTION, specifies a monitoring action: NONE, TRACE, LOG, or ALERT.
515
Setting Security Parameters (continued)
Protect Against Brute Force Attacks A new initialization parameter SEC_MAX_FAILED_LOGIN_ATTEMPTS, which has a default setting of 10, causes a connection to be automatically dropped after the specified number of attempts. This parameter is enforced even when the password profile is not enabled. This parameter prevents a program from making a database connection and then attempting to authenticate by trying hundreds or thousands of passwords.
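A sketch of tightening these settings (the values are illustrative; SCOPE=SPFILE assumes an SPFILE, and the static SEC_MAX_FAILED_LOGIN_ATTEMPTS takes effect only after a restart):

SQL> ALTER SYSTEM SET sec_max_failed_login_attempts = 3 SCOPE=SPFILE;
SQL> ALTER SYSTEM SET sec_protocol_error_trace_action = 'LOG' SCOPE=SPFILE;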
516
Setting Database Administrator Authentication
Use password file with case-sensitive passwords. Enable strong authentication for administrator roles: Grant the administrator role in OID. Use Kerberos tickets. Use certificates with SSL. Setting Database Administrator Authentication The database administrator must always be authenticated. In Oracle Database 11g, there are new methods that make administrator authentication more secure and centralize the administration of these privileged users. Case-sensitive passwords have also been extended to remote connections for privileged users. You can override this default behavior with the following command: orapwd file=orapworcl entries=5 ignorecase=Y If your concern is that the password file might be vulnerable or that the maintenance of many password files is a burden, then strong authentication can be implemented: Grant the SYSDBA or SYSOPER enterprise role in Oracle Internet Directory (OID). Use Kerberos tickets. Use certificates over SSL. To use any of the strong authentication methods, the LDAP_DIRECTORY_SYSAUTH initialization parameter must be set to YES. Set this parameter to NO to disable the use of strong authentication methods. Authentication through OID or through Kerberos also can provide centralized administration or single sign-on. If the password file is configured, it is checked first. The user may also be authenticated by the local OS by being a member of the OSDBA or OSOPER groups. For more information, see the Oracle Database Advanced Security Administrator’s Guide 11g Release 1.
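For example, enabling strong authentication for administrative connections might look like the following sketch (LDAP_DIRECTORY_SYSAUTH is a static parameter, so an SPFILE and a restart are assumed; the remaining directory or Kerberos setup is outside the scope of this note):

SQL> ALTER SYSTEM SET ldap_directory_sysauth = 'YES' SCOPE=SPFILE;
-- After a restart, SYSDBA/SYSOPER can be granted as enterprise roles in OID.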
517
Transparent Data Encryption
New features in TDE include: Tablespace Encryption Support for LogMiner Support for Logical Standby Support for Streams Support for Asynchronous Change Data Capture Hardware-based master key protection Transparent Data Encryption Several new features enhance the capabilities of Transparent Data Encryption (TDE), and build on the same infrastructure. The changes in LogMiner to support TDE provide the infrastructure for change capture engines used for Logical Standby, Streams, and Asynchronous Change Data Capture. For LogMiner to support TDE, it must be able to access the encryption wallet. To access the wallet, the instance must be mounted and the wallet open. LogMiner does not support Hardware Security Module (HSM) or user-held keys. For Logical Standby, the logs may be mined either on the source or the target database, thus the wallet must be the same for both databases. Encrypted columns are handled the same way in both Streams and the Streams-based Change Data Capture. The redo records are mined at the source, where the wallet exists. The data is transmitted unencrypted to the target and encrypted using the wallet at the target. The data can be encrypted in transit by using Advanced Security Option to provide network encryption.
518
Using Tablespace Encryption
Create an encrypted tablespace. Create or open the encryption wallet: Create a tablespace with the encryption keywords:

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "welcome1";
SQL> CREATE TABLESPACE encrypt_ts
  2> DATAFILE '$ORACLE_HOME/dbs/encrypt.dat' SIZE 100M
  3> ENCRYPTION USING '3DES168'
  4> DEFAULT STORAGE (ENCRYPT);

Tablespace Encryption Tablespace encryption is based on block-level encryption that encrypts on write and decrypts on read. The data is not encrypted in memory. The only encryption penalty is associated with I/O. The SQL access paths are unchanged and all data types are supported. To use tablespace encryption, the encryption wallet must be open. The CREATE TABLESPACE command has an ENCRYPTION clause that sets the encryption properties, and an ENCRYPT storage parameter that causes the encryption to be used. You specify USING 'encrypt_algorithm' to indicate the name of the algorithm to be used. Valid algorithms are 3DES168, AES128, AES192, and AES256. The default is AES128. You can view the properties in the V$ENCRYPTED_TABLESPACES view. The encrypted data is protected during operations such as JOIN and SORT. This means that the data is safe when it is moved to temporary tablespaces. Data in undo and redo logs is also protected. Encrypted tablespaces are transportable if the platforms have the same endianness and the same wallet. Restrictions Temporary and undo tablespaces cannot be encrypted. (Selected blocks are encrypted.) Bfiles and external tables are not encrypted. Transportable tablespaces across different endian platforms are not supported. The key for an encrypted tablespace cannot be changed at this time. A workaround is to create a tablespace with the desired properties and move all objects to the new tablespace.
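After the tablespace is created, its encryption properties can be verified with a simple query (illustrative, not part of the lesson scripts):

SQL> SELECT t.name, e.encryptionalg, e.encryptedts
  2  FROM   v$tablespace t, v$encrypted_tablespaces e
  3  WHERE  t.ts# = e.ts#;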
519
Hardware Security Module
Encrypt and decrypt operations are performed on the hardware security module. [Diagram: a client connects to the database server, which stores encrypted data and delegates master-key operations to the Hardware Security Module.] Hardware Security Module A Hardware Security Module (HSM) is a physical device that provides secure storage for encryption keys. It also provides secure computational space (memory) to perform encryption and decryption operations. HSM is a more secure alternative to the Oracle wallet. Transparent Data Encryption (TDE) can use HSM to provide enhanced security for sensitive data. An HSM is used to store the master encryption key used for TDE. The key is secure from unauthorized access attempts because the HSM is a physical device and not an operating system file. All encryption and decryption operations that use the master encryption key are performed inside the HSM. This means that the master encryption key is never exposed in insecure memory. There are several vendors that provide Hardware Security Modules. The vendor must also supply the appropriate libraries.
520
Encryption for LOB Columns
CREATE TABLE test1 (doc CLOB ENCRYPT USING 'AES128') LOB(doc) STORE AS SECUREFILE (CACHE NOLOGGING ); LOB encryption is allowed only for SECUREFILE LOBs. All LOBs in the LOB column are encrypted. LOBs can be encrypted on per-column or per-partition basis. Allows for the coexistence of SECUREFILE and BASICFILE LOBs Encryption for LOB Columns Oracle Database 11g introduces a completely reengineered large object (LOB) data type that dramatically improves performance, manageability, and ease of application development. This Secure Files implementation (of LOBs) offers advanced, next-generation functionality such as intelligent compression and transparent encryption. The encrypted data in Secure Files is stored in-place and is available for random reads and writes. You must create the LOB with the SECUREFILE parameter, with encryption enabled (ENCRYPT) or disabled (DECRYPT—the default) on the LOB column. The current TDE syntax is used for extending encryption to LOB data types. LOB implementation from earlier versions is still supported for backward compatibility and is now referred to as Basic Files. If you add a LOB column to a table, you can specify whether it should be created as SECUREFILES or BASICFILES. To ensure backward compatibility, the default LOB type is BASICFILES. Valid algorithms are 3DES168, AES128, AES192, and AES256. The default is AES192. Note: For further discussion on Secure Files, see the lesson titled “Oracle SecureFiles.”
521
Enterprise Manager Security Management
Manage security through EM. Policy Manager replaced for: Virtual Private Database Application Context Oracle Label Security Enterprise User Security pages added TDE pages added Enterprise Manager Security Management Security management has been integrated into Enterprise Manager. The Policy Manager Java console–based tool has been superseded. Oracle Label Security, Application Context, and Virtual Private Database previously administered through the Oracle Policy Manager tool are now managed through Enterprise Manager. The Oracle Policy Manager tool is still available. The Enterprise Manager Security tool has been superseded by Enterprise Manager features. Enterprise User Security is also now managed through Enterprise Manager. The menu item for Enterprise Manager appears as soon as the ldap.ora file is configured. See the Enterprise User Administrator’s Guide for configuration details. The Enterprise Security Manager tool is still available. TDE can now be managed through Enterprise Manager, including wallet management. You can create, open, and close the wallet from Enterprise Manager pages.
522
Using RMAN Security Enhancements
Configure backup shredding: Use backup shredding: RMAN> CONFIGURE ENCRYPTION EXTERNAL KEY STORAGE ON; RMAN> DELETE FORCE; Using RMAN Security Enhancements Backup shredding is a key management feature that allows the DBA to delete the encryption key of transparently encrypted backups, without physical access to the backup media. The encrypted backups are rendered inaccessible if the encryption key is destroyed. This does not apply to password-protected backups. Configure backup shredding with: CONFIGURE ENCRYPTION EXTERNAL KEY STORAGE ON; Or SET ENCRYPTION EXTERNAL KEY STORAGE ON; The default setting is OFF, so backup shredding is not enabled. To shred a backup, no new command is needed; simply use: DELETE FORCE;
523
Managing Fine-Grained Access to External Network Services
1. Create an ACL and its privileges: BEGIN DBMS_NETWORK_ACL_ADMIN.CREATE_ACL ( acl => 'us-oracle-com-permissions.xml', description => 'Permissions for oracle network', principal => 'SCOTT', is_grant => TRUE, privilege => 'connect'); END; Managing Fine-Grained Access to External Network Services The network utility family of PL/SQL packages, such as UTL_TCP, UTL_INADDR, UTL_HTTP, UTL_SMTP, and UTL_MAIL, allow Oracle users to make network callouts from the database using raw TCP or using higher-level protocols built on raw TCP. A user either did or did not have the EXECUTE privilege on these packages and there was no control over which network hosts were accessed. The new package DBMS_NETWORK_ACL_ADMIN allows fine-grained control using access control lists (ACL) implemented by XML DB. 1. Create an access control list (ACL). The ACL is a list of users and privileges held in an XML file. The XML document named in the acl parameter is relative to the /sys/acl/ folder in XML DB. In the example given in the slide, SCOTT is granted connect. The username is case-sensitive in the ACL and must match the username of the session. There are only resolve and connect privileges. The connect privilege implies resolve. Optional parameters can specify a start and end time stamp for these privileges. To add more users and privileges to this ACL, use the ADD_PRIVILEGE procedure.
524
Managing Fine-Grained Access to External Network Services
2. Assign an ACL to one or more network hosts: BEGIN DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL ( acl => 'us-oracle-com-permissions.xml', host => '*.us.oracle.com', lower_port => 80, upper_port => null); END; Managing Fine-Grained Access to External Network Services (continued) 2. Assign an ACL to one or more network hosts. The ASSIGN_ACL procedure associates the ACL with a network host and, optionally, a port or range of ports. In the example, the host parameter allows wildcard characters for the host name to assign the ACL to all the hosts of a domain. The use of wildcard characters affects the order of precedence for the evaluation of the ACL. Fully qualified host names with ports are evaluated before hosts with ports. Fully qualified host names are evaluated before partial domain names, and subdomains are evaluated before the top-level domains. Multiple hosts can be assigned to the same ACL and multiple users can be added to the same ACL in any order after the ACL has been created.
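After the grant and the assignment, the result can be reviewed in the data dictionary (an illustrative check, not shown in the course material):

SQL> SELECT host, lower_port, upper_port, acl
  2  FROM   dba_network_acls;

SQL> SELECT acl, principal, privilege, is_grant
  2  FROM   dba_network_acl_privileges
  3  WHERE  principal = 'SCOTT';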
525
Summary In this lesson, you should have learned how to:
Configure the password file to use case-sensitive passwords Encrypt a tablespace Configure fine-grained access to network services
526
Practice 14: Overview This practice covers the following topics:
Changing the use of case-sensitive passwords Implementing a password complexity function Encrypting a tablespace
527
Consolidated Secure Management of Data
Oracle SecureFiles Consolidated Secure Management of Data
528
Objectives After completing this lesson, you should be able to:
Describe how SecureFiles enhances the performance of large object (LOB) data types Use SQL and PL/SQL APIs to access SecureFiles
529
Managing Enterprise Information
Organizations need to efficiently and securely manage many types of data: Structured: Simple data, object-relational data Semi-structured: XML documents, Word-processing documents Unstructured: Media, medical data, imaging, PDF Managing Enterprise Information Today, applications must deal with many kinds of data, broadly classified as structured, semi-structured, and unstructured data. The features of large objects (LOBs) allow you to store all these kinds of data in the database as well as in operating system (OS) files that are accessed from the database. The simplicity and performance of file systems have made it attractive to store file data in file systems, while keeping object-relational data in a relational database.
530
Problems with Existing LOB Implementation
Limitations in LOB sizing Considered mostly “write once, read many times” data Offered low concurrency of DMLs User-defined version control Uniform CHUNK size Affecting fragmentation Upper size limit Scalability issues with Oracle Real Application Clusters (RAC) Problems with Existing LOB Implementation In Oracle8i, LOB design decisions were made with the following assumptions: LOB instantiation was expected to be several megabytes in size. LOBs were considered mostly “write once, read many times” type of data. Updates would be rare; therefore, you could version entire chunks for all kinds of updates—large or small. Few batch processes were expected to stream data. An online transaction processing (OLTP) kind of workload was not anticipated. The amount of undo retained is user-controlled with two parameters, PCTVERSION and RETENTION. This is an additional management burden. The CHUNK size is a static parameter, under the assumption that LOB sizes are typically uniform. There is an upper limit of 32 KB on CHUNK size. High-concurrency writes in Oracle RAC were not anticipated. Since their initial implementation, business requirements have dramatically changed. LOBs are now being used in a manner similar to that of relational data, storing semi-structured and unstructured data of all possible sizes. The size of the data can vary from a few kilobytes for an HTML link to several terabytes for streaming video. Oracle file systems that store all the file system data in LOBs experience OLTP-like high-concurrency access. As Oracle RAC is being more widely adopted, the scalability issues of Oracle RAC must be addressed. The existing design of LOB space structures does not cater to these new requirements.
531
Oracle SecureFiles Oracle SecureFiles rearchitects the handling of unstructured (file) data, offering entirely new: Disk format Variable chunk size Network protocol Improved I/O Versioning and sharing mechanisms Redo and undo algorithms No user configuration Space and memory enhancements Oracle SecureFiles Oracle Database 11g completely reengineers the LOB data type as Oracle SecureFiles, dramatically improving the performance, manageability, and ease of application development. The new implementation also offers advanced, next-generation functionality such as intelligent compression and transparent encryption. With SecureFiles, chunks vary in size from the Oracle data block size up to 64 MB. The Oracle database attempts to colocate data in physically adjacent locations on disk, thereby minimizing internal fragmentation. By using variable chunk sizes, SecureFiles avoids versioning of large, unnecessary blocks of LOB data. SecureFiles also offers a new client/server network layer that allows high-speed data transfer between the client and the server, supporting significantly higher read and write performance. SecureFiles automatically determines the most efficient way of generating redo and undo, eliminating user-defined parameters. SecureFiles automatically determines whether to generate redo and undo for only the change, or to create a new version by generating a full redo record. SecureFiles is designed to be intelligent and self-adaptable, as it maintains different in-memory statistics that help in efficient memory and space allocation. This provides easier manageability, because there are fewer tunable parameters, which would otherwise be hard to tune under unpredictable loads.
532
Enabling SecureFiles Storage
SecureFiles storage can be enabled: Using the DB_SECUREFILE initialization parameter, which can have the following values: ALWAYS | FORCE | PERMITTED | NEVER | IGNORE Using Enterprise Manager: Using the ALTER SESSION | SYSTEM command: Enabling SecureFiles Storage The DB_SECUREFILE initialization parameter allows database administrators (DBAs) to determine the usage of SecureFiles, where valid values are: ALWAYS: Attempts to create all LOBs as SecureFile LOBs but creates any LOBs not in Automatic Segment Space Management (ASSM) tablespaces as BasicFile LOBs FORCE: Forces all LOBs created going forward to be SecureFile LOBs PERMITTED: Allows SecureFiles to be created (default) NEVER: Disallows SecureFiles from being created going forward IGNORE: Disallows SecureFiles and ignores any errors that would otherwise be caused by forcing BasicFiles with SecureFiles options If NEVER is specified, any LOBs that are specified as SecureFiles are created as BasicFiles. All SecureFiles-specific storage options and features (for example, compression, encryption, and deduplication) cause an exception if used against BasicFiles. BasicFiles defaults are used for any storage options not specified. If ALWAYS is specified, all LOBs created in the system are created as SecureFiles. The LOB must be created in an ASSM tablespace, otherwise an error occurs. Any BasicFiles storage options specified are ignored. The SecureFiles defaults for all storage can be changed using the ALTER SYSTEM command as shown in the slide. You can also use Enterprise Manager to set the parameter from the Server tab > Initialization Parameters link. SQL> ALTER SYSTEM SET db_securefile = 'ALWAYS';
533
SecureFiles: Advanced Features
Oracle SecureFiles offers the following advanced capabilities: Intelligent LOB compression Deduplication Transparent encryption These capabilities leverage the security, reliability, and scalability of the database. SecureFiles: Advanced Features Oracle SecureFiles implementation also offers advanced, next-generation functionality such as intelligent compression and transparent encryption. Compression enables you to explicitly compress SecureFiles. SecureFiles transparently uncompresses only the required set of data blocks for random read or write access, automatically maintaining the mapping between uncompressed and compressed offsets. If the compression level is changed from MEDIUM to HIGH, the mapping is automatically updated to reflect the new compression algorithm. Deduplication automatically detects duplicate SecureFile LOB data and conserves space by storing only one copy—implementing disk storage, I/O, and redo logging savings. Deduplication can be specified at the table level or partition level and does not span across partitioned LOBs. Deduplication requires the Advanced Compression Option. Encrypted LOB data is now stored in place and is available for random reads and writes offering enhanced data security. SecureFile LOBs can be encrypted only on a per-column basis (same as Transparent Data Encryption). All partitions within a LOB column are encrypted using the same encryption algorithm. BasicFiles data cannot be encrypted. SecureFiles supports the industry-standard encryption algorithms: 3DES168, AES128, AES192 (default), and AES256. Encryption is part of the Advanced Security Option. Note: The COMPATIBLE initialization parameter must be set to 11.1 or later to use SecureFiles. The BasicFiles (previous LOB) format is still supported under this compatibility setting. There is no downgrade capability after it is set.
534
SecureFiles: Storage Options
MAXSIZE: Specifies the maximum LOB segment size RETENTION: Specifies the retention policy to use MAX: Keep old versions until MAXSIZE is reached. MIN: Keep old versions at least MIN seconds. AUTO: Default NONE: Reuse old versions as much as possible. The following storage clauses do not apply to SecureFiles: CHUNK, PCTVERSION, FREEPOOLS, FREELISTS, and FREELIST GROUPS SecureFiles: Storage Options MAXSIZE is a new storage clause governing the physical storage attribute for SecureFiles. MAXSIZE specifies the maximum segment size related to the storage clause level. RETENTION signifies the following for SecureFiles: MAX is used to start reclaiming old versions after segment MAXSIZE is reached. MIN keeps old versions for the specified least amount of time. AUTO is the default setting, which is basically a trade-off between space and time. This is automatically determined. NONE reuses old versions as much as possible. Altering the RETENTION with the ALTER TABLE statement affects the space created only after the statement is executed. For SecureFiles, you no longer need to specify CHUNK, PCTVERSION, FREEPOOLS, FREELISTS, and FREELIST GROUPS. For compatibility with existing scripts, these clauses are parsed but not interpreted.
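As an illustration of these clauses, a SecureFile LOB with automatic retention might be created as follows (a sketch; the table name, segment name, and tablespace are placeholders, and an ASSM tablespace is assumed):

SQL> CREATE TABLE doc_tab
  2  ( id  NUMBER,
  3    doc CLOB )
  4  LOB(doc) STORE AS SECUREFILE docs_lob
  5  ( TABLESPACE users
  6    RETENTION AUTO
  7    CACHE );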
535
Creating SecureFiles CREATE TABLE func_spec(
id number, doc CLOB ENCRYPT USING 'AES128' ) LOB(doc) STORE AS SECUREFILE (DEDUPLICATE LOB CACHE NOLOGGING); CREATE TABLE test_spec ( id number, doc CLOB) LOB(doc) STORE AS SECUREFILE (COMPRESS HIGH KEEP_DUPLICATES CACHE NOLOGGING); CREATE TABLE design_spec (id number, doc CLOB) LOB(doc) STORE AS SECUREFILE (ENCRYPT); CREATE TABLE design_spec (id number, doc CLOB ENCRYPT) LOB(doc) STORE AS SECUREFILE; Creating SecureFiles You create SecureFiles with the storage keyword SECUREFILE in the CREATE TABLE statement with a LOB column. The LOB implementation available in prior database versions is still supported for backward compatibility and is now referred to as BasicFiles. If you add a LOB column to a table, you can specify whether it should be created as SecureFiles or BasicFiles. If you do not specify the storage type, the LOB is created as BasicFiles to ensure backward compatibility. In the first example in the slide, you create a table called FUNC_SPEC to store documents as SecureFiles. Here you are specifying that you do not want duplicates stored for the LOB, that the LOB should be cached when read, and that redo should not be generated when updates are performed to the LOB. In addition, you are specifying that the documents stored in the doc column should be encrypted using the AES128 encryption algorithm. KEEP_DUPLICATES is the opposite of DEDUPLICATE, and can be used in an ALTER statement. In the second example in the slide, you are creating a table called TEST_SPEC that stores documents as SecureFiles. For this table you have specified that duplicates may be stored, and that the LOBs should be stored in compressed format and should be cached but not logged. The default compression level is MEDIUM. The compression algorithm is implemented on the server side, which allows for random reads and writes to LOB data. That property can also be changed via ALTER statements.
536
Creating SecureFiles Using Enterprise Manager
You can use Enterprise Manager to create SecureFiles from the Schema tab > Tables link. After you click the Create button, you can click the Advanced Attributes button against the column you are storing as a SecureFile, to enter any SecureFiles options. The LOB implementation available in prior versions is still supported for backward compatibility reasons and is now referred to as BasicFiles. If you add a LOB column to a table, you can specify whether it should be created as a SecureFile or a BasicFile. If you do not specify the storage type, the LOB is created as a BasicFile to ensure backward compatibility. You can select the following as values for the Cache option: CACHE: Oracle places LOB pages in the buffer cache for faster access. NOCACHE: As a parameter in the STORE AS clause, NOCACHE specifies that LOB values are not brought into the buffer cache. CACHE READS: LOB values are brought into the buffer cache only during read and not during write operations. NOCACHE is the default for both SecureFile and BasicFile LOBs.
537
Shared I/O Pool
[Diagram: memory allocation for SecureFiles, showing the buffer cache, block-size Shared I/O Pool buffers, the LOB cache, and the direct I/O path.] Shared I/O Pool The Shared I/O Pool memory component is added in Oracle Database 11g to support large I/Os from shared memory, as opposed to the Program Global Area (PGA), for direct path access. This applies only when SecureFiles are created as NOCACHE (the default). The Shared I/O Pool defaults to zero in size and, only if there is a SecureFiles NOCACHE workload, the system increases its size to 4% of the cache. Because this is a shared resource, it may get used up by large concurrent SecureFiles workloads. Unlike with other pools, such as the large pool or shared pool, the user process will not receive an ORA error but will fall back to the PGA temporarily until more shared I/O pool buffers are freed. The LOB Cache is a new component in the SecureFiles architecture, improving LOB access performance by gathering and batching data as well as overlapping network and disk I/O. The LOB Cache borrows memory from the buffer cache—either regular buffers or memory from the Shared I/O Pool. Because memory borrowed from buffer cache buffers is naturally suitable for doing database I/Os as well as for injecting back into the buffer cache after I/Os have been done, unnecessary copying of memory can be avoided. In multi-instance Oracle Real Application Clusters, the LOB Cache holds one single lock for each LOB accessed.
538
Altering SecureFiles

Disable deduplication:
ALTER TABLE t1 MODIFY LOB(a) ( KEEP_DUPLICATES );
Enable deduplication:
ALTER TABLE t1 MODIFY LOB(a) ( DEDUPLICATE LOB );
Enable deduplication on a single partition:
ALTER TABLE t1 MODIFY PARTITION p1 LOB(a) ( DEDUPLICATE LOB );

Disable compression:
ALTER TABLE t1 MODIFY LOB(a) ( NOCOMPRESS );
Enable compression:
ALTER TABLE t1 MODIFY LOB(a) ( COMPRESS HIGH );
Enable compression on SecureFiles within a single partition:
ALTER TABLE t1 MODIFY PARTITION p1 LOB(a) ( COMPRESS HIGH );

Enable encryption using 3DES168:
ALTER TABLE t1 MODIFY ( a CLOB ENCRYPT USING '3DES168');
Enable encryption on a partition:
ALTER TABLE t1 MODIFY PARTITION p1 ( LOB(a) ( ENCRYPT ) );
Enable encryption and build the encryption key using a password:
ALTER TABLE t1 MODIFY ( a CLOB ENCRYPT IDENTIFIED BY ghYtp);

Altering SecureFiles Using the DEDUPLICATE option, you can specify that LOB data that is identical in two or more rows in a LOB column should share the same data blocks. The opposite of this is KEEP_DUPLICATES. Oracle uses a secure hash index to detect duplication and combines LOBs with identical content into a single copy, reducing storage and simplifying storage management. The LOB keyword is optional and is for syntactic clarity only. The COMPRESS or NOCOMPRESS keywords enable or disable LOB compression, respectively. All LOBs in the LOB segment are altered with the new compression setting. The ENCRYPT or DECRYPT keyword turns on or off LOB encryption using Transparent Data Encryption (TDE). All LOBs in the LOB segment are altered with the new setting. A LOB segment can be altered only to enable or disable LOB encryption. That is, ALTER cannot be used to update the encryption algorithm or the encryption key. The encryption algorithm or encryption key can be updated using the ALTER TABLE REKEY syntax. Encryption is done at the block level allowing for better performance (smallest encryption amount possible) when combined with other options. Note: For a full description of the options available for the ALTER TABLE statement, see the Oracle Database SQL Reference.
539
Accessing SecureFiles Metadata
The data layer interface is the same as with BasicFiles. SecureFiles DBMS_LOB GETOPTIONS() SETOPTIONS GET_DEDUPLICATE_REGIONS DBMS_SPACE.SPACE_USAGE Accessing SecureFiles Metadata DBMS_LOB package: LOBs inherit the LOB column settings for deduplication, encryption, and compression, which can also be configured on a per-LOB level using the LOB locator API. However, the LONG API cannot be used to configure these LOB settings. You must use the following DBMS_LOB package additions for these features: DBMS_LOB.GETOPTIONS: Settings can be obtained using this function. An integer corresponding to a predefined constant based on the option type is returned. DBMS_LOB.SETOPTIONS: This procedure sets features and allows the features to be set on a per-LOB basis, overriding the default LOB settings. It incurs a round-trip to the server to make the changes persistent. DBMS_LOB.GET_DEDUPLICATE_REGIONS: This procedure outputs a collection of records identifying the deduplicated regions in a LOB. LOB-level deduplication contains only a single deduplicated region. DBMS_SPACE.SPACE_USAGE: The existing SPACE_USAGE procedure is overloaded to return information about LOB space usage. It returns the amount of disk space in blocks used by all the LOBs in the LOB segment. This procedure can be used only on tablespaces created with ASSM and does not treat LOB chunks belonging to BasicFiles as used space. Note: For further details, see the Oracle Database PL/SQL Packages and Types Reference.
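A small sketch of the per-LOB API follows; it assumes that the FUNC_SPEC table created earlier in this lesson contains at least one row, and it uses the DBMS_LOB.OPT_COMPRESS constant as documented for the package:

SET SERVEROUTPUT ON
DECLARE
  l_doc CLOB;
  l_opt PLS_INTEGER;
BEGIN
  SELECT doc INTO l_doc FROM func_spec WHERE id = 1;
  -- Read the compression setting of this particular LOB.
  l_opt := DBMS_LOB.GETOPTIONS(l_doc, DBMS_LOB.OPT_COMPRESS);
  DBMS_OUTPUT.PUT_LINE('Compression option for this LOB: ' || l_opt);
END;
/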
540
Migrating to SecureFiles
Use online redefinition. Migrating to SecureFiles A superset of LOB interfaces allows easy migration from BasicFile LOBs. The two recommended methods for migration to SecureFiles are partition exchange and online redefinition. Partition Exchange Needs additional space equal to the largest of the partitions in the table Can maintain indexes during the exchange Can spread the workload out over several smaller maintenance windows Requires that the table or partition be offline to perform the exchange Online Redefinition (recommended practice) No need to take the table or partition offline Can be done in parallel Requires additional storage equal to the entire table and all LOB segments to be available Requires that any global indexes be rebuilt These solutions generally mean using twice the disk space used by the data in the input LOB column. However, using partitioning and taking these actions on a partition-by-partition basis may help lower the disk space required.
541
SecureFiles Migration: Example
create table tab1 (id number not null, c clob)
  partition by range(id)
  (partition p1 values less than (100) tablespace tbs1 lob(c) store as lobp1,
   partition p2 values less than (200) tablespace tbs2 lob(c) store as lobp2,
   partition p3 values less than (300) tablespace tbs3 lob(c) store as lobp3);

-- Insert your data.

create table tab1_tmp (id number not null, c clob)
  partition by range(id)
  (partition p1 values less than (100) tablespace tbs1 lob(c) store as securefile lobp1_sf,
   partition p2 values less than (200) tablespace tbs2 lob(c) store as securefile lobp2_sf,
   partition p3 values less than (300) tablespace tbs3 lob(c) store as securefile lobp3_sf);

declare
  error_count pls_integer := 0;
begin
  dbms_redefinition.start_redef_table('scott','tab1','tab1_tmp','id id, c c');
  dbms_redefinition.copy_table_dependents('scott','tab1','tab1_tmp',1,
      true,true,true,false,error_count);
  dbms_redefinition.finish_redef_table('scott','tab1','tab1_tmp');
end;
/

SecureFiles Migration: Example The example in the slide can be used to migrate BasicFile LOBs to SecureFile LOBs. First, you create your table using BasicFiles. The example uses a partitioned table. Then, you insert data in your table. Following this, you create a transient table that has the same number of partitions, but this time using SecureFiles (note the SECUREFILE keyword and the distinct LOB segment names). Note that this transient table has the same columns and types. The last section demonstrates the redefinition of your table using the previously created transient table with the DBMS_REDEFINITION package.
542
SecureFiles Monitoring
The following views have been modified to show SecureFiles usage: *_SEGMENTS *_LOBS *_LOB_PARTITIONS *_PART_LOBS SQL> SELECT segment_name, segment_type, segment_subtype 2 FROM dba_segments 3 WHERE tablespace_name = 'SECF_TBS2' 4 AND segment_type = 'LOBSEGMENT' 5 / SEGMENT_NAME SEGMENT_TYPE SEGMENT_SU SYS_LOB C00004$$ LOBSEGMENT SECUREFILE
543
Summary In this lesson, you should have learned how to use:
SecureFiles to improve LOB performance SQL and PL/SQL APIs to access SecureFiles
544
Practice 15: Overview This practice covers exploring the advantages of using SecureFiles for compression, data encryption, and performance.
545
Miscellaneous New Features
546
Objectives After completing this lesson, you should be able to:
Describe enhancements to locking mechanisms Use the SQL query result cache Use the enhanced PL/SQL recompilation mechanism Create and use invisible indexes Describe Adaptive Cursor Sharing Manage your SPFILE
547
Foreground Statistics
New columns report foreground-only statistics: V$SYSTEM_EVENT: TOTAL_WAITS_FG TOTAL_TIMEOUTS_FG TIME_WAITED_FG AVERAGE_WAIT_FG TIME_WAITED_MICRO_FG V$SYSTEM_WAIT_CLASS: TOTAL_WAITS_FG TIME_WAITED_FG Foreground Statistics New columns have been added to the V$SYSTEM_EVENT and the V$SYSTEM_WAIT_CLASS views that allow you to easily identify events that are caused by foreground or background processes. V$SYSTEM_EVENT has five new NUMBER columns that represent the statistics from purely foreground sessions: TOTAL_WAITS_FG TOTAL_TIMEOUTS_FG TIME_WAITED_FG AVERAGE_WAIT_FG TIME_WAITED_MICRO_FG V$SYSTEM_WAIT_CLASS has two new NUMBER columns that represent the statistics from purely foreground sessions: TOTAL_WAITS_FG and TIME_WAITED_FG.
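For example, foreground-only waits can be compared with the overall totals (an illustrative query):

SQL> SELECT event, total_waits, total_waits_fg,
  2         time_waited_micro, time_waited_micro_fg
  3  FROM   v$system_event
  4  WHERE  wait_class <> 'Idle'
  5  ORDER BY time_waited_micro DESC;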
548
Online Redefinition Enhancements
Online table redefinition supports the following: Tables with materialized views and view logs Triggers with ordering dependency Online redefinition does not systematically invalidate dependent objects. Online Redefinition Enhancements Oracle Database 11g supports online redefinition for tables with materialized views and view logs. In addition, online redefinition supports triggers with the FOLLOWS or PRECEDES clause, which establishes an ordering dependency between the triggers. In previous database versions, all directly and indirectly dependent views and PL/SQL packages would be invalidated after an online redefinition or other DDL operations. These views and PL/SQL packages would automatically be recompiled whenever they are next invoked. If there are a lot of dependent PL/SQL packages and views, the cost of the revalidation or recompilation can be significant. In Oracle Database 11g, views, synonyms, and other table-dependent objects (with the exception of triggers) that are not logically affected by the redefinition, are not invalidated. So, for example, if referenced column names and types are the same after the redefinition, then they are not invalidated. This optimization is “transparent,” that is, it is turned on by default. Another example: If the redefinition drops a column, only those procedures and views that reference the column are invalidated. The other dependent procedures and views remain valid. Note that all triggers on a table being redefined are invalidated (as the redefinition can potentially change the internal column numbers and data types), but they are automatically revalidated with the next DML execution to the table.
549
Minimizing Dependent Recompilations
Adding a column to a table does not invalidate its dependent objects. Adding a PL/SQL unit to a package does not invalidate dependent objects. Fine-grain dependencies are tracked automatically. No configuration is required. Minimizing Dependent Recompilations Starting with Oracle Database 11g, you have access to records that describe more precise dependency metadata. This is called fine-grain dependencies and it is automatically on. Earlier Oracle Database releases record dependency metadata—for example, that PL/SQL unit P depends on PL/SQL unit F, or that the view V depends on the table T—with the precision of the whole object. This means that dependent objects are sometimes invalidated without logical requirement. For example, if the V view depends only on the A and B columns in the T table, and column D is added to the T table, the validity of the V view is not logically affected. Nevertheless, before Oracle Database Release 11.1, the V view is invalidated by the addition of the D column to the T table. With Oracle Database Release 11.1, adding the D column to the T table does not invalidate the V view. Similarly, if procedure P depends only on elements E1 and E2 within a package, adding the E99 element (to the end of a package to avoid changing slot numbers or entry point numbers of existing top-level elements) to the package does not invalidate the P procedure. Reducing the invalidation of dependent objects in response to changes to the objects on which they depend increases application availability, both in the development environment and during online application upgrade.
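The behavior can be observed with a small test (a sketch using throwaway objects; the expected status reflects the documented fine-grain dependency behavior):

SQL> CREATE TABLE dep_t (a NUMBER, b NUMBER);
SQL> CREATE VIEW dep_v AS SELECT a, b FROM dep_t;
SQL> ALTER TABLE dep_t ADD (d NUMBER);
SQL> SELECT object_name, status FROM user_objects WHERE object_name = 'DEP_V';

OBJECT_NAME  STATUS
DEP_V        VALID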
550
Locking Enhancements DDL commands can now wait for DML locks to be released: DDL_LOCK_TIMEOUT initialization parameter New WAIT [<timeout>] clause for the LOCK TABLE command The following commands will no longer acquire exclusive locks (X), but shared exclusive locks (SX): CREATE INDEX ONLINE CREATE MATERIALIZED VIEW LOG ALTER TABLE ENABLE CONSTRAINT NOVALIDATE Locking Enhancements You can limit the time that DDL commands wait for DML locks before failing by setting the DDL_LOCK_TIMEOUT parameter at the system or session level. This initialization parameter is set by default to 0, that is NOWAIT, which ensures backward compatibility. The range of values is 0–100,000 (in seconds). The LOCK TABLE command has new syntax that you can use to specify the maximum number of seconds the statement should wait to obtain a DML lock on the table. Use the WAIT clause to indicate that the LOCK TABLE statement should wait up to the specified number of seconds to acquire a DML lock. There is no limit on the value of the integer. In highly concurrent environments, the requirement of acquiring an exclusive lock—for example, at the end of an online index creation and rebuild—could lead to a spike of waiting DML operations and, therefore, a short drop and spike of system usage. While this is not an overall problem for the database, this anomaly in system usage could trigger operating system alarm levels. The commands listed in the slide no longer require exclusive locks.
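For instance (the timeout values and table name are illustrative):

SQL> ALTER SESSION SET ddl_lock_timeout = 30;   -- DDL in this session waits up to 30 seconds for DML locks
SQL> LOCK TABLE hr.employees IN EXCLUSIVE MODE WAIT 10;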
551
Invisible Index: Overview
[Diagram: with OPTIMIZER_USE_INVISIBLE_INDEXES=FALSE, from the optimizer's point of view a VISIBLE index is used and an INVISIBLE index is not; from the data point of view, both indexes are still updated when the table is updated.] Invisible Index: Overview Beginning with Release 11g, you can create invisible indexes. An invisible index is an index that is ignored by the optimizer unless you explicitly set the OPTIMIZER_USE_INVISIBLE_INDEXES initialization parameter to TRUE at the session or system level. The default value for this parameter is FALSE. Making an index invisible is an alternative to making it unusable or dropping it. Using invisible indexes, you can do the following: Test the removal of an index before dropping it. Use temporary index structures for certain operations or modules of an application without affecting the overall application. Unlike unusable indexes, an invisible index is maintained during DML statements.
552
Invisible Indexes: Examples
Index is altered as not visible to the optimizer:
ALTER INDEX ind1 INVISIBLE;
Optimizer does not consider this index (even when it is hinted):
SELECT /*+ index(TAB1 IND1) */ COL1 FROM TAB1 WHERE …;
Optimizer will always consider the index:
ALTER INDEX ind1 VISIBLE;
Creating an index as invisible initially:
CREATE INDEX IND1 ON TAB1(COL1) INVISIBLE;

Invisible Indexes: Examples When an index is invisible, the optimizer generates plans that do not use the index. If there is no discernible drop in performance, you can then drop the index. You can also create an index initially as invisible, perform testing, and then determine whether to make the index visible. You can query the VISIBILITY column of the *_INDEXES data dictionary views to determine whether the index is VISIBLE or INVISIBLE. Note: For all the statements given in the slide, it is assumed that OPTIMIZER_USE_INVISIBLE_INDEXES is set to FALSE.
553
SQL Query Result Cache: Overview
Cache the result of a query or query block for future reuse. Cache is used across statements and sessions unless it is stale. Benefits: Scalability Reduction of memory usage Good candidate statements: Access many rows Return few rows [Diagram: Session 1 issues a SELECT; (1) the data is read from the database and (2) the result is stored in the SQL Query Result Cache; when Session 2 issues the same SELECT, (3) the result is returned directly from the cache.] SQL Query Result Cache: Overview The SQL query result cache enables explicit caching of query result sets and query fragments in database memory. A dedicated memory buffer stored in the shared pool can be used for storing and retrieving the cached results. The query results stored in this cache become invalid when data in the database objects being accessed by the query is modified. Although the SQL query cache can be used for any query, good candidate statements are the ones that need to access a very high number of rows to return only a fraction of them. This is mostly the case for data warehousing applications. In the graphic shown in the slide, if the first session executes a query, it retrieves the data from the database and then caches the result in the SQL query result cache. If a second session executes the exact same query, it retrieves the result directly from the cache instead of using the disks. Note Each node in a RAC configuration has a private result cache. Results cached on one instance cannot be used by another instance. However, invalidations work across instances. To handle all synchronization operations between RAC instances related to the SQL query result cache, the special RCBG process is used on each instance. With parallel query, the entire result can be cached (in RAC, it is cached on the query coordinator instance), but individual parallel query processes cannot use the cache.
554
Setting Up SQL Query Result Cache
Set at database level using the RESULT_CACHE_MODE initialization parameter. Values: AUTO: The optimizer determines the results that need to be stored in the cache based on repetitive executions. MANUAL: Use the result_cache hint to specify results to be stored in the cache. FORCE: All results are stored in the cache. Setting Up SQL Query Result Cache The query optimizer manages the result cache mechanism depending on the settings of the RESULT_CACHE_MODE parameter in the initialization parameter file. You can use this parameter to determine whether or not the optimizer automatically sends the results of queries to the result cache. You can set the RESULT_CACHE_MODE parameter at the system, session, and table level. The possible parameter values are AUTO, MANUAL, and FORCE: When set to AUTO, the optimizer determines which results are to be stored in the cache based on repetitive executions. When set to MANUAL (the default), you must specify, by using the RESULT_CACHE hint, that a particular result is to be stored in the cache. When set to FORCE, all results are stored in the cache. Note: For both the AUTO and FORCE settings, if the statement contains a [NO_]RESULT_CACHE hint, then the hint takes precedence over the parameter setting.
555
Managing the SQL Query Result Cache
Use the following initialization parameters: RESULT_CACHE_MAX_SIZE It sets the memory allocated to the result cache. Result cache is disabled if you set the value to 0. Default is dependent on other memory settings (0.25% of memory_target or 0.5% of sga_target or 1% of shared_pool_size) Cannot be greater than 75% of shared pool RESULT_CACHE_MAX_RESULT Sets maximum cache memory for a single result Defaults to 5% RESULT_CACHE_REMOTE_EXPIRATION Sets the expiry time for cached results depending on remote database objects Defaults to 0 Managing SQL Query Results Cache You can alter various parameter settings in the initialization parameter file to manage the SQL query result cache of your database. By default, the database allocates memory for the result cache in the Shared Pool inside the SGA. The memory size allocated to the result cache depends on the memory size of the SGA as well as the memory management system. You can change the memory allocated to the result cache by setting the RESULT_CACHE_MAX_SIZE parameter. The result cache is disabled if you set its value to 0. The value of this parameter is rounded to the largest multiple of 32 KB that is not greater than the specified value. If the rounded value is 0, then the feature is disabled. Use the RESULT_CACHE_MAX_RESULT parameter to specify the maximum amount of cache memory that can be used by any single result. The default value is 5%, but you can specify any percentage value between 1 and 100. This parameter can be implemented at the system and session level. Use the RESULT_CACHE_REMOTE_EXPIRATION parameter to specify the time (in number of minutes) for which a result that depends on remote database objects remains valid. The default value is 0, which implies that results using remote objects should not be cached. Setting this parameter to a nonzero value can produce stale answers: for example, if the remote table used by a result is modified at the remote database.
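A sketch of typical settings (the values shown are illustrative only):

SQL> ALTER SYSTEM SET result_cache_mode = 'MANUAL';
SQL> ALTER SYSTEM SET result_cache_max_size = 64M;
SQL> ALTER SYSTEM SET result_cache_max_result = 10;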
556
Using the RESULT_CACHE Hint
EXPLAIN PLAN FOR
SELECT /*+ RESULT_CACHE */ department_id, AVG(salary)
FROM employees
GROUP BY department_id;

| Id | Operation            | Name                       | Rows |
|  0 | SELECT STATEMENT     |                            |   11 |
|  1 |  RESULT CACHE        | 8fpza04gtwsfr6n595au15yj4y |      |
|  2 |   HASH GROUP BY      |                            |   11 |
|  3 |    TABLE ACCESS FULL | EMPLOYEES                  |  107 |

SELECT /*+ NO_RESULT_CACHE */ department_id, AVG(salary)
FROM employees
GROUP BY department_id;

Using the Result_Cache Hint If you want to use the query result cache and the RESULT_CACHE_MODE initialization parameter is set to MANUAL, you must explicitly specify the RESULT_CACHE hint in your query. This introduces the ResultCache operator into the execution plan for the query. When you execute the query, the ResultCache operator looks up the result cache memory to check whether the result for the query already exists in the cache. If it exists, then the result is retrieved directly out of the cache. If it does not yet exist in the cache, then the query is executed, the result is returned as output, and is also stored in the result cache memory. If the RESULT_CACHE_MODE initialization parameter is set to AUTO or FORCE, and you do not want to store the result of a query in the result cache, you must then use the NO_RESULT_CACHE hint in your query. For example, when the RESULT_CACHE_MODE value equals FORCE in the initialization parameter file, and you do not want to use the result cache for the EMPLOYEES table, then use the NO_RESULT_CACHE hint. Note: Use of the [NO_]RESULT_CACHE hint takes precedence over the parameter settings.
557
In-Line View: Example
SELECT prod_subcategory, revenue
FROM (SELECT /*+ RESULT_CACHE */ p.prod_category, p.prod_subcategory,
             SUM(s.amount_sold) revenue
      FROM products p, sales s
      WHERE s.prod_id = p.prod_id
        AND s.time_id BETWEEN TO_DATE('01-JAN-2006','dd-MON-yyyy')
                          AND TO_DATE('31-DEC-2006','dd-MON-yyyy')
      GROUP BY ROLLUP(p.prod_category, p.prod_subcategory))
WHERE prod_category = 'Women';
In-Line View: Example In the example given in the slide, the RESULT_CACHE hint is used in the in-line view. In this case, the following optimizations are disabled: view merging, predicate push-down, and column projection. This is at the expense of the initial query, which might take longer to execute. However, subsequent executions will be much faster because of the SQL query result cache. The other benefit in this case is that similar queries (queries using a different predicate value for prod_category in the last WHERE clause) will also be much faster.
558
Using the DBMS_RESULT_CACHE Package
Use the DBMS_RESULT_CACHE package to: Manage memory allocation for the query result cache
View the status of the cache: SELECT DBMS_RESULT_CACHE.STATUS FROM DUAL;
Retrieve statistics on the cache memory usage: EXECUTE DBMS_RESULT_CACHE.MEMORY_REPORT;
Remove all existing results and clear cache memory: EXECUTE DBMS_RESULT_CACHE.FLUSH;
Invalidate cached results depending on a specified object: EXEC DBMS_RESULT_CACHE.INVALIDATE('JFV','MYTAB');
Using the DBMS_RESULT_CACHE Package The DBMS_RESULT_CACHE package provides statistics, information, and operators that enable you to manage memory allocation for the query result cache. You can use the DBMS_RESULT_CACHE package to perform various operations such as viewing the status of the cache (OPEN or CLOSED), retrieving statistics on the cache memory usage, and flushing the cache. For example, to view the memory allocation statistics, use the following SQL procedure:
SQL> set serveroutput on
SQL> execute dbms_result_cache.memory_report
R e s u l t C a c h e M e m o r y R e p o r t
[Parameters]
Block Size = 1024 bytes
Maximum Cache Size = bytes (704 blocks)
Maximum Result Size = bytes (35 blocks)
[Memory]
Total Memory = bytes [0.036% of the Shared Pool]
... Fixed Memory = bytes [0.008% of the Shared Pool]
... State Object Pool = 2852 bytes [0.002% of the Shared Pool]
... Cache Memory = bytes (32 blocks) [0.025% of the Shared Pool]
Unused Memory = 30 blocks
Used Memory = 2 blocks
Dependencies = 1 blocks
Results = 1 blocks
SQL = 1 blocks
Note: For more information, refer to the PL/SQL Packages and Types Reference Guide.
559
Viewing SQL Result Cache Dictionary Information
The following views provide information about the query result cache: (G)V$RESULT_CACHE_STATISTICS Lists the various cache settings and memory usage statistics (G)V$RESULT_CACHE_MEMORY Lists all the memory blocks and the corresponding statistics (G)V$RESULT_CACHE_OBJECTS Lists all the objects (cached results and dependencies) along with their attributes (G)V$RESULT_CACHE_DEPENDENCY Lists the dependency details between the cached results and dependencies Viewing SQL Result Cache Dictionary Information Note: For further information, see the Oracle Database Reference guide.
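For example, assuming the standard column layout of these views in 11g, a quick check of overall cache effectiveness and of what is currently cached might look like this:
SQL> SELECT name, value FROM v$result_cache_statistics;
SQL> SELECT type, status, name FROM v$result_cache_objects WHERE type = 'Result';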
560
SQL Query Result Cache: Considerations
Result cache is disabled for queries containing: Temporary or dictionary tables Nondeterministic PL/SQL functions Sequence CURRVAL and NEXTVAL SQL functions current_date, sysdate, sys_guid, and so on DML/DDL on remote database does not expire cached results. Flashback queries can be cached. SQL Query Result Cache: Considerations Note: Any user-written function used in a function-based index must have been declared with the DETERMINISTIC keyword to indicate that the function will always return the same output value for any given set of input argument values.
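To illustrate the DETERMINISTIC requirement mentioned in the note, here is a minimal sketch; the function name and body are made up for illustration:
CREATE OR REPLACE FUNCTION normalize_name (p_name VARCHAR2)
  RETURN VARCHAR2 DETERMINISTIC
IS
BEGIN
  RETURN UPPER(TRIM(p_name));  -- same input always produces the same output
END normalize_name;
/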
561
SQL Query Result Cache: Considerations
Result cache does not automatically release memory. It grows until maximum size is reached. DBMS_RESULT_CACHE.FLUSH purges memory. Bind variables Cached result is parameterized with variable values. Cached results can only be found for the same variable values. Cached result will not be built if: Query is built on a noncurrent version of data (read consistency enforcement) Current session has outstanding transaction on table(s) in query SQL Query Result Cache: Considerations (continued) Note The purge works only if the cache is not in use; disable (close) the cache for the flush to succeed. With bind variables, the cached result is parameterized with the variable values. Cached results can be found only for the same variable values; that is, different values or different bind variable names cause a cache miss.
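A minimal sketch of the purge sequence described in the note, assuming you use the package's BYPASS procedure to take the cache out of use first:
SQL> EXEC DBMS_RESULT_CACHE.BYPASS(TRUE);    -- stop serving and storing results
SQL> EXEC DBMS_RESULT_CACHE.FLUSH;           -- purge the result cache memory
SQL> EXEC DBMS_RESULT_CACHE.BYPASS(FALSE);   -- re-enable the cache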
562
OCI Client Query Cache Extends server-side query caching to client-side memory Ensures better performance by eliminating round-trips to the server Leverages client-side memory Improves server scalability by saving server CPU resources Result cache automatically refreshed if the result set is changed on the server Particularly good for lookup tables OCI Client Query Cache You can enable caching of query result sets in client memory with Oracle Call Interface (OCI) Client Query Cache in Oracle Database 11g. The cached result set data is transparently kept consistent with any changes done on the server side. Applications leveraging this feature see improved performance for queries that have a cache hit. Additionally, a query serviced by the cache avoids round trips to the server for sending the query and fetching the results. Server CPU, which would have been consumed for processing the query, is reduced thus improving server scalability. Before using client-side query cache, determine whether your application will benefit from this feature. Client-side caching is useful when you have applications that produce repeatable result sets, small result sets, static result sets, or frequently executed queries. Client and server result caches are autonomous; each can be enabled/disabled independently. Note: You can monitor the client query cache using the client_result_cache_stats$ view or v$client_result_cache_stats view.
563
Using Client-Side Query Cache
You can use client-side query caching by: Setting initialization parameters CLIENT_RESULT_CACHE_SIZE CLIENT_RESULT_CACHE_LAG Using the client configuration file OCI_RESULT_CACHE_MAX_SIZE OCI_RESULT_CACHE_MAX_RSET_SIZE OCI_RESULT_CACHE_MAX_RSET_ROWS Client result cache is then used depending on: Table result cache mode RESULT CACHE hints in your SQL statements Using Client-Side Query Cache The following two parameters can be set in your initialization parameter file: CLIENT_RESULT_CACHE_SIZE: A nonzero value enables the client result cache. This is the maximum size of the client per-process result set cache in bytes. All OCI client processes get this maximum size; it can be overridden by the OCI_RESULT_CACHE_MAX_SIZE parameter. CLIENT_RESULT_CACHE_LAG: Maximum time (in milliseconds) since the last round-trip to the server, before which the OCI client query executes a round-trip to get any database changes related to the queries cached on the client. A client configuration file is optional and overrides the cache parameters set in the server initialization parameter file. Parameter values can be part of a sqlnet.ora file. When the parameter values shown in the slide are specified, OCI client caching is enabled for OCI client processes using the configuration file. OCI_RESULT_CACHE_MAX_RSET_SIZE/ROWS denotes the maximum size of any result set in bytes/rows in the per-process query cache. OCI applications can use application hints to force result cache storage. This overrides the deployment time settings of ALTER TABLE/ALTER VIEW. The application hints can be: SQL hints /*+ result_cache */, and /*+ no_result_cache */ OCIStmtExecute() modes. These override SQL hints. Note: To use this feature, your applications must be relinked with Release 11.1 or higher client libraries and be connected to a Release 11.1 or higher server.
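A minimal configuration sketch combining both approaches; the sizes and lag are illustrative values to be tuned for your workload:
-- Server initialization parameters (spfile/init.ora)
CLIENT_RESULT_CACHE_SIZE = 65536    -- nonzero enables the client cache; per-process maximum, in bytes
CLIENT_RESULT_CACHE_LAG  = 3000     -- maximum lag before a validation round-trip, in milliseconds

-- Optional client-side overrides (sqlnet.ora)
OCI_RESULT_CACHE_MAX_SIZE      = 65536
OCI_RESULT_CACHE_MAX_RSET_SIZE = 32768
OCI_RESULT_CACHE_MAX_RSET_ROWS = 100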
564
PL/SQL Function Cache Stores function results in cache, making them available to other sessions. Uses the Query Result Cache (Slide diagram: the first call to Calculate(…) executes the function body; subsequent calls with the same parameters, from any session, are served from the cached results.) PL/SQL Function Cache Starting in Oracle Database 11g, you can use the PL/SQL cross-session function result caching mechanism. This caching mechanism provides you with a language-supported and system-managed means for storing the results of PL/SQL functions in a shared global area (SGA), which is available to every session that runs your application. The caching mechanism is both efficient and easy to use, and it relieves you of the burden of designing and developing your own caches and cache management policies. Oracle Database 11g provides the ability to mark a PL/SQL function to indicate that its result should be cached to allow lookup, rather than recalculation, on the next access when the same parameter values are called. This function result cache saves significant space and time. This is done transparently using the input parameters as the lookup key. The cache is instancewide so that all distinct sessions invoking the function benefit. If the result for a given set of parameters changes, you can use constructs to invalidate the cache entry so that it will be properly recalculated on the next access. This feature is especially useful when the function returns a value that is calculated from data selected from schema-level tables. For such uses, the invalidation constructs are simple and declarative. You can include syntax in the source text of a PL/SQL function to request that its results be cached and, to ensure correctness, that the cache be purged when any of a list of tables experiences DML. When a particular invocation of the result-cached function is a cache hit, then the function body is not executed; instead, the cached value is returned immediately.
565
Using PL/SQL Function Cache
Include the RESULT_CACHE option in the function declaration section of a package or function definition. Optionally include the RELIES_ON clause to specify any tables or views on which the function results depend.
CREATE OR REPLACE FUNCTION productName (prod_id NUMBER, lang_id VARCHAR2)
  RETURN NVARCHAR2
  RESULT_CACHE RELIES_ON (product_descriptions)
IS
  result VARCHAR2(50);
BEGIN
  SELECT translated_name INTO result
  FROM product_descriptions
  WHERE product_id = prod_id AND language_id = lang_id;
  RETURN result;
END;
Using PL/SQL Function Cache In the example shown in the slide, the productName function has result caching enabled through the RESULT_CACHE option in the function declaration. In this example, the RELIES_ON clause is used to identify the PRODUCT_DESCRIPTIONS table on which the function results depend. Usage Notes If function execution results in an unhandled exception, the exception result is not stored in the cache. The body of a result-cached function executes: The first time a session on this database instance calls the function with these parameter values When the cached result for these parameter values is invalid. A cached result becomes invalid when any database object specified in the RELIES_ON clause of the function definition changes. When the cached result for these parameter values has aged out. If the system needs memory, it might discard the oldest cached values. When the function bypasses the cache The function should not have any side effects. The function should not depend on session-specific settings. The function should not depend on session-specific application contexts.
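A quick usage sketch against the slide's function; the product ID and language values are hypothetical:
SQL> SELECT productName(124, 'US') FROM dual;   -- first call with these values: body executes, result is cached
SQL> SELECT productName(124, 'US') FROM dual;   -- same values: served from the function result cache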
566
PL/SQL Function Cache: Considerations
PL/SQL Function Cache cannot be used when: The function is defined in a module that has the invoker’s rights or in an anonymous block. The function is a pipelined table function. The function has OUT or IN OUT parameters. The function has IN parameter of the following types: BLOB, CLOB, NCLOB, REF CURSOR, collection, object, or record. The function’s return type is: BLOB, CLOB, NCLOB, REF CURSOR, object, record, or collection with one of the preceding unsupported return types.
567
PL/SQL and Java Native Compilation Enhancements
100+% faster for pure PL/SQL or Java code 10–30% faster for typical transactions with SQL PL/SQL Just one parameter: On/Off No need for C compiler No file system DLLs Java JIT “on the fly” compilation Transparent to user (asynchronous, in background) Code stored to avoid recompilations PL/SQL and Java Native Compilation Enhancements PL/SQL Native Compilation: The Oracle executable generates native dynamic link libraries (DLLs) directly from the PL/SQL source code without needing to use a third-party C compiler. In Oracle Database 11g, the DLL is stored canonically in the database catalog; when it is needed, the Oracle executable loads it directly from the catalog without needing to stage it first on the file system. The execution speed of natively compiled PL/SQL programs will never be slower than in Oracle Database 10g and may be improved in some cases by as much as an order of magnitude. The PL/SQL native compilation is automatically available with Oracle Database 11g. No third-party software (neither a C compiler nor a DLL loader) is needed. Java Native Compilation: Enabled by default (JAVA_JIT_ENABLED initialization parameter) and similar to the Java Development Kit JIT (just-in-time), this feature compiles Java in the database natively and transparently without the need of a C compiler. The JIT runs as an independent session in a dedicated Oracle server process. There is at most one compiler session per database instance; it is Oracle RAC-aware and amortized over all Java sessions. This feature brings two major benefits to Java in the database: increased performance of pure Java execution in the database and ease of use, because it is activated transparently, without the need of an explicit command, when Java is executed in the database. Because this feature removes the need for a C compiler, there are cost and license savings.
568
Setting Up and Testing PL/SQL Native Compilation
Set PLSQL_CODE_TYPE to NATIVE: ALTER SYSTEM | ALTER SESSION | ALTER … COMPILE
Compile your PL/SQL units (example):
CREATE OR REPLACE PROCEDURE hello_native AS
BEGIN
  DBMS_OUTPUT.PUT_LINE('Hello world.');
END hello_native;
/
ALTER PROCEDURE hello_native COMPILE PLSQL_CODE_TYPE=NATIVE;
Make sure you succeeded:
SELECT plsql_code_type FROM all_plsql_object_settings WHERE name = 'HELLO_NATIVE';
Setting Up and Testing PL/SQL Native Compilation To set up and test one or more program units through native compilation: 1. Set up the PLSQL_CODE_TYPE initialization parameter. This parameter determines whether PL/SQL code is natively compiled or interpreted. The default setting is INTERPRETED, which is recommended during development. To enable PL/SQL native compilation, set the value of PLSQL_CODE_TYPE to NATIVE. Make sure that the PLSQL_OPTIMIZE_LEVEL initialization parameter is not less than 2 (which is the default). You can set PLSQL_CODE_TYPE at the system, session, or unit level. A package specification and its body can have different PLSQL_CODE_TYPE settings. 2. Compile one or more program units, using one of these methods: Use CREATE OR REPLACE to create or recompile the program unit. Use the various ALTER <PL/SQL unit type> COMPILE commands as shown in the slide example. Run one of the SQL*Plus scripts that creates a set of Oracle-supplied packages. Create a database using a preconfigured initialization file with PLSQL_CODE_TYPE=NATIVE. 3. To be sure that the process worked, query the data dictionary to see that a program unit is compiled for native execution. You can use the ALL|USER_PLSQL_OBJECT_SETTINGS views. The PLSQL_CODE_TYPE column has a value of NATIVE for program units that are compiled for native execution, and INTERPRETED otherwise.
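For step 1, a minimal sketch of the parameter settings at the session level (ALTER SYSTEM works the same way):
SQL> ALTER SESSION SET plsql_code_type = NATIVE;
SQL> ALTER SESSION SET plsql_optimize_level = 2;   -- must not be less than 2 for native compilation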
569
Recompiling the Entire Database for PL/SQL Native Compilation
Shut down the database. Set PLSQL_CODE_TYPE to NATIVE. Start up the database in UPGRADE mode. Execute the dbmsupgnv.sql script. Shut down/start up your database in restricted mode. Execute the utlrp.sql script. Disable restricted mode. Recompiling the Entire Database for PL/SQL Native Compilation If you have DBA privileges, you can recompile all PL/SQL modules in an existing database to NATIVE or INTERPRETED, using the dbmsupgnv.sql and dbmsupgin.sql scripts, respectively. To recompile all PL/SQL modules to NATIVE, perform the following steps: 1. Shut down application services, the listener, and the database in normal or immediate mode. The first two are used to make sure that all of the connections to the database have been terminated. 2. Set PLSQL_CODE_TYPE to NATIVE in the initialization parameter file. The value of PLSQL_CODE_TYPE does not affect the conversion of the PL/SQL units in these steps. However, it does affect all subsequently compiled units and it should be explicitly set to the compilation type that you want. 3. Start up the database in UPGRADE mode, using the UPGRADE option. It is assumed that there are no invalid objects at this point. 4. Run the $ORACLE_HOME/rdbms/admin/dbmsupgnv.sql script as the SYS user to update the plsql_code_type setting to NATIVE in the dictionary tables for all PL/SQL units. This process also invalidates the units. Use TRUE with the script to exclude package specifications; FALSE to include the package specifications. The script is guaranteed to complete successfully or roll back all the changes. Package specifications seldom contain executable code, so the run-time benefits of compiling to NATIVE are not measurable.
570
Recompiling the Entire Database for PL/SQL Native Compilation (continued)
5. Shut down the database and restart in NORMAL mode. Oracle recommends that no other sessions be connected to avoid possible problems. You can ensure this with the following statement: ALTER SYSTEM ENABLE RESTRICTED SESSION; 6. Run the $ORACLE_HOME/rdbms/admin/utlrp.sql script as the SYS user. This script recompiles all the PL/SQL modules using a default degree of parallelism. 7. Disable the restricted session mode for the database, and then start the services that you previously shut down. To disable restricted session mode, use the following statement: ALTER SYSTEM DISABLE RESTRICTED SESSION; Note: During the conversion to native compilation, TYPE specifications are not recompiled to NATIVE because these specifications do not contain executable code. Notes only page
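Putting the two slides together, a command-line sketch of the full sequence; the script locations follow the $ORACLE_HOME/rdbms/admin paths named above (the ? shorthand in SQL*Plus expands to ORACLE_HOME), and you should verify the exact steps against your release's documentation:
SQL> SHUTDOWN IMMEDIATE
-- set PLSQL_CODE_TYPE=NATIVE in the initialization parameter file, then:
SQL> STARTUP UPGRADE
SQL> @?/rdbms/admin/dbmsupgnv.sql TRUE       -- TRUE excludes package specifications
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> @?/rdbms/admin/utlrp.sql
SQL> ALTER SYSTEM DISABLE RESTRICTED SESSION;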
571
Adaptive Cursor Sharing: Overview
Adaptive Cursor Sharing allows for intelligent cursor sharing only for statements that use bind variables. Adaptive Cursor Sharing is used to compromise between cursor sharing and optimization. Adaptive Cursor Sharing benefits: Automatically detects when different executions would benefit from different execution plans Limits the number of generated child cursors to a minimum Automated mechanism that cannot be turned off One plan not always appropriate for all bind values Adaptive Cursor Sharing: Overview Bind variables were designed to allow the Oracle database to share a single cursor for multiple SQL statements to reduce the amount of shared memory used to parse SQL statements. However, cursor sharing and SQL optimization are two conflicting goals. Writing a SQL statement with literals provides more information for the optimizer and naturally leads to better execution plans, while increasing memory and CPU overhead caused by excessive hard parses. Oracle9i Database made the first attempt at a compromise by allowing similar SQL statements using different literal values to be shared. For statements using bind variables, Oracle9i also introduced the concept of bind peeking. Using bind peeking, the optimizer looks at the bind values the first time the statement is executed. It then uses these values to determine an execution plan that will be shared by all other executions of that statement. To benefit from bind peeking, it is assumed that cursor sharing is intended and that different invocations of the statement are supposed to use the same execution plan. If different invocations of the statement would significantly benefit from different execution plans, then bind peeking is of no use in generating good execution plans. To address this issue as much as possible, Oracle Database 11g introduces Adaptive Cursor Sharing. This feature is a more sophisticated strategy designed to not share the cursor blindly, but generate multiple plans per SQL statement with bind variables if the benefit of using multiple execution plans outweighs the parse time and memory usage overhead. However, because the purpose of using bind variables is to share cursors in memory, a compromise must be found regarding the number of child cursors that need to be generated.
572
Adaptive Cursor Sharing: Architecture
(Slide diagram, for the query SELECT * FROM emp WHERE sal = :1 and dept = :2: the system observes a bind-sensitive cursor for a while; 1. the initial plan and initial selectivity cube come from a hard parse with :1=A, :2=B where S(:1)=0.15, S(:2)=0.0025; 2. a soft parse with :1=C, :2=D where S(:1)=0.18, S(:2)=0.003 falls within the same selectivity cube, so no new plan is needed; 3. a hard parse with :1=E, :2=F where S(:1)=0.3, S(:2)=0.009 creates a second selectivity cube and needs a new plan; 4. a hard parse with :1=G, :2=H where S(:1)=0.28, S(:2)=0.004 produces the same plan again and the selectivity cubes are merged.)
Adaptive Cursor Sharing: Architecture Using Adaptive Cursor Sharing, the following steps take place in the scenario illustrated in the slide: 1. The cursor starts its life with a hard parse, as usual. If bind peeking takes place, and a histogram is used to compute the selectivity of the predicate containing the bind variable, then the cursor is marked as a bind-sensitive cursor. In addition, some information is stored about the predicate containing the bind variables, including the predicate selectivity. In the slide example, the predicate selectivity that would be stored is a cube centered around (0.15,0.0025). Because of the initial hard parse, an initial execution plan is determined using the peeked binds. After the cursor is executed, the bind values and the execution statistics of the cursor are stored in that cursor. During the next execution of the statement when a new set of bind values is used, the system performs a usual soft parse, and finds the matching cursor for execution. At the end of execution, execution statistics are compared with the ones currently stored in the cursor. The system then observes the pattern of the statistics over all previous runs (see the V$SQL_CS_… views on the next slide) and decides whether or not to mark the cursor as bind-aware. 2. On the next soft parse of this query, if the cursor is now bind-aware, bind-aware cursor matching is used. Suppose that the selectivity of the predicate with the new set of bind values is now (0.18,0.003). Because selectivity is used as part of bind-aware cursor matching, and because the selectivity is within an existing cube, the statement uses the existing child cursor’s execution plan to run.
573
Notes only page Adaptive Cursor Sharing: Architecture (continued)
3. On the next soft parse of this query, suppose that the selectivity of the predicate with the new set of bind values is now (0.3,0.009). Because that selectivity is not within an existing cube, no child cursor match is found. So the system does a hard parse, which generates a new child cursor with a second execution plan in that case. In addition, the new selectivity cube is stored as part of the new child cursor. After the new child cursor executes, the system stores the bind values and execution statistics in the cursor. 4. On the next soft parse of this query, suppose that the selectivity of the predicate with the new set of bind values is now (0.28,0.004). Because that selectivity is not within one of the existing cubes, the system does a hard parse. Suppose that this time, the hard parse generates the same execution plan as the first one. Because the plan is the same as the first child cursor, both child cursors are merged. That is, both cubes are merged into a new bigger cube, and one of the child cursors is deleted. The next time there is a soft parse, if the selectivity falls within the new cube, the child cursor will match. Notes only page
574
Adaptive Cursor Sharing Views
The following views provide information about Adaptive Cursor Sharing usage: V$SQL Two new columns show whether a cursor is bind-sensitive or bind-aware. V$SQL_CS_HISTOGRAM Shows the distribution of the execution count across the execution history histogram. V$SQL_CS_SELECTIVITY Shows the selectivity cubes stored for every predicate containing a bind variable and whose selectivity is used in the cursor sharing checks. V$SQL_CS_STATISTICS Shows execution statistics of a cursor using different bind sets. Adaptive Cursor Sharing Views Whether a query is bind-aware is determined and handled automatically, without any user input. However, information about what is going on is exposed through these V$ views so that the DBA can diagnose any problems. Two new columns have been added to V$SQL: IS_BIND_SENSITIVE: Indicates if a cursor is bind-sensitive; value YES | NO. A query for which the optimizer peeked at bind variable values when computing predicate selectivities and where a change in a bind variable value may lead to a different plan is called bind-sensitive. IS_BIND_AWARE: Indicates if a cursor is bind-aware; value YES | NO. A cursor in the cursor cache that has been marked to use bind-aware cursor sharing is called bind-aware. V$SQL_CS_HISTOGRAM: Shows the distribution of the execution count across a three-bucket execution history histogram. V$SQL_CS_SELECTIVITY: Shows the selectivity cubes or ranges stored in a cursor for every predicate containing a bind variable and whose selectivity is used in the cursor sharing checks. It contains the text of the predicates and the selectivity range low and high values. V$SQL_CS_STATISTICS: Adaptive Cursor Sharing monitors execution of a query and collects information about it for a while, and uses this information to decide whether to switch to using bind-aware cursor sharing for the query. This view summarizes the information that it collects to make this decision: for a sample of executions, it keeps track of rows processed, buffer gets, and CPU time. The PEEKED column has the value YES if the bind set was used to build the cursor, and NO otherwise.
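For example, assuming the hypothetical emp query from the architecture slide is in the cursor cache, you could check its sharing status like this:
SQL> SELECT sql_id, child_number, is_bind_sensitive, is_bind_aware
     FROM v$sql
     WHERE sql_text LIKE 'SELECT * FROM emp WHERE sal = :1%';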
575
Interacting with Adaptive Cursor Sharing
If CURSOR_SHARING <> EXACT, then statements containing literals may be rewritten using bind variables. If statements are rewritten, Adaptive Cursor Sharing may apply to them. SQL Plan Management (SPM): If OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES is set to TRUE, then only the first generated plan is used. As a workaround, set this parameter to FALSE, and run your application until all plans are loaded in the cursor cache. Manually load the cursor cache into the corresponding plan baseline. Interacting with Adaptive Cursor Sharing Adaptive Cursor Sharing is independent of the CURSOR_SHARING parameter. The setting of this parameter determines whether literals are replaced by system-generated bind variables. If they are, then Adaptive Cursor Sharing behaves just as it would if the user supplied binds to begin with. When using the SPM automatic plan capture, the first plan captured for a SQL statement with bind variables is marked as the corresponding SQL plan baseline. If another plan is found for that same SQL statement (which may be the case with Adaptive Cursor Sharing), it is added to the SQL statement's plan history and marked for verification: It will not be used. So even though Adaptive Cursor Sharing has come up with a new plan based on a new set of bind values, SPM does not let it be used until the plan has been verified. This reverts to the 10g behavior: only the plan generated based on the first set of bind values is used by all subsequent executions of the statement. One possible workaround is to run the system for some time with automatic plan capture set to FALSE and, after the cursor cache has been populated with all of the plans that a SQL statement with binds will have, load the plans directly from the cursor cache into the corresponding SQL plan baseline. By doing this, all the plans for a single SQL statement are marked as SQL baseline plans by default.
576
Temporary Tablespace Shrink
Sort segment extents are managed in memory after being physically allocated. This method can be an issue after big sorts are done. To release physical space from your disks, you can shrink temporary tablespaces: Locally managed temporary tablespaces Online operation
CREATE TEMPORARY TABLESPACE temp TEMPFILE 'tbs_temp.dbf' SIZE 600m REUSE AUTOEXTEND ON MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1m;
ALTER TABLESPACE temp SHRINK SPACE [KEEP 200m];
ALTER TABLESPACE temp SHRINK TEMPFILE 'tbs_temp.dbf';
Temporary Tablespace Shrink Huge sorting operations can cause the temporary tablespace to grow significantly. For performance reasons, after a sort extent is physically allocated, it is managed in memory to avoid physical deallocation later. As a result, you can end up with a huge tempfile that stays on disk until it is dropped. One possible workaround is to create a new temporary tablespace with a smaller size, set this new tablespace as the default temporary tablespace for users, and then drop the old tablespace. However, the disadvantage is that this procedure requires that no active sort operations are happening at the time the old temporary tablespace is dropped. Starting with Oracle Database 11g Release 1, you can use the ALTER TABLESPACE SHRINK SPACE command to shrink a temporary tablespace, or you can use the ALTER TABLESPACE SHRINK TEMPFILE command to shrink one tempfile. For both commands, you can specify the optional KEEP clause that defines the lower bound that the tablespace/tempfile can be shrunk to. If you omit the KEEP clause, then the database attempts to shrink the tablespace/tempfile as much as possible (total space of all currently used extents) as long as other storage attributes are satisfied. This operation is done online. However, if some currently used extents are allocated above the shrink estimation, the system waits until they are released to finish the shrink operation. Note: The ALTER DATABASE TEMPFILE RESIZE command generally fails with an ORA- error because the tempfile contains used data beyond the requested RESIZE value. As opposed to ALTER TABLESPACE SHRINK, the ALTER DATABASE command does not try to deallocate sort extents after they are allocated.
577
DBA_TEMP_FREE_SPACE Lists temporary space usage information
Central point for temporary tablespace space usage
TABLESPACE_NAME: Name of the tablespace
TABLESPACE_SIZE: Total size of the tablespace, in bytes
ALLOCATED_SPACE: Total allocated space, in bytes, including space that is currently allocated and used and space that is currently allocated and available for reuse
FREE_SPACE: Total free space available, in bytes, including space that is currently allocated and available for reuse and space that is currently unallocated
DBA_TEMP_FREE_SPACE This dictionary view reports temporary space usage information at the tablespace level. The information is derived from various existing views.
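For example, a quick look at the view (columns as listed above):
SQL> SELECT tablespace_name, tablespace_size, allocated_space, free_space
     FROM dba_temp_free_space;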
578
Tablespace Option for Creating Temporary Table
Specify which temporary tablespace to use for your global temporary tables. Decide a proper temporary extent size. CREATE TEMPORARY TABLESPACE temp TEMPFILE 'tbs_temp.dbf' SIZE 600m REUSE AUTOEXTEND ON MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1m; CREATE GLOBAL TEMPORARY TABLE temp_table (c varchar2(10)) ON COMMIT DELETE ROWS TABLESPACE temp; Tablespace Option for Creating Temporary Table Starting with Oracle Database 11g Release 1, it becomes possible to specify a TABLESPACE clause when you create a global temporary table. If no tablespace is specified, the global temporary table is created in your default temporary tablespace. In addition, indexes created on the temporary table are also created in the same temporary tablespace as the temporary table. This possibility allows you to decide a proper extent size that reflects your sort-specific usage, especially when you have several types of temporary space usage. Note: You can find in DBA_TABLES which tablespace is used to store your global temporary tables.
579
Easier Recovery from Loss of SPFILE
The FROM MEMORY clause allows a parameter file to be created from the current systemwide parameter settings. CREATE PFILE [= 'pfile_name' ] FROM { { SPFILE [= 'spfile_name'] } | MEMORY } ; CREATE SPFILE [= 'spfile_name' ] FROM { { PFILE [= 'pfile_name' ] } | MEMORY } ; Easier Recovery from Loss of SPFILE In Oracle Database 11g, the FROM MEMORY clause creates a pfile or spfile using the current systemwide parameter settings. In a RAC environment, the created file contains the parameter settings from each instance. During instance startup, all parameter settings are logged to the alert.log file. As of Oracle Database 11g, the alert.log parameter dump text is written in valid parameter syntax. This facilitates cutting and pasting of parameters into a separate file, and then using it as a pfile for a subsequent instance. The name of the pfile or spfile is written to the alert.log at instance startup time. In cases when an unknown client-side pfile is used, the alert log indicates this as well. To support this additional functionality, the COMPATIBLE initialization parameter must be set to or higher.
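A minimal sketch of recovering a lost spfile from a running instance; the pfile name is a hypothetical example:
SQL> CREATE PFILE = '/tmp/initorcl_recovered.ora' FROM MEMORY;
SQL> CREATE SPFILE FROM MEMORY;   -- recreates the spfile in the default location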
580
Summary In this lesson, you should have learned how to:
Describe enhancements to locking mechanisms Use the SQL query result cache Use the enhanced PL/SQL recompilation mechanism Create and use invisible indexes Describe Adaptive Cursor Sharing Manage your SPFILE
581
Practice 16: Overview This practice covers the following topics:
Using the client result cache Using the PL/SQL result cache
582
Installation and Upgrade Enhancements
583
Manual Upgrade
584
Performing a Manual Upgrade: 1
Install Oracle Database 11g, Release 1 in a new ORACLE_HOME. Analyze the existing database: Use rdbms/admin/utlu111i.sql with the existing server. SQL> spool pre_upgrade.log Adjust redo logs and tablespace sizes if necessary. Copy existing initialization files to the new ORACLE_HOME and make recommended adjustments. Shut down immediately, back up, and then switch to the new ORACLE_HOME.
585
Performing a Manual Upgrade: 2
Start up using the Oracle Database 11g, Release 1 server: SQL> startup upgrade If you are upgrading from 9.2, create a SYSAUX tablespace: SQL> create tablespace SYSAUX datafile 'e:\oracle\oradata\empdb\sysaux01.dbf' size 500M reuse extent management local segment space management auto online; Run the upgrade (automatically shuts down database): SQL> spool upgrade.log
586
Performing a Manual Upgrade: 3
Restart the database instance in normal mode: SQL> startup Run the Post-Upgrade Status Tool to display the results of the upgrade: Run post-upgrade actions: Recompile and revalidate any remaining application objects: (parallel compile on multiprocessor system) Note catuppst.sql is the post-upgrade script that performs the remaining upgrade actions that do not require the database to be open in UPGRADE mode. It can be run at the same time that utlrp.sql is being run.
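A hedged sketch of steps 2 and 3; the utlu111s.sql (Post-Upgrade Status Tool), catuppst.sql, and utlrp.sql scripts live under $ORACLE_HOME/rdbms/admin, and you should confirm the exact script names against your 11.1 documentation:
SQL> STARTUP
SQL> @?/rdbms/admin/utlu111s.sql     -- display the results of the upgrade
SQL> @?/rdbms/admin/catuppst.sql     -- post-upgrade actions (does not require UPGRADE mode)
SQL> @?/rdbms/admin/utlrp.sql        -- recompile and revalidate remaining objects in parallel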
587
Downgrading a Database
588
Downgrading a Database: 1
Major release downgrades are supported back to 10.2 and 10.1. Downgrade to only the release from which you upgraded. Shut down and start up the instance in DOWNGRADE mode: SQL> startup downgrade Run the downgrade script, which automatically determines the version of the database and calls the specific component scripts: SQL> SPOOL downgrade.log Shut down the database immediately after the downgrade script ends: SQL> shutdown immediate;
589
Downgrading a Database: 2
Move to the old ORACLE_HOME environment and start up the database in the upgrade mode: SQL> startup upgrade Reload the old packages and views: SQL> SPOOL reload.sql Shut down and restart the instance for normal operation: SQL> shutdown immediate; SQL> startup Run utlrp.sql to recompile all existing packages, procedures, and types that were previously in an INVALID state: Perform any necessary post-downgrade tasks.
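A hedged sketch of the downgrade and reload steps; catdwgrd.sql and catrelod.sql are assumed to be the downgrade and reload scripts in the respective Oracle homes, so verify the names and sequence against your release's downgrade documentation:
-- In the 11.1 home:
SQL> STARTUP DOWNGRADE
SQL> SPOOL downgrade.log
SQL> @?/rdbms/admin/catdwgrd.sql
SQL> SHUTDOWN IMMEDIATE
-- In the old (10.1 or 10.2) home:
SQL> STARTUP UPGRADE
SQL> SPOOL reload.log
SQL> @?/rdbms/admin/catrelod.sql
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
SQL> @?/rdbms/admin/utlrp.sql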
590
Best Practices: 1 The three Ts: Test, Test, Test Functional testing
Test the upgrade. Test the application(s). Test the recovery strategy. Functional testing Clone your production database on a machine with similar resources. Use DBUA for your upgrade. Run your application and tools to ensure that they work. Best Practices: 1 Perform the planned tests on the current database and on the test database that you upgraded to Oracle Database 11g, Release 1 (11.1). Compare the results and note anomalies. Repeat the test upgrade as many times as necessary. Test the newly upgraded test database with the existing applications to verify that they operate properly with a new Oracle database. You might also want to test enhanced functions by adding the available Oracle Database features. However, first make sure that the applications operate in the same manner as they did in the current database. Functional testing is a set of tests in which new and existing features and functions of the system are tested after the upgrade. Functional testing includes all database, networking, and application components. The objective of functional testing is to verify that each component of the system functions as it did before upgrading and to verify that the new functions are working properly. Create a test environment that does not interfere with the current production database. Practice upgrading the database using the test environment. The best upgrade test, if possible, is performed on an exact copy of the database to be upgraded, rather than on a downsized copy or test data. Do not upgrade the actual production database until after you successfully upgrade a test subset of this database and test it with applications (as described in the next step). The ultimate success of your upgrade depends heavily on the design and execution of an appropriate backup strategy.
591
Best Practices: 2 Performance analysis Pre-upgrade analysis
Gather performance metrics prior to upgrade: Gather AWR or Statspack baselines during various workloads. Gather sample performance metrics after upgrade: Compare metrics before and after upgrade to catch issues. Upgrade production systems only after performance and functional goals have been met. Pre-upgrade analysis You can run DBUA without clicking Finish, or run utlu111i.sql, to get a pre-upgrade analysis. Read general and platform-specific release notes to catch special cases. Best Practices: 2 Performance testing of the new Oracle database compares the performance of various SQL statements in the new Oracle database with the statements’ performance in the current database. Before upgrading, you should understand the performance profile of the application under the current database. Specifically, you should understand the calls that the application makes to the database server. For example, if you are using Oracle Real Application Clusters and you want to measure the performance gains realized from using cache fusion when you upgrade to Oracle Database 11g, Release 1 (11.1), then make sure that you record your system’s statistics before upgrading. For that, you can use various V$ views or AWR/Statspack reports.
592
Best Practices: 3 Automate your upgrade: Logging
Use DBUA in command-line mode to automate your upgrade. Useful for upgrading a large number of databases Logging For manual upgrade, spool upgrade results and check logs for possible issues. DBUA can also do this for you. Automatic conversion from 32-bit to 64-bit database software Check for sufficient space in SYSTEM, UNDO, TEMP, and redo log files. Best Practices: 3 If you are installing the 64-bit Oracle Database 11g, Release 1 (11.1) software but were previously using a 32-bit Oracle Database installation, then the database is automatically converted to 64-bit during a patch release or major release upgrade to Oracle Database 11g, Release 1 (11.1). However, you must increase the initialization parameters affecting the system global area, such as sga_target and shared_pool_size, to support the 64-bit operation.
593
Best Practices: 4 Use Optimal Flexible Architecture (OFA)
Offers best practices for locating your database files, configuration files, and ORACLE_HOME Use new features: Migrate to CBO from RBO. Automatic management features for SGA, Undo, PGA, and so on. Use AWR/ADDM to diagnose performance issues. Consider using SQL Tuning Advisor. Change COMPATIBLE and OPTIMIZER_FEATURES_ENABLE parameters to enable new optimizer features. Best Practices: 4 Oracle recommends the Optimal Flexible Architecture (OFA) standard for your Oracle Database installations. The OFA standard is a set of configuration guidelines for efficient and reliable Oracle databases that require little maintenance. OFA provides the following benefits: Organizes large amounts of complicated software and data on disk to avoid device bottlenecks and poor performance Facilitates routine administrative tasks, such as software and data backup functions, which are often vulnerable to data corruption Alleviates switching among multiple Oracle databases Adequately manages and administers database growth Helps to eliminate fragmentation of free space in the data dictionary, isolates other fragmentation, and minimizes resource contention If you are not currently using the OFA standard, switching to the OFA standard involves modifying your directory structure and relocating your database files.
594
Best Practices: 5 Use Enterprise Manager Grid Control to manage your enterprise: Use EM to set up new features and try them out. EM provides complete manageability solution for databases, applications, storage, security, and networks. Collect object and system statistics to improve plans generated by the CBO. Check for invalid objects in the database before upgrading: SQL> select owner, object_name, object_type, status from dba_objects where status<>'VALID'; Best Practices: 5 When you upgrade to Oracle Database 11g, Release 1 (11.1), optimizer statistics are collected for dictionary tables that lack statistics. This statistics collection can be time consuming for databases with a large number of dictionary tables, but statistics gathering occurs only for those tables that lack statistics or are significantly changed during the upgrade. To decrease the amount of down time incurred when collecting statistics, you can collect statistics prior to performing the actual database upgrade. As of Oracle Database 10g, Release 1 (10.1), Oracle recommends that you also use the DBMS_STATS.GATHER_DICTIONARY_STATS procedure to gather dictionary statistics in addition to database component statistics such as SYS, SYSMAN, XDB, and so on using the DBMS_STATS.GATHER_SCHEMA_STATS procedure.
595
Best Practices: 6 Avoid upgrading in a crisis:
Keep up with security alerts. Keep up with critical patches needed for your applications. Keep track of the de-support schedules. Always upgrade to the latest supported version of the RDBMS. Make sure patchset is available for all your platforms. Data Vault option needs to be turned off for upgrade. Best Practices: 6 If you have enabled Oracle Database Vault, you must disable it before upgrading the database. Then enable it again when the upgrade is finished.
596
Security New Features
597
Objectives After completing this lesson, you should be able to:
Configure the password file to use case-sensitive passwords Encrypt a tablespace Configure fine-grained access to network services
598
Secure Password Support
Passwords in Oracle Database 11g: Are case-sensitive Contain more characters Use more secure hash algorithm Use salt in the hash algorithm Usernames are still Oracle identifiers (up to 30 characters, non-case-sensitive) Secure Password Support You must use more secure passwords to meet the demands of compliance to various security and privacy regulations. Passwords that are very short and passwords that are formed from a limited set of characters are susceptible to brute force attacks. Longer passwords that allow more different characters are much more difficult to guess or find. In Oracle Database 11g, the password is handled differently than in previous versions: Passwords are case-sensitive. Uppercase and lowercase characters are now different characters when used in a password. A password may contain multibyte characters without it being enclosed in quotation marks. A password must be enclosed in quotation marks if it contains any special characters apart from $, _, or #. Passwords are always passed through a hash algorithm, and then stored as a user credential. When the user presents a password, it is hashed and then compared to the stored credential. In Oracle Database 11g, the hash algorithm is SHA-1, a public algorithm, instead of the proprietary algorithm used in previous versions of the database. SHA-1 is a stronger algorithm that produces a 160-bit hash. Passwords always use salt. A hash function always produces the same output, given the same input. Salt is a unique (random) value that is added to the input to ensure that the output credential is unique.
599
Automatic Secure Configuration
Default password profile Default auditing Built-in password complexity checking Automatic Secure Configuration Oracle Database 11g installs and creates the database with certain security features recommended by the Center for Internet Security (CIS) benchmark. The CIS recommended configuration is more secure than the 10gR2 default installation; yet open enough to allow the majority of applications to be successful. Many customers have adopted this benchmark already. There are some recommendations of the CIS benchmark that may be incompatible with some applications.
600
Password Configuration
By default: Default password profile is enabled Account is locked after 10 failed login attempts In upgrade: Passwords are non-case-sensitive until changed Passwords become case-sensitive when the ALTER USER command is used On creation: Passwords are case-sensitive Secure Default Configuration When creating a custom database using the Database Configuration Assistant (DBCA), you can specify the Oracle Database 11g default security configuration. By default, if a user tries to connect to an Oracle instance multiple times using an incorrect password, the instance delays each login after the third try. This protection applies for attempts made from different IP addresses or multiple client connections. Later, it gradually increases the time before the user can try another password, up to a maximum of about ten seconds. The default password profile is enabled with these settings at database creation:
PASSWORD_LIFE_TIME 180
PASSWORD_GRACE_TIME 7
PASSWORD_REUSE_TIME UNLIMITED
PASSWORD_REUSE_MAX UNLIMITED
FAILED_LOGIN_ATTEMPTS 10
PASSWORD_LOCK_TIME 1
PASSWORD_VERIFY_FUNCTION NULL
When an Oracle Database 10g database is upgraded, passwords are non-case-sensitive until the ALTER USER… command is used to change the password. When the database is created, the passwords will be case-sensitive by default.
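To confirm the defaults on your own database, you can query the profile settings with a straightforward dictionary query:
SQL> SELECT resource_name, limit
     FROM dba_profiles
     WHERE profile = 'DEFAULT' AND resource_type = 'PASSWORD';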
601
Enable Built-in Password Complexity Checker
Execute the utlpwdmg.sql script to create the password verify function: Alter the default profile: SQL> CONNECT / as SYSDBA ALTER PROFILE DEFAULT LIMIT PASSWORD_VERIFY_FUNCTION verify_function_11g; Enable Built-in Password Complexity Checker The verify_function_11g is a sample PL/SQL function that can be easily modified to enforce the password complexity policies at your site. This function does not require special characters to be embedded in the password. Both verify_function_11g and the older verify_function are included in the utlpwdmg.sql file. To enable the password complexity checking, create a verification function owned by SYS. Use one of the supplied functions or modify one of them to meet your requirements. The example shows using the utlpwdmg.sql script. If there is an error in the password complexity check function named in the profile or it does not exist, you cannot change passwords nor create users. The solution is to set the PASSWORD_VERIFY_FUNCTION to NULL in the profile, until the problem is solved. The verify_function11g function checks that the password contains at least eight characters, contains at least one number and one alphabetic character, and differs from the previous password by at least three characters. The function also checks that the password is not a username or username appended with any number 1–100; a username reversed; a server name or server name appended with 1–100; or one of a set of well-known and common passwords such as “welcome1,” “database1,” “oracle123,” or “oracle (appended with 1–100),” and so on.
602
Managing Default Audits
Review audit logs: Default audit options cover important security privileges Archive audit records Export Copy to another table Remove archived audit records Managing Default Audits Review the audit logs: By default, auditing is enabled in Oracle Database 11g for certain privileges that are very important to security. The audit trail is recorded in the database AUD$ table by default; the AUDIT_TRAIL parameter is set to DB. These audits should not have a large impact on database performance, for most sites. Oracle recommends the use of OS audit trail files. Archive audit records: To retain audit records, export them using Oracle Data Pump Export, or use a SELECT statement to capture a set of audit records into a separate table. Remove archived audit records: Remove audit records from the SYS.AUD$ table after reviewing and archiving them. Audit records take up space in the SYSTEM tablespace. If the SYSTEM tablespace cannot grow, and there is no more space for audit records, errors will be generated for each audited statement. Because CREATE SESSION is one of the audited privileges, no new sessions may be created except by a user connected as SYSDBA. Archive the audit table with the Export utility using the QUERY option to specify the WHERE clause with a range of dates or SCNs. Then delete the records from the audit table using the same WHERE clause. When AUDIT_TRAIL=OS, separate files are created for each audit record in the directory specified by AUDIT_FILE_DEST. All files as of a certain time can be copied, and then removed. Note: The SYSTEM tablespace is created with the autoextend on option. So the SYSTEM tablespace will grow as needed until there is no more space available on the disk.
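A minimal archive-and-purge sketch using the copy-to-another-table approach; the archive table name and cutoff date are hypothetical, and NTIMESTAMP# is assumed to be the audit timestamp column of AUD$ in your release, so verify before running:
SQL> CREATE TABLE aud_archive_2007 AS
     SELECT * FROM sys.aud$
     WHERE ntimestamp# < TO_TIMESTAMP('2008-01-01','YYYY-MM-DD');
SQL> DELETE FROM sys.aud$
     WHERE ntimestamp# < TO_TIMESTAMP('2008-01-01','YYYY-MM-DD');
SQL> COMMIT;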
603
Notes only Managing Default Audits (continued)
The following privileges are audited for all users on success and failure, and by access: CREATE EXTERNAL JOB CREATE ANY JOB GRANT ANY OBJECT PRIVILEGE EXEMPT ACCESS POLICY CREATE ANY LIBRARY GRANT ANY PRIVILEGE DROP PROFILE ALTER PROFILE DROP ANY PROCEDURE ALTER ANY PROCEDURE CREATE ANY PROCEDURE ALTER DATABASE GRANT ANY ROLE CREATE PUBLIC DATABASE LINK DROP ANY TABLE ALTER ANY TABLE CREATE ANY TABLE DROP USER ALTER USER CREATE USER CREATE SESSION AUDIT SYSTEM ALTER SYSTEM Notes only
604
Adjust Security Settings
Adjust Security Settings When you create a database using the DBCA tool, you are offered a choice of security settings: Keep the enhanced 11g default security settings (recommended). These settings include enabling auditing and new default password profile. Revert to pre-11g default security settings. To disable a particular category of enhanced settings for compatibility purposes, choose from the following: Revert audit settings to pre-11g defaults Revert password profile settings to pre-11g defaults. These settings can also be changed after the database is created using the DBCA. Some applications may not work properly under the 11g default security settings. Secure permissions on software are always set. It is not impacted by a user’s choice for the “Security Settings” option.
605
Setting Security Parameters
Use case-sensitive passwords SEC_CASE_SENSITIVE_LOGON Protect against DoS attacks SEC_PROTOCOL_ERROR_FURTHER_ACTION SEC_PROTOCOL_ERROR_TRACE_ACTION Protect against brute force attacks SEC_MAX_FAILED_LOGIN_ATTEMPTS Setting Security Parameters A set of new parameters has been added to Oracle Database 11g to enhance the default security of the database. These parameters are systemwide and static. Use Case-Sensitive Passwords to Improve Security A new parameter, SEC_CASE_SENSITIVE_LOGON, allows you to set the case-sensitivity of user passwords. Oracle recommends that you retain the default setting of TRUE. You can specify case insensitive passwords for backward compatibility by setting this parameter to FALSE: ALTER SYSTEM SET SEC_CASE_SENSITIVE_LOGON = FALSE Note: Disabling case sensitivity increases vulnerability to brute force attacks. Protect Against Denial of Service (DoS) Attacks The two parameters shown specify the actions to be taken when the database receives bad packets from a client. The assumption is that the bad packets are from a possible malicious client. The SEC_PROTOCOL_ERROR_FURTHER_ACTION parameter specifies what action is to be taken with the client connection: continue, drop the connection, or delay accepting requests. The other parameter, SEC_PROTOCOL_ERROR_TRACE_ACTION, specifies a monitoring action: NONE, TRACE, LOG, or ALERT.
606
Notes only page Setting Security Parameters (continued)
Protect Against Brute Force Attacks A new initialization parameter, SEC_MAX_FAILED_LOGIN_ATTEMPTS, that has a default setting of 10 causes a connection to be automatically dropped after the specified number of attempts. This parameter is enforced even when the password profile is not enabled. This parameter prevents a program from making a database connection and then attempting to authenticate by trying hundreds or thousands of passwords. Notes only page
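A hedged sketch of setting these parameters; the values are illustrative, and because the notes describe the parameters as static, the changes are scoped to the spfile and take effect at the next restart:
SQL> ALTER SYSTEM SET sec_max_failed_login_attempts = 10 SCOPE = SPFILE;
SQL> ALTER SYSTEM SET sec_protocol_error_trace_action = LOG SCOPE = SPFILE;
-- SEC_PROTOCOL_ERROR_FURTHER_ACTION takes values such as CONTINUE, (DELAY,n), or (DROP,n);
-- check the exact value syntax for your release before setting it.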
607
Setting Database Administrator Authentication
Use password file with case-sensitive passwords. Enable strong authentication for administrator roles: Grant the administrator role in OID. Use Kerberos tickets. Use certificates with SSL. Setting Database Administrator Authentication The database administrator must always be authenticated. In Oracle Database 11g, there are new methods that make administrator authentication more secure and centralize the administration of these privileged users. Case-sensitive passwords have also been extended to remote connections for privileged users. You can override this default behavior with the following command: orapwd file=orapworcl entries=5 ignorecase=Y If your concern is that the password file might be vulnerable or that the maintenance of many password files is a burden, then strong authentication can be implemented: Grant SYSDBA, or SYSOPER enterprise role in Oracle Internet Directory (OID). Use Kerberos tickets. Use certificates over SSL. To use any of the strong authentication methods, the LDAP_DIRECTORY_SYSAUTH initialization parameter must be set to YES. Set this parameter to NO to disable the use of strong authentication methods. Authentication through OID or through Kerberos also can provide centralized administration or single sign-on. If the password file is configured, it is checked first. The user may also be authenticated by the local OS by being a member of the OSDBA or OSOPER groups. For more information, see the Oracle Database Advanced Security Administrator’s Guide 11g Release 1.
608
Set Up Directory Authentication for Administrative Users
Create the user in the directory. Grant the SYSDBA or SYSOPER enterprise role to user. Set the LDAP_DIRECTORY_SYSAUTH parameter in the database. Check whether the LDAP_DIRECTORY_ACCESS parameter is set to PASSWORD or SSL. Test the connection. $sqlplus AS SYSDBA Set Up Directory Authentication for Administrative Users To enable Oracle Internet Directory (OID) server to authorize SYSDBA and SYSOPER connections: 1. Configure the administrative user by using the same procedures you would use to configure a typical user. 2. In OID, grant the SYSDBA or SYSOPER enterprise role to the user for the database the user will administer. 3. Set the LDAP_DIRECTORY_SYSAUTH initialization parameter to YES. When set to YES, the LDAP_DIRECTORY_SYSAUTH parameter enables SYSDBA and SYSOPER users to authenticate to the database, by a strong authentication method. 4. Ensure that the LDAP_DIRECTORY_ACCESS initialization parameter is not set to NONE. The possible values are PASSWORD or SSL. 5. Later, the administrative user can log in by including the net service name in the CONNECT statement. For example, for Fred to log in as SYSDBA if the net service name is orcl: CONNECT AS SYSDBA Note: If the database is configured to use a password file for remote authentication, the password file will be checked first.
609
Set Up Kerberos Authentication for Administrative Users
Create the user in the Kerberos domain. Configure OID for Kerberos authentication. Grant the SYSDBA or SYSOPER enterprise role to the user in OID. Set the LDAP_DIRECTORY_SYSAUTH parameter in the database. Set the LDAP_DIRECTORY_ACCESS parameter. Test the connection. $sqlplus AS SYSDBA Set Up Kerberos Authentication for Administrative Users To enable Kerberos to authorize SYSDBA and SYSOPER connections: 1. Configure the administrative user by using the same procedures you would use to configure a typical user. For more information about configuring Kerberos authentication, see the Oracle Database Advanced Security Administrator’s Guide 11g. 2. Configure OID for Kerberos authentication. See the Oracle Database Enterprise User Administrator’s Guide 11g Release 1. 3. In OID, grant the SYSDBA or SYSOPER enterprise role to the user for the database the user will administer. 4. Set the LDAP_DIRECTORY_SYSAUTH initialization parameter to YES. When set to YES, the LDAP_DIRECTORY_SYSAUTH parameter enables SYSDBA and SYSOPER users to authenticate to the database by using a strong authentication method. 5. Ensure that the LDAP_DIRECTORY_ACCESS initialization parameter is not set to NONE. This will be set to either PASSWORD or SSL. 6. Later, the administrative user can log in by including the net service name in the CONNECT statement. For example, to log in as SYSDBA if the net service name is orcl: CONNECT AS SYSDBA
610
Set Up SSL Authentication for Administrative Users
Configure client to use SSL. Configure server to use SSL. Configure OID for SSL user authentication. Grant SYSOPER or SYSDBA to the user. Set the LDAP_DIRECTORY_SYSAUTH parameter in the database. Test the connection. $sqlplus AS SYSDBA Set Up SSL Authentication for Administrative Users To enable SYSDBA and SYSOPER connections using certificates and SSL (for more information about configuring SSL authentication, see the Oracle Database Advanced Security Administrator’s Guide 11g): 1. Configure the client to use SSL: Set up the client wallet and user certificate. Update the wallet location in sqlnet.ora. Configure the Oracle net service name to include server-distinguished names and use TCP/IP with SSL in tnsnames.ora. Configure TCP/IP with SSL in listener.ora. Set the client SSL cipher suites and required SSL version; set SSL as an authentication service in sqlnet.ora. 2. Configure the server to use SSL: Enable SSL for your database listener on TCPS and provide a corresponding TNS name. Store your database PKI credentials in the database wallet. Set the LDAP_DIRECTORY_ACCESS initialization parameter to SSL. 3. Configure OID for SSL user authentication. See the Oracle Database Enterprise User Administrator’s Guide 11g Release 1. 4. In OID, grant SYSDBA or SYSOPER to the user for the database the user will administer.
611
Set Up SSL Authentication for Administrative Users (continued)
5. Set the LDAP_DIRECTORY_SYSAUTH initialization parameter to YES. When set to YES, the LDAP_DIRECTORY_SYSAUTH parameter enables SYSDBA and SYSOPER users to authenticate to the database by using a strong authentication method. 6. Later, the administrative user can log in by including the net service name in the CONNECT statement. For example, to log in as SYSDBA if the net service name is orcl: CONNECT AS SYSDBA Notes only page
612
Transparent Data Encryption
New features in TDE include: Tablespace Encryption Support for LogMiner Support for Logical Standby Support for Streams Support for Asynchronous Change Data Capture Hardware-based master key protection Transparent Data Encryption Several new features enhance the capabilities of Transparent Data Encryption (TDE) and build on the same infrastructure. The changes in LogMiner to support TDE provide the infrastructure for change capture engines used for Logical Standby, Streams, and Asynchronous Change Data Capture. For LogMiner to support TDE, it must be able to access the encryption wallet. To access the wallet, the instance must be mounted and the wallet open. LogMiner does not support Hardware Security Module (HSM) or user-held keys. For Logical Standby, the logs may be mined either on the source or the target database; thus, the wallet must be the same for both databases. Encrypted columns are handled the same way in both Streams and the Streams-based Change Data Capture. The redo records are mined at the source, where the wallet exists. The data is transmitted unencrypted to the target and encrypted using the wallet at the target. The data can be encrypted in transit by using Advanced Security Option to provide network encryption.
613
Using Tablespace Encryption
Create an encrypted tablespace. Create or open the encryption wallet: Create a tablespace with the encryption keywords: SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "welcome1"; SQL> CREATE TABLESPACE encrypt_ts 2> DATAFILE '$ORACLE_HOME/dbs/encrypt.dat' SIZE 100M 3> ENCRYPTION USING '3DES168' 4> DEFAULT STORAGE (ENCRYPT); Tablespace Encryption Tablespace encryption is based on block-level encryption that encrypts on write and decrypts on read. The data is not encrypted in memory. The only encryption penalty is associated with I/O. The SQL access paths are unchanged and all data types are supported. To use tablespace encryption, the encryption wallet must be open. The CREATE TABLESPACE command has an ENCRYPTION clause that sets the encryption properties, and an ENCRYPT storage parameter that causes the encryption to be used. You specify USING 'encrypt_algorithm' to indicate the name of the algorithm to be used. Valid algorithms are 3DES168, AES128, AES192, and AES256. The default is AES128. You can view the properties in the V$ENCRYPTED_TABLESPACES view. The encrypted data is protected during operations such as JOIN and SORT. This means that the data is safe when it is moved to temporary tablespaces. Data in undo and redo logs is also protected. Encrypted tablespaces are transportable if the platforms have the same endianness and the same wallet. Restrictions Temporary and undo tablespaces cannot be encrypted. (Blocks belonging to encrypted tablespaces remain encrypted when they are written there.) BFILEs and external tables are not encrypted. Transportable tablespaces across platforms with different endianness are not supported. The key for an encrypted tablespace cannot be changed at this time. A workaround is to create a new tablespace with the desired properties and move all objects to the new tablespace.
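To see which tablespaces are encrypted and with which algorithm, a query along the following lines can be used; it joins V$ENCRYPTED_TABLESPACES (mentioned above) to V$TABLESPACE, with column names as assumed from the 11g dynamic views.
SELECT t.name AS tablespace_name,
       e.encryptionalg,
       e.encryptedts
FROM   v$encrypted_tablespaces e
       JOIN v$tablespace t ON t.ts# = e.ts#;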
614
TDE and LogMiner LogMiner supports TDE-encrypted columns.
Restrictions: The wallet holding the TDE master keys must be open. Hardware Security Modules are not supported. User-held keys are not supported. TDE and LogMiner With Transparent Data Encryption (TDE), the encrypted column data is encrypted in the data files, the undo segments, and the redo logs. Oracle Logical Standby depends on LogMiner’s ability to transform redo logs into SQL statements for SQL Apply. LogMiner has been enhanced to support TDE. This enhancement provides the ability to support TDE on a logical standby database. The wallet containing the master keys for TDE must be open for LogMiner to decrypt the encrypted columns. The database instance must be mounted to open the wallet; therefore, LogMiner cannot populate V$LOGMNR_CONTENTS to support TDE if the database instance is not mounted. LogMiner populates V$LOGMNR_CONTENTS for tables with encrypted columns, displaying the column data unencrypted for rows involved in DML statements. Note that this is not a security violation: TDE is a file-level encryption feature and not an access control feature. It does not prohibit DBAs from looking at encrypted data. In Oracle Database 11g, LogMiner does not support TDE with a hardware security module (HSM) for key storage. User-held keys for TDE are PKI public and private keys supplied by the user for TDE master keys. User-held keys are not supported by LogMiner.
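A hedged sketch of the dependency described above: the wallet is opened before LogMiner is started so that encrypted column values can be decrypted in V$LOGMNR_CONTENTS. The archived log path and table name are hypothetical, and the online-catalog dictionary option assumes the database is open.
ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_password";
EXEC DBMS_LOGMNR.ADD_LOGFILE('/u01/arch/arch_1_100.arc')              -- hypothetical log file
EXEC DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG)
SELECT sql_redo FROM v$logmnr_contents WHERE table_name = 'CUSTOMERS'; -- hypothetical table
EXEC DBMS_LOGMNR.END_LOGMNR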
615
TDE and Logical Standby
Logical standby database with TDE: Wallet on the standby is a copy of the wallet on the primary. Master key may be changed only on the primary. Wallet open and close commands are not replicated. Table key may be changed on the standby. Table encryption algorithm may be changed on the standby. TDE and Logical Standby The same wallet is required for both databases. The wallet must be copied from the primary database to the standby database every time the master key is changed with ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY <wallet_password>. An error is raised if the DBA attempts to change the master key on the standby database. If an auto-login wallet is not used, the wallet must be opened on the standby. Wallet open and close commands are not replicated on the standby. A different password can be used to open the wallet on the standby. The wallet owner can change the password to be used for the copy of the wallet on the standby. The DBA has the ability to change the encryption key or the encryption algorithm of a replicated table at the logical standby. This does not require a change to the master key or wallet. This operation is performed with: ALTER TABLE table_name REKEY USING '3DES168'; There can be only one algorithm per table. Changing the algorithm at the table level changes the algorithm for all the columns. A column on the standby can have a different algorithm than the primary or no encryption. To change the table key, the guard setting must be lowered to NONE. TDE can be used on local tables in the logical standby independent of the primary, if encrypted columns are not replicated into the standby.
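A sketch of the rekey operation described above, with a hypothetical table name; the guard setting is lowered first and then restored to its previous level (ALL is shown here as an assumption).
ALTER DATABASE GUARD NONE;
ALTER TABLE oe.customers REKEY USING 'AES256';   -- hypothetical table and algorithm
ALTER DATABASE GUARD ALL;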
616
TDE and Streams Oracle Streams now provides the ability to transparently: Decrypt values protected by TDE for filtering and processing Reencrypt values so that they are never in clear text while on disk (Diagram: Capture, Staging, Apply) TDE and Streams In Oracle Database 11g, Oracle Streams supports TDE. Oracle Streams now provides the ability to transparently: Decrypt values protected by TDE for filtering, processing, and so on Reencrypt values so that they are never in clear text while on disk (as opposed to memory) If the corresponding column in the apply database has TDE support, the applied data is transparently reencrypted using the local database’s keys. If the column value was encrypted at the source and the corresponding column in the apply database is not encrypted, the apply process raises an error unless the apply parameter ENFORCE_ENCRYPTION is set to FALSE. Whenever logical change records (LCRs) are stored on disk, such as due to queue or apply spilling and apply error creation, the data is encrypted if the local database supports TDE. This is performed transparently without any user intervention. LCR message tracing does not display the clear text of encrypted column values.
617
Hardware Security Module
Encrypt and decrypt operations are performed on the hardware security module. (Diagram labels: Client, Database server, Hardware Security Module, Encrypted data.) Hardware Security Module A hardware security module (HSM) is a physical device that provides secure storage for encryption keys. It also provides secure computational space (memory) to perform encryption and decryption operations. HSM is a more secure alternative to the Oracle wallet. Transparent data encryption can use an HSM to provide enhanced security for sensitive data. An HSM is used to store the master encryption key used for transparent data encryption. The key is secure from unauthorized access attempts because the HSM is a physical device and not an operating system file. All encryption and decryption operations that use the master encryption key are performed inside the HSM. This means that the master encryption key is never exposed in insecure memory. There are several vendors that provide Hardware Security Modules. The vendor must supply the appropriate libraries.
618
Using a Hardware Security Module with TDE
Configure sqlnet.ora: Copy the PKCS#11 library to the correct path. Set up the HSM. Generate a master encryption key for HSM-based encryption: Ensure that the HSM is accessible. Encrypt and decrypt data. ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=HSM) (METHOD_DATA= (DIRECTORY=/app/oracle/admin/SID1/wallet))) ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY user_Id:password Using a Hardware Security Module with TDE Using HSM involves an initial setup of the HSM device. You also need to configure transparent data encryption to use HSM. After the initial setup is done, HSM can be used just like an Oracle software wallet. The following steps discuss configuring and using hardware security modules: 1. Set the ENCRYPTION_WALLET_LOCATION parameter in sqlnet.ora: ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=HSM) (METHOD_DATA=(DIRECTORY=/app/oracle/admin/SID1/wallet))) The directory is required to find the old wallet when migrating from a software-based wallet. 2. Copy the PKCS#11 library to its correct path. 3. Set up the HSM per the instructions provided by the HSM vendor. A user account is required for the database to interact with the HSM. 4. Generate a master encryption key for HSM-based encryption: ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY user_Id:password [MIGRATE USING wallet_password] user_Id:password refers to the user account in step 3. The MIGRATE clause is used when TDE is already in place. MIGRATE decrypts the existing column encryption keys and then encrypts them with the newly created HSM-based master encryption key. 5. Ensure that the HSM is accessible: ALTER SYSTEM SET WALLET OPEN IDENTIFIED BY user_Id:password 6. Encrypt and decrypt data as you would with a software wallet.
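After step 5, the wallet status can be confirmed from the dynamic view below; this is a hedged verification step, not part of the documented procedure.
SELECT wrl_type, wrl_parameter, status
FROM   v$encryption_wallet;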
619
Encryption for LOB Columns
CREATE TABLE test1 (doc CLOB ENCRYPT USING 'AES128') LOB(doc) STORE AS SECUREFILE (CACHE NOLOGGING); LOB encryption is allowed only for SECUREFILE LOBs. All LOBs in the LOB column are encrypted. LOBs can be encrypted on a per-column or per-partition basis. Allows for the coexistence of SECUREFILE and BASICFILE LOBs Encryption for LOB Columns Oracle Database 11g introduces a completely reengineered large object (LOB) data type that dramatically improves performance, manageability, and ease of application development. This SecureFiles implementation (of LOBs) offers advanced, next-generation functionality such as intelligent compression and transparent encryption. The encrypted data in SecureFiles is stored in-place and is available for random reads and writes. You must create the LOB with the SECUREFILE parameter, with encryption enabled (ENCRYPT) or disabled (DECRYPT—the default) on the LOB column. The current TDE syntax is used for extending encryption to LOB data types. The LOB implementation from prior versions is still supported for backward compatibility and is now referred to as BasicFiles. If you add a LOB column to a table, you can specify whether it should be created as SECUREFILE or BASICFILE. The default LOB type is BASICFILE to ensure backward compatibility. Valid algorithms are 3DES168, AES128, AES192, and AES256. The default is AES192. Note: For further discussion on SecureFiles, please see the lesson titled “Oracle SecureFiles.”
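As a hedged variation on the slide example (table and column names are hypothetical, and the syntax mirrors the CREATE TABLE form above), an encrypted SecureFiles LOB column can also be added to an existing table; verify the exact clause against the SQL Reference before use.
ALTER TABLE contracts
  ADD (body CLOB ENCRYPT USING 'AES192')      -- hypothetical table and column
  LOB (body) STORE AS SECUREFILE (CACHE);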
620
Using Kerberos Enhancements
Use stronger encryption algorithms (no action required). Interoperability between MS KDC and MIT KDC (no action required) Longer principal name: Convert a DB user to a Kerberos user: CREATE USER KRBUSER IDENTIFIED EXTERNALLY AS ALTER USER DBUSER IDENTIFIED EXTERNALLY AS Kerberos Enhancements The Oracle client Kerberos implementation now makes use of secure encryption algorithms such as 3DES and AES in place of DES. This makes using Kerberos more secure. The Kerberos authentication mechanism in the Oracle database now supports the following encryption types: DES3-CBC-SHA (DES3 algorithm in CBC mode with HMAC-SHA1 as checksum) RC4-HMAC (RC4 algorithm with HMAC-MD5 as checksum) AES128-CTS (AES algorithm with 128-bit key in CTS mode with HMAC-SHA1 as checksum) AES256-CTS (AES algorithm with 256-bit key in CTS mode with HMAC-SHA1 as checksum) The Kerberos implementation has been enhanced to interoperate smoothly with Microsoft and MIT Key Distribution Centers. The Kerberos principal name can now contain more than 30 characters. It is no longer restricted by the number of characters allowed in a database username. If the Kerberos principal name is longer than 30 characters, use: CREATE USER KRBUSER IDENTIFIED EXTERNALLY AS Database users can be converted to Kerberos users, without requiring a new user to be created, by using the ALTER USER syntax: ALTER USER DBUSER IDENTIFIED EXTERNALLY AS
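For completeness, hedged versions of the two statements above with a hypothetical Kerberos principal and realm (the principal string is the part omitted on the slide):
CREATE USER krbuser IDENTIFIED EXTERNALLY AS 'krbuser@EXAMPLE.COM';  -- hypothetical principal
ALTER  USER dbuser  IDENTIFIED EXTERNALLY AS 'dbuser@EXAMPLE.COM';   -- hypothetical principal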
621
Enterprise Manager Security Management
Manage security through EM. Policy Manager replaced for: Virtual Private Database Application Context Oracle Label Security Enterprise User Security pages added TDE pages added Enterprise Manager Security Management Security management has been integrated into Enterprise Manager. The Policy Manager Java console–based tool has been superseded. Oracle Label Security, Application Contexts, and Virtual Private Database, previously administered through the Oracle Policy Manager tool, are managed through Enterprise Manager. The Oracle Policy Manager tool is still available. The Enterprise Security Manager tool has been superseded by Enterprise Manager features. Enterprise User Security is also now managed through Enterprise Manager. The corresponding menu item appears in Enterprise Manager as soon as the ldap.ora file is configured. See the Enterprise User Administrator’s Guide for configuration details. The Enterprise Security Manager tool is still available. Transparent Data Encryption can now be managed through Enterprise Manager, including wallet management. You can create, open, and close the wallet from Enterprise Manager pages.
622
Managing TDE with Enterprise Manager
The administrator using Enterprise Manager can open and close the wallet, move the location of the wallet, and generate a new master key. The example in the slide shows that TDE options are part of the Create or Edit Table processes. Table encryption options allow you to choose the encryption algorithm and salt. The table key can also be reset. The other place where TDE changed the management pages is Export and Import Data. If TDE is configured, the wallet is open, and the table to export has encrypted columns, then the export wizard will offer data encryption. The same arbitrary key (password) that was used on export must be provided on import in order to import any encrypted columns. A partial import that does not include tables that contain encrypted columns does not require the password.
623
Managing Tablespace Encryption with Enterprise Manager
You can manage tablespace encryption from the same console that you use to manage Transparent Data Encryption. After encryption has been enabled for the database, the DBA can set the encryption property of a tablespace on the Create Tablespace page.
624
Managing Virtual Private Database
With Enterprise Manager 11g, you can now manage the Virtual Private Database policies from the console. You can enable, disable, add, and drop policies. The console also allows you to manage application contexts. The application context page is not shown.
625
Managing Label Security with Enterprise Manager
Managing Label Security with Database Control Oracle Label Security (OLS) management is integrated with both Enterprise Manager Database Control and Enterprise Manager Grid Control, and the differences between the two are minimal. The database administrator can manage OLS from the same console that is used for managing the database instances, listeners, and host (Database Control) or other targets (Grid Control).
626
Managing Label Security with Oracle Internet Directory
Label Security with OID Oracle Label Security policies can now be created and stored in Oracle Internet Directory, and then applied to one or more databases. A database subscribes to a policy, which makes the policy available to that database; the policy can then be applied to tables and schemas in the database. Label authorizations can be assigned to enterprise users in the form of profiles.
627
Managing Enterprise Users with Enterprise Manager
Enterprise Users/Enterprise Manager The functionality of the Enterprise Security Manager has been integrated into Enterprise Manager. Enterprise Manager allows you to create and configure enterprise domains, enterprise roles, user schema mappings and proxy permissions. Databases can be configured for enterprise user security after they have been registered with OID. The registration is performed through the DBCA tool. Enterprise users and groups can also be configured for enterprise user security. The creation of enterprise users and groups can be done through Delegated Administration Service (DAS). Administrators for the database can be created and given the appropriate roles in OID through Enterprise Manager. Enterprise Manager allows you to manage enterprise users and roles, schema mappings, domain mappings, and proxy users.
628
Enterprise Manager Policy Trend
With Enterprise Manager Policy Trend, you can view the compliance of your database configuration against a set of Oracle Security best practices.
629
Managing Enterprise Users with Enterprise Manager
The functionality of the Enterprise Security Manager has been integrated into Enterprise Manager. Enterprise users can be created and configured. Databases can be configured for enterprise user security after they have been registered with OID. The registration is performed through the DBCA tool. Administrators for the database can be created and given the appropriate roles in OID through Enterprise Manager. Enterprise Manager allows you to manage enterprise users and roles, schema mappings, domain mappings, and proxy users.
630
Oracle Audit Vault Enhancements
Audit Vault enhancements to Streams: Harden Streams configuration DML/DDL capture on SYS and SYSTEM schemas Capture changes to SYS.AUD$ and SYS.FGA_LOG$ Oracle Audit Vault Enhancements Oracle Audit Vault provides auditing in a heterogeneous environment. Audit Vault consists of a secure database to store and analyze audit information from various sources such as databases, OS audit trails, and so on. Oracle Streams is an asynchronous information-sharing infrastructure that facilitates sharing of events within a database or from one database to another. Events could be DML or DDL changes happening in a database. These events are captured by Streams implicit capture and are propagated to a queue in a remote database where they are consumed by a subscriber, which is typically the Streams apply process. Oracle Streams has been enhanced to support Audit Vault. The Streams configurations are controlled from the Audit Vault location. After the initial configuration has been completed, the Streams setup at both the Audit Source and Audit Vault will be completely driven from the Audit Vault. This prevents configurations from being changed at the Audit Source. Oracle Streams has been enhanced to allow capture of changes to the SYS and SYSTEM schemas. Oracle Streams already captures for user schemas all DML on participating tables and all DDL to the database. Streams is enhanced to capture the events that change the database audit trail, forwarding that information to Audit Vault.
631
Using RMAN Security Enhancements
Configure backup shredding: Using backup shredding: RMAN> CONFIGURE ENCRYPTION EXTERNAL KEY STORAGE ON; RMAN> DELETE FORCE; Using RMAN Security Enhancements Backup shredding is a key management feature that allows the DBA to delete the encryption key of transparently encrypted backups without physical access to the backup media. The encrypted backups are rendered inaccessible if the encryption key is destroyed. This does not apply to password-protected backups. Configure backup shredding with: CONFIGURE ENCRYPTION EXTERNAL KEY STORAGE ON; or SET ENCRYPTION EXTERNAL KEY STORAGE ON; The default setting is OFF, which means that backup shredding is not enabled. To shred a backup, no new command is needed; simply use: DELETE FORCE;
632
Managing Fine-Grained Access to External Network Services
Create an ACL and its privileges: BEGIN DBMS_NETWORK_ACL_ADMIN.CREATE_ACL ( acl => 'us-oracle-com-permissions.xml', description => 'Permissions for oracle network', principal => 'SCOTT', is_grant => TRUE, privilege => 'connect'); END; Managing Fine-Grained Access to External Network Services The network utility family of PL/SQL packages such as UTL_TCP, UTL_INADDR, UTL_HTTP, UTL_SMTP, and UTL_MAIL allow Oracle users to make network callouts from the database using raw TCP or using higher-level protocols built on raw TCP. Previously, a user either had the EXECUTE privilege on these packages or did not, and there was no control over which network hosts were accessed. The new package DBMS_NETWORK_ACL_ADMIN allows fine-grained control using access control lists (ACLs) implemented by XML DB. 1. Create an access control list (ACL): The ACL is a list of users and privileges held in an XML file. The XML document named in the acl parameter is relative to the /sys/acl/ folder in the XML DB. In the example, SCOTT is granted connect. The username is case-sensitive in the ACL and must match the username of the session. There are only resolve and connect privileges. The connect privilege implies resolve. Optional parameters can specify a start and end time stamp for these privileges. To add more users and privileges to this ACL, use the ADD_PRIVILEGE procedure.
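To add another principal to the same ACL, as mentioned in the last sentence above, DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE can be used; the grantee shown here is hypothetical.
BEGIN
  DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE (
    acl       => 'us-oracle-com-permissions.xml',
    principal => 'HR',            -- hypothetical additional grantee
    is_grant  => TRUE,
    privilege => 'resolve');
  COMMIT;
END;
/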
633
Managing Fine-Grained Access to External Network Services
Assign an ACL to one or more network hosts: BEGIN DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL ( acl => 'us-oracle-com-permissions.xml', host => '*.us.oracle.com', lower_port => 80, upper_port => null); END; Managing Fine-Grained Access to External Network Services (continued) 2. Assign an ACL to one or more network hosts: The ASSIGN_ACL procedure associates the ACL with a network host and, optionally, a port or range of ports. In the example, the host parameter allows a wildcard character in the host name so that the ACL can be assigned to all the hosts of a domain. The use of wildcards affects the order of precedence for the evaluation of the ACL. Fully qualified host names with ports are evaluated before hosts without ports. Fully qualified host names are evaluated before partial domain names, and subdomains are evaluated before the top-level domain. Multiple hosts can be assigned to the same ACL and multiple users can be added to the same ACL in any order after the ACL has been created.
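Existing ACL assignments and privileges can be reviewed from the data dictionary; this is a simple check, not part of the documented steps.
SELECT host, lower_port, upper_port, acl
FROM   dba_network_acls;

SELECT acl, principal, privilege, is_grant
FROM   dba_network_acl_privileges;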
634
Summary In this lesson, you should have learned how to:
Configure the password file to use case-sensitive passwords Encrypt a tablespace Configure fine-grained access to network services
635
Practice 14: Overview This practice covers the following topics:
Changing the use of case-sensitive passwords Implementing a password complexity function Encrypting a tablespace
636
Remote Jobs
637
Objectives After completing this lesson, you should be able to:
Configure remote jobs Create remote jobs
638
New Scheduler Features
Remote jobs: External jobs (OS based) Database jobs New Scheduler Features The Scheduler in Oracle Database 11g has been enhanced with the goal of unifying all scheduling and jobs functionality into one facility. This has the effect of reducing the administration of jobs (with fewer places to look for scheduled jobs) and reducing the number of background processes that start, stop, and monitor scheduled jobs. A DBA responsible for more than a few databases on multiple servers often needs to be familiar with the operating system (OS) scheduling tools to do everything required. In Oracle Database 11g, a Scheduler Agent allows the Scheduler to create jobs not only on machines where a database resides, but also on any machine where a Scheduler Agent is installed. These jobs can be external OS-based jobs or database jobs. The DBA can now administer jobs across the network from one location.
639
Remote Jobs Remote jobs: Operating system–level jobs
Remote jobs: Operating system–level jobs Scripts, binaries, and so on No Oracle database required. Agent starts and manages jobs. (Diagram labels: Schedule jobs, Scheduler Agent (SA), Execute OS job, Execute DB job.) Remote Jobs The Oracle Scheduler can now create and run remote jobs. The ability to run a job from a centralized scheduler on remote hosts or databases gives the DBA the tools to manage many more machines. The Oracle Scheduler Agent provides the ability to run a job against remote databases or on hosts without a database. The agent must register with one or more databases that are acting as the Scheduler source. The Scheduler source database must have the XMLDB features installed. The Scheduler must be configured to communicate with the agent. A port must be allocated and it must be unused. A password must be created for the agent to register. The DBMS_SCHEDULER.SET_ATTRIBUTE procedure enables you to specify the destination host or database by providing the host:port of the Scheduler Agent.
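A hedged sketch of pointing an existing job at a remote agent; the job name, credential name, and host:port are hypothetical, and the job is assumed to have been created disabled. Creating the credential itself is covered with the Scheduler APIs later in this lesson.
BEGIN
  -- Attach a stored OS credential and a remote destination, then enable the job.
  DBMS_SCHEDULER.SET_ATTRIBUTE('nightly_os_job', 'credential_name', 'remote_os_cred');
  DBMS_SCHEDULER.SET_ATTRIBUTE('nightly_os_job', 'destination', 'apphost01.example.com:1500');
  DBMS_SCHEDULER.ENABLE('nightly_os_job');
END;
/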
640
Configuring the Source Database
Confirm that the XMLDB is installed and get the HTTP port: Run the prvtsch.plb script: Set an agent registration password: SQL> DESC RESOURCE_VIEW SQL> SELECT DBMS_XDB.GETHTTPPORT() FROM DUAL; SQL> EXEC DBMS_SCHEDULER.SET_AGENT_REGISTRATION_PASS( - > registration_password => 'my_password') Configuring the Source Database Before a database can be used as a source of remote jobs, it must have the following configuration steps completed. 1. Confirm that the XMLDB is installed. If the XMLDB is installed, the RESOURCE_VIEW view will exist. If XMLDB is not installed, use Oracle Universal Installer to install it. SQL> DESC RESOURCE_VIEW Find the HTTP port that is configured for the XML Database: SQL> SELECT DBMS_XDB.GETHTTPPORT() FROM DUAL; 2. Run the prvtsch.plb script on the source database. Connect as the SYS user. The prvtsch.plb script will be in the $ORACLE_HOME/rdbms/admin directory. SQL> CONNECT / AS SYSDBA 3. Set a registration password for the agent. You can limit the lifetime of this password and the number of registrations that use this password. The user who sets the password must have the MANAGE SCHEDULER privilege. The following example allows the password to be used 10 times in the next 24 hours. Oracle recommends that the password be limited to a short period of time. SQL> EXEC DBMS_SCHEDULER.SET_AGENT_REGISTRATION_PASS ( - > registration_password => 'my_password',- > expiration_date => SYSTIMESTAMP + INTERVAL '1' DAY,- > max_uses => 10 )
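If the port query returns 0, the XML DB HTTP listener is disabled; a hedged sketch of assigning a port (the port number is an assumption) and running the script from step 2:
SQL> SELECT DBMS_XDB.GETHTTPPORT() FROM dual;
SQL> EXEC DBMS_XDB.SETHTTPPORT(5620)            -- assumed, otherwise unused port
SQL> @?/rdbms/admin/prvtsch.plb                 -- step 2, run as SYS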
641
Installing the Agent Installing the Agent
The agent is a separately installable component that may be installed from the Oracle Transparent Gateway media. During installation of the agent, an additional step is necessary to register with the source database and to start the agent in the background. During installation, the agent should be registered with at least one database. It is possible to automate this registration if the user is willing to include the database registration password in the installer file. This enables silent automated installs. Optional information includes: Path to install the agent into Whether to automatically start the agent Whether to set up the agent to automatically start on every computer startup If, after installation of the agent, another database is required to run jobs on the agent, the agent must be registered with that database.
642
Registering the Scheduler Agent
On the remote machine: Review the schagent.conf file. Execute the command to register the agent. Start the Scheduler Agent On Unix and Linux On Windows (install and start service) $ schagent -registerdatabase database_host database_xmldb_http_port $ schagent -start Registering the Scheduler Agent After installation, the Scheduler Agent must be registered with one or more source databases. The source database is where the remote jobs will be created and where the job status information will be sent. The Scheduler Agent uses a specified port to communicate with the source database. This is the port used by the XMLDB HTTP listener. In the command to register the database, provide the host name of the machine where the source database is running and the HTTP port that you found with the SELECT DBMS_XDB.GETHTTPPORT() FROM DUAL; command. schagent -registerdatabase database_host database_xmldb_http_port On Unix or Linux, start the Scheduler Agent: schagent -start On Windows, install and start the service: C:\> schagent -installagentservice
643
Scheduler APIs to Support Remote Jobs
New DBMS_SCHEDULER procedures CREATE_CREDENTIAL DROP_CREDENTIAL SET_AGENT_REGISTRATION_PASS GET_FILE PUT_FILE Modified DBMS_SCHEDULER procedures STOP_JOB RUN_JOB Scheduler APIs to Support Remote Jobs These procedures are part of the DBMS_SCHEDULER package. The procedures in the slide are new or modified to support the remote job features. CREATE_CREDENTIAL and DROP_CREDENTIAL: These procedures are used to create or drop a stored username/password pair called a credential. Credentials reside in a particular schema and can be created by any user with the CREATE JOB system privilege. To drop a public credential, the SYS schema must be explicitly given. Only a user with the MANAGE SCHEDULER system privilege is able to drop a public credential. For a regular credential, only the owner of the credential or a user with the CREATE ANY JOB system privilege is able to drop the credential. SET_AGENT_REGISTRATION_PASS: This procedure is used to set the agent registration password for a database. Remote agents must register with the database before the database can submit jobs to the agent. To prevent abuse, this password can be set to expire after a given date or after a maximum number of successful registrations. This procedure overwrites any password already set. Setting the password to NULL prevents any agent registrations. This requires the MANAGE SCHEDULER system privilege. By default, max_uses is set to 1, which means that this password can be used for only a single agent registration. Oracle recommends that an agent registration password be reset after every agent registration or after every known set of agent registrations. Oracle further recommends that this password be set to NULL if no new agents are being registered.
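A hedged sketch of the credential procedures named above; the credential name, OS username, and password are placeholders.
BEGIN
  DBMS_SCHEDULER.CREATE_CREDENTIAL(
    credential_name => 'remote_os_cred',     -- hypothetical name
    username        => 'oracle',
    password        => 'os_password_here');
END;
/

-- When the credential is no longer needed:
BEGIN
  DBMS_SCHEDULER.DROP_CREDENTIAL(credential_name => 'remote_os_cred');
END;
/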
644
Scheduler APIs to Support Remote Jobs (continued)
GET_FILE and PUT_FILE: These procedures retrieve a file from a particular host or save a file to a particular host. They differ from the equivalent UTL_FILE procedure in that they use a specified credential and can retrieve files from remote hosts that have an execution agent installed. The caller must have the CREATE EXTERNAL JOB system privilege and have EXECUTE privileges on the credential. STOP_JOB and RUN_JOB: These procedures have been modified to stop or run remote jobs. For more information about the Scheduler APIs, see the Oracle Database PL/SQL Packages and Types Reference.
645
Dictionary Views for Remote Jobs
New views *_SCHEDULER_CREDENTIALS *_SCHEDULER_REMOTE_JOBSTATE Modified views to support remote jobs *_SCHEDULER_JOBS *_SCHEDULER_JOB_RUN_DETAILS Job_subname Dictionary Views for Remote Jobs The following dictionary views have been added: *_SCHEDULER_CREDENTIALS: Lists all regular credentials in the current user’s schema The *_SCHEDULER_JOBS views have been modified with new columns to support remote jobs: source: Global database ID of the scheduler source database destination: Global database ID of the destination database for remote database jobs credential_name: Name of the credential to be used for an external job credential_owner: Owner of the credential to be used for an external job deferred_drop: Indicates whether the job will be dropped when completed following user request (TRUE) or not (FALSE) instance_id: Instance name of the preferred instance to run the job (for jobs running on a RAC database)
646
Dictionary Views for Remote Jobs (continued)
input VARCHAR2(4000): String to be provided as a standard input to an external job environment_variables VARCHAR2(4000): Semicolon-separated list of name-value pairs to be set as environment variables for an external job login_script VARCHAR2(1000): Path to and name of the script to be executed prior to an external job *_SCHEDULER_REMOTE_JOBSTATE: Views added to show the state of enabled remote and distributed jobs on each of the destinations
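The columns named above can be queried directly; a hedged sketch using the DBA-level views:
SELECT job_name, source, destination, credential_name, credential_owner
FROM   dba_scheduler_jobs
WHERE  destination IS NOT NULL;

SELECT * FROM dba_scheduler_remote_jobstate;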
647
Summary In this lesson, you should have learned how to:
Configure remote jobs Create remote jobs