Oracle Database 18c New Features: Continuing Database Innovations
Created and presented by: Ron Soltani, Sr. Principal Instructor, Oracle University
September 03, 2018
Presentation Objectives
1. Introduction
2. Understand Multitenant Enhancements
3. Discuss Security Management Upgrades
4. Understand RMAN Upgrades
5. Explain General Database Enhancements
6. Understand Performance Enhancements
7. Conclusion
Overview
This presentation focuses on the new features and enhancements of Oracle Database 18c. It complements the topics covered in the Oracle Database 12c New Features curriculum:
- The 5-day Oracle Database 12c: 12.2 New Features for 12.1 Administrators Ed 1 course, or
- The 10-day Oracle Database 12c R2: New Features for Administrators course (Oracle Database 12c R2: New Features for Administrators Part 1 Ed 1 and Part 2 Ed 1)
Previous experience with Oracle Database 12c, and in particular Release 2 (12.2), is required for a full understanding of many of the new features and enhancements. Refer to the Oracle Database New Features Guide 18c.
This three-day course follows either the 5-day or the 10-day course above; both introduce the new features and enhancements of Oracle Database 12c Release 2 (12.2) that are applicable to the work usually performed by database administrators and related personnel. This course introduces the major new features and enhancements of Oracle Database 18c. Do not expect to discover all the new features in the course without supplemental reading, especially the Oracle Database New Features Guide for 12c Release 2 (12.2) and for 18c. The course consists of instructor-led lessons, hands-on labs, and tutorials such as OBEs (Oracle By Example) that let you see how certain new features behave.
New Release Model
Oracle delivers annual releases and quarterly release updates:
- Oracle delivers releases yearly instead of on a multi-year cycle. Yearly releases improve database quality by reducing the number of software changes released at one time.
- The quarterly Release Update (RU) plus Release Update Revision (RUR) model improves the quality and experience of proactive maintenance. This model combines the best of PSUs with the best of Bundle Patches; allows customers to update using RUs when they need fixes and then switch to field-proven RURs when their environment becomes stable; enables customers to switch back and forth between RUs and RURs, unlike PSUs and BPs; contains all important security fixes, eliminating the tradeoff between security and stability; and ships in January, April, July, and October, as PSUs and BPs did.
Traditional multi-year release cycle: it had become difficult to support releases and to address the one-off patch proliferation problem, and it was not possible to switch back and forth between the various maintenance deliverable types.
Yearly releases bring agility: new features are delivered more quickly and incrementally, eliminating the multi-year wait for new functionality and allowing customer-requested enhancements to be delivered more rapidly. They also improve quality: incremental changes avoid massive changes that are difficult to test and stabilize, and remove the pressure to add new features late in the release cycle to satisfy customer requests that cannot wait multiple years. Oracle Database 12c Release 1 (12.1) and Oracle Database 11g Release 1 continue to use the previous PSU/BP process and version numbering.
Quarterly Release Update and Release Update Revision: the size and content restrictions of Patch Set Updates (PSUs) result in a greater risk of encountering a known problem and create a proliferation of risky one-off backports. Bundle Patches contain more proactive fixes to address issues with PSUs, but increase the risk of regressions. The RU plus RUR model provides the stability benefits of PSUs with the proactive maintenance benefits of Bundle Patches.
1. Introduction
2. Understand Multitenant Enhancements
3. Discuss Security Management Upgrades
4. Understand RMAN Upgrades
5. Explain General Database Enhancements
6. Understand Performance Enhancements
7. Conclusion
Objectives
After completing this module, you should be able to:
- Manage a CDB fleet
- Manage PDB snapshots
- Use a dynamic container map
- Explain lockdown profile inheritance
- Describe refreshable copy PDB switchover
- Identify the parameters used when instantiating a PDB on a standby
- Enable parallel statement queuing at the PDB level
- Use DBCA to clone PDBs
- Use simplified image-based Oracle Database installation
CDB Fleet (18c)
A CDB fleet is a collection of different CDBs that can be managed as one logical CDB:
- To provide the underlying infrastructure for massive scalability and centralized management of many CDBs
- To provision more than the maximum number of PDBs for an application
- To manage appropriate server resources for PDBs, such as CPU, memory, I/O rate, and storage systems
Oracle Database 18c introduces the CDB Fleet feature. The maximum number of PDBs in a single CDB is 4096; a CDB fleet can hold more than 4096 PDBs. Different PDBs in a single configuration require different types of servers to function optimally: some PDBs might process a large transaction load, whereas others are used mainly for monitoring, and you want the appropriate server resources (CPU, memory, I/O rate, storage) for each. Each CDB can use all the usual database features for high availability, scalability, and recovery of its PDBs, such as Real Application Clusters (RAC), Data Guard, RMAN, point-in-time recovery (PITR), and Flashback. PDB names must be unique across all CDBs in the fleet. PDBs can be created in any CDB in the fleet, but can be opened only in the CDB where they physically exist.
CDB Lead and CDB Members
The CDB lead in a fleet is the CDB from which you perform operations across the fleet. The CDB members of the fleet link to the CDB lead through a database link.
A CDB fleet contains a CDB lead and CDB members. PDB information from the individual CDBs is synchronized with the CDB lead. From its CDB root, the CDB lead can:
- Monitor all PDBs of all CDBs in the fleet
- Report and collect diagnostic information from all PDBs of all CDBs in the fleet through a cross-container query
- Query Oracle-supplied objects from all PDBs of all CDBs in the fleet
To configure a CDB fleet, define the lead and then the members. To define a CDB as the CDB lead, from its CDB root, set the LEAD_CDB database property to TRUE, then use a common user and grant it the appropriate privileges:
SQL> CONNECT / AS SYSDBA
SQL> ALTER DATABASE SET lead_cdb = TRUE;
SQL> GRANT sysoper, … TO system CONTAINER = ALL;
To define another CDB as a member of the fleet, connect to its CDB root, create a public database link to the CDB lead using a fixed user identical to the common user used in the lead CDB, and set the LEAD_CDB_URI database property to the name of that database link:
SQL> CONNECT / AS SYSDBA
SQL> CREATE PUBLIC DATABASE LINK lcdb1 CONNECT TO system IDENTIFIED BY pass USING 'cdb1';
SQL> ALTER DATABASE SET lead_cdb_uri = 'dblink:lcdb1';
Repeat the same sequence of statements in each additional member. This assumes the network is configured so that the member CDB can connect to the CDB lead using the connect descriptor defined in the link.
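As a quick check after configuration, a minimal sketch: the LEAD_CDB and LEAD_CDB_URI properties named above can be read back from DATABASE_PROPERTIES in each CDB root to confirm its role in the fleet.
-- Run in the CDB root of each CDB in the fleet
SQL> SELECT property_name, property_value
     FROM   database_properties
     WHERE  property_name IN ('LEAD_CDB', 'LEAD_CDB_URI');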
Use Cases
- Monitoring and collecting diagnostic information across CDBs from the lead CDB
- Querying Oracle-supplied objects, such as DBA views, in different PDBs across the CDB fleet
- Serving as a central location where you can view information about, and the status of, all the PDBs across multiple CDBs
The CDB lead can monitor PDBs across the CDBs in the fleet. You can install a monitoring application in one container and use CDB views and GV$ views to monitor and process diagnostic data for the entire CDB fleet. A cross-container query issued in the lead CDB can automatically execute in all PDBs across the fleet through the Oracle-supplied objects. Using Oracle-supplied or even common application schema objects in different PDBs (or application PDBs) across the fleet, you can use the CONTAINERS clause or CONTAINER_MAP to run queries across all of the PDBs of the multiple CDBs in the fleet, enabling the aggregation of data from PDBs in different CDBs. The application can be installed in an application root, and each CDB in the fleet can have an application root clone to enable the common application schema across the CDBs.
SELECT … con$name, cdb$name FROM CONTAINERS (dba_users) GROUP BY …;
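Completing the elided query above, a minimal sketch run from the CDB root of the lead: CON$NAME and CDB$NAME are the implicit CONTAINERS() columns mentioned in this slide, used here to count database users per container across the whole fleet.
SQL> SELECT con$name, cdb$name, COUNT(*) AS user_count
     FROM   CONTAINERS(dba_users)
     GROUP  BY con$name, cdb$name;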
PDB Snapshot Carousel (18c)
A PDB snapshot is a named copy of a PDB at a specific point in time.
- Recovery extended beyond the flashback retention period
- Reporting on historical data kept in snapshots
- Storage-efficient snapshot clones taken on a periodic basis
- Maximum of eight snapshots per PDB
- Example: on Friday, you need to recover back to Wednesday; restore from PDB1_snapW.
In Oracle Database 18c, when you create a PDB, you can specify whether it is enabled for PDB snapshots. A PDB snapshot is an archive file (.pdb) containing the contents of the PDB at snapshot creation. PDB snapshots allow the recovery of a PDB back to its oldest available snapshot, extending recovery beyond the flashback retention period (which requires flashback database to be enabled). Another use case is reporting on historical data: you might create a snapshot of a sales PDB at the end of the financial quarter and then create a PDB from that snapshot to generate reports from the historical data. Every PDB snapshot is associated with a snapshot name and with the SCN and timestamp at snapshot creation. The MAX_PDB_SNAPSHOTS database property sets the maximum number of PDB snapshots for each PDB; the default and allowed maximum is 8. When the maximum is reached for a PDB and an attempt is made to create a new snapshot, the oldest PDB snapshot is purged; if the oldest snapshot cannot be dropped because it is open, an error is raised. You can decrease this limit for a given PDB by specifying a maximum number of snapshots; setting the limit to 0 drops all PDB snapshots.
Creating PDB Snapshots
To create PDB snapshots for a PDB, first enable the PDB for snapshots, either manually or on an automatic schedule; you can then create multiple manual PDB snapshots of the PDB, or disable snapshot creation entirely:
SQL> CREATE PLUGGABLE DATABASE pdb1 … SNAPSHOT MODE MANUAL;
SQL> ALTER PLUGGABLE DATABASE pdb1 SNAPSHOT MODE EVERY 24 HOURS;
SQL> ALTER PLUGGABLE DATABASE pdb1 SNAPSHOT pdb1_first_snap;
SQL> ALTER PLUGGABLE DATABASE pdb1 SNAPSHOT pdb1_second_snap;
SQL> ALTER PLUGGABLE DATABASE pdb1 SNAPSHOT MODE NONE;
By default, a PDB is enabled for PDB snapshots. There are two ways to enable a PDB for snapshot creation:
- Manually: the first example uses the SNAPSHOT MODE MANUAL clause of the CREATE PLUGGABLE DATABASE or ALTER PLUGGABLE DATABASE statement (omitting the clause gives the same result).
- Automatically, after a given interval of time: the second example uses the SNAPSHOT MODE EVERY <snapshot_interval> [MINUTES|HOURS] clause, here specifying that a PDB snapshot is created automatically every 24 hours. Whether the interval is expressed in minutes or in hours, it must be less than a database-enforced maximum.
Every PDB snapshot is associated with a snapshot name and with the SCN and timestamp at snapshot creation. You can specify a name for a PDB snapshot: the third and fourth examples create PDB snapshots manually, which is possible even if the PDB is set to have snapshots created automatically. If PDB snapshots are created automatically, the system generates a name. The MAX_PDB_SNAPSHOTS property in DATABASE_PROPERTIES (default value 8) limits the number of snapshots per PDB; snapshot details are exposed in DBA_PDB_SNAPSHOTS, and the snapshot mode of each PDB in DBA_PDBS.SNAPSHOT_MODE.
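To see what the carousel currently holds, a minimal sketch using the DBA_PDB_SNAPSHOTS and DBA_PDBS views named above; the exact column lists shown here are an assumption to verify against the reference documentation.
-- Which snapshots exist, and when were they taken? (column list assumed)
SQL> SELECT con_id, snapshot_name, snapshot_scn, snapshot_time
     FROM   dba_pdb_snapshots
     ORDER  BY snapshot_scn;
-- Which snapshot mode is each PDB using? (column list assumed)
SQL> SELECT pdb_name, snapshot_mode, snapshot_interval
     FROM   dba_pdbs;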
Creating PDBs Using PDB Snapshots
After a PDB snapshot is created, you can create a new PDB from it by using the USING SNAPSHOT clause:
SQL> CREATE PLUGGABLE DATABASE pdb1_day_1 FROM pdb1 USING SNAPSHOT <snapshot_name>;
SQL> CREATE PLUGGABLE DATABASE pdb1_day_2 FROM pdb1 USING SNAPSHOT AT SCN <snapshot_SCN>;
Identify the snapshot by providing any of the following: the snapshot name, the SCN at which the snapshot was created, or the time at which the snapshot was created.
Dropping PDB Snapshots
- Automatic PDB snapshot deletion when MAX_PDB_SNAPSHOTS is reached
- Manual PDB snapshot deletion:
SQL> ALTER PLUGGABLE DATABASE pdb1 DROP SNAPSHOT pdb1_first_snap;
When the carousel reaches eight PDB snapshots, or the maximum number of PDB snapshots defined (for example, MAX_PDB_SNAPSHOTS = 4), the oldest PDB snapshot is deleted automatically, whether or not it is in use. There is no need to materialize a PDB snapshot in the carousel, because PDB snapshots are all full clones. Be aware that if the SNAPSHOT COPY clause is used together with the USING SNAPSHOT clause, the SNAPSHOT COPY clause is simply ignored. You can manually drop a PDB snapshot by altering the PDB for which it was created and using the DROP SNAPSHOT clause.
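To shrink the carousel as in the MAX_PDB_SNAPSHOTS = 4 example above, a minimal sketch; the ALTER PLUGGABLE DATABASE form of setting this database property is an assumption to verify in the SQL reference.
-- Run inside the PDB: cap the carousel at 4 snapshots (syntax assumed)
SQL> ALTER PLUGGABLE DATABASE SET MAX_PDB_SNAPSHOTS = 4;
-- Setting the limit to 0 drops all existing PDB snapshots
SQL> ALTER PLUGGABLE DATABASE SET MAX_PDB_SNAPSHOTS = 0;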
Flashing Back PDBs Using PDB Snapshots
PDB snapshots enable the recovery of a PDB back to its oldest available snapshot. The example in the slide shows a situation where a user error is detected that happened between the creation of PDB1_snapW (Wednesday) and PDB1_snapT (Thursday). To recover, perform the following steps (see the SQL sketch after this list):
1. Close PDB1.
2. Create PDB1b from the PDB1_snapW snapshot taken before the user error.
3. Drop PDB1.
4. Rename PDB1b to PDB1.
5. Open PDB1 and create a new snapshot.
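A minimal SQL sketch of those five steps, using the PDB and snapshot names from the slide; treat it as an outline rather than a tested procedure (for example, RENAME GLOBAL_NAME requires the PDB to be open in restricted mode).
SQL> ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
SQL> CREATE PLUGGABLE DATABASE pdb1b FROM pdb1 USING SNAPSHOT pdb1_snapw;
SQL> DROP PLUGGABLE DATABASE pdb1 INCLUDING DATAFILES;
SQL> ALTER PLUGGABLE DATABASE pdb1b OPEN RESTRICTED;
SQL> ALTER SESSION SET CONTAINER = pdb1b;
SQL> ALTER PLUGGABLE DATABASE RENAME GLOBAL_NAME TO pdb1;
-- Reopen normally, then take a fresh snapshot
SQL> ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;
SQL> ALTER PLUGGABLE DATABASE OPEN;
SQL> ALTER PLUGGABLE DATABASE pdb1 SNAPSHOT pdb1_snap_new;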
Container Map (12c)
- Define a PDB-based partition strategy based on the values stored in a column. Select a column that is commonly used and never updated, such as a time identifier (rather than a creation date) or a region name.
- Set the CONTAINER_MAP database property in the application root.
- Each PDB corresponds to the data of a particular partition.
In Oracle Database 12c, the CONTAINERS(table or view) clause in a query issued in the CDB root accesses the table or view in the CDB root and in each of the opened PDBs, and returns a UNION ALL of their rows. The same concept works in an application container: CONTAINERS(table or view) queried in an application root accesses the table or view in the application root and in each of the opened application PDBs of the container. CONTAINERS(table or view) can be restricted to a subset of PDBs by using a predicate on CON_ID, an implicitly generated column of CONTAINERS(table or view):
SELECT fname, lname FROM CONTAINERS(emp) WHERE CON_ID IN (44,56,79);
One drawback of CONTAINERS() is that queries must be changed to add a WHERE clause on CON_ID if only certain PDBs should be accessed. Often, rows of tables or views are horizontally partitioned across PDBs based on a user-defined column. The CONTAINER_MAP database property provides a declarative way to indicate how rows in metadata-linked tables or views are partitioned across PDBs. The property is set in the application root; its value is the name of a partitioned table (the map object). The names of the partitions of the map object match the names of the PDBs in the application container, and the columns used to partition the map object should match the columns of the metadata-linked object being queried. The partitioning schemes supported for a CONTAINER_MAP map object are LIST, HASH, and RANGE.
Note: container maps can be created in the CDB root, but the best practice is to create them in application roots.
DATABASE_PROPERTIES: PROPERTY_NAME = CONTAINER_MAP, PROPERTY_VALUE = app.tabapp (the container mapping table).
Container Map: Example
CREATE TABLE tab1 (region …, …);
CREATE TABLE tab2 (…, region …);
CREATE TABLE app1.app_map (
  columns …,
  region VARCHAR2(20))
PARTITION BY LIST (region)
 (PARTITION N_AMER VALUES ('TEXAS','CALIFORNIA','MEXICO','CANADA'),
  PARTITION EMEA VALUES ('UK', 'FRANCE', 'GERMANY'),
  PARTITION APAC VALUES ('INDIA', 'CHINA', 'JAPAN'));
ALTER PLUGGABLE DATABASE SET CONTAINER_MAP = 'app1.app_map';
ALTER TABLE tab1 ENABLE container_map;
In a hybrid model, you can create common partitioned tables in the application root, mapping each partition of the table to an application PDB of the application container. For example, a TENANT_GRP1 partition would store data for customers of group 1 in the Tenant_GRP1 application PDB, and a TENANT_GRP2 partition would store data for customers of group 2 in the Tenant_GRP2 application PDB. In a data warehouse model, you can create common partitioned tables in the application root, partitioned on a column such as REGION in this example, so that data is segregated into separate application PDBs of the container. Here, the N_AMER partition stores data for TEXAS, CALIFORNIA, MEXICO, and CANADA, as defined in its list, in the N_AMER application PDB; the EMEA partition stores data for UK, FRANCE, and GERMANY in the EMEA application PDB. Tables enabled for container mapping show CONTAINER_MAP_OBJECT = YES in DBA_TABLES.
Query Routed Appropriately
SELECT … FROM some_table WHERE region IN ('CANADA', 'GERMANY', 'INDIA');  -- use CONTAINERS to implicitly aggregate data
SELECT … FROM fact_tab WHERE region = 'N_AMER';
UPDATE fact_tab2 SET <column> = … WHERE region = 'FRANCE';
Because data is segregated into separate application PDBs of the application container, querying a container-map table automatically retrieves data from the relevant application PDB: a query for N_AMER data, for example, is routed to the relevant partition and therefore to the N_AMER application PDB. If you need to retrieve data from a table that is spread over several application PDBs within an application container, use the CONTAINERS clause to aggregate rows from partitions in several application PDBs.
Dynamic Container Map
In Oracle Database 18c, when a PDB is created, dropped, or renamed, a CONTAINER_MAP defined in the CDB root, in an application root, or in both can be dynamically updated to reflect the change. The CREATE PLUGGABLE DATABASE statement takes an optional clause that describes the key values affiliated with the new PDB:
CREATE PLUGGABLE DATABASE s_amer …
  CONTAINER_MAP UPDATE (ADD PARTITION s_amer VALUES ('PERU','ARGENTINA'));
SELECT … FROM fact_tab WHERE region = 'S_AMER';
An existing partition can also be split when a new PDB is added:
CREATE PLUGGABLE DATABASE s_amer_peru …
  CONTAINER_MAP UPDATE (SPLIT PARTITION s_amer INTO
    (PARTITION s_amer ('ARGENTINA'), PARTITION s_amer_peru));
Restricting Operations with Lockdown Profiles
Operations, features, and options used by users connected to a given PDB can be disallowed:
- You can create PDB lockdown profiles from the CDB root only.
- You define restrictions through enabled and disabled statements and clauses, features, and options.
- Setting the PDB_LOCKDOWN parameter to a PDB lockdown profile at the CDB root level sets it for all PDBs; optionally, set PDB_LOCKDOWN to another lockdown profile for an individual PDB. Then restart the PDBs.
In Oracle Database 12c, a PDB lockdown profile, whose name is stored in the PDB_LOCKDOWN parameter, determines the operations that can be performed in a given PDB. If PDB_LOCKDOWN is set to a lockdown profile at the CDB root level and no PDB_LOCKDOWN is set at the PDB level, the profile defined at the CDB root level governs all the PDBs. After the PDB_LOCKDOWN parameter is set, the PDB must be bounced before the lockdown profile takes effect. In 12c, a created PDB lockdown profile cannot derive any restriction rules from another lockdown profile. Existing profiles are listed in CDB_LOCKDOWN_PROFILES. In the slide's example, CDB1 contains PDB_OE with PDB_LOCKDOWN = lock_profile1, while PDB_SALES and PDB_HR use lock_profile2; the example profiles restrict, for instance, ALTER SYSTEM SET statements and the Partitioning option.
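For context, a minimal sketch of creating and applying such a profile with the 12.2 lockdown profile DDL; the restricted parameter (CURSOR_SHARING) is an illustrative choice, not from the slide.
-- In the CDB root:
SQL> CREATE LOCKDOWN PROFILE lock_profile1;
SQL> ALTER LOCKDOWN PROFILE lock_profile1
       DISABLE STATEMENT = ('ALTER SYSTEM')
       CLAUSE = ('SET') OPTION = ('CURSOR_SHARING');
SQL> ALTER LOCKDOWN PROFILE lock_profile1 DISABLE OPTION = ('PARTITIONING');
-- In the PDB to be restricted (then bounce the PDB):
SQL> ALTER SYSTEM SET pdb_lockdown = 'lock_profile1';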
Lockdown Profile Inheritance
In Oracle Database 18c, you can create lockdown profiles in application roots, and not only in the CDB root. In the slide's example, the CDB root holds profiles CDB_prof1 and CDB_prof2, and the application root APP2 holds App_prof4, App_prof5, and App_prof6. Application PDB app2_1, with PDB_LOCKDOWN = App_prof5, runs under the rules of App_prof5 and also inherits the lockdown rules set in its nearest ancestor profile, CDB_prof1; App_prof4, set in the APP2 application root, affects all application PDBs in that container, so app2_2 inherits rules from App_prof4 and, in turn, from CDB_prof1. The regular PDB and application root APP1 simply inherit from CDB_prof1.
If the PDB_LOCKDOWN parameter in a PDB is set to the name of a lockdown profile different from that of its ancestor (the CDB root, or the application root for application PDBs), the following governs the interaction between the restrictions imposed by these profiles:
- If PDB_LOCKDOWN in a regular or application PDB is set to a CDB lockdown profile, the lockdown profiles specified by PDB_LOCKDOWN in the CDB root or application root are ignored.
- If PDB_LOCKDOWN in an application PDB is set to an application lockdown profile while PDB_LOCKDOWN in the application root or CDB root is set to another lockdown profile, then, in addition to the rules stipulated in the application lockdown profile, the PDB inherits the DISABLE rules from the lockdown profile set in its nearest ancestor, up to the CDB root.
- If rules in the CDB lockdown profile conflict with rules in the application lockdown profile, the CDB lockdown profile's rules take precedence. For example, an OPTION_VALUE clause of a CDB lockdown profile takes precedence over the OPTION_VALUE clause of an application lockdown profile.
Static and Dynamic Lockdown Profiles
There are two ways to create a lockdown profile from an existing "base" profile, in the CDB root or in an application root:
- Static lockdown profiles:
SQL> CREATE LOCKDOWN PROFILE prof3 FROM base_lock_prof1;
- Dynamic lockdown profiles:
SQL> CREATE LOCKDOWN PROFILE prof4 INCLUDING base_lock_prof2;
When a lockdown profile is created with the FROM clause, the rules comprising the base profile at that time are copied to the new static profile; any subsequent changes to the base profile do not affect it. When a lockdown profile is created with the INCLUDING clause, the dynamic profile inherits the disabled rules of the base profile as well as any subsequent changes to the base profile (rules added to the base are automatically added to the dynamic profile). If rules explicitly added to a dynamic lockdown profile conflict with rules in its base profile, the base profile's rules take precedence. In the slide's example, CDB1 contains PDB_OE with PDB_LOCKDOWN = static_lock_from_prof1, PDB_SALES with PDB_LOCKDOWN = base_lock_prof2, and PDB_HR with PDB_LOCKDOWN = dynamic_lock_from_prof2.
Refreshable Cloned PDB
Cloning a remote source PDB while it stays up and fully functional:
1. Connect to the target CDB2 root and create the database link to CDB1.
2. Switch both CDBs from shared UNDO mode to local UNDO mode.
3. Clone the remote PDB1 to PDB1_REF_CLONE:
SQL> CREATE PLUGGABLE DATABASE pdb1_ref_clone FROM pdb1@<dblink> REFRESH MODE EVERY 2 MINUTES;
Incremental refreshing requires PDB1_REF_CLONE to be opened in read-only mode, and can be manual or automatic (at a predefined interval).
The Oracle Database 12c cloning technique copies a remote source PDB into a CDB while the remote source PDB is still up and fully functional. Hot remote cloning requires both CDBs to switch from shared UNDO mode to local UNDO mode, which means that each PDB uses its own local UNDO tablespace. In addition, hot cloning allows incremental refreshing: the cloned copy of the production database can be refreshed at regular intervals, that is, refreshed from the source PDB at a point in time more recent than the original clone creation, to provide fresh data. A refreshable copy PDB can be opened only in read-only mode. Propagating changes from the source PDB can be performed in two ways: manually (on demand), or automatically at predefined time intervals. If the source PDB is not accessible when the refreshable copy needs to be updated, archived logs are read from the directory specified by the REMOTE_RECOVERY_FILE_DEST parameter to refresh the cloned PDB.
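A minimal sketch of an on-demand refresh cycle for the clone created above; the clone must be closed while it is refreshed.
SQL> ALTER PLUGGABLE DATABASE pdb1_ref_clone CLOSE IMMEDIATE;
SQL> ALTER PLUGGABLE DATABASE pdb1_ref_clone REFRESH;
SQL> ALTER PLUGGABLE DATABASE pdb1_ref_clone OPEN READ ONLY;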
Switching Over a Refreshable Cloned PDB
Switchover at the PDB level:
- A user creates a refreshable clone of a PDB.
- The roles can be reversed: the refreshable clone can be made the primary PDB and opened in read/write mode, while the former primary PDB becomes the refreshable clone.
SQL> CONNECT … AS SYSDBA
SQL> ALTER PLUGGABLE DATABASE REFRESH MODE EVERY 6 HOURS FROM <clone_pdb>@<dblink> SWITCHOVER;
In Oracle Database 18c, after a user creates a refreshable clone of a PDB, the roles can be reversed: the refreshable clone becomes the primary PDB, which can be opened in read/write mode, while the primary PDB becomes the refreshable clone. The ALTER PLUGGABLE DATABASE command with the SWITCHOVER clause must be executed from the primary PDB. The refresh mode can be either MANUAL or EVERY <refresh_interval> [MINUTES | HOURS]; REFRESH MODE NONE cannot be specified in this statement. After the switchover, the former primary PDB becomes the refreshable clone and can be opened only in READ ONLY mode. During the operation, the source is quiesced, and any redo generated since the last refresh is applied to the destination to bring it current. The database link user must also exist in the primary PDB if the refreshable clone resides in another CDB.
Instantiating a PDB on a Standby
Creating a PDB on a primary CDB:
- From an XML file. 12c: copy the data files specified in the XML file to the standby database. 18c: use the STANDBY_PDB_SOURCE_FILE_DIRECTORY parameter to specify a directory location on the standby where source data files for instantiating the PDB can be found; the data files are then copied automatically.
- As a clone from another PDB. 12c: copy the data files belonging to the source PDB to the standby database. 18c: use the STANDBY_PDB_SOURCE_FILE_DBLINK parameter to specify the name of a database link used to copy the data files from the source PDB to which the link points; the copy is automatic only if the database link points to the source PDB and the source PDB is open in read-only mode.
In Oracle Database 12c, when you create a PDB in a primary CDB that has a standby CDB, you must first copy the data files to the standby. If you plan to create the PDB from an XML file, the data files on the standby are expected in the PDB's OMF directory; if that is not the case, copy the data files specified in the XML file to the standby. If you plan to create the PDB as a clone, copy the data files belonging to the source PDB to the standby. The path of the data files on the standby must match the path that will result when you create the PDB on the primary, unless the DB_FILE_NAME_CONVERT initialization parameter is configured on the standby, in which case the standby path is the primary path with DB_FILE_NAME_CONVERT applied.
In Oracle Database 18c, you can use initialization parameters to copy the data files to the standby automatically. For creation from an XML file, STANDBY_PDB_SOURCE_FILE_DIRECTORY specifies a directory on the standby where the source data files may be found; if they are not found there, the files are still expected in the PDB's OMF directory on the standby. For creation as a clone, STANDBY_PDB_SOURCE_FILE_DBLINK names a database link used to copy the data files from the source PDB; the copy is performed only if the link points to the source PDB and the source PDB is open read-only. Otherwise, you remain responsible for copying the data files to the OMF location on the standby.
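A minimal sketch of setting the two parameters described above on the standby instance; the directory path and link name are placeholders.
-- For PDB creation from an XML file:
SQL> ALTER SYSTEM SET standby_pdb_source_file_directory =
       '/u01/app/oracle/stby_src_files' SCOPE = BOTH;
-- For PDB creation as a clone through a database link:
SQL> ALTER SYSTEM SET standby_pdb_source_file_dblink = 'clone_link' SCOPE = BOTH;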
PDB-Level Parallel Statement Queuing (18c)
Possible issues with parallel statement queuing in a PDB: not enough parallel servers available, or parallel statements queued for a long time. A PDB DBA can make parallel statement queuing work just as it does in a non-CDB:
- Disable parallel statement queuing at the CDB level by setting PARALLEL_SERVERS_TARGET = 0.
- Set the PARALLEL_SERVERS_TARGET initialization parameter for individual PDBs.
- Kill a runaway SQL operation:
SQL> ALTER SYSTEM CANCEL SQL '272,31460';
- Dequeue a parallel statement:
SQL> EXEC dbms_resource_manager.dequeue_parallel_statement()
- Define the action taken when dequeuing with the PQ_TIMEOUT_ACTION plan directive.
Parallel statement queuing requires PARALLEL_DEGREE_POLICY = AUTO | ADAPTIVE. In Oracle Database 12c, a DBA can set PARALLEL_SERVERS_TARGET at the CDB level. In Oracle Database 18c, the DBA can set PARALLEL_SERVERS_TARGET at the PDB level once parallel statement queuing is disabled at the CDB level: with PARALLEL_SERVERS_TARGET set to 0 at the CDB level, parallel statements queue at the PDB level, based on the number of active parallel servers used by that PDB and the PDB's PARALLEL_SERVERS_TARGET. In Oracle Database 18c, the DBA can also use the ALTER SYSTEM CANCEL SQL command to kill a SQL operation in another session that is consuming excessive resources, including parallel servers, in either the CDB root or a PDB. The identifier consists of the session ID and serial number of the session running the statement, optionally followed by the instance ID (if the session is connected to an instance of a RAC database) and the SQL ID. The canceled session receives "ORA-01013: user requested cancel of current operation". In Oracle Database 12c, if a parallel statement has been queued for a long time, the DBA can dequeue it by using the DBMS_RESOURCE_MANAGER.DEQUEUE_PARALLEL_STATEMENT procedure. The DBA can also set the PARALLEL_STMT_CRITICAL plan directive to BYPASS_QUEUE, so that parallel statements from that consumer group are not queued; the default is FALSE, which means parallel statements are eligible for queuing.
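A minimal sketch of the two-step configuration described above; the value 64 is an arbitrary illustration.
-- In the CDB root: disable CDB-level parallel statement queuing
SQL> ALTER SYSTEM SET parallel_servers_target = 0;
-- In each PDB: size the queue threshold for that PDB
SQL> ALTER SESSION SET CONTAINER = pdb1;
SQL> ALTER SYSTEM SET parallel_servers_target = 64;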
Using DBCA to Clone PDBs
- Clones the PDB in hot mode
- Creates the data files directory for the new PDB
- Opens the new PDB
In Oracle Database 12c, DBCA enables you to create a new PDB, but only as a clone of the CDB seed. In Oracle Database 18c, DBCA enables you to create a new PDB as a clone of an existing PDB, not necessarily from the CDB seed.
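A sketch of what this can look like in silent mode. The -createPDBFrom PDB and -sourcePDB option names are assumptions about the 18c DBCA createPluggableDatabase syntax; verify them with dbca -createPluggableDatabase -help before use.
# Hypothetical silent-mode clone of an existing PDB (option names assumed)
$ dbca -silent -createPluggableDatabase \
    -sourceDB cdb1 \
    -pdbName pdb2 \
    -createPDBFrom PDB \
    -sourcePDB pdb1 \
    -pdbDatafileDestination /u02/oradata/CDB1/pdb2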
Simplified Image-Based Oracle Database Installation
Starting with Oracle Database 18c, the Oracle Database software is available as an image file for download and installation. Using image-based installation, you can install and upgrade Oracle Database for single-instance and cluster configurations. Extract the image software into the directory where you want your Oracle home to be located, and then run runInstaller to start the Oracle Database installation.
This installation method streamlines the installation process and supports automation of large-scale custom deployments. You can also use it to deploy customized images, after you patch the base-release software with the necessary Release Updates (RUs) or Release Update Revisions (RURs). You must extract the image software (db_home.zip) into the directory where you want your Oracle Database home to be located, and then run the runInstaller script to start the installation and configuration. Oracle recommends that the Oracle home directory path you create complies with the Oracle Optimal Flexible Architecture recommendations.
$ mkdir -p /u01/app/oracle/product/18.0.0/dbhome_1
$ chown oracle:oinstall /u01/app/oracle/product/18.0.0/dbhome_1
$ cd /u01/app/oracle/product/18.0.0/dbhome_1
$ unzip -q /tmp/db_home.zip
$ ./runInstaller
Summary
In this lesson, you should have learned how to:
- Manage a CDB fleet
- Manage PDB snapshots
- Use a dynamic container map
- Explain lockdown profile inheritance
- Describe refreshable copy PDB switchover
- Identify the parameters used when instantiating a PDB on a standby
- Enable parallel statement queuing at the PDB level
- Use DBCA to clone PDBs
- Use simplified image-based Oracle Database installation
1. Introduction
2. Understand Multitenant Enhancements
3. Discuss Security Management Upgrades
4. Understand RMAN Upgrades
5. Explain General Database Enhancements
6. Understand Performance Enhancements
7. Conclusion
Objectives
After completing this module, you should be able to:
- Create schema-only accounts
- Isolate a new PDB keystore
- Convert a PDB to run in isolated or united mode
- Migrate a PDB keystore between keystore types
- Create user-defined TDE master keys
- Protect fixed-user database link passwords
- Export and import fixed-user database links with encrypted passwords
- Configure encryption of sensitive data in Database Replay files
- Perform Database Replay capture and replay in a database with Database Vault
- Explain Enterprise Users integration with Active Directory
Schema-Only Accounts
Ensure that a user cannot log in to the instance:
- Enforce data access through the application.
- Secure schema objects and prevent objects from being dropped by the connected schema.
- Use the NO AUTHENTICATION clause; it can later be replaced by IDENTIFIED BY VALUES or a regular password.
- A schema-only account cannot be granted system administrative privileges or used in database links.
Application designers may want to create accounts that contain the application data dictionary but are not allowed to log in to the instance. This can be used to enforce data access through the application, separation of duties at the application level, and other security mechanisms. Utility accounts can also be created but remain inaccessible, with login denied except under controlled situations. Through Oracle Database 12c, DBAs created accounts that never, or only rarely, needed to log in to the instance; nevertheless, all of these accounts had default passwords and requirements to rotate them. In Oracle Database 18c, an account can be created with the NO AUTHENTICATION clause to ensure that it is not permitted to log in. Removing the password, and with it the ability to log in, essentially leaves just a schema. The schema account can be altered to allow login and can then have the password removed again; the ALTER USER statement is used to disable or re-enable the login capability. The DBA_USERS view has a new column, AUTHENTICATION_TYPE, which displays NONE when NO AUTHENTICATION is set and PASSWORD when a password is set.
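A minimal sketch of the lifecycle described above; the account name, tablespace, and privileges are illustrative.
-- Create a schema-only account to own application objects
SQL> CREATE USER app_owner NO AUTHENTICATION
       DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
SQL> GRANT CREATE TABLE, CREATE VIEW TO app_owner;
-- Temporarily allow login (for example, during a maintenance window) …
SQL> ALTER USER app_owner IDENTIFIED BY "StrongPwd#18";
-- … then return the account to schema-only status
SQL> ALTER USER app_owner NO AUTHENTICATION;
SQL> SELECT username, authentication_type FROM dba_users
     WHERE  username = 'APP_OWNER';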
Encrypting Data Using Transparent Data Encryption
TDE encrypts sensitive data (for example, a credit card number column in an EMP table) into cipher text on disk and decrypts it into clear text for authorized access, using per-table or per-tablespace keys that are themselves protected by a master encryption key stored in a keystore outside the database.
Transparent Data Encryption (TDE) provides easy-to-use protection for your data without requiring changes to your applications. TDE allows you to encrypt sensitive data in individual columns or entire tablespaces without having to manage encryption keys. TDE does not affect access controls, which are configured using database roles, secure application roles, system and object privileges, views, Virtual Private Database (VPD), Oracle Database Vault, and Oracle Label Security: any application or user that previously had access to a table still has access to an identical, encrypted table. TDE is designed to protect data in storage but does not replace proper access control. TDE is transparent to existing applications; encryption and decryption occur at different levels depending on whether it is tablespace-level or column-level encryption, but in either case encrypted values are not displayed and are not handled by the application. TDE eliminates the ability of anyone with direct access to the data files to reach the data by circumventing the database access control mechanisms: even users with access to the data files at the operating system level cannot see the data unencrypted. TDE stores the master key outside the database in an external security module, minimizing the possibility of both personally identifiable information (PII) and encryption keys being compromised, and decrypts the data only after the database access mechanisms have been satisfied. The default external security module is a software keystore, which holds the master encryption key used to encrypt and decrypt the table keys and tablespace keys (for example, for tablespaces TBS_HR and TBS_APPS).
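For reference, a minimal sketch of the two encryption granularities mentioned above; the names are illustrative, and the tablespace example assumes OMF (DB_CREATE_FILE_DEST) is configured for the bare DATAFILE SIZE clause.
-- Column-level TDE: encrypt only the sensitive column
SQL> CREATE TABLE emp (
       id   NUMBER,
       name VARCHAR2(30),
       ccn  VARCHAR2(20) ENCRYPT USING 'AES256');
-- Tablespace-level TDE: everything stored in the tablespace is encrypted
SQL> CREATE TABLESPACE tbs_hr
       DATAFILE SIZE 100M
       ENCRYPTION USING 'AES256' ENCRYPT;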
Managing the Keystore in a CDB and PDBs (12c)
- There is one single keystore for the CDB and all PDBs.
- There is one master key per PDB, used to encrypt that PDB's data.
- The master key must be transported from the source database keystore to the target database keystore when a PDB is moved from one host to another.
In a multitenant container database (CDB), the root container and each pluggable database (PDB) have their own master key used to encrypt data in that container, all of them stored in the common single keystore. The master keys are stored in a PKCS#12 software keystore or a PKCS#11-based HSM, outside the database. For the database to use TDE, the keystore must exist. To create a software keystore and a master key: create a directory to hold the keystore, as defined by default in V$ENCRYPTION_WALLET.WRL_PARAMETER, accessible to the Oracle software owner. If you plan to define another location for the keystore, add an entry to the $ORACLE_HOME/network/admin/sqlnet.ora file and create the appropriate directory:
ENCRYPTION_WALLET_LOCATION =
 (SOURCE = (METHOD = FILE)
  (METHOD_DATA =
   (DIRECTORY = /u01/app/oracle/other_admin_dir/orcl/wallet)))
Managing the Keystore in a CDB and PDBs (18c)
- There is still one single keystore for the CDB, and optionally one keystore per PDB.
- There is still one master key per PDB, used to encrypt that PDB's data, stored in the PDB's keystore.
- Modes of operation:
  - United mode: PDB keys are stored in the single CDB root keystore.
  - Isolated mode: PDB keys are stored in the PDB's own keystore.
  - Mixed mode: some PDBs use united mode, some use isolated mode.
In Oracle Database 12c, the multitenant architecture was mainly focused on supporting database consolidation. In Oracle Database 18c, it continues to support consolidation, but the focus is on independent, isolated PDB administration. To support this, separate keystores for each PDB are now supported: providing a PDB with its own keystore is called "isolated mode", and independent keystores allow PDBs to be managed independently of each other. The shared keystore mode provided with Oracle Database 12c is now called "united mode". Both modes can be used at the same time in a single multitenant environment, with some PDBs sharing a common keystore in united mode and others having their own independent keystores in isolated mode. Each PDB running in isolated mode within a CDB manages its own keystore. Isolated mode allows a tenant to manage its TDE keys independently and supports the requirement for a PDB to use its own independent keystore password. The goal is to let the customer decide how the keys of a given PDB are protected: either with the independent password of an isolated keystore, or with the password of the united keystore.
Keystore Management Changes for PDBs
PDBs can optionally have their own keystore, allowing tenants to manage their own keys.
Define the shared root location for the CDB root and PDB keystores:
SQL> ALTER SYSTEM SET wallet_root = '/u01/app/oracle/admin/ORCL/tde_wallet' SCOPE = SPFILE;
Define the default keystore type for each future isolated PDB, and then a different type in each isolated PDB if necessary:
SQL> ALTER SYSTEM SET tde_configuration = 'KEYSTORE_CONFIGURATION=FILE';
Keystore paths under WALLET_ROOT:
- United: WALLET_ROOT/<component>/ewallet.p12 — for example, the CDB root and PDBA use /u01/app/oracle/admin/ORCL/tde_wallet/tde/ewallet.p12
- Isolated: WALLET_ROOT/<pdb_guid>/<component>/ewallet.p12 — for example, PDBB and PDBC each use /u01/app/oracle/admin/ORCL/tde_wallet/<pdb_guid>/tde/ewallet.p12
In Oracle Database 12c, the ENCRYPTION_WALLET_LOCATION parameter in $ORACLE_HOME/network/admin/sqlnet.ora defines the path to the united keystore. In Oracle Database 18c, no isolated keystore can be used unless the new WALLET_ROOT initialization parameter is set, replacing ENCRYPTION_WALLET_LOCATION. WALLET_ROOT specifies the path to the root of a directory tree containing a subdirectory for each PDB GUID, under which a directory structure stores the various keystores associated with features such as TDE, EUS, and SSL. A new column, KEYSTORE_MODE, is added to the V$ENCRYPTION_WALLET view, with values NONE, ISOLATED, and UNITED. If all PDBs use united mode, you can still create the CDB keystore without setting WALLET_ROOT:
SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/u01/app/oracle/admin/ORCL/tde_wallet/' IDENTIFIED BY <password>;
All of the commands allowed only in the CDB root through Oracle Database 12c are supported in Oracle Database 18c at the PDB level, with the understanding that the ADMINISTER KEY MANAGEMENT privilege first needs to be granted to a newly created security officer for the PDB.
Defining the Keystore Type
Values allowed for the keystore type:
- FILE
- OKV (Oracle Key Vault)
- HSM (Hardware Security Module)
- FILE|OKV: reverse migration from OKV to FILE has occurred
- FILE|HSM: reverse migration from HSM to FILE has occurred
- OKV|FILE: migration from FILE to OKV has occurred
- HSM|FILE: migration from FILE to HSM has occurred
The per-PDB dynamic initialization parameter TDE_CONFIGURATION takes an attribute-value list with two attributes:
- KEYSTORE_CONFIGURATION: takes the value FILE or one of the values listed above.
- CONTAINER: specifies the PDB. This attribute can be specified only when setting the parameter in the CDB root, when performing crash recovery or media recovery and the database is in the MOUNTED state.
If the control file is lost, it may be necessary to run an ALTER SYSTEM command in the CDB root to set TDE_CONFIGURATION appropriately for each PDB. Because the command is run in the CDB root, the PDB name is provided via the CONTAINER attribute, for example:
SQL> STARTUP MOUNT
SQL> ALTER SYSTEM SET tde_configuration = 'CONTAINER=pdb1;KEYSTORE_CONFIGURATION=FILE' SCOPE = MEMORY;
This configures PDB1 to run in isolated mode using its own keystore.
Isolating a PDB Keystore
1. Create and open the CDB root keystore:
SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE IDENTIFIED BY <united_keystore_pass>;
2. Connect to the newly created PDB as the PDB security administrator to create the PDB keystore, open it, and create the TDE PDB key in it:
SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE IDENTIFIED BY <isolated_keystore_pass>;
SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY <isolated_keystore_pass>;
SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY <isolated_keystore_pass> WITH BACKUP;
The PDB keystore is created under WALLET_ROOT/<pdb_guid>/tde/ewallet.p12.
In the case of a newly created PDB, the ADMINISTER KEY MANAGEMENT privilege needs to be granted to a newly created local user in the PDB, who acts as the security officer for the new PDB. This security officer is assumed to have the password of the united keystore, because that password is required to gain access to the TDE master key; note that knowledge of this password does not allow the user to perform an ADMINISTER KEY MANAGEMENT UNITE KEYSTORE operation, which needs additional privilege scope. The PDB security officer can then invoke the ADMINISTER KEY MANAGEMENT CREATE KEYSTORE command, which creates an isolated keystore for the PDB and automatically configures the PDB's keystore to run in isolated mode. Note that this command does not take a keystore location: the location is defined by the WALLET_ROOT parameter. In the V$ENCRYPTION_WALLET view, the KEYSTORE_MODE column shows NONE for the CDB root container and ISOLATED for the isolated PDB.
Converting a PDB to Run in Isolated Mode
1. In the CDB root, create a common user to act as the security officer and grant that user the ADMINISTER KEY MANAGEMENT privilege commonly.
2. Connect to the PDB as the security officer and create the isolated keystore for the PDB:
SQL> ADMINISTER KEY MANAGEMENT ISOLATE KEYSTORE
     IDENTIFIED BY <isolated_keystore_password>
     FROM ROOT KEYSTORE IDENTIFIED BY [EXTERNAL STORE | <united_keystore_password>]
     WITH BACKUP;
If you want to convert a PDB to run in isolated mode, the ADMINISTER KEY MANAGEMENT privilege needs to be granted commonly to a newly created common user who will act as the security officer for the PDB; the security officer for each PDB then manages their own keystore. After logging in to the PDB as the security officer, execute the ADMINISTER KEY MANAGEMENT ISOLATE KEYSTORE command to isolate the key of the PDB into a separate isolated keystore. The isolated keystore is created by this command, with its own password, under WALLET_ROOT/<pdb_guid>/tde/ewallet.p12, and the PDB's TDE key is moved out of the united keystore (WALLET_ROOT/tde/ewallet.p12). All of the previously active (historical) master keys associated with the PDB are moved to the isolated keystore. V$ENCRYPTION_WALLET then shows KEYSTORE_MODE = ISOLATED for the PDB.
Converting a PDB to Run in United Mode
1. In the CDB root, the security officer of the CDB must exist and be granted the ADMINISTER KEY MANAGEMENT privilege commonly.
2. Connect to the PDB as the security officer and unite the TDE PDB key with those of the CDB root:
SQL> ADMINISTER KEY MANAGEMENT UNITE KEYSTORE
     IDENTIFIED BY <isolated_keystore_password>
     WITH ROOT KEYSTORE IDENTIFIED BY [EXTERNAL STORE | <united_keystore_password>]
     [WITH BACKUP [USING <backup_id>]];
If a PDB no longer needs to manage its own separate keystore in isolated mode, the security officer can unite its keystore with that of the CDB root and let the security officer of the CDB root administer its keys. The PDB security officer, a common user with the ADMINISTER KEY MANAGEMENT privilege granted commonly, logs in to the PDB and issues the ADMINISTER KEY MANAGEMENT UNITE KEYSTORE command to unite the keys of the PDB with those of the CDB root. When the keystore of a PDB is united with that of the CDB root, all of the previously active (historical) master keys associated with the PDB are moved from the PDB keystore (WALLET_ROOT/<pdb_guid>/tde/ewallet.p12) to the keystore of the CDB root (WALLET_ROOT/tde/ewallet.p12). When V$ENCRYPTION_WALLET is queried from the united PDB, which is now configured to use the CDB root keystore, the KEYSTORE_MODE column shows UNITED.
Migrating a PDB Between Keystore Types
To migrate a PDB running in isolated mode from using a wallet as its keystore to using Oracle Key Vault:
1. Upload the TDE encryption keys from the isolated keystore to Oracle Key Vault, using a utility such as the okvutil upload command.
2. Set the TDE_CONFIGURATION parameter of the PDB to the appropriate value:
SQL> ALTER SYSTEM SET tde_configuration = 'KEYSTORE_CONFIGURATION=OKV';
Refer to the following Oracle documentation:
- Oracle Database Advanced Security Guide 18c, chapter "Managing the Keystore and the Master Encryption Key," section "Migration of Keystores to and from Oracle Key Vault"
- Oracle Key Vault Administrator's Guide 12c Release 2 (12.2), chapter "Migrating an Existing TDE Wallet to Oracle Key Vault" (Oracle Key Vault Use Case Scenarios)
- Oracle Key Vault Administrator's Guide 12c Release 2 (12.2), chapter "Enrolling Endpoints for Oracle Key Vault" (okvutil upload command)
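A sketch of the upload step from an enrolled endpoint; the wallet location, virtual-wallet group name, and option spellings are assumptions based on the okvutil upload usage in the Key Vault documentation cited above, and should be adapted and verified for your environment.
# Upload the PDB's isolated TDE wallet to an Oracle Key Vault virtual wallet (options assumed)
$ okvutil upload -t WALLET \
    -l /u01/app/oracle/admin/ORCL/tde_wallet/<pdb_guid>/tde \
    -g "ORCL_PDB1_WALLET" -v 2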
Creating Your Own TDE Master Encryption Key
Create your own TDE master encryption key for later activation by providing raw binary data, then activate it:
SQL> ADMINISTER KEY MANAGEMENT CREATE KEY '<mk_hex_value>' USING ALGORITHM 'SEED128' FORCE KEYSTORE IDENTIFIED BY "WELcome_12" WITH BACKUP;
SQL> ADMINISTER KEY MANAGEMENT USE KEY '<key_id>' IDENTIFIED BY "WELcome_12" WITH BACKUP;
Or create and activate your TDE master encryption key in a single statement:
SQL> ADMINISTER KEY MANAGEMENT SET KEY '<mkid>:<mk_hex_value>' USING ALGORITHM 'SEED128' IDENTIFIED BY "WELcome_12" WITH BACKUP;
This capability is needed by Database Cloud Services to support integration with the Key Management service: instead of requiring that TDE master encryption keys always be generated in the database, it supports using master keys generated elsewhere. The ADMINISTER KEY MANAGEMENT command lets you either SET your own TDE master encryption key, creating and activating it within a single statement, or CREATE the key for later use without activating it yet. To activate a created key, first find it in the V$ENCRYPTION_KEYS view and then use the USE KEY clause of the same command. Define the values for the key as follows:
- MKID, the master encryption key ID, is a 16-byte hex-encoded value that you can create or have Oracle Database generate. If you omit this value, Oracle Database generates it.
- MK, the master encryption key, is a hex-encoded value that you can create or have Oracle Database generate: 32 bytes for the AES256, ARIA256, and GOST256 algorithms, or 16 bytes for the SEED128 algorithm. The default algorithm is AES256.
If you omit both the MKID and MK values, Oracle Database generates both for you. To complete the operation, the keystore must be open; it can be temporarily opened with the FORCE KEYSTORE clause.
Protecting Fixed-User Database Links Obfuscated Passwords
How to prevent an intruder from decrypting an obfuscated database link password? Passwords for DB links are stored obfuscated in the database. Passwords for DB links are not exported, being replaced with 'x'. Set the COMPATIBLE initialization parameter to the 18c release value. Open the CDB root keystore and PDB isolated keystores if necessary. Enable credentials encryption in the dictionary: at CDB root or PDB level: Display the enforcement status of the credentials encryption: Display the usability of encrypted passwords of database links: 12c 18c In Oracle Database 12c, fixed-user database link passwords are obfuscated in the database. Hackers find ways to deobfuscate the passwords. In Oracle Database 18c, you can have the DB link passwords be replaced with 'x' in the dump file by enabling the credentials encryption in the dictionary of the CDB root and PDBs. If the operation is performed in the CDB root, the CDB root keystore is required and opened. If the operation is performed in a PDB and if the PDB works in isolated mode, the PDB keystore is required and opened. To perform this operation, the SYSKM privilege is required. If you need to disable the credentials encryption, use the following statement: SQL> ALTER DATABASE DICTIONARY DELETE CREDENTIALS KEY; SQL> ALTER DATABASE DICTIONARY ENCRYPT CREDENTIALS; DICTIONARY_CREDENTIALS_ENCRYPT ENFORCEMENT = ENABLED | DISABLED DBA_DB_LINKS VALID = YES | NO Oracle Database 18c: New Features for Administrators
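A minimal end-to-end sketch, run as a user with the SYSKM privilege, using the dictionary views referenced above to verify the result:
SQL> ALTER DATABASE DICTIONARY ENCRYPT CREDENTIALS;
SQL> SELECT enforcement FROM dictionary_credentials_encrypt;
SQL> SELECT db_link, valid FROM dba_db_links;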
45
Importing Fixed-User Database Links Encrypted Passwords
In the dump file of Data Pump export: Obfuscated values for DB link passwords Passwords not protected Unless exported with ENCRYPTION_PWD_PROMPT=YES If credentials encryption enabled in the dictionary: Invalid value for DB link password in the dump file Warning message during export and import Reset password for the DB link after import 12c 18c In Oracle Database 12c, because fixed-user database link passwords are obfuscated in the database, Data Pump export exports the database link passwords with the obfuscated value. In this case, Oracle recommends that you set the ENCRYPTION_PASSWORD parameter on the expdp command so that the obfuscated database link passwords are encrypted in the dump file. To further increase security, Oracle recommends that you set the ENCRYPTION_PWD_PROMPT parameter to YES so that the password can be entered interactively from a prompt, instead of being echoed on the screen with the ENCRYPTION_PASSWORD parameter. In Oracle Database 18c, if you enabled the encryption of credentials in the database, a Data Pump export operation stores an invalid password for the database link password in the dump file. A warning message during the export and import operations tells you that after the import, the password for the database link has to be reset. If you do not reset the password of the imported database link, the following error appears when you attempt to use it during a query: ORA-28449: cannot use an invalidated database link $ expdp … ORA-39395: Warning: object SYSTEM.TEST requires password reset after import SQL> ALTER DATABASE LINK lk1 CONNECT TO test IDENTIFIED BY <reset_password>; Oracle Database 18c: New Features for Administrators
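For the 12c recommendation above, a hedged expdp example (the directory and dump file names are hypothetical); with ENCRYPTION_PWD_PROMPT=YES, the encryption password is requested interactively and not echoed on the screen:
$ expdp system FULL=YES DIRECTORY=dp_dir DUMPFILE=full.dmp ENCRYPTION_PWD_PROMPT=YES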
46
DB Replay: The Big Picture
Postchange test system Prechange production system DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE Clients/app servers 2 Capture directory Replay system In capture files, clear byte strings for: Connection SQL text Bind values Shadow capture file 1 Shadow capture file Process capture files Because Oracle Database 11g manages system changes, a significant benefit is the added confidence to the business in the success of performing changes. The record-and-replay functionality offers an accurate method to test the impact of a variety of system changes including database upgrade, configuration changes, and hardware upgrade. A useful application of Database Replay is to test the performance of a new server configuration. Consider a customer who is utilizing a single instance database and wants to move to a RAC setup. The customer records the workload of an interesting period and then sets up a RAC test system for replay. During replay, the customer is able to monitor the performance benefit of the new configuration by comparing the performance to the recorded system. This can also help convince the customer to move to a RAC configuration after seeing the benefits of using the Database Replay functionality. Another application is debugging. You can record and replay sessions emulating an environment to make bugs more reproducible. Manageability feature testing is another benefit. Self-managing and self-healing systems need to implement this advice automatically (“autonomic computing model”). Multiple replay iterations allow testing and fine-tuning of the control strategies’ effectiveness and stability. The database administrator, or a user with special privileges granted by the DBA, initiates the record-and-replay cycle and has full control over the entire procedure. Shadow capture file Test system with changes Production system Shadow capture file Database restore Database backup 3 Production database DBMS_WORKLOAD_REPLAY.START_REPLAY DBMS_WORKLOAD_CAPTURE.START_CAPTURE $ wrc userid=system password=oracle replaydir=/dbreplay DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE Oracle Database 18c: New Features for Administrators
47
Encryption of Sensitive Data in Database Replay Files
Protect data in capture files with TDE encryption. The keystore is used to: Store the oracle.rat.database_replay.encryption user password Retrieve the oracle.rat.database_replay.encryption user password during workload capture, process, and replay On the server side, the DB Replay user password is set before the workload capture in the keystore. On the client side, the DB Replay client password is set in the SSO wallet. The two passwords must match. Before the whole process starts, the DBA must set the password for the DB Replay user (oracle.rat.database_replay.encryption identifier) in the keystore. The DB Replay user password is then mapped to an encryption key and stored in the keystore. During the capture, the oracle.rat.database_replay.encryption password is retrieved from the keystore and used to encrypt the sensitive fields. This encryption key is the first-level encryption key used to generate a second-level encryption key for each capture file. The second-level encryption key is encrypted by the first-level encryption key and saved in the capture file header. The second-level encryption key is applied to all sensitive fields in capture files, such as database connection strings, SQL text, and SQL bind values. During the process and replay, the data encrypted in the capture files is decrypted. a) During the process and replay, the oracle.rat.database_replay.encryption password is verified against the keystore to see if it matches the one used during the workload capture. b) If the verification is successful, the password is mapped to the first-level encryption key, which subsequently is used to recover the second-level encryption key for each capture. c) Once the second-level encryption key is available, all sensitive fields are decrypted. Capture file key Capture file key in header 5 Capture file 1 Capture file 1 Encrypts data PROCESS/REPLAY 3 4 scott/tiger hr/xxyyy SELECT c1 FROM t1; SELECT * FROM t2 WHERE c1=:b1; Retrieves client password Generates a second-level encryption key for each file Decrypts data cwallet.sso 6 wrc 7 Retrieves user password Set password for Database Replay user oracle.rat.database_replay.encryption in keystore User password mapped with first-level encryption key 1 Keystore CAPTURE 2 Oracle Database 18c: New Features for Administrators
48
Capture Setup for DB Replay
DBA_WORKLOAD_CAPTURES ENCRYPTION = AES256 Set a password for the Database Replay user oracle.rat.database_replay.encryption in the keystore. Start the capture by using an encryption algorithm. Stop the capture. SQL> ADMINISTER KEY MANAGEMENT ADD SECRET '<replaypass>' FOR CLIENT 'oracle.rat.database_replay.encryption' IDENTIFIED BY <pass> WITH BACKUP; Before the whole process can encrypt sensitive data, set the password for the DB Replay user (oracle.rat.database_replay.encryption identifier) in the keystore. To delete the secret password from the keystore, use the ADMINISTER KEY MANAGEMENT DELETE SECRET FOR CLIENT 'oracle.rat.database_replay.encryption' command. Then when starting the capture, specify the encryption algorithm used to encrypt the data in the workload capture files by using the new ENCRYPTION parameter of the DBMS_WORKLOAD_CAPTURE.START_CAPTURE procedure: NULL: Capture files are not encrypted (default). AES128: Capture files are encrypted using AES128. AES192: Capture files are encrypted using AES192. AES256: Capture files are encrypted using AES256. Stop the capture when workload is sufficient for testing. You can encrypt data in existing capture files by using the DBMS_WORKLOAD_CAPTURE.ENCRYPT_CAPTURE procedure: SQL> exec DBMS_WORKLOAD_CAPTURE.ENCRYPT_CAPTURE(- SRC_DIR => 'OLTP', DST_DIR => 'OLTP_ENCRYPTED') You can also decrypt data in capture files by using the DBMS_WORKLOAD_CAPTURE.DECRYPT_CAPTURE procedure: SQL> exec DBMS_WORKLOAD_CAPTURE.DECRYPT_CAPTURE(- SRC_DIR => 'OLTP_ENCRYPTED', DST_DIR => 'OLTP_DECRYPTED') SQL> exec DBMS_WORKLOAD_CAPTURE.START_CAPTURE( NAME => 'OLTP_peak', DIR => 'OLTP', ENCRYPTION => 'AES256') SQL> exec DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE() Oracle Database 18c: New Features for Administrators
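To confirm that a finished capture was encrypted, as the DBA_WORKLOAD_CAPTURES reference above suggests, a simple check might be:
SQL> SELECT id, name, encryption FROM dba_workload_captures;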
49
Process and Replay Setup for DB Replay – Phase 1
Process the capture after moving the capture files to the testing server environment. Initialize the replay after setting various replay parameters. Prepare the replay to start. SQL> exec DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE(capture_dir => 'OLTP') SQL> exec DBMS_WORKLOAD_REPLAY.INITIALIZE_REPLAY(replay_name => 'R', replay_dir => 'OLTP') Process the capture. Initialize the replay. Prepare the replay to start. During the three steps, the keystore must be open. The password for the DB Replay user (oracle.rat.database_replay.encryption identifier) is retrieved and verified in the keystore. SQL> exec DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY () Oracle Database 18c: New Features for Administrators
50
Process and Replay Setup for DB Replay – Phase 2
On the client side, set up a client-side keystore including the oracle.rat.database_replay.encryption client password. Start the replay clients. Start replaying for the clients waiting in step 8. $ mkdir /tmp/replay_encrypt_cwallet $ mkstore -wrl /tmp/replay_encrypt_cwallet -create $ mkstore -wrl /tmp/replay_encrypt_cwallet -createEntry 'oracle.rat.database_replay.encryption' "replaypass" On the client side, the password for the encrypted capture is retrieved from a client-side SSO wallet on disk. The wrc replay client retrieves the password by the identifier oracle.rat.database_replay.encryption. This ensures the safety of the user password without compromising automation. Set up a client-side wallet including the same oracle.rat.database_replay.encryption client password defined in the keystore of the production database where the capture was executed. Start as many wrc replay clients as required to replay the capture workload. The client retrieves the password for the oracle.rat.database_replay.encryption from the client-side SSO wallet created in step 7. In the wrc command line, specify the directory where the client-side SSO wallet was created by using the WALLETDIR parameter. Start the replay. $ wrc REPLAYDIR=/tmp/dbreplay USERID=system WALLETDIR=/tmp/replay_encrypt_cwallet Workload Replay Client: … Wait for the replay to start (21:47:01) $ wrc REPLAYDIR=/tmp/dbreplay USERID=system WALLETDIR=/tmp/replay_encrypt_cwallet Workload Replay Client: … Wait for the replay to start (21:47:01) SQL> exec DBMS_WORKLOAD_REPLAY.START_REPLAY () Oracle Database 18c: New Features for Administrators
51
Oracle Database Vault: Privileged User Controls
Database DBA is blocked from viewing HR data: Compliance and protection from insiders HR App Owner is blocked from viewing FIN data: Eliminates security risks from server consolidation SELECT * FROM HR.EMP DBA Oracle Database Vault (DB Vault) helps customers control privileged user access to sensitive application data stored in the database. The slide shows how DB Vault realms prevent privileged database users and even privileged applications from accessing data outside their authorization. Realms can be placed around a single table or an entire application. Once in place, they prevent users with privileges such as the DBA role from accessing data protected by the realm. Interestingly enough, many applications today also have privileged accounts. When applications are consolidated, it can be advantageous to put realms around the individual applications to prevent, as an example, an application owner from “peeking over the fence” at the contents of another application, perhaps containing sensitive financial data. HR HR Realm HR App FIN FIN Realm FIN App Oracle Database 18c: New Features for Administrators
52
Database Vault: Access Control Components
The following components of Database Vault provide highly configurable access control: Realms and authorization types Participant Owner Command rules Rule sets Secure application roles Factors The DBA can view the ORDERS table data. The security manager protects the OE.ORDERS table with a realm. The DBA can no longer view the ORDERS table data. SQL> SELECT order_total FROM oe.orders WHERE customer_id = 101; ORDER_TOTAL Realms: A boundary defined around a set of objects in a schema, a whole schema, multiple schemas, or roles. A realm protects the objects in it, such as tables, roles, and packages from users exercising system or object privileges, such as SELECT ANY TABLE or even from the schema owner. A realm may also have authorizations given to users or roles as participants or owners. The security manager can define which users are able to access the objects that are secured by the realm via realm authorizations and under which conditions (rule sets). In Oracle Database Vault 12c, there are two authorization types within a realm: Participant: The grantee is able to access the realm-secured objects. Owner: The grantee has all the access rights that a participant has, and can also grant privileges (if they have either GRANT ANY ROLE or were granted that privilege with the WITH ADMIN option) to others on the objects in the realm. Command rules: An ability to block the specified SQL command under a set of specific conditions (rule sets) Rule sets: A collection of rules that are evaluated for the purpose of granting access. Rule sets work with both realms and command rules to create a trusted path. Rule sets can incorporate factors to create this trusted path to allow fine grained control on realms and command rules. Examples of realms and command rules configured with rule sets: Realms can be restricted to accept only SQL from authorized users from a specific set of IP addresses. Command rules can limit sensitive commands to certain DBAs from local workstations during business hours. SQL> SELECT order_total FROM oe.orders WHERE customer_id = 101; SELECT order_total FROM oe.orders * ERROR at line 1: ORA-01031: insufficient privileges Oracle Database 18c: New Features for Administrators
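As a minimal sketch of how the security manager might protect OE.ORDERS with a realm using the DBMS_MACADM API (the realm name is illustrative; run as a user with the DV_OWNER or DV_ADMIN role, and treat this as an outline rather than a definitive configuration):
SQL> EXEC DVSYS.DBMS_MACADM.CREATE_REALM(realm_name => 'OE Orders Realm', description => 'Protects order data', enabled => DBMS_MACUTL.G_YES, audit_options => DBMS_MACUTL.G_REALM_AUDIT_FAIL);
SQL> EXEC DVSYS.DBMS_MACADM.ADD_OBJECT_TO_REALM(realm_name => 'OE Orders Realm', object_owner => 'OE', object_name => 'ORDERS', object_type => 'TABLE');
SQL> EXEC DVSYS.DBMS_MACADM.ADD_AUTH_TO_REALM(realm_name => 'OE Orders Realm', grantee => 'OE', auth_options => DBMS_MACUTL.G_REALM_AUTH_OWNER);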
53
DB Replay Capture and Replay with Database Vault
New realm authorization types to allow users to run DB Replay capture and replay: DBCAPTURE authorization type DBREPLAY authorization type Managed using Database Vault admin procedures: DVSYS.DBMS_MACADM.AUTHORIZE_DBCAPTURE DVSYS.DBMS_MACADM.UNAUTHORIZE_DBCAPTURE DVSYS.DBMS_MACADM.AUTHORIZE_DBREPLAY DVSYS.DBMS_MACADM.UNAUTHORIZE_DBREPLAY You may want to use Database Replay to evaluate the functionality and performance of any mission critical system with Database Vault. However, the execution of Database Replay on a database with Database Vault configured requires that the capture or replay user is granted appropriate access controls required by Database Vault to access data in the database. Database Vault does not rely on system and object privileges to grant access to data to users. Database Vault relies on realms with authorizations, rule sets, command rules and secure application roles to allow or deny users access to application data. In Oracle Database 18c, two new authorization types can be defined for a realm to allow capture or replay users to run captures or replays. A user is allowed to run a capture only if the user is authorized for DBCAPTURE authorization type by the Database Vault. A user is allowed to run a replay only if the user is authorized for DBREPLAY authorization type by the Database Vault. Requires DV_OWNER or DV_ADMIN role DVSYS.DBA_DV_DBCAPTURE_AUTH GRANTEE = name of the granted user DVSYS.DBA_DV_DBREPLAY_AUTH GRANTEE = name of the granted user Oracle Database 18c: New Features for Administrators
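For illustration (the grantee name RAT_ADMIN is hypothetical), authorizing a user for capture and replay and then verifying the grants might look like:
SQL> EXEC DVSYS.DBMS_MACADM.AUTHORIZE_DBCAPTURE('RAT_ADMIN');
SQL> EXEC DVSYS.DBMS_MACADM.AUTHORIZE_DBREPLAY('RAT_ADMIN');
SQL> SELECT grantee FROM DVSYS.DBA_DV_DBCAPTURE_AUTH;
SQL> SELECT grantee FROM DVSYS.DBA_DV_DBREPLAY_AUTH;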
54
Authenticating and Authorizing Users with External Directories
External directories store user credentials and authorizations in a central location (LDAP-compliant directory, such as OID, OUD, and OVD). Eases administration through centralization Enables single-point authentication Eliminates the need for client-side wallets Example: User changes job roles. Security administrator changes the user roles in Oracle Virtual Directory. No changes are made to the services that the user accesses. PROD Paul Pass role_mgr sales ORCL Paul Pass role_mgr sales Authenticating and authorizing users with external directories is an important feature of Oracle Database 12c Enterprise Edition, which allows users to be defined in an external directory rather than within the databases. Their identities remain constant throughout the enterprise. Authenticating and authorizing users with external directories addresses the user, the administrative, and the security challenges by centralizing storage and management of user-related information with Enterprise User Security (EUS) relying on Oracle Directory Services such as Oracle Internet Directory (OID), Oracle Unified Directory (OUD), and Oracle Virtual Directory (OVD). When an employee changes jobs in such an environment, the administrator needs to modify information only in one location (the directory) to make changes effective in multiple databases and systems. This centralization can substantially lower administrative costs while materially improving enterprise security. Oracle external directory DN: Paul Authentication Method: Password Password: pass_paul Authorizations: role_mgr, sales Oracle Database 18c: New Features for Administrators
55
Directory metadata repository
Architecture 12c ODS / EUS Directory metadata repository DN: Ann Authentication: Password Password: pass_ann Database : ORCL Mapping schema: user_global DN: Tom Authentication: Certificate Certificate: DN_tom ….. AD 2. Checks Ann’s authentication and authorizations for ORCL In the example in the slide, a client can submit the same connect command, whether connecting as a database user or as an enterprise user. The enterprise user has the additional benefit of allowing the use of a shared schema. The authentication process is as follows: 1. The user presents a username and password (or other credentials). 2. The directory returns the authorization token to the database. The schema is mapped from the directory information. The directory supplies the global roles for the user. Enterprise roles are defined in the directory and global roles are defined in the database (non-CDB or PDB). The mapping from enterprise roles to global roles is in the directory. The directory can supply the application context. An application context supplied from the directory is called a global context. Finally, the user is connected. If the authentication and authorization must be completed with Active Directory (AD), AD must first go through Oracle Directory Services (ODS) to get the user’s authentication and authorization. Note: Each PDB has its own metadata, such as global users, global roles, and so on. Each PDB should have its own identity in the directory. ldap.ora 3. Verifies the user and applies roles DIRECTORY_SERVERS=(oidhost:13060:13130) DIRECTORY_SERVER_TYPE = OID ORCL 1. CONNECT spfile.ora user_global: IDENTIFIED GLOBALLY LDAP_DIRECTORY_ACCESS=PASSWORD | SSL LDAP_DIRECTORY_SYSAUTH=yes Client 4. Connected. Oracle Database 18c: New Features for Administrators
56
EUS and AD AD ODS / EUS ORCL
DN: CN=analyst … Authentication : Certificate Certificate: DN_ann DN: CN=trainer … Authentication : Password Password: pass_tom DN: CN=manager … Password: pass_paul DN: CN=director … Password: pass_jean ORCL Create exclusive global schemas authenticated by: PKI certificates Passwords Kerberos tickets > CREATE USER user_ann IDENTIFIED GLOBALLY AS 'CN=analyst,OU=div1,O=oracle,C=US'; > CREATE USER user_tom IDENTIFIED GLOBALLY AS 'CN=trainer,OU=div2,O=oracle,C=US'; Create shared global schemas authenticated by: > CREATE USER global_u IDENTIFIED GLOBALLY; user_ann => CN=… user_tom => CN=… Shared schema in ORCL GLOBAL_U global_u user_ann exclusive schema in ORCL user_tom exclusive schema in ORCL AD Authentication methods and certificates of users can be centralized in the directory. A user can connect to the database in two different ways. A global exclusive schema in the database that has a one-to-one schema mapping in the directory. This method requires that the user be created in every database where the end user requires access. The following command creates a database user identified by a distinguished name. The DN is the distinguished name in the user’s PKI certificate in the user’s wallet. CREATE USER global_ann IDENTIFIED GLOBALLY AS 'CN=analyst,OU=division1, O=oracle, C=US'; A global shared schema in the database that has a shared schema mapping in the directory. Any user identified to the directory can be mapped to the shared global schema in the database. The mapped user will be authenticated by the directory and the schema mapping will provide the privileges. The following command creates the global shared schema: CREATE USER global_u IDENTIFIED GLOBALLY; No one connects directly to the GLOBAL_U schema. Any user that is mapped to the GLOBAL_U schema in the directory can connect to this schema. Oracle Database 18c: New Features for Administrators
57
CMU and AD DBA_USERS DBA_ROLES EXTERNAL_NAME Deploy and synchronize database user credentials and authorizations with ODS/EUS first. Deploy database user credentials and authorizations directly in Active Directory with Centrally Managed Users (CMU): Centralized database user authentication Centralized database access authorization 12c 18c AD With Oracle Database 18c, Centrally Managed Users (CMU) allows sites to manage database user credentials and authorizations in Active Directory directly without the need to deploy and synchronize them with EUS relying on Oracle Directory Services. Active Directory stores authentication and authorization data that is used by the database to authenticate users. CMU provides the following capabilities: Supports passwords, Kerberos, and PKI certificates AD stores user database password verifiers: Passwords use verifiers as they did before. The only difference is that the verifier for a user is not stored in the database, but in AD. AD includes Kerberos Key Distribution Center: The difference is that the user is now a global user (not authenticated externally). AD verifies client Distinguished Name (DN) and may act as Certificate Authority Supports AD account policies: For passwords: Expiration, complexity, and history For lockout: Threshold, duration and reset account lockout counter For Kerberos: Ticket timeout, clock skew between server and KDC ldap.ora Users mapping: Ann Groups mapping: G-ORCL : g_AD_u MGR-ORCL : mgr_role user_ann exclusive schema in ORCL user_ann DIRECTORY_SERVERS=(oidhost:13060:13130) DIRECTORY_SERVER_TYPE = AD Shared schema in ORCL g_AD_u g_AD_u granted mgr_role spfile.ora ORCL LDAP_DIRECTORY_ACCESS= PASSWORD | SSL | PASSWORD_XS | SSL_XS LDAP_DIRECTORY_SYSAUTH=yes Global shared or exclusive schemas authenticated by: PKI certificates Passwords Kerberos tickets mgr_role global role in ORCL Oracle Database 18c: New Features for Administrators
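A hedged sketch of the CMU mappings shown in the diagram above (all DNs are hypothetical and must match your Active Directory layout): an exclusive schema mapped to one AD user, a shared schema mapped to an AD group, and a global role mapped to an AD group:
SQL> CREATE USER user_ann IDENTIFIED GLOBALLY AS 'CN=ann,OU=users,DC=example,DC=com';
SQL> CREATE USER g_AD_u IDENTIFIED GLOBALLY AS 'CN=G-ORCL,OU=groups,DC=example,DC=com';
SQL> CREATE ROLE mgr_role IDENTIFIED GLOBALLY AS 'CN=MGR-ORCL,OU=groups,DC=example,DC=com';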
58
Choosing Between EUS and CMU
Simplified Implementation Authentication Password, Kerberos, PKI certificates Enforce directory account policies Authorization Role authorization Administrative users Shared DB schema mapping Exclusive schema mapping Enterprise Domains Current User trusted DB link Integrated with Oracle Label Security, XDB Consolidated reporting and management of data access The following key advantages will lead you to use CMU rather than EUS: Simplified centralized directory services integration with less cost and complexity Authentication in AD for password, Kerberos and PKI Map AD groups to shared database accounts and roles Map exclusive AD user to database user Support AD account policies No client update required Supports all Oracle Database clients 10g and later EUS and Oracle Directory Services authentication and authorization work as before. Oracle Database 18c: New Features for Administrators
59
Summary In this lesson, you should have learned how to:
Create schema-only accounts Isolate a new PDB keystore Convert a PDB to run in isolated or united mode Migrate PDB keystore between keystore types Create user-defined TDE master keys Protect fixed-user database link passwords Export and import fixed-user database links with encrypted passwords Configure encryption of sensitive data in Database Replay files Perform Database Replay capture and replay in a database with Database Vault Explain Enterprise users integration with Active Directory Oracle Database 18c: New Features for Administrators
60
Introduction Understand Multitenant Enhancements Discuss Security Management Upgrades Understand RMAN Upgrades Explain General Database Enhancements Understand Performance Enhancements Conclusion 1 2 3 4 5 6 7
61
Objectives After completing this lesson, you should be able to:
Reuse preplugin backups after conversion of a non-CDB to a PDB Reuse preplugin backups after plugging/relocating a PDB into another CDB Duplicate an active PDB into an existing CDB Duplicate a CDB as encrypted Duplicate a CDB as decrypted Recover a standby database from primary Oracle Database 18c: New Features for Administrators
62
Migrating a Non-CDB to a CDB
Possible methods: Data Pump (TTS or TDB or full export/import) Plugging (XML file definition with DBMS_PDB) Cloning Replication After conversion: Is it possible to recover the PDB back in time before the non-CDB was converted? Are the non-CDB backups transported with the non-CDB? Control files Redo Log files CDB1 CDB root Data files / Tempfiles PDB2 Data files / Tempfiles Create PDB2 from non-CDB ORCL impdp TTS Plug Clone Dump file XML file Data files Replication In Oracle Database 12c, there are different possible methods to migrate a non-CDB to a CDB. Whichever method is used, are the non-CDB backups transported with the non-CDB during the migration? Does Oracle Data Pump transport the non-CDB backups? You either use transportable tablespace (TTS) or full conventional export/import or full transportable database (TDB) provided that in the last one any user-defined object resides in a single user-defined tablespace. Using any of these Data Pump methods, Data Pump transports object definitions and data, but not backups. Does the plugging method transport the non-CDB backups? Generating an XML metadata file from the non-CDB to use it during the plugging step into the CDB only describes the non-CDB data files, but it does not describe the list of backups associated to the non-CDB. Does the cloning method transport the non-CDB backups? Cloning non-CDBs in a CDB requires copying the files of the non-CDB to a new location. It does not copy the backups to a recovery location. Does replication transport the non-CDB backups? The replication method replicates the data from a non-CDB to a PDB. When the PDB catches up with the non-CDB, you fail over to the PDB. Backups are not associated with the replication. Because there are no backups transported with the non-CDB into the target CDB, no restore or recovery using the old backups is possible. Even if the non-CDB backups were manually transported/copied to the target CDB, users cannot perform restore/recover operations using these backups. You had to create a new baseline backup for the CDB converted as a PDB. expdp TTS Unplug using DBMS_PDB non-CDB ORCL Datafiles Control files Redo log files Oracle Database 18c: New Features for Administrators
63
Migrating a Non-CDB and Transporting Non-CDB Backups to a CDB
Export backups metadata with DBMS_PDB.exportRmanBackup. Unplug the non-CDB using DBMS_PDB.describe. Archive the current redo log file. Transfer data files including backups to the target CDB. Plug using the XML file. Execute the noncdb_to_pdb.sql script. Open PDB. This automatically imports backups metadata into the CDB dictionary. Restore/recover the PDB with preplugin backups: Catalog the archived redo log file. Restore PDB using preplugin backups. Recover PDB using preplugin backups. CDB1 CDB root Data files / Tempfiles Archive log files Backups 7 Control files Redo Log files PDB2 Data files / Tempfiles Open PDB2 7 5 6 4 Plug as PDB2 Backups 1 XML file containing backup metadata XML file Archive log files In Oracle Database 18c, you can transport the existing backups and backed up archive log files of the non-CDB and reuse them to restore and recover the new PDB. The backups transported from the non-CDB into the PDB are called preplugin backups. Transporting the backups and backed up archive log files associated to a non-CDB before migration requires the following steps: The following new DBMS_PDB.exportRmanBackup procedure must be executed in the non-CDB opened in read/write mode. This is a mandatory step for non-CDB migration. The procedure exports all RMAN backup metadata that belongs to the non-CDB into its own dictionary. The metadata is transported along with the non-CDB during the migration. Use dbms_pdb.describe to generate an XML metadata file from the non-CDB describing the structure of the non-CDB with the list of datafiles. Archive the current redo log file required for a potential restore/recover using preplugin backups. Transfer the data files, backups, and archive log files to the target CDB. Use the XML metadata file during the plugging step to create the new PDB into the CDB. Run the ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql script to delete unnecessary metadata from the PDB SYSTEM tablespace. Open the PDB. When the PDB is open, the exported backup metadata is automatically copied into the CDB dictionary, except the current redo log file archived in step 3. Catalog the archived redo log file as one of the preplugin archived logs. Because the backups for the PDB are now available in the CDB, they can be reused to recover the PDB. 3 2 DBMS_PDB.exportRmanBackup Datafiles DBMS_PDB.describe non-CDB ORCL Datafiles Control files Redo log files Oracle Database 18c: New Features for Administrators
64
Relocating/Plugging a PDB into Another CDB
After relocating/plugging the PDB into another CDB: Is it possible to recover the PDB back in time before it was relocated/unplugged? Are the PDB backups transported with the relocated/unplugged PDB? CDB root Data files / Tempfiles Control files Redo Log files Data files / Tempfiles PDB2 Open PDB2 4 3 3 Plug In Oracle Database 12c, PDBs can be hot cloned from one CDB into another CDB by using local UNDO tablespaces. Are the PDB backups transported with the PDB during the cloning? Cloning a PDB into another CDB requires copying the files of the PDB to a new location. It does not copy the backups to a recovery location. If there are no backups transported with the PDB into the target CDB, no restore or recovery using the old backups is possible. Even if the PDB backups were manually transported/copied to the target CDB, users cannot perform restore/recover operations using these backups. You had to create a new baseline backup for PDBs relocated or plugged. XML file Datafiles Unplug using DBMS_PDB 2 CDB1 CDB root Data files / Tempfiles Archive log files Backups Redo Log files Control files PDB1 Data files / Tempfiles Close PDB1 1 Oracle Database 18c: New Features for Administrators
65
Plugging a PDB and Transporting PDB Backups to a CDB - 1
CDB root Data files / Tempfiles Export backups metadata by using DBMS_PDB.exportRmanBackup. Unplug the PDB by using DBMS_PDB.describe. Transfer the data files including backups to the target CDB. Plug using the XML file. Open PDB. This automatically imports backups metadata into the CDB dictionary. Then you can restore/recover the PDB by using the transported backups: Restore PDB using preplugin backups. Recover PDB using preplugin backups. Archive log files Backups 4 Control files Redo Log files PDB2 Data files / Tempfiles 4 Open PDB2 3 In Oracle Database 18c, you can transport the existing backups and backed up archive log files of the PDB and reuse them to restore and recover the new plugged PDB. To transport the backups and backed up archive log files associated to a PDB before replugging the PDB, perform the following steps: The following new DBMS_PDB.exportRmanBackup procedure can be executed in the PDB opened in read/write mode. This is not a mandatory step before unplugging the PDB: SQL> EXEC dbms_pdb.exportRmanBackup ('<pdb_name>') The procedure exports all RMAN backup metadata that belongs to the PDB into its own dictionary. The metadata is transported along with the PDB during the unplug operation. Unplug the PDB. Transfer the data files, backups, and archive log files to the target CDB. Plug the PDB with the COPY clause to copy the data files, backups, and backed up archive log files of the source PDB into a new directory. Open the new PDB. When the PDB is open, the exported backup metadata is automatically copied into the CDB dictionary. Because the backups for the PDB are now available in the CDB, they can be reused to recover the PDB. 3 Plug as PDB2 Backups XML file containing PDB backup metadata XML file 1 Archive log files DBMS_PDB.exportRmanBackup DBMS_PDB.describe Datafiles 2 Unplug PDB1 CDB1 PDB1 Control files Redo log files Data files / Tempfiles Oracle Database 18c: New Features for Administrators
66
Plugging a PDB and Transporting PDB Backups to a CDB - 2
Unplug the PDB with DBMS_PDB.describe. Transfer the datafiles including backups to the target CDB. Plug using the XML file. Open PDB. Catalog preplugin backups into CDB. Then you can restore/recover the PDB using the transported backups: Restore PDB using preplugin backups. Recover PDB using preplugin backups. CDB2 CDB root Data files / Tempfiles Archive log files Backups 4 Control files Redo Log files PDB2 Data files / Tempfiles 3 Open PDB2 2 If you forgot to execute the DBMS_PDB.exportRmanBackup procedure before unplugging the PDB, you can still catalog the existing backups and backed up archive log files of the plugged PDB after the plugging operation and reuse them to restore and recover the plugged PDB. If preplugin backups and archive log files are moved or new backups and archive log files were created on the source CDB after the PDB was transported, then the target CDB does not know about them. You can catalog those preplugin files: RMAN> SET PREPLUGIN CONTAINER=<pdb_name>; RMAN> CATALOG PREPLUGIN ARCHIVELOG '<archivelog>'; RMAN> CATALOG PREPLUGIN BACKUP '<backup_name>'; 2 Plug as PDB2 XML file Backups Archive log files DBMS_PDB.describe 1 Unplug PDB1 CDB1 PDB1 Control files Redo log files Data files / Tempfiles Oracle Database 18c: New Features for Administrators
67
Using PrePlugin Backups
Use the PrePlugin option to perform RMAN operations using preplugin backups. Restore a PDB from its preplugin backups cataloged in the target CDB. Recover a PDB from its preplugin backups until the datafile was plugged in. Check whether preplugin backups and archive log files are cataloged in the target CDB. Verify that cataloged preplugin backups are available on disk. RMAN> RESTORE PLUGGABLE DATABASE pdb_noncdb FROM PREPLUGIN; RMAN> RECOVER PLUGGABLE DATABASE pdb_noncdb FROM PREPLUGIN; RMAN> SET PREPLUGIN CONTAINER pdb1; RMAN> LIST PREPLUGIN BACKUP; RMAN> LIST PREPLUGIN ARCHIVELOG ALL; RMAN> LIST PREPLUGIN COPY; A restore operation from preplugin backups restores the datafiles taken before the PDB was plugged in. A recover operation using preplugin backups uses preplugin incremental backups and archived logs to recover the datafile up to the point when the datafile was plugged in. At the end of the recover operation, the datafile is checkpointed as of plugin SCN. The preplugin archivelogs are restored to the target archivelog destination by default as long as the target archivelog destination is not a fast recovery area (FRA). If the target archivelog destination is the FRA, then the user has to provide an explicit archivelog destination using the SET ARCHIVELOG DESTINATION command before executing RECOVER FROM PREPLUGIN. If preplugin metadata belongs to more than one PDB, a command that does not make clear which PDB it refers to errors out, indicating that the user should scope the PDB. The scoping of the PDB can be done by using the SET PREPLUGIN CONTAINER command. Scoping is not necessary if the user has connected to the PDB as the target. The SET PREPLUGIN CONTAINER command is necessary if you connected to the CDB root as the target. CROSSCHECK, DELETE, and CHANGE commands can use the PREPLUGIN option. The CROSSCHECK command can validate the existence of preplugin backups, archived log files, and image copies. The DELETE command can delete preplugin backups, archived log files and image copies, and also expired preplugin backups. RMAN> DELETE EXPIRED PREPLUGIN BACKUP; RMAN> CHANGE PREPLUGIN archivelog all unavailable; RMAN> CHANGE PREPLUGIN backup available; RMAN> CHANGE PREPLUGIN copy unavailable; RMAN> CROSSCHECK PREPLUGIN BACKUP; RMAN> DELETE PREPLUGIN BACKUP; Oracle Database 18c: New Features for Administrators
68
To Be Aware Of The source and destination CDBs must have COMPATIBLE set to 18.1 or higher to create/restore/recover preplugin backups. In case of plugging in a non-CDB, the non-CDB must use ARCHIVELOG mode. The target CDB does not manage preplugin backups. Use CROSSCHECK and DELETE commands to manage the preplugin backups. A RESTORE using preplugin backups can restore datafiles from one PDB only. Backups taken by the source cdb1 are visible in target cdb2 only. The target CDB does not manage the source database backups. However, there are commands to delete and crosscheck the source database backups. In one RMAN command, you cannot specify datafiles belonging to different PDBs when using preplugin backups. The CDB root must be opened to make use of preplugin backups. Backups taken by the source cdb1 are visible in target cdb2 only. For instance, a PDB can migrate from cdb1 to cdb2 and then to cdb3. The backups of the PDB taken at cdb1 are accessible by cdb2. They are not accessible by cdb3. The cdb3 can only see backups of the PDB taken by cdb2. cdb1 cdb2 cdb3 Unplug PDB1 Unplug PDB2 PDB1 Plug as PDB2 PDB2 Plug as PDB3 PDB3 Archive log files Archive log files Archive log files Backups Backups Backups Oracle Database 18c: New Features for Administrators
69
Example RMAN> SET PREPLUGIN CONTAINER pdb1;
RMAN> CATALOG PREPLUGIN ARCHIVELOG '/u03/app/…/o1_mf_1_8_dnqwm59v_.arc'; RMAN> RUN { RESTORE PLUGGABLE DATABASE pdb1 FROM PREPLUGIN; RECOVER PLUGGABLE DATABASE pdb1 FROM PREPLUGIN; } RMAN> RECOVER PLUGGABLE DATABASE pdb1; In this example, you first avoid any ambiguity about which PDB the backups belong to by scoping the PDB. Then you catalog the last archive log file created after the PDB was unplugged and the metadata exported. Then you restore and recover the PDB using preplugin backups. And finally, you run a normal media recovery after recovering from preplugin backups. Oracle Database 18c: New Features for Administrators
70
Cloning Active PDB into Another CDB Using DUPLICATE
Use the DUPLICATE command to create a copy of a PDB or subset of a PDB. Duplicate a CDB or PDBs or PDB tablespaces in active mode to a fresh auxiliary instance. Duplicate a PDB or PDB tablespaces in active mode to an existing opened CDB. Set the COMPATIBLE initialization parameter to 18.1. Clone only one PDB at a time. Set the destination CDB in RW mode. Set the REMOTE_RECOVERY_FILE_DEST initialization parameter in the destination CDB to the location where to restore foreign archive log files. 12c RMAN> DUPLICATE TARGET DATABASE TO cdb1 PLUGGABLE DATABASE pdb1b; In Oracle Database 12c, to duplicate PDBs, you must create the auxiliary instance as a CDB. To do so, start the instance with the declaration enable_pluggable_database=TRUE in the initialization parameter file. When you duplicate one or more PDBs, RMAN also duplicates the CDB root and the CDB seed. The resulting duplicate database is a fully new, functional CDB that contains the CDB root, the CDB seed, and the duplicated PDBs. In Oracle Database 18c, the destination instance acts as the auxiliary instance. An active PDB can be duplicated directly into an open CDB. The passwords for target and auxiliary connections must be the same when using active duplicate. In the auxiliary instance, define the location where to restore the foreign archive log files via the new initialization parameter, REMOTE_RECOVERY_FILE_DEST. RMAN should be connected to the CDB root of the target and auxiliary instances. Limitations Non-CDB to PDB duplication is not supported. Encryption is not supported for PDB cloning. SPFILE, NO STANDBY, FARSYNC STANDBY, LOG_FILE_NAME_CONVERT keywords are not supported. NORESUME, DB_FILE_NAME_CONVERT, SECTION SIZE, USING COMPRESSED BACKUPSET keywords are supported. 18c RMAN> DUPLICATE PLUGGABLE DATABASE pdb1 AS pdb2 FROM ACTIVE DATABASE DB_FILE_NAME_CONVERT ('cdb1', 'cdb2'); Oracle Database 18c: New Features for Administrators
71
Example: 1 To duplicate pdb1 from CDB1 into CDB2:
Set the REMOTE_RECOVERY_FILE_DEST initialization parameter in CDB2. Connect to the source (TARGET for DUPLICATE command): CDB1 Connect to the existing CDB2 that acts as the auxiliary instance: Start duplicate. SQL> ALTER SYSTEM SET REMOTE_RECOVERY_FILE_DEST='/dir_to_restore_archive_log_files'; The example shows a duplication of pdb1 from cdb1 into the existing cdb2 as pdb1. To perform this operation, connections to the source (TARGET) cdb1 and to the destination (AUXILIARY) cdb2 are required. The location where to restore the foreign archive log files in the auxiliary instance is defined via the new initialization parameter, REMOTE_RECOVERY_FILE_DEST. Then the DUPLICATE command defines that the operation is performed while the source pdb1 is opened. cdb2 needs to be opened in read/write. rman TARGET AUXILIARY CDB1 CDB2 DUPLICATE pdb1 pdb1 pdb1 RMAN> DUPLICATE PLUGGABLE DATABASE pdb1 TO cdb2 FROM ACTIVE DATABASE; Oracle Database 18c: New Features for Administrators
72
Example: 2 To duplicate pdb1 from CDB1 into CDB2:
Set the REMOTE_RECOVERY_FILE_DEST initialization parameter in CDB2. Connect to the source (TARGET for DUPLICATE command): CDB1 Connect to the existing CDB2 that acts as the auxiliary instance: Start duplicate. SQL> ALTER SYSTEM SET REMOTE_RECOVERY_FILE_DEST='/dir_to_restore_archive_log_files'; The example shows a duplication of pdb1 from cdb1 into the existing cdb2 as pdb2. To perform this operation, connections to the source (TARGET) cdb1 and to the destination (AUXILIARY) cdb2 are required. The location where to restore the foreign archive log files in the auxiliary instance is still defined via the new initialization parameter, REMOTE_RECOVERY_FILE_DEST. Then the DUPLICATE command defines that the operation is performed while the source pdb1 is opened. cdb2 needs to be opened read/write. rman TARGET AUXILIARY CDB1 CDB2 DUPLICATE pdb1 pdb1 pdb2 RMAN> DUPLICATE PLUGGABLE DATABASE pdb1 AS pdb2 TO cdb2 FROM ACTIVE DATABASE; Oracle Database 18c: New Features for Administrators
73
Duplicating On-Premise CDB as Cloud Encrypted CDB
Duplicating an on-premise CDB to the Cloud: Any newly created tablespace is encrypted in the Cloud CDB. The Cloud CDB holds a keystore because this is the default behavior on Cloud. All forms of normal duplication are compatible: Active duplication Backup-based duplication Targetless duplicate ENCRYPT_NEW_TABLESPACES = CLOUD_ONLY SQL> CREATE TABLESPACE … If you decide to migrate an on-premise CDB to the Cloud, any tablespace created in the Cloud CDB will be encrypted, even if no encryption clause is declared. Oracle Database 12c allows encryption of new user-defined tablespaces via a new ENCRYPT_NEW_TABLESPACES instance parameter. A user-defined tablespace that is created in a CDB in the Cloud is transparently encrypted with Advanced Encryption Standard 128 (AES 128) even if the ENCRYPTION clause for the SQL CREATE TABLESPACE statement is not specified, because the ENCRYPT_NEW_TABLESPACES instance parameter is set to CLOUD_ONLY by default. A user-defined tablespace that is created in an on-premise database is not transparently encrypted. Only the ENCRYPTION clause of the CREATE TABLESPACE statement determines if the tablespace is encrypted. All forms of duplication are compatible except for standby duplicate. Active duplication connects as TARGET to the source database and as AUXILIARY to the Cloud instance. Backup-based duplication without a target connection connects as CATALOG to the recovery catalog database and as AUXILIARY to the Cloud instance. RMAN uses the metadata in the recovery catalog to determine which backups or copies are required to perform the duplication. On-premise Database ORCL Database Cloud Service database ORCL No encrypted tablespaces Mandatory Encryption No ENCRYPTION clause Oracle Database 18c: New Features for Administrators
74
Duplicating On-Premise Encrypted CDB as Cloud Encrypted CDB
Duplicating an on-premise CDB with encrypted tablespaces to the Cloud: Tablespaces of the source CDB need to be decrypted. Restored tablespaces are re-encrypted in the Cloud CDB. Requires the master TDE key from the source CDB keystore Requires the source keystore to be copied and opened at the destination CDB Copy Encrypted tablespaces If the source database already contains encrypted tablespaces, the DUPLICATE must have access to the TDE master key of the source (TARGET) database because the clone instance needs to decrypt the datafiles before re-encrypting them during the restore operation. In this case, the keystore has to be copied from the on-premise CDB to the clone instance before starting the DUPLICATE and must be opened. The DUPLICATE command allows the new 'AS ENCRYPTED' clause to restore the CDB with encryption. For more information about duplicating databases to Oracle Cloud Infrastructure, refer to "Duplicating Databases to Oracle Cloud" in Oracle Database Backup and Recovery User's Guide 18c and also RMAN Duplicate from an Active Database ( 1.oraclecloud.com/Content/Database/Tasks/mig-rman-duplicate-active-database.htm#RMANDUPLICATEfromanActiveDatabase) On-premise Database ORCL ORCL Keystore Mandatory Encryption Database Cloud Service database ORCL RMAN> SET DECRYPTION WALLET OPEN IDENTIFIED BY password; RMAN> DUPLICATE DATABASE TO orcl FROM ACTIVE DATABASE AS ENCRYPTED; Oracle Database 18c: New Features for Administrators
75
Duplicating Cloud Encrypted CDB as On-Premise CDB
Tablespaces of the source CDB are necessarily encrypted. Restored tablespaces need to be decrypted to be created: Requires the TDE master key from the source CDB keystore Requires the source keystore to be copied and opened at the destination CDB Copy Encrypted tablespaces Optional Encryption ORCL Keystore On-premise Database ORCL Database Cloud Service database ORCL The source database already contains encrypted tablespaces; therefore, DUPLICATE must have access to the master key of the source (TARGET) database because the clone instance needs to decrypt the datafiles before the restore operation. In this case, the keystore has to be copied from the Cloud CDB to the clone instance before starting DUPLICATE and must be opened by using the SET DECRYPTION WALLET OPEN IDENTIFIED BY 'password' command. The DUPLICATE command uses the 'AS DECRYPTED' clause to restore the CDB without encryption. If the user does not have an Advanced Security Option (ASO) license on the on-premise side, the on-premise database cannot have TDE-encrypted tablespaces. The DUPLICATE command using the 'AS DECRYPTED' clause provides a way to get encrypted tablespaces/databases from Cloud to on-premise servers. It is important to note that it will not decrypt tablespaces that were explicitly created with encryption, using the ENCRYPTION USING clause. For more information, refer to "Duplicating an Oracle Cloud Database as an On-premise Database" in Oracle Database Backup and Recovery User's Guide 18c. RMAN> SET DECRYPTION WALLET OPEN IDENTIFIED BY password; RMAN> DUPLICATE DATABASE TO orcl FROM ACTIVE DATABASE AS DECRYPTED; Oracle Database 18c: New Features for Administrators
76
Automated Standby Synchronization from Primary
A standby database might lag behind the primary for various reasons like: Unavailability or insufficient network bandwidth between primary and standby database Unavailability of standby database Corruption/accidental deletion of archive redo data on primary Manually restore the primary controlfile on standby after use of RECOVER FROM SERVICE. RECOVER FROM SERVICE automatically rolls a standby forward: Remember all datafile names on the standby. Restart standby in nomount. Restore controlfile from primary. Mount standby database. Rename datafiles from stored standby names. Restore new datafiles to new names. Recover standby. 12c In Oracle Database 12c, the RECOVER … FROM SERVICE command refreshes the standby data files and rolls them forward to the same point-in-time as the primary. However, the standby control file still contains old SCN values, which are lower than the SCN values in the standby data files. Therefore, to complete the synchronization of the physical standby database, you must refresh the standby control file to update the SCN#. Therefore, you have to place the physical standby database in NOMOUNT mode and restore using the control file of the primary database to standby. The automation in Oracle Database 18c performs the following steps: Remember all datafile names on the standby. Restart standby in nomount. Restore controlfile from primary. Mount standby database. Rename datafiles from stored standby names. Restore new datafiles to new names. Recover standby. 18c Oracle Database 18c: New Features for Administrators
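In 18c, the rolling forward that previously required those manual steps is collapsed into a single command. A minimal sketch, assuming a TNS alias cdb1_prim for the primary (hypothetical) and RMAN connected to the mounted standby as target:
RMAN> RECOVER STANDBY DATABASE FROM SERVICE cdb1_prim;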
77
Summary In this lesson, you should have learned how to:
Reuse preplugin backups after conversion of a non-CDB to a PDB Reuse preplugin backups after plugging/relocating a PDB into another CDB Duplicate an active PDB into an existing CDB Duplicate a CDB as encrypted Duplicate a CDB as decrypted Recover a standby database from primary Oracle Database 18c: New Features for Administrators
78
Introduction Understand Multitenant Enhancements Discuss Security Management Upgrades Understand RMAN Upgrades Explain General Database Enhancements Understand Performance Enhancements Conclusion 1 2 3 4 5 6 7
79
Objectives After completing this lesson, you should be able to:
Manage private temporary tables Use the Data Pump Import CONTINUE_LOAD_ON_FORMAT_ERROR option of the DATA_OPTIONS parameter Perform online modification of partitioning and subpartitioning strategy Perform online MERGE partition and subpartition Generate batched DDL by using the DBMS_METADATA package Benefit from Unicode 9.0 support Oracle Database 18c: New Features for Administrators
80
Global Temporary Tables
12c The definition of global temporary tables is visible by all sessions. Each session can see and modify only its own data. Global temporary tables retain data only for the duration of a transaction or session. DML locks are not acquired on the data. You can create indexes, views, and triggers on global temporary tables. Global temporary tables are created by using the GLOBAL TEMPORARY clause. ACC_TMP ACC_TMP ACC_TMP Temporary tables can be created to hold session-private data that exists only for the duration of a transaction or session. The CREATE GLOBAL TEMPORARY TABLE command creates a temporary table that can be transaction specific or session specific. For transaction-specific temporary tables, data exists for the duration of the transaction, whereas for session-specific temporary tables, data exists for the duration of the session. Data in a session is private to the session. Each session can see and modify only its own data. DML locks are not acquired on the data of the temporary tables. The clauses that control the duration of the rows are: ON COMMIT DELETE ROWS: To specify that rows are visible only within the transaction. This is the default. ON COMMIT PRESERVE ROWS: To specify that rows are visible for the entire session You can create indexes, views, and triggers on temporary tables and you can also use the Export and Import utilities to export and import the definition of a temporary table. However, no data is exported, even if you use the ROWS option. The definition of a temporary table is visible to all sessions. SQL> CREATE GLOBAL TEMPORARY TABLE hr.employees_temp AS SELECT * FROM hr.employees; Oracle Database 18c: New Features for Administrators
81
Private Temporary Tables
18c USER_PRIVATE_TEMP_TABLES Private Temporary Tables (PTTs) exist only for the session that creates them. You can create a PTT with the CREATE PRIVATE TEMPORARY TABLE statement. Table name must start with ORA$PTT_ : The CREATE PRIVATE TEMPORARY TABLE statement does not commit a transaction. Two concurrent sessions may have a PTT with the same name but different shape. PTT definition and contents are automatically dropped at the end of a session or transaction. PRIVATE_TEMP_TABLE_PREFIX = ORA$PTT_ SQL> CREATE PRIVATE TEMPORARY TABLE ORA$PTT_mine (c1 DATE, … c3 NUMBER(10,2)); Private Temporary Tables (PTTs) are local to a specific session. In contrast with Global Temporary Tables, the definition and contents are local to the creating session only and are not visible to other sessions. There are two types of duration for the created PTTs. Transaction: The PTT is automatically dropped when the transaction in which it was created ends with either a ROLLBACK or COMMIT. This is the default behavior if no ON COMMIT clause is defined at PTT creation. Session: The PTT is automatically dropped when the session that created it ends. This is the behavior if the ON COMMIT PRESERVE DEFINITION clause is defined at the PTT creation. A PTT must be named with a prefix 'ORA$PTT_'. The prefix is defined by default by the PRIVATE_TEMP_TABLE_PREFIX initialization parameter, modifiable at the instance level only. Creating a PTT does not commit the current transaction. Since it is local to the current session, a concurrent user may also create a PTT with the same name but having a different shape. At this time, PTTs cannot include User Defined Types, constraints, column default values, object types or XML types, or an identity clause. PTTs must be created in the user schema. Creating a PTT in another schema, using the ALTER SESSION SET CURRENT SCHEMA command, is not allowed. ORA$PTT_mine ORA$PTT_mine SQL> CREATE PRIVATE TEMPORARY TABLE ORA$PTT_mine (c1 DATE …) ON COMMIT PRESERVE DEFINITION; SQL> DROP TABLE ORA$PTT_mine; Oracle Database 18c: New Features for Administrators
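A minimal sketch of a transaction-scoped PTT (the default duration, ON COMMIT DROP DEFINITION) together with the dictionary check; both the contents and the definition vanish at COMMIT:
SQL> CREATE PRIVATE TEMPORARY TABLE ora$ptt_tx (c1 NUMBER) ON COMMIT DROP DEFINITION;
SQL> INSERT INTO ora$ptt_tx VALUES (1);
SQL> SELECT table_name, duration FROM user_private_temp_tables;
SQL> COMMIT;   -- ORA$PTT_TX is dropped here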
82
Import with the CONTINUE_LOAD_ON_FORMAT_ERROR option
When import detects a format error in the data stream, it aborts the load. All table data for the current operation is rolled back. Solution: Either re-export and re-import or recover as much of the data as possible from the file with this corruption. Importing with the CONTINUE_LOAD_ON_FORMAT_ERROR option: Detects a format error in the data stream while importing data Instead of aborting the import operation, resumes loading data at the next granule boundary Recovers at least some data from the dump file Is ignored for network mode import 12c 18c In Oracle Database 12c, when a stream format error is detected, Data Pump import aborts and all the rows already loaded are rolled back. Oracle Database 18c introduces a new value for the DATA_OPTIONS parameter for impdp. When a stream format error is detected and the CONTINUE_LOAD_ON_FORMAT_ERROR option is specified for the DATA_OPTIONS parameter for impdp, Data Pump jumps ahead and continues loading from the next granule. Oracle Data Pump has a directory of granules for the data stream for a table or partition. Each granule has a complete set of rows. Data for a row does not cross granule boundaries. The directory is a list of offsets into the stream of where a new granule, and therefore, a new row, begins. Any number of stream format errors may occur. Each time, loading resumes at the next granule. Using this parameter for a table or partition that has stream format errors means that some rows from the export database are not loaded; this could be hundreds or thousands of rows. Nevertheless, all rows that do not present stream format errors are loaded, which could likewise be hundreds or thousands of rows. The DATA_OPTIONS parameter for DBMS_DATAPUMP.SET_PARAMETER has a new flag to enable this behavior: KU$_DATAOPT_CONT_LD_ON_FMT_ERR. $ impdp hr TABLES = employees DUMPFILE = dpemp DIRECTORY = dirhr DATA_OPTIONS = CONTINUE_LOAD_ON_FORMAT_ERROR Oracle Database 18c: New Features for Administrators
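For the PL/SQL API mentioned above, a hedged fragment (h1 is assumed to be a valid job handle returned by DBMS_DATAPUMP.OPEN; job setup and detach are omitted for brevity):
SQL> exec DBMS_DATAPUMP.SET_PARAMETER(h1, 'DATA_OPTIONS', DBMS_DATAPUMP.KU$_DATAOPT_CONT_LD_ON_FMT_ERR);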
83
Online Partition and Subpartition Maintenance Operations
Improve the availability of data by supporting online execution of a number of frequently used DDLs:
- 11g: CREATE INDEX; ALTER TABLE ADD COLUMN | ADD CONSTRAINT
- 12c: DROP INDEX; ALTER INDEX UNUSABLE; ALTER TABLE DROP CONSTRAINT | SET COLUMN UNUSED | MOVE | MOVE PARTITION | SPLIT PARTITION | MODIFY nonpartitioned to partitioned | MOVE PARTITION INCLUDING ROWS
- 18c: ALTER TABLE MODIFY to repartition a table or to add or remove subpartitioning; ALTER TABLE MERGE PARTITION

SQL> ALTER INDEX hr.i_emp_ix UNUSABLE ONLINE;

SQL> ALTER TABLE sales MODIFY
       PARTITION BY RANGE (c1) INTERVAL (100)
       (PARTITION p1 …, PARTITION p2 …)
       ONLINE UPDATE INDEXES;

Beginning with Oracle Database 12c, you can use the ONLINE keyword to allow DML statements to execute during the following DDL operations: DROP INDEX, ALTER TABLE DROP CONSTRAINT, ALTER INDEX UNUSABLE, ALTER TABLE SET COLUMN UNUSED, ALTER TABLE MOVE, and ALTER TABLE MODIFY/SPLIT PARTITION. This enables simpler application development, especially for application migrations, because schema maintenance operations no longer disrupt the application. Previously, to change the partitioning method of a table, you had to either use DBMS_REDEFINITION procedures or rebuild the table manually with CREATE TABLE AS SELECT. In Oracle Database 18c, you can change the partitioning method online, for example convert the HASH method to the RANGE method, or add or remove subpartitioning on a partitioned table, to reflect a new workload and make the data more manageable. Repartitioning a table can also improve performance, for example by changing the partitioning key to get more partition pruning, and it avoids long downtime when converting large partitioned tables. The ALTER TABLE MODIFY command supports a completely nonblocking DDL to repartition a table.
Oracle Database 18c: New Features for Administrators
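The conversion of a nonpartitioned table, listed above, uses the same ONLINE syntax. A minimal sketch (table, column, and partition names invented for illustration):

SQL> ALTER TABLE app.orders MODIFY
  2    PARTITION BY RANGE (order_date)
  3      (PARTITION p2017 VALUES LESS THAN (TO_DATE('01-JAN-2018','dd-MON-yyyy')),
  4       PARTITION pmax  VALUES LESS THAN (MAXVALUE))
  5    ONLINE UPDATE INDEXES;
-- DML against app.orders continues to run while the table is being converted.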
84
Online Modification of Partitioning and Subpartitioning Strategy
The ONLINE clause:
- Prevents concurrent DDLs on the affected table until the operation completes
- Does not hold a blocking exclusive (X) DML lock on the table being modified
If no tablespace is defined for the partitions, they default to the original table's tablespace.
The UPDATE INDEXES clause:
- Changes the partitioning state and storage properties of the indexes being converted
- Cannot change the columns on which the original indexes are defined
- Cannot change the uniqueness property of an index or any other index property
If no tablespace is defined for the indexes:
- Local indexes are collocated with the table partitions after the conversion.
- Global indexes remain in the tablespace of the original global index on the nonpartitioned table.

The ONLINE modification of a partitioned table prevents concurrent DDLs on the affected table until the operation completes, but it does not hold an exclusive blocking DML lock on the table. If the ONLINE clause is omitted, the DDL operation holds a blocking exclusive DML lock on the table being modified. If the user does not specify tablespace defaults for the partitions, the partitions of the repartitioned table default to the original table's tablespace. The UPDATE INDEXES clause can be used to change the partitioning state and storage properties of the indexes being converted; it cannot change the columns on which the original indexes are defined, the uniqueness property of an index, or any other index property. If no partitioning is defined for the existing indexes through the UPDATE INDEXES clause, the following defaults apply to all unspecified indexes:
- Global indexes prefixed by the partitioning keys are converted to local partitioned indexes.
- Local indexes are retained as local partitioned indexes if they are prefixed by the partitioning keys in either the partitioning or subpartitioning dimension.
- All indexes not prefixed by the partitioning keys are converted to global indexes.
- Because partitioned bitmap indexes can only be local, bitmap indexes always remain local irrespective of their prefix behavior.
All auxiliary structures, such as triggers, constraints, and Virtual Private Database (VPD) predicates associated with the table, are retained exactly as-is on the partitioned table. This modification operation is not supported for IOTs or for tables with domain indexes.
Oracle Database 18c: New Features for Administrators
85
Online Modification of Subpartitioning Strategy: Example
Before online conversion:

SQL> CREATE TABLE sales
       (prodno NUMBER NOT NULL, custno NUMBER, time_id DATE, …
        qty_sold NUMBER(10,2), amt_sold NUMBER(10,2))
     PARTITION BY RANGE (time_id)
       (PARTITION s_q1_17 VALUES LESS THAN (TO_DATE('01-APR-2017','dd-MON-yyyy')),
        PARTITION s_q2_17 VALUES LESS THAN (TO_DATE('01-JUL-2017','dd-MON-yyyy')), …);

SQL> CREATE INDEX i1_custno ON sales (custno) LOCAL;
SQL> CREATE UNIQUE INDEX i2_time_id ON sales (time_id);
SQL> CREATE INDEX i3_prodno ON sales (prodno);

Online conversion:

SQL> ALTER TABLE sales MODIFY
       PARTITION BY RANGE (time_id) SUBPARTITION BY HASH (custno) SUBPARTITIONS 8
       (PARTITION s_q1_17 VALUES LESS THAN (TO_DATE('01-APR-2017','dd-MON-yyyy')),
        PARTITION s_q2_17 VALUES LESS THAN (TO_DATE('01-JUL-2017','dd-MON-yyyy')), …)
     ONLINE UPDATE INDEXES
       (i1_custno LOCAL,
        i2_time_id GLOBAL PARTITION BY RANGE (time_id)
          (PARTITION ip1 VALUES LESS THAN (MAXVALUE)));

In this example, the user changes the partitioning method of the range-partitioned SALES table into a range table subpartitioned by hash, and also changes the state of the existing indexes on the table. The operation is completely nonblocking because the ONLINE keyword is specified. It subpartitions each partition of the SALES table into eight hash subpartitions on the new subpartitioning key, CUSTNO. Each partition of the local partitioned index I1_CUSTNO is hash-subpartitioned into eight subpartitions. The unique index I2_TIME_ID is maintained as a global range-partitioned unique index with no subpartitioning. All unspecified indexes whose index columns are a prefix of the new subpartitioning key are automatically converted to local partitioned indexes; other indexes, such as I3_PRODNO, are kept as global nonpartitioned indexes. All auxiliary structures on the table being modified, such as triggers, constraints, and VPD predicates, are retained on the partitioned table.
Oracle Database 18c: New Features for Administrators
86
Online MERGE Partition and Subpartition: Example
Before the partition merging operation:

SQL> CREATE TABLE sales
       (prod_id NUMBER, cust_id NUMBER, time_id DATE, channel_id NUMBER,
        promo_id NUMBER, quantity_sold NUMBER(10,2), amount_sold NUMBER(10,2))
     PARTITION BY RANGE (time_id)
       (PARTITION jan17 VALUES LESS THAN (TO_DATE('01-FEB-2017','dd-MON-yyyy')),
        PARTITION feb17 VALUES LESS THAN (TO_DATE('01-MAR-2017','dd-MON-yyyy')),
        PARTITION mar17 VALUES LESS THAN (TO_DATE('01-APR-2017','dd-MON-yyyy')), …);

SQL> CREATE INDEX i1_time_id ON sales (time_id) LOCAL TABLESPACE tbs_2;
SQL> CREATE INDEX i2_promo_id ON sales (promo_id) TABLESPACE tbs_2;

Online partition merging operation:

SQL> ALTER TABLE sales
       MERGE PARTITIONS jan17, feb17, mar17 INTO PARTITION q1_17 COMPRESS
       UPDATE INDEXES ONLINE;

The online operation can also be performed on subpartitions.

An ONLINE partition maintenance operation, such as merging partitions of a partitioned table, prevents concurrent DDLs on the affected (sub)partitions until the operation completes, but it does not acquire or hold a blocking exclusive DML lock on the (sub)partitions being merged, even for a short duration. If the ONLINE clause is omitted, the DDL operation holds a blocking exclusive DML lock on the table being modified. In this example, the user merges three partitions (January, February, and March 2017) of the partitioned SALES table into the Q1_17 (first quarter 2017) partition; the operation is completely nonblocking because the ONLINE keyword is specified. The I1_TIME_ID index is maintained as a local partitioned index, and the I2_PROMO_ID index is maintained as a global index. The same online merging operation can be executed on subpartitions. This maintenance operation is not supported for IOTs or for tables with domain indexes.
Oracle Database 18c: New Features for Administrators
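As noted above, the same online operation applies at the subpartition level. A minimal sketch, assuming a composite-partitioned table whose subpartition names are invented for illustration:

SQL> ALTER TABLE sales
       MERGE SUBPARTITIONS q1_17_east, q1_17_west INTO SUBPARTITION q1_17_all
       UPDATE INDEXES ONLINE;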
87
Batched DDL from DBMS_METADATA Package
The DBMS_METADATA comparison interfaces identify differences between two tables; the new BATCH_ALTER_DDL transform parameter, set with DBMS_METADATA.SET_TRANSFORM_PARAM, controls how the resulting ALTER TABLE DDL is generated:
- 12c: one ALTER TABLE statement for every difference found
- 18c: a single ALTER TABLE statement for all the differences related to scalar columns; no change in behavior for LOBs and complex types
- Result: simpler ALTER TABLE patches

DECLARE …
  DBMS_METADATA.SET_TRANSFORM_PARAM (th, 'BATCH_ALTER_DDL', TRUE);
…
/
ALTER TABLE "APP1"."TEST1" ADD ("Y" VARCHAR2(40), "T" VARCHAR2(30), "Z" DATE)
ALTER TABLE "APP1"."TEST1" RENAME TO "TEST2"

Enterprise-class application tables are typically very large and have many columns. In Oracle Database 12c, when comparing two application tables with different column lists, DBMS_METADATA detects the columns that need to be added or dropped and generates one ALTER TABLE statement for each ADD or DROP of a column. When many columns must be added, executing that many ALTER TABLE statements has a significant performance impact. This impact can be mitigated by batching the commands and collectively adding or dropping the columns with a single ALTER TABLE ADD | DROP column DDL statement.
Previous behavior:
ALTER TABLE app1.big_tab1 ADD (COL3 VARCHAR2(20) COLLATE POLISH_CI);
ALTER TABLE app1.big_tab1 ADD (COL4 CHAR(32) COLLATE BINARY);
ALTER TABLE app1.big_tab1 ADD (COL5 NUMBER);
New behavior:
ALTER TABLE app1.big_tab1 ADD (
  COL3 VARCHAR2(20) COLLATE POLISH_CI,
  COL4 CHAR(32) COLLATE BINARY,
  COL5 NUMBER);
Oracle Database 18c: New Features for Administrators
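For context, one documented way to obtain generated ALTER DDL for two tables is the DBMS_METADATA_DIFF.COMPARE_ALTER function. The sketch below is illustrative; whether the BATCH_ALTER_DDL transform shown above applies through this convenience function, or only through the lower-level handle-based flow, is an assumption here:

SQL> SET LONG 100000 PAGESIZE 0
SQL> SELECT DBMS_METADATA_DIFF.COMPARE_ALTER('TABLE', 'TEST1', 'TEST2', 'APP1', 'APP1')
  2  FROM dual;
-- Returns a CLOB containing the ALTER statements needed to make APP1.TEST1
-- look like APP1.TEST2.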
88
Unicode 9.0 Support

Unicode is an evolving standard. Unicode 9.0 was recently released with:
- 11 new code blocks
- 7,500 new characters
- 6 new language scripts
- 72 new emoji characters
Oracle Database 18c has been updated to use the new standard:
- Both the AL32UTF8 and AL16UTF16 character sets are updated.
- The Oracle Globalization Development Kit (GDK) for Java is updated.

Unicode 9.0 adds a total of 7,500 characters. It also includes other important updates to the core specification as well as to the standard annexes and technical standards; for a complete list of changes, refer to the Unicode Consortium website. The new language scripts and characters add support for lesser-used languages worldwide:
- Osage, a Native American language
- Nepal Bhasa, a language of Nepal
- Fulani and other African languages
- The Bravanese dialect of Swahili, used in Somalia
- The Warsh orthography for Arabic, used in North and West Africa
- Tangut, a major historic script of China
Note: An emoji is a small digital image or icon used to express an idea or emotion. The word is Japanese in origin, where 'e' means 'picture' and 'moji' means 'letter, character'.
Oracle Database 18c: New Features for Administrators
89
Summary In this lesson, you should have learned how to:
Manage private temporary tables Use the Data Pump Import CONTINUE_LOAD_ON_FORMAT_ERROR option of the DATA_OPTIONS parameter Perform online modification of partitioning and subpartitioning strategy Perform online MERGE partition and subpartition Generate batched DDL by using the DBMS_METADATA package Benefit from Unicode 9.0 support Oracle Database 18c: New Features for Administrators
90
Introduction Understand Multitenant Enhancements Discuss Security Management Upgrades Understand Rman Upgrades Explain General Database Enhancements Understand Performance Enhancements Conclusion 1 2 3 4 5 6 7
91
Objectives After completing this lesson, you should be able to:
Configure and use Automatic In-Memory Configure the window capture of In-Memory expressions Describe the Memoptimized Rowstore feature and use in-memory hash index structures Describe the new SQL Tuning Set package Describe the concurrency of SQL execution of SQL Performance Analyzer tasks Describe SQL Performance Analyzer result set validation Oracle Database 18c: New Features for Administrators
92
In-Memory Column Store: Dual Format of Segments in SGA
[Slide diagram: SGA with the buffer cache (row format) serving UPDATEs and the IM column store (sized by INMEMORY_SIZE) serving SELECTs, linked by a transaction (TX) journal. IMCO and Wnnn populate IMCUs at first data access or instance startup, triggered by thresholds or low memory; DBWn writes row-format data; CREATE TABLE … INMEMORY and ALTER TABLE … INMEMORY mark candidates.]

Oracle Database 12c introduced the In-Memory Database option, which allows the DBA to allocate space in the SGA to the IM column store. The DBA turns on the INMEMORY attribute at object creation time, or when altering an existing object, to make it an in-memory candidate. An in-memory table gets in-memory column units (IMCUs) allocated in the IM column store at first table data access or at database startup. An in-memory copy of the table is made by converting the on-disk format to the in-memory columnar format; this conversion is repeated each time the instance restarts, because the IM column store copy resides only in memory. While the conversion runs, the in-memory version of the table gradually becomes available for queries: if a table is partially converted, queries can use the partial in-memory version and go to disk for the rest, rather than waiting for the entire table to be converted. A background process, IMCO, coordinates and schedules the population and repopulation of objects in the IM column store; the Wnnn background processes perform the actual population. When rows in the table are updated, the corresponding entries in the IMCUs are marked stale. The row version recorded in the transaction journal is constructed from the buffer cache and is unaffected by the subset of columns in the IMCU. IMCU synchronization is performed by the IMCO/Wnnn background processes using the updated rows in the transaction journal, triggered by events such as an internal threshold (including the number of row invalidations per IMCU), the transaction journal running low on memory, and RAC invalidations.
Oracle Database 18c: New Features for Administrators
93
Deploying the In-Memory Column Store
Verify the database compatibility value:
COMPATIBLE = 12.1.0 (or later)
Configure the IM column store size:
INMEMORY_SIZE = 100G
You can dynamically increase the IM column store size:
SQL> ALTER SYSTEM SET inmemory_size = 110G SCOPE = BOTH;

No special installation is required to set up the feature because it ships with Oracle Database 12c and later; the database compatibility must be set to 12.1.0 or later. Configure the IM column store size by setting the INMEMORY_SIZE instance parameter, using the K, M, or G suffix to define the unit. Since Oracle Database 12c Release 2, the size of the in-memory area can be dynamically increased after instance startup, but not decreased. The memory allocated to the area is deducted from the total memory available for SGA_TARGET. There is no LRU algorithm managing the in-memory objects: objects may be partially populated into the IM column store when there is not enough space for the entire object, and when such an object is queried, as much data as possible is retrieved from the column store, with the rest coming from the buffer cache, flash cache, or disk. The DBA can set priorities on objects to define which in-memory objects should be populated first. Set the INMEMORY_SIZE parameter to a minimum of 128M and, in practice, to at least the sum of the estimated sizes of the in-memory tables. The parameter can also be set per PDB to limit the maximum size used by each PDB; the sum of the per-PDB values does not have to equal the CDB value and may even exceed it.
Oracle Database 18c: New Features for Administrators
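To verify what has actually been populated, you can query the V$IM_SEGMENTS view. A minimal sketch:

SQL> SELECT segment_name, populate_status, bytes_not_populated
  2  FROM   v$im_segments;
-- BYTES_NOT_POPULATED > 0 indicates a partially populated segment, which
-- queries will complete from the buffer cache, flash cache, or disk.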
94
Setting In-Memory Object Attributes
Enable or disable objects for population into the IM column store:
SQL> CREATE TABLE large_tab (c1 …) INMEMORY;     -- dual format
SQL> ALTER TABLE t1 INMEMORY;                    -- dual format
SQL> ALTER TABLE sales NO INMEMORY;              -- row format only
By default, IMCUs are initialized and populated at query access time only; they can also be initialized when the database is opened:
SQL> CREATE TABLE test (…) INMEMORY PRIORITY CRITICAL;
Use MEMCOMPRESS to define the compression level:
SQL> ALTER TABLE t1 INMEMORY MEMCOMPRESS FOR CAPACITY HIGH;

Turn on the INMEMORY attribute at object creation, or when you alter the object, to convert it to a columnar representation in the IM column store. All columns of an in-memory table are populated into memory unless some columns are excluded with the NO INMEMORY clause. It is more efficient to specify all columns simultaneously than to issue an ALTER TABLE for each column. Two INMEMORY subclauses define the following behaviors:
- Loading priority (PRIORITY subclause): By default, an in-memory table is populated at first data access; this is the "on demand" behavior. With higher priority levels, table data can be populated into the IM column store soon after the database starts up.
- Compression level (MEMCOMPRESS subclause): the degree of compression of the object's columns in the IM column store.
The segments compatible with the INMEMORY attribute are tables, partitions, subpartitions, inline LOBs, materialized views, materialized join views, and materialized view logs. Clustered tables and IOTs are not supported with the INMEMORY clause.
Oracle Database 18c: New Features for Administrators
95
Managing Heat Map and Automatic Data Optimization Policies
Typical flow:
1. Enable Heat Map in the PDB: HEAT_MAP = ON
2. Heat Map statistics are collected on segments in the PDB as workload runs (for example, SELECT * FROM emp; UPDATE dept …): in real time in memory (V$HEAT_MAP_SEGMENT), flushed by MMON to the HEAT_MAP_STAT$ table, and visible through the DBA_HEAT_MAP_SEG_HISTOGRAM view.
3. Create ADO policies on tables, for example: compress EMP if no access for 3 days (pol1); move EMP to another tablespace if tablespace TBSEMP is full (pol2).
4. ADO policies are evaluated, for example: no access for 3 days, so compress (pol1); TBSEMP not full yet, so no movement (pol2).
5. ADO actions execute during the maintenance window, for example EMP is compressed.
6. View ADO results in the COMPRESSION_STAT$ table.

Oracle Database 12c enables the automation of Information Lifecycle Management (ILM) actions by:
- Collecting Heat Map statistics that track segment and block data usage, and segment-level usage frequencies in addition to daily aggregate usage statistics
- Creating Automatic Data Optimization (ADO) policies that define conditions under which segments should be moved to other tablespaces and/or when segments or blocks can be compressed
The first step for the DBA is to enable Heat Map at the PDB level to track activity on blocks and segments. Heat Map activates system-generated statistics collection, such as segment access and row and segment modification. Real-time statistics are collected in memory (V$HEAT_MAP_SEGMENT view) and regularly flushed by scheduled DBMS_SCHEDULER jobs to the persistent HEAT_MAP_STAT$ table; the persistent data is visible through the DBA_HEAT_MAP_SEG_HISTOGRAM view. The next step is to create ADO policies in the PDB on segments or groups of segments, or as default ADO behavior on tablespaces, and then to schedule when ADO policy evaluation must happen if the default scheduling does not match business requirements. ADO policy evaluation relies on Heat Map statistics: MMON evaluates row-level policies periodically and starts jobs to compress whichever blocks qualify, while segment-level policies are evaluated and executed only during the maintenance window. The DBA can then view ADO execution results in the DBA_ILMEVALUATIONDETAILS and DBA_ILMRESULTS views in the PDB and, finally, verify through the COMPRESSION_STAT$ table whether the segment was moved to the tablespace defined in the ADO policy and/or whether its blocks or the segment itself were compressed.
Oracle Database 18c: New Features for Administrators
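A minimal sketch of the two policy types described in the flow above, using the documented ILM ADD POLICY syntax; the table, tablespace, and retention values are illustrative:

SQL> ALTER SYSTEM SET heat_map = ON;
SQL> ALTER TABLE hr.emp ILM ADD POLICY
  2    ROW STORE COMPRESS ADVANCED SEGMENT
  3    AFTER 3 DAYS OF NO ACCESS;        -- pol1: compress a cold segment
SQL> ALTER TABLE hr.emp ILM ADD POLICY
  2    TIER TO tbs_archive;              -- pol2: move when the tablespace fills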
96
Creating ADO In-Memory Policies
Types of ADO In-Memory policies based on Heat Map statistics (HEAT_MAP = ON):
- A policy to set the INMEMORY attribute:
SQL> ALTER TABLE t1 ILM ADD POLICY SET INMEMORY AFTER 5 DAYS OF CREATION;
- A policy to unset the INMEMORY attribute, that is, eviction from the IM column store after an anticipated period of inactivity:
SQL> ALTER TABLE app.emp ILM ADD POLICY NO INMEMORY SEGMENT AFTER 10 DAYS OF NO ACCESS;
- A policy to modify the IM compression level:
SQL> ALTER TABLE t1 ILM ADD POLICY MODIFY INMEMORY MEMCOMPRESS FOR QUERY HIGH AFTER 10 DAYS OF NO ACCESS;

SQL> CREATE TABLE app.emp (c NUMBER) INMEMORY;

In Oracle Database 12c, without any Automatic Data Optimization (ADO) policies defined on an in-memory segment, a segment populated in the IM column store is removed only if the segment is dropped or moved, or if its INMEMORY attribute is removed. This behavior can cause memory pressure when the data to be loaded into memory exceeds the free space in the IM column store; workload performance is optimal when the IM column store contains the most frequently queried segments. Oracle Database 12c later introduced three types of ADO In-Memory policies:
- A policy that sets the INMEMORY attribute on an object. It allows an IM clause as part of the ADO policy clause and annotates the table or partition with that IM clause when the policy condition is satisfied. It does not itself populate the segment into the IM column store; the segment is populated based on the priority in the IM clause.
- A policy defining the anticipated length of inactivity (no access or modification) after which the object is evicted from the IM column store. The policy considers Heat Map statistics, so the object is kept in the IM column store as long as activity continues; eviction unsets the INMEMORY attribute on the object.
- A policy to modify IM compression, changing the compression level of an object from a lower to a higher level.
Example evaluation: with pol1, "evict if no access for 10 days," once 10 days pass with no access, EMP is evicted from the IM column store. Related views: V$IM_ADOTASKS, V$IM_ADOTASKDETAILS, DBA_ILMDATAMOVEMENTPOLICIES.
Oracle Database 18c: New Features for Administrators
97
Automatic In-Memory: Overview
Before Automatic In-Memory was introduced, the DBA had to: Define when in-memory segments should be populated into the IM column store Define ADO IM policies to evict / populate IM segments from or into the IM column store AIM automates the management of the IM column store by using heat map statistics: Ensures that the “working data set” is in the IM column store at all times Moves IM segments in and out of the IM column store Benefits: Automatic actions: Makes the management of the IM column store easier Automatic eviction: Increases effective IM column store capacity Improved performance: Keeps as much of the working data set in memory as possible Oracle Database 18c introduces the Automatic In-Memory (AIM) feature. The benefits of configuring Automatic In-Memory (AIM) are: Ease of management of the IM store: Management of the IM column store for reducing memory pressure by eviction of cold IM segments involves significant user intervention. AIM addresses these issues with minimal user intervention. Improved performance: AIM ensures that the “working data set” is in the IM column store at all times. The working data set is a subset of all the IM enabled segments that is actively queried at any time. The working data set is expected to change with time for many applications. The working data set (or actively queried IM segments) contains a hot portion that is active and a cold portion that is not active. For data ageing applications, the action would be to remove cold IMCUs from the IM column store. With AIM, the DBA need not define IM priority attributes or ADO IM policies on IM segments. AIM automatically reconfigures the IM column store by evicting cold data out of the IM column store and populating the hot data. The unit of data eviction and population is an on-disk segment. AIM uses the heat map statistics of IM-enabled segments together with user-specified configurations to decide the set of objects to evict under memory pressure. Oracle Database 18c: New Features for Administrators
98
AIM Action Increase the effective capacity of the IM column store by evicting inactive IM segments with priority NONE from the IM column store under memory pressure. Evict at segment level: According to the amount of time that an IM segment has been inactive According to the window of time used by AIM to determine the statistics for decision- making Populate hot data. Note: ADO IM policies override AIM considerations. AIM automatically performs segment level actions. Segment-level population prioritizes the population of active data segments to the IM column store under memory pressure based on workload patterns. Segment-level eviction increases the effective capacity of the IM column store by evicting inactive IM segments with priority NONE from the IM column store under memory pressure. The eviction relies on the amount of time that an IM segment has been inactive in order for AIM to consider the segment for eviction. Eviction also relies on the time window used by AIM to determine the statistics for decision-making. In case ADO IM policies exist, they override AIM actions. Oracle Database 18c: New Features for Administrators
99
Configuring Automatic In-Memory
Activate Heat Map statistics:
SQL> ALTER SYSTEM SET heat_map = ON;
Set the initialization parameter:
SQL> ALTER SYSTEM SET inmemory_automatic_level = MEDIUM SCOPE = BOTH;
Use the DBMS_INMEMORY_ADMIN.AIM_SET_PARAMETER procedure to configure the sliding statistics window, in days:
SQL> EXEC dbms_inmemory_admin.aim_set_parameter(-
         parameter => dbms_inmemory_admin.AIM_STATWINDOW_DAYS, -
         value => 1)
Use the DBMS_INMEMORY_ADMIN.AIM_GET_PARAMETER procedure to get the current values of the AIM parameters.

The new INMEMORY_AUTOMATIC_LEVEL initialization parameter makes the IM column store largely self-managing; a few controls remain to modify the behavior of the feature or to disable it if necessary. You turn automatic management of the IM column store on or off with one of these values:
- LOW: Under memory pressure, the database evicts cold segments from the IM column store. This is the default.
- MEDIUM: In Oracle Database 12c, an in-memory table is populated at first data access by default (the "on demand" behavior), and priority levels can populate table data soon after database startup. In Oracle Database 18c, this AIM level adds an optimization that prioritizes population of segments under memory pressure rather than relying on on-demand population, ensuring that any hot segment that was not populated because of memory pressure is populated first.
- OFF: Disables AIM, returning the IM column store to its Oracle Database 12c Release 2 behavior.
Oracle recommends provisioning enough memory for the working data set to fit in the IM column store. As a general rule, AIM requires an additional 5 KB of SGA memory per INMEMORY segment; for example, if 10,000 segments have the INMEMORY attribute, reserve 50 MB of the IM column store for AIM. AIM uses the new DBMS_INMEMORY_ADMIN.AIM_SET_PARAMETER procedure to set the duration used to filter Heat Map statistics for IM-enabled objects in its decision algorithms; the constants are used to populate the SYS.ADO_IMPARAM$ table. The default sliding statistics window is AIM_STATWINDOW_DAYS_DEFAULT := 31 days.
Oracle Database 18c: New Features for Administrators
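To read the current setting back, AIM_GET_PARAMETER can be used. A minimal sketch; the parameter constant is the one shown above, while the exact formal parameter names and the OUT bind are assumptions for illustration:

SQL> VARIABLE days NUMBER
SQL> EXEC dbms_inmemory_admin.aim_get_parameter(-
         parameter => dbms_inmemory_admin.AIM_STATWINDOW_DAYS, -
         value => :days)
SQL> PRINT days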
100
Diagnostic Views What are the decisions and actions made by AIM?
View pairs:
- V$IM_ADOTASKS / DBA_INMEMORY_AIMTASKS: track decisions made by AIM at a point in time (STATUS/STATE = RUNNING | UNKNOWN | DONE)
- V$IM_ADOTASKDETAILS / DBA_INMEMORY_AIMTASKDETAILS: provide information about the options considered and the decisions made (ACTION = EVICT | NO ACTION | PARTIAL POPULATE)

V$IM_ADOTASKS provides information about AIM tasks; an AIM task is a way to track decisions made by AIM at a point in time, and the STATUS column describes the current state of the task (RUNNING, UNKNOWN, or DONE). DBA_INMEMORY_AIMTASKS exposes the same information to database administrators, with an extra IM_SIZE column that corresponds to the in-memory size at the time of task creation. The database investigates various possible actions as part of an AIM task: V$IM_ADOTASKDETAILS provides information about the options considered and the decisions made (ACTION column), and DBA_INMEMORY_AIMTASKDETAILS gives the administrator the details of AIM task actions, in particular the AIM action decided for each object. For details about these new Oracle Database 18c views, refer to the Oracle Database In-Memory Guide 18c.
Oracle Database 18c: New Features for Administrators
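A minimal sketch of how these views might be inspected; the STATE and ACTION values come from the slide, while the TASK_ID and OBJECT_NAME column names are assumptions based on the 18c reference:

SQL> SELECT task_id, state FROM dba_inmemory_aimtasks ORDER BY task_id;
SQL> SELECT object_name, action
  2  FROM   dba_inmemory_aimtaskdetails
  3  WHERE  action <> 'NO ACTION';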
101
Populating In-Memory Expression Results
Two types of IMEs: optimizer-defined expressions and user-defined virtual columns. Query performance is improved by caching:
- Results of frequently evaluated query expressions
- Results of user-defined virtual columns
Results are stored in IM expression units (IMEUs) for subsequent reuse. Candidates are detected by, and made eligible according to, expression statistics (DBA_EXPRESSION_STATISTICS, SNAPSHOT = LATEST | CUMULATIVE | WINDOW).

INMEMORY_EXPRESSIONS_USAGE = ENABLE
INMEMORY_VIRTUAL_COLUMNS = ENABLE

[Slide diagram: table T with two IMEUs caching expression results such as a*b, k-4, (a*b)/(k-4), e/f, and the predicate c=1 AND e/f=10.]

Automatically identifying frequently used complex expressions or calculations, and storing their results in the IM column store, can improve query performance. Storing precomputed virtual column results can also significantly improve query performance by avoiding repeated evaluations. The cached results range from function evaluations on columns used in application, scan, or join expressions, to bit vectors derived during predicate evaluation for in-memory scans. Caching can also cover internal computations that are not explicitly written in a database query, such as hash value computations for join operations. Where are in-memory expression and virtual column results (IMEs) stored? An IMCU is the basic unit of the in-memory copy of table data; each IMCU has its own in-memory expression unit (IMEU), which contains the expression results for the rows stored in that IMCU. Why are expressions and virtual columns good IME candidates? Statistics such as execution frequency and evaluation cost are regularly maintained per segment by the optimizer in the Expression Statistics Store (ESS), which uses an LRU algorithm to automatically track the most frequently used expressions. In Oracle Database 12c, the DBMS_INMEMORY_ADMIN.IME_CAPTURE_EXPRESSIONS procedure identifies the most frequently accessed (hottest) expressions in the database over the specified time range, materializes them as hidden virtual columns, and adds them to their respective tables during the next repopulation. The time range can be:
- CUMULATIVE: The database considers all expression statistics since the creation of the database.
- CURRENT: The database considers only expression statistics from the past 24 hours.
Oracle Database 18c introduces the new WINDOW time range.

12c: SQL> EXEC DBMS_INMEMORY_ADMIN.IME_CAPTURE_EXPRESSIONS('CURRENT')
18c: SQL> EXEC DBMS_INMEMORY_ADMIN.IME_CAPTURE_EXPRESSIONS('WINDOW')
Oracle Database 18c: New Features for Administrators
102
Populating In-Memory Expression Results Within a Window
Open a window: Let workload run. Optionally, get the current capture state of the expression capture window and the time stamp of the most recent modification. Close the window: Populate all the hot expressions captured in the window into the IM column store: SQL> exec DBMS_INMEMORY_ADMIN.IME_OPEN_CAPTURE_WINDOW() You can define an expression capture window of an arbitrary length, which ensures that only the expressions occurring within this window are considered for in-memory materialization. This mechanism is especially useful when you know of a small interval that is representative of the entire workload. For example, during the trading window, a brokerage firm can gather the set of expressions, and materialize them in the IM column store to speed-up future query processing for the entire workload. To populate expressions tracked in the most recent user-specified expression capture window, perform the following steps: Open a window by invoking the DBMS_INMEMORY_ADMIN.IME_OPEN_CAPTURE_WINDOW procedure. Let the workload run until you think you have collected enough expressions. Close the window by invoking the DBMS_INMEMORY_ADMIN.IME_CLOSE_CAPTURE_WINDOW procedure. Add all the hot expressions captured in the previous window into the IM column store by invoking the DBMS_INMEMORY_ADMIN.IME_CAPTURE_EXPRESSIONS('WINDOW') procedure. You can get the current capture state of the expression capture window and the time stamp of the most recent modification by invoking the DBMS_INMEMORY_ADMIN.IME_GET_CAPTURE_STATE procedure. You can still invoke the DBMS_INMEMORY_ADMIN.IME_CAPTURE_EXPRESSIONS('CURRENT') procedure to add all the hot expressions captured in the past 24 hours, which includes WINDOW as well, and the DBMS_INMEMORY_ADMIN.IME_CAPTURE_EXPRESSIONS('CUMULATIVE') procedure to add all the hot expressions captured since the creation of the database. SQL> exec DBMS_INMEMORY_ADMIN.IME_GET_CAPTURE_STATE( P_CAPTURE_STATE, - P_LAST_MODIFIED) SQL> exec DBMS_INMEMORY_ADMIN.IME_CLOSE_CAPTURE_WINDOW() SQL> exec DBMS_INMEMORY_ADMIN.IME_CAPTURE_EXPRESSIONS ('WINDOW') Oracle Database 18c: New Features for Administrators
103
Memoptimized Rowstore
Fast ingest and query rates for thousands of Internet-connected devices require:
- High-speed streaming of single-row inserts
- Very fast lookups of key-value data in the database buffer cache
- Querying data with the PRIMARY KEY integrity constraint enabled
- A new in-memory hash index structure
- Table rows permanently pinned in the buffer cache
- Data aggregated and streamed to the database through trusted clients

Smart devices connected to the Internet that send and receive data require support for fast ingest and query rates across thousands of devices. The Memoptimized Rowstore feature provides high-speed streaming of single-row inserts and very fast lookups of key-value data. The feature works only on tables that have the PRIMARY KEY integrity constraint enabled. To provide the speed necessary to service thousands of devices, the data is aggregated and streamed to the database through trusted clients. The fast-query part of the feature gives access to existing rows through a new hash index structure and pinned database blocks. Oracle Database can thus ingest and access row-based data in a fraction of the time taken by conventional SQL transactions; with high-speed streaming of input data and hash indexing of key-value pairs for lookups, the Memoptimized Rowstore significantly reduces transaction latency and overhead, enabling businesses to deploy thousands of devices to monitor and control all aspects of their business.
Oracle Database 18c: New Features for Administrators
104
In-Memory Hash Index

The hash index maps a given key to the address of rows in the database buffer cache:
- Gets the address of the row in the buffer cache
- Reads the row from the buffer cache
Enable tables with MEMOPTIMIZE FOR READ:
- Does not change the table's on-disk structures
- Does not require application code changes
SQL> ALTER TABLE oe.t MEMOPTIMIZE FOR READ;
Populate the hash index for an object with the DBMS_MEMOPTIMIZE.POPULATE procedure. Additional memory is reserved with MEMOPTIMIZE_POOL_SIZE = 100M.

[Slide diagram: the hash index in the memoptimize pool maps key values to row addresses in pinned buffer cache blocks of the OE.T table.]

An in-memory hash table mapping a given key to the location of the corresponding rows enables quick access to the Oracle data block storing each row. The hash table is indexed by a user-specified primary key, very much like hash clusters containing tables with the PRIMARY KEY constraint enabled. This in-memory structure is called a hash index, although the underlying data structure is a hash table. It resides in instance memory and requires additional SGA space: set the MEMOPTIMIZE_POOL_SIZE initialization parameter to reserve a static SGA allocation at instance startup. An in-memory hash index alone is not enough for a fast code path: table rows are stored in disk blocks, and because the buffer cache ages blocks out according to its replacement policy, the blocks must be permanently pinned in the buffer cache to avoid disk I/O. This is why the MEMOPTIMIZE FOR READ attribute is set on a table; it does not change the on-disk structure. ALTER TABLE t MEMOPTIMIZE FOR READ cascades the attribute to all existing partitions and subpartitions; use the NO MEMOPTIMIZE FOR READ clause to disable the feature on an object. By default, tables are not MEMOPTIMIZE FOR READ enabled. Setting the MEMOPTIMIZE FOR WRITE attribute on a table with a primary key inserts the key and the corresponding data and metadata into the hash index structure during row inserts; the structure is also updated during other write operations such as deletes and updates. Use the DBMS_MEMOPTIMIZE.POPULATE procedure to populate the hash index for a table, partition, or subpartition:
SQL> EXEC DBMS_MEMOPTIMIZE.POPULATE(schema_name => 'SH', table_name => 'SALES', partition_name => 'SALES_Q3_2003')
The MEMOPTIMIZE_READ and MEMOPTIMIZE_WRITE columns (ENABLED/DISABLED) appear in DBA_TABLES, DBA_TAB_PARTITIONS, DBA_TAB_SUBPARTITIONS, and DBA_OBJECT_TABLES.
Oracle Database 18c: New Features for Administrators
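A minimal end-to-end sketch of enabling fast lookup; the pool size, schema, table, and column names are illustrative, and MEMOPTIMIZE_POOL_SIZE is assumed here to be a static parameter that requires an instance restart:

SQL> ALTER SYSTEM SET memoptimize_pool_size = 100M SCOPE = SPFILE;
-- ... restart the instance ...
SQL> CREATE TABLE oe.t (k NUMBER PRIMARY KEY, v VARCHAR2(30))
  2    MEMOPTIMIZE FOR READ;
SQL> EXEC DBMS_MEMOPTIMIZE.POPULATE(schema_name => 'OE', table_name => 'T')
SQL> SELECT v FROM oe.t WHERE k = 42;   -- eligible for the fast key-based lookup path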
105
DBMS_SQLTUNE Versus DBMS_SQLSET Package
SQL tuning task management (DBMS_SQLTUNE):
- Create / drop a tuning task
- Execute a tuning task
- Display advisor recommendations
SQL profile management (DBMS_SQLTUNE):
- Accept a SQL profile
- Drop / alter a SQL profile
- Manipulate staging tables
STS management (DBMS_SQLSET):
- Create / drop an STS
- Populate an STS
- Query STS content

In Oracle Database 12c, you perform manual and automatic tuning of statements, as well as management of SQL profiles and SQL Tuning Sets (STS), with the APIs of the DBMS_SQLTUNE package. In Oracle Database 18c, the new DBMS_SQLSET package contains the SQL Tuning Set functionality.
106
SQL Tuning Sets: Manipulation
12c: SQL Tuning Set functionality is available only if one of the following conditions exists:
- The Tuning Pack is enabled.
- The Real Application Testing (RAT) option is installed.
18c: SQL Tuning Set functionality is available for free with Oracle Database Enterprise Edition:
- The new DBMS_SQLSET package is available to create, edit, drop, populate, and query STS, and to manipulate staging tables.
- The new package is not part of the Tuning Pack or the RAT option.

In Oracle Database 12c, the package containing the SQL Tuning Set functionality is DBMS_SQLTUNE, which is part of the Tuning Pack or the Real Application Testing option. In Oracle Database 18c, the new DBMS_SQLSET package contains the SQL Tuning Set functionality:
- Create and drop STS: CREATE_SQLSET, DROP_SQLSET
- Populate STS: CAPTURE_CURSOR_CACHE, LOAD_SQLSET
- Query STS content: SELECT_SQLSET function
- Manipulate staging tables: CREATE_STGTAB, PACK_STGTAB, UNPACK_STGTAB, REMAP_STGTAB
The new package is part of neither the Tuning Pack nor the Real Application Testing option; it is available for free with Oracle Database Enterprise Edition. Most of the functions and procedures of the DBMS_SQLTUNE package can be found in the new DBMS_SQLSET package, except the procedures related to profiles (such as ACCEPT_SQL_PROFILE), tuning tasks, and baselines.

SQL> EXEC dbms_sqlset.create_sqlset | delete_sqlset | update_sqlset | drop_sqlset
SQL> EXEC dbms_sqlset.capture_cursor_cache | load_sqlset
SQL> EXEC dbms_sqlset.create_stgtab | pack_stgtab | unpack_stgtab | remap_stgtab
Oracle Database 18c: New Features for Administrators
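A minimal sketch of creating and populating an STS with the new package, assuming DBMS_SQLSET mirrors the corresponding DBMS_SQLTUNE signatures and types; the set name and filter are illustrative:

SQL> EXEC dbms_sqlset.create_sqlset(sqlset_name => 'MYSTS', description => 'demo set')

DECLARE
  cur dbms_sqlset.sqlset_cursor;   -- assumed mirror of dbms_sqltune.sqlset_cursor
BEGIN
  OPEN cur FOR
    SELECT VALUE(p)
    FROM   TABLE(dbms_sqlset.select_cursor_cache(
                   basic_filter => 'parsing_schema_name = ''HR''')) p;
  dbms_sqlset.load_sqlset(sqlset_name => 'MYSTS', populate_cursor => cur);
END;
/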
107
SQL Performance Analyzer
Targeted users: DBAs, QA engineers, application developers. SQL Performance Analyzer (SPA) helps predict the impact of system changes on SQL workload response time:
- Database upgrades
- Implementation of tuning recommendations
- Schema or database parameter changes
- Statistics gathering
- OS and hardware changes
It builds different versions of SQL workload performance (SQL execution plans and execution statistics), re-executes SQL statements serially or concurrently, analyzes performance differences, and offers fine-grained performance analysis of individual SQL statements.

The Oracle Real Application Testing option in Oracle Database 12c includes SQL Performance Analyzer, which gives you an accurate assessment of the impact of a change on the SQL statements that make up the workload. SPA helps you forecast the impact of a potential change on the performance of a SQL query workload, and it is used to predict and prevent potential performance problems for any database environment change, such as database upgrades, schema or parameter changes, or statistics-gathering changes that affect the structure of SQL execution plans. This capability gives DBAs detailed information about the performance of SQL statements, such as before-and-after execution statistics and the statements that improved or regressed, so you can make changes in a test environment to determine whether the workload performance will be improved, for instance through a database upgrade. In Oracle Database 12c, when an SPA task is executed for analysis, the statements in the SQL Tuning Set (STS) are executed one after the other, sequentially; depending on the number of statements in the STS and their complexity, execution can take a long time. Because each statement in an STS is independent of the others, the statements can be executed concurrently. Oracle Database 18c allows concurrent execution of the statements in an STS: you choose the execution mode for an SPA task and define the degree of parallelism (DOP) to use during task execution.
Oracle Database 18c: New Features for Administrators
108
Using SQL Performance Analyzer
Capture SQL workload on production. Transport the SQL workload to a test system. Build “before-change” performance data. Make changes. Build “after-change” performance data. Compare results from steps 3 and 5. Tune regressed SQL. Using SQL Performance Analyzer Gather SQL: In this phase, you collect the set of SQL statements that represent your SQL workload on the production system. Transport: You must transport the resultant workload to the test system. The STS is exported from the production system and the STS is imported into the test system. Compute “before-version” performance: Before any changes take place, you execute the SQL statements, collecting baseline information that is needed to assess the impact that a future change might have on the performance of the workload. Make a change: When you have the before-version data, you can implement your planned change and start viewing the impact on performance. Compute “after-version” performance: This step takes place after the change is made in the database environment. Each statement of the SQL workload runs under a mock execution (collecting statistics only), collecting the same information as captured in step 3. Compare and analyze SQL Performance: When you have both versions of the SQL workload performance data, you can carry out the performance analysis by comparing the after-version data with the before-version data. Tune regressed SQL. Oracle Database 18c: New Features for Administrators
109
Steps 6-7: Comparing / Analyzing Performance and Tuning Regressed SQL
- Rely on user-specified metrics to compare SQL performance.
- Calculate the impact of the change on individual SQL statements and on the SQL workload.
- Use SQL execution frequency as a weight of importance.
- Detect improvements, regressions, and unchanged performance.
- Detect changes in execution plans.
- Validate that the same result set is returned during the initial SPA test and during subsequent tests (18c).
- Recommend running SQL Tuning Advisor to tune regressed SQL statements.

After re-executing the SQL statements, you compare and analyze the before-and-after performance based on execution statistics such as elapsed time, CPU time, and buffer gets. In Oracle Database 18c, SQL Performance Analyzer (SPA) result-set validation lets you validate that the same result set is returned during the initial SPA test-execute and during subsequent test-executes. It assures you that repeated SQL queries are executing as expected and is required in certain regulatory environments. If the result set returned by a query differs before and after the change, it is most likely due to a bug in the SQL execution layer; because this can have a severe impact on SQL, SPA can detect and report such issues. You control result-set validation with the COMPARE_RESULTSET task parameter:

SQL> EXEC dbms_sqlpa.set_analysis_task_parameter(:atname, -
         'COMPARE_RESULTSET', 'FALSE')
Oracle Database 18c: New Features for Administrators
110
SQL Performance Analyzer: PL/SQL Example
Create the tuning task:
EXEC :tname := dbms_sqlpa.create_analysis_task( sqlset_name => 'MYSTS', task_name => 'MYSPA')
Set task execution parameters:
EXEC dbms_sqlpa.set_analysis_task_parameter( :tname, 'TEST_EXECUTE_DOP', 4)
Execute the task to build the before-change performance data:
EXEC dbms_sqlpa.execute_analysis_task(task_name => :tname, - execution_type => 'TEST EXECUTE', execution_name => 'before')
Produce the before-change report:
SELECT dbms_sqlpa.report_analysis_task(task_name => :tname, type=>'text', section=>'summary') FROM dual;

This example shows how to use the DBMS_SQLPA package to invoke SQL Performance Analyzer and assess the SQL performance impact of a change:
1. Create the tuning task that runs SQL Performance Analyzer.
2. Set the degree of concurrency for re-executing the statements with the TEST_EXECUTE_DOP parameter.
3. Execute the task once to build the before-change performance data. You can specify various parameters, for example EXECUTION_TYPE:
- EXPLAIN PLAN generates explain plans for all SQL statements in the SQL workload.
- TEST EXECUTE executes all SQL statements in the SQL workload; only the query part of DML statements is executed, to prevent side effects on the database or user data. It generates execution plans and execution statistics.
- COMPARE [PERFORMANCE] analyzes and compares two versions of SQL performance data.
- CONVERT SQLSET reads the statistics captured in a SQL Tuning Set and models them as a task execution.
4. Produce the before-change report (for the report, set LONG and LONGCHUNKSIZE to large values and LINESIZE to 90).
Oracle Database 18c: New Features for Administrators
111
SQL Performance Analyzer: PL/SQL Example
After making your changes: Create the after-change performance data: Generate the after-change report: Set task comparison parameters and compare the task executions: Generate the analysis report: EXEC dbms_sqlpa.execute_analysis_task(task_name => :tname, - execution_type => 'TEST EXECUTE', execution_name => 'after') Make your changes and execute the task again after making the changes. Generate the after-changes report. Compare the two executions. You can set the COMPARE_RESULTSET parameter to TRUE to validate that the result set returned during the initial SPA test-execute is identical to the result during subsequent test-executes. Generate the analysis report. Note: For more information about the DBMS_SQLPA package, see the Oracle Database PL/SQL Packages and Types Reference Guide. SELECT dbms_sqlpa.report_analysis_task(task_name => :tname, type=>'text', section=>'summary') FROM dual; EXEC dbms_sqlpa.set_analysis_task_parameter(:tname, 'COMPARE_RESULTSET', 'TRUE') EXEC dbms_sqlpa.execute_analysis_task(task_name => :tname, execution_type => 'COMPARE PERFORMANCE') SELECT dbms_sqlpa.report_analysis_task(task_name => :tname, type=>'text', section=>'summary') FROM dual; Oracle Database 18c: New Features for Administrators
112
Summary In this lesson, you should have learned how to:
Configure and use Automatic In-Memory Configure the window capture of In-Memory expressions Describe the Memoptimized Rowstore feature and use in-memory hash index structures Describe the new SQL Tuning Set package Describe the concurrency of SQL execution of SQL Performance Analyzer tasks Describe SQL Performance Analyzer result set validation Oracle Database 18c: New Features for Administrators
113
Introduction Understand Multitenant Enhancements Discuss Security Management Upgrades Understand Rman Upgrades Explain General Database Enhancements Understand Performance Enhancements Conclusion 1 2 3 4 5 6 7
114
By viewing this presentation, you should be able to:
Understand Multitenant Enhancements Discuss Security Management Upgrades Understand Rman Upgrades Explain General Database Enhancements Understand Performance Enhancements Oracle Database 18c: New Features for Administrators
115
Thank You We would like to thank you for taking the time to attend our presentation, Oracle Database 18c: New Features for Administrators
116
Experience Oracle University Learning Subscriptions! Visit education.oracle.com/oowtrial
Free Trial Subscription: a special invitation from Oracle University to attendees of Oracle OpenWorld or Code One
- Anytime, anywhere access
- Continually updated training on Oracle products and technologies
- Experience the new Unlimited Product Learning Subscription
Details:
- UPLS Trial: 1 subscription per attendee
- Ends December 21, 2018, or after the attendee consumes 5 hours of learning on the trial subscription
- Availability: go to education.oracle.com/oowtrial to activate your trial subscription
Oracle Confidential – Internal/Restricted/Highly Restricted
117
Are You Up For the Oracle University Zip Labs Challenge at Code One?
Join us in San Francisco, California at the Moscone West Center, where you can compete to win a prize at the Oracle University Zip Labs Challenge booth.
When:
- Monday, October 26th – open 9:00 am through 4:30 pm
- Tuesday, October 27th – open 9:00 am through 4:30 pm
- Wednesday, October 28th – open 9:00 am through 3:00 pm
What is the Oracle University Zip Labs Challenge? It is a collection of short labs, each guiding you through a sequence of steps to accomplish a specific task on the Oracle Cloud Platform. It's an opportunity to experience for yourself how some of Oracle's new technologies work. You can select from labs in categories covering:
- Virtual Machines: creating a VM in OCI
- Autonomous Data Warehouse (ADW): provisioning, connecting to SQL, machine learning
- Autonomous Transaction Processing (ATP): provisioning, connecting to SQL, scaling
Great Learning. Great Technology. Great Prizes. Come see what all the excitement is about as you work through expert-developed labs and climb higher on our leaderboard throughout the day, competing with other contestants. It's simple to find us: go to the 2nd floor of Moscone West. As you complete labs and quizzes, you'll earn points to boost your leaderboard standing. At the end of each day, the top 5 winners win a fabulous prize. So if you are up for the challenge, we hope you drop by to showcase your skills and curiosity! Looking forward to seeing you there.
Confidential – Oracle Internal/Restricted/Highly Restricted